Migrating Netflix to GraphQL Safely | by Netflix Technology Blog | Jun, 2023

By Jennifer Shin, Tejas Shikhare, Will Emmanuel

In 2022, a major change was made to Netflix's iOS and Android applications. We migrated Netflix's mobile apps to GraphQL with zero downtime, which involved a total overhaul from the client to the API layer.

Until recently, an internal API framework, Falcor, powered our mobile apps. They are now backed by Federated GraphQL, a distributed approach to APIs in which domain teams can independently manage and own specific sections of the API.

Doing this safely for 100s of millions of customers without disruption is exceptionally challenging, especially considering the many dimensions of change involved. This blog post will share broadly applicable techniques (beyond GraphQL) we used to perform this migration. The three strategies we will discuss today are AB Testing, Replay Testing, and Sticky Canaries.

Before diving into these techniques, let's briefly examine the migration plan.

Before GraphQL: Monolithic Falcor API implemented and maintained by the API Team

Before moving to GraphQL, our API layer consisted of a monolithic server built with Falcor. A single API team maintained both the Java implementation of the Falcor framework and the API Server.

Created a GraphQL Shim Service on top of our existing Monolith Falcor API.

By Summer 2020, many UI engineers were ready to move to GraphQL. Instead of embarking on a full-fledged migration top to bottom, we created a GraphQL shim on top of our existing Falcor API. The GraphQL shim enabled client engineers to move quickly onto GraphQL, figure out client-side concerns like cache normalization, experiment with different GraphQL clients, and investigate client performance without being blocked by server-side migrations. To launch Phase 1 safely, we used AB Testing.

Deprecate the GraphQL Shim Service and Legacy API Monolith in favor of GraphQL services owned by the domain teams.

We didn't want the legacy Falcor API to linger forever, so we leaned into Federated GraphQL to power a single GraphQL API with multiple GraphQL servers.

We could also swap out the implementation of a field from the GraphQL Shim to the Video API with federation directives. To launch Phase 2 safely, we used Replay Testing and Sticky Canaries.

Two key factors determined our testing strategies:

  • Functional vs. non-functional requirements
  • Idempotency

If we were testing functional requirements like data accuracy, and if the request was idempotent, we relied on Replay Testing. We knew we could test the same query with the same inputs and consistently expect the same results.

We couldn't replay test GraphQL queries or mutations that requested non-idempotent fields.

And we definitely couldn't replay test non-functional requirements like caching and logging user interaction. In such cases, we were not testing for response data but for overall behavior. So, we relied on higher-level metrics-based testing: AB Testing and Sticky Canaries.
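The decision process above can be sketched as a small helper. This is a minimal illustration under our own naming (the function and the strategy labels are ours, not Netflix tooling):

```python
# Minimal sketch of how the two factors above map a test case to a
# verification strategy. Not Netflix's actual tooling.

def pick_strategy(functional: bool, idempotent: bool) -> str:
    """Choose a testing strategy for a migration check.

    functional: are we verifying response data (vs. behavior such as
                caching or logging)?
    idempotent: does the same query with the same inputs always return
                the same result?
    """
    if functional and idempotent:
        # Same query, same inputs, same expected output -> diffable.
        return "replay-testing"
    # Non-idempotent fields and non-functional requirements can only be
    # judged by aggregate metrics on live traffic.
    return "ab-testing / sticky-canary"

assert pick_strategy(functional=True, idempotent=True) == "replay-testing"
assert pick_strategy(functional=True, idempotent=False) == "ab-testing / sticky-canary"
assert pick_strategy(functional=False, idempotent=False) == "ab-testing / sticky-canary"
```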

Let's discuss the three testing strategies in further detail.

Netflix traditionally uses AB Testing to evaluate whether new product features resonate with customers. In Phase 1, we leveraged the AB testing framework to isolate a user segment into two groups totaling 1 million users. The control group's traffic used the legacy Falcor stack, while the experiment population leveraged the new GraphQL client and was directed to the GraphQL Shim. To determine customer impact, we could compare various metrics such as error rates, latencies, and time to render.

We set up a client-side AB experiment that tested Falcor versus GraphQL and reported coarse-grained quality of experience (QoE) metrics. The AB experiment results hinted that GraphQL's correctness was not up to par with the legacy system. We spent the next few months diving into these high-level metrics and fixing issues such as cache TTLs, flawed client assumptions, etc.

Wins

High-Level Health Metrics: AB Testing provided the assurance we needed in our overall client-side GraphQL implementation. This helped us successfully migrate 100% of the traffic on the mobile homepage canvas to GraphQL in 6 months.

Gotchas

Error Analysis: With an AB test, we could see coarse-grained metrics which pointed to potential issues, but it was challenging to diagnose the exact issues.

The next phase in the migration was to reimplement our existing Falcor API in a GraphQL-first server (Video API Service). The Falcor API had become a logic-heavy monolith with over a decade of tech debt. So we had to ensure that the reimplemented Video API server was bug-free and identical to the already productized Shim service.

We developed a Replay Testing tool to verify that idempotent APIs were migrated correctly from the GraphQL Shim to the Video API service.

The Replay Testing framework leverages the @override directive available in GraphQL Federation. This directive tells the GraphQL Gateway to route to one GraphQL server over another. Take, for instance, the following two GraphQL schemas defined by the Shim Service and the Video Service:
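The original post showed these two schemas as an image. A hypothetical reconstruction in federation SDL (type and subgraph names here are illustrative; only the certificationRating field is named in the text) might look like:

```graphql
# Shim Service subgraph: defined certificationRating in Phase 1.
type Video @key(fields: "videoId") {
  videoId: ID!
  certificationRating: String
}

# Video Service subgraph: redefines the same field with @override,
# telling the gateway to resolve it here instead of at the Shim.
type Video @key(fields: "videoId") {
  videoId: ID!
  certificationRating: String @override(from: "shim-service")
}
```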

The GraphQL Shim first defined the certificationRating field (things like Rated R or PG-13) in Phase 1. In Phase 2, we stood up the VideoService and defined the same certificationRating field marked with the @override directive. The presence of the identical field with the @override directive informed the GraphQL Gateway to route the resolution of this field to the new Video Service rather than the old Shim Service.

The Replay Tester tool samples raw traffic streams from Mantis. With these sampled events, the tool can capture a live request from production and run an identical GraphQL query against both the GraphQL Shim and the new Video API service. The tool then compares the results and outputs any differences in response payloads.

Note: We don't replay test Personally Identifiable Information. It's used only for non-sensitive product features on the Netflix UI.

Once the test is complete, the engineer can view the diffs displayed as a flattened JSON node. You can see the control value on the left side of the comma in parentheses and the experiment value on the right.

/data/videos/0/tags/3/id: (81496962, null)
/data/videos/0/tags/5/displayName: (Série, value: "S303251rie")

We captured two diffs above: the first had missing data for an ID field in the experiment, and the second had an encoding difference. We also saw differences in localization, date precision, and floating point accuracy. This gave us confidence in the replicated business logic, where subscriber plans and user geographic location determined the customer's catalog availability.
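The diffing step can be sketched as flattening both JSON payloads into path/value pairs before comparing. This is a minimal illustration (the function names and sample payloads are ours, not the Replay Tester's internals):

```python
# Minimal sketch of diffing two GraphQL responses into the flattened
# "(control, experiment)" form shown above. Not Netflix's actual tool.

def flatten(node, prefix=""):
    """Flatten nested dicts/lists into {"/path/to/leaf": value}."""
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        return {prefix or "/": node}
    flat = {}
    for key, child in items:
        flat.update(flatten(child, f"{prefix}/{key}"))
    return flat

def diff_responses(control, experiment):
    """List mismatched leaves as "/path: (control_value, experiment_value)"."""
    left, right = flatten(control), flatten(experiment)
    return [
        f"{path}: ({left.get(path)}, {right.get(path)})"
        for path in sorted(left.keys() | right.keys())
        if left.get(path) != right.get(path)
    ]

control = {"data": {"videos": [{"tags": [{"id": 81496962}]}]}}
experiment = {"data": {"videos": [{"tags": [{"id": None}]}]}}
print(diff_responses(control, experiment))
# -> ['/data/videos/0/tags/0/id: (81496962, None)']
```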

Wins

  • Confidence in parity between the two GraphQL implementations
  • Enabled tuning configs in cases where data was missing due to over-eager timeouts
  • Tested business logic that required many (unknown) inputs and where correctness can be hard to eyeball

Gotchas

  • PII and non-idempotent APIs should not be tested using Replay Tests, and it would be valuable to have a mechanism to prevent that.
  • Manually constructed queries are only as good as the features the developer remembers to test. We ended up with untested fields simply because we forgot about them.
  • Correctness: The idea of correctness can be confusing too. For example, is it more correct for an array to be empty or null, or is it just noise? Ultimately, we matched the existing behavior as much as possible because verifying the robustness of the client's error handling was difficult.

Despite these shortcomings, Replay Testing was a key indicator that we had achieved functional correctness of most idempotent queries.

While Replay Testing validates the functional correctness of the new GraphQL APIs, it doesn't provide any performance or business metric insight, such as the overall perceived health of user interaction. Are users clicking play at the same rates? Are things loading in time before the user loses interest? Replay Testing also cannot be used for non-idempotent API validation. We reached for a Netflix tool called the Sticky Canary to build confidence.

A Sticky Canary is an infrastructure experiment where customers are assigned either to a canary or a baseline host for the entire duration of the experiment. All incoming traffic is allocated to an experimental or baseline host based on their device and profile, similar to a bucket hash. The experimental host deployment serves all the customers assigned to the experiment. Watch our Chaos Engineering talk from AWS Reinvent to learn more about Sticky Canaries.
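The sticky assignment can be sketched with a stable hash over device and profile identifiers, so the same customer always lands in the same group for the duration of the experiment. This is an illustrative sketch, not Zuul's actual allocation logic:

```python
# Minimal sketch of sticky canary bucketing: the same device/profile
# pair always hashes to the same bucket, so a customer stays on one
# host group for the whole experiment. Illustrative only.
import hashlib

def assign(device_id: str, profile_id: str, canary_percent: int) -> str:
    """Deterministically place a customer in 'canary' or 'baseline'."""
    key = f"{device_id}:{profile_id}".encode()
    # Use a stable hash (unlike Python's salted hash()) to get a
    # bucket in [0, 100).
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "baseline"

# Sticky: repeated calls give the same answer for the same customer.
assert assign("device-1", "profile-9", 5) == assign("device-1", "profile-9", 5)
```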

In the case of our GraphQL APIs, we used a Sticky Canary experiment to run two instances of our GraphQL gateway. The baseline gateway used the existing schema, which routes all traffic to the GraphQL Shim. The experimental gateway used the new proposed schema, which routes traffic to the latest Video API service. Zuul, our primary edge gateway, assigns traffic to either cluster based on the experiment parameters.

We then collect and analyze the performance of the two clusters. Some KPIs we monitor closely include:

  • Median and tail latencies
  • Error rates
  • Logs
  • Resource utilization: CPU, network traffic, memory, disk
  • Device QoE (Quality of Experience) metrics
  • Streaming health metrics

We started small, with tiny customer allocations for hour-long experiments. After validating performance, we slowly built up scope. We increased the percentage of customer allocations, introduced multi-region tests, and eventually ran 12-hour or day-long experiments. Validating along the way is essential since Sticky Canaries impact live production traffic and are assigned persistently to a customer.

After several sticky canary experiments, we had assurance that Phase 2 of the migration improved all core metrics, and we could dial up GraphQL globally with confidence.

Wins

Sticky Canaries were essential to building confidence in our new GraphQL services.

  • Non-Idempotent APIs: these tests are compatible with mutating or non-idempotent APIs
  • Business metrics: Sticky Canaries validated that our core Netflix business metrics had improved after the migration
  • System performance: Insights into latency and resource utilization helped us understand how scaling profiles change after the migration

Gotchas

  • Negative Customer Impact: Sticky Canaries can impact real users. We needed confidence in our new services before persistently routing some customers to them. This is partially mitigated by real-time impact detection, which will automatically cancel experiments.
  • Short-lived: Sticky Canaries are intended for short-lived experiments. For longer-lived tests, a full-blown AB test should be used.

Technology is constantly changing, and we, as engineers, spend a large part of our careers performing migrations. The question is not whether we are migrating but whether we are migrating safely, with zero downtime, in a timely manner.

At Netflix, we have developed tools that ensure confidence in these migrations, targeted toward each specific use case being tested. We covered three tools, AB Testing, Replay Testing, and Sticky Canaries, that we used for the GraphQL migration.

This blog post is part of our Migrating Critical Traffic series. Also, check out: Migrating Critical Traffic at Scale (part 1, part 2) and Ensuring the Successful Launch of Ads.