Improved Alerting with Atlas Streaming Eval | by Netflix Technology Blog | Apr, 2023

Ruchir Jha, Brian Harrington, Yingwu Zhao
TL;DR
- Streaming alert evaluation scales much better than the traditional approach of polling time-series databases.
- It allows us to overcome high dimensionality/cardinality limitations of the time-series database.
- It opens doors to support more exciting use-cases.
Engineers want their alerting system to be realtime, reliable, and actionable. While actionability is subjective and may vary by use-case, reliability is non-negotiable. In other words, false positives are bad, but false negatives are the absolute worst!
A few years ago, we were paged by our SRE team because our Metrics Alerting System was falling behind: critical application health alerts reached engineers 45 minutes late! As we investigated the alerting delay, we found that the number of configured alerts had recently increased dramatically, by 5 times! The alerting system queried Atlas, our time series database, on a cron for each configured alert query, and was seeing an elevated throttle rate and excessive retries with backoffs. This, in turn, increased the time between two consecutive checks for an alert, causing a global slowdown for all alerts. On further investigation, we discovered that one user had programmatically created tens of thousands of new alerts. This user represented a platform team at Netflix, and their goal was to build alerting automation for their users.
While we were able to put out the immediate fire by disabling the newly created alerts, the incident raised some important concerns about the scalability of our alerting system. We also heard from other platform teams at Netflix who wanted to build similar automation for their users but, given our state at the time, would not have been able to do so without impacting Mean Time To Detect (MTTD) for everyone else. Rather, we were looking at an order of magnitude increase in the number of alert queries over just the next 6 months!
Since querying Atlas was the bottleneck, our first instinct was to scale it up to meet the increased alert query demand; however, we soon realized that would increase the Atlas cost prohibitively. Atlas is an in-memory time-series database that ingests multiple billions of time-series per day and retains the last two weeks of data. It is already one of the largest services at Netflix, both in size and cost. While Atlas is architected around compute & storage separation, and we could theoretically scale just the query layer to meet the increased query demand, every query, regardless of its type, has a data component that needs to be pushed down to the storage layer. To serve the growing number of push down queries, the in-memory storage layer would need to scale up as well, and it became clear that this would push the already expensive storage costs far higher. Moreover, common database optimizations like caching recently queried data don't really work for alerting queries because, generally speaking, the last received datapoint is required for correctness. Take, for example, this alert query that checks if errors as a % of total RPS exceeds a threshold of 50% for 4 out of the last 5 minutes:
name,errors,:eq,:sum,
name,rps,:eq,:sum,
:div,
100,:mul,
50,:gt,
5,:rolling-count,4,:gt,
Say the datapoint received for the last time interval leads to a positive evaluation for this query. Relying on stale/cached data would either increase MTTD or result in the perception of a false negative, at least until the missing data is fetched and evaluated. It became clear to us that we needed to solve the scalability problem with a fundamentally different approach. Hence, we started down the path of alert evaluation via real-time streaming metrics.
High Level Architecture
The idea, at a high level, was to avoid the need to query the Atlas database almost entirely and to transition most alert queries to streaming evaluation.
Alert queries are submitted either via our Alerting UI or by API clients, and are then saved to a custom config database that supports streaming config updates (full snapshot + update notifications). The Alerting Service receives these config updates and hashes every new or updated alert query to one of its nodes for evaluation by leveraging Edda Slots. The node responsible for evaluating a query starts by breaking it down into a set of "data expressions" and with them subscribes to an upstream "broker" service. Data expressions define what data needs to be sourced in order to evaluate a query. For the example query listed above, the data expressions are name,errors,:eq,:sum and name,rps,:eq,:sum. The broker service acts as a subscription manager that maps a data expression to a set of subscriptions. In addition, it maintains a Query Index of all active data expressions, which is consulted to discern whether an incoming datapoint is of interest to an active subscriber. The internals here are outside the scope of this blog post.
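To make the subscription model concrete, here is a minimal sketch, not the actual broker implementation, in which the type and method names (Broker, DataExpr, subscribe, route) are our own: it maps data expressions to subscriber nodes and routes an incoming datapoint only to subscribers whose expression matches the datapoint's tags.

// Hypothetical sketch of a subscription manager keyed by data expression.
// The real broker consults its Query Index instead of scanning every expression.
object BrokerSketch {

  // A data expression like "name,errors,:eq,:sum" modeled as required tag values.
  final case class DataExpr(requiredTags: Map[String, String])

  final case class Datapoint(tags: Map[String, String], timestamp: Long, value: Double)

  final class Broker {
    // data expression -> ids of alert-evaluation nodes subscribed to it
    private var subscriptions = Map.empty[DataExpr, Set[String]]

    def subscribe(expr: DataExpr, subscriberId: String): Unit =
      subscriptions = subscriptions.updated(
        expr, subscriptions.getOrElse(expr, Set.empty[String]) + subscriberId)

    // Route a datapoint to every subscriber whose expression it satisfies.
    def route(dp: Datapoint): Map[String, Set[DataExpr]] =
      subscriptions
        .filter { case (expr, _) => expr.requiredTags.forall { case (k, v) => dp.tags.get(k).contains(v) } }
        .toSeq
        .flatMap { case (expr, subs) => subs.map(s => s -> expr) }
        .groupBy(_._1)
        .map { case (sub, pairs) => sub -> pairs.map(_._2).toSet }
  }

  def main(args: Array[String]): Unit = {
    val broker = new Broker
    broker.subscribe(DataExpr(Map("name" -> "errors")), "alert-node-1")
    broker.subscribe(DataExpr(Map("name" -> "rps")), "alert-node-1")
    val dp = Datapoint(Map("name" -> "errors", "app" -> "api"), 0L, 3.0)
    println(broker.route(dp)) // routes only to the "errors" subscription
  }
}

In the real system, the Query Index is what keeps this matching cheap as the number of active data expressions grows.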
Next, the Alerting service (via the atlas-eval library) maps the received data points for a data expression to the alert query that needs them. For alert queries that resolve to more than one data expression, we align the incoming data points for each of those data expressions on the same time boundary before emitting the accumulated values to the final eval step. For the example above, the final eval step is responsible for computing the ratio and maintaining the rolling-count, which keeps track of the number of intervals in which the ratio crossed the threshold.
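As a rough illustration of that final eval step (a sketch under our own naming, not the atlas-eval internals), the evaluator below consumes the time-aligned sums for the two data expressions once per interval, computes the error percentage, and keeps a rolling count of how many of the last 5 intervals crossed the 50% threshold.

// Hypothetical sketch of the final eval step for the example query:
// errors as a % of total RPS above 50% for 4 out of the last 5 minutes.
final class RollingThresholdEvaluator(threshold: Double, window: Int, minCrossings: Int) {
  private var lastIntervals = List.empty[Boolean]

  // Called once per time boundary with the aligned sums for both data expressions.
  // Returns true when the alert should fire.
  def update(errorsSum: Double, rpsSum: Double): Boolean = {
    val pct     = if (rpsSum == 0.0) 0.0 else errorsSum / rpsSum * 100.0
    val crossed = pct > threshold
    lastIntervals = (lastIntervals :+ crossed).takeRight(window)   // rolling window of 0/1 signals
    lastIntervals.count(identity) >= minCrossings                  // rolling-count check
  }
}

object RollingThresholdEvaluatorDemo {
  def main(args: Array[String]): Unit = {
    val evaluator = new RollingThresholdEvaluator(threshold = 50.0, window = 5, minCrossings = 4)
    // Five aligned minutes of (errors, rps); the last call reports whether the alert fires.
    val fired = Seq((60.0, 100.0), (70.0, 100.0), (10.0, 100.0), (80.0, 100.0), (90.0, 100.0))
      .map { case (e, r) => evaluator.update(e, r) }
      .last
    println(s"alert fired: $fired") // alert fired: true (4 of the last 5 intervals crossed)
  }
}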
The atlas-eval library supports streaming evaluation for most, if not all, Query, Data, Math and Stateful operators supported by Atlas today. Certain operators such as offset, integral, and des are not supported on the streaming path.
OK, Results?
First and foremost, we have successfully alleviated the initial scalability problem with the polling based architecture. Today, we run 20X the number of queries we ran a few years ago, with ease and at a fraction of what it would have cost to scale up the Atlas storage layer to serve the same volume. Several platform teams at Netflix programmatically generate and maintain alerts on behalf of their users without having to worry about impacting other users of the system. We are able to maintain strong SLAs around Mean Time To Detect (MTTD) regardless of the number of alerts being evaluated by the system.
Additionally, streaming evaluation allowed us to relax restrictions around high cardinality that our users were previously running into: alert queries that were rejected by the Atlas backend due to cardinality constraints are now getting checked correctly on the streaming path. In addition, we are able to use Atlas Streaming to monitor and alert on some very high cardinality use-cases, such as metrics derived from free-form log data.
Finally, we switched Telltale, our holistic application health monitoring system, from polling a metrics cache to using realtime Atlas Streaming. The fundamental idea behind Telltale is to detect anomalies on SLI metrics (for example, latency, error rates, etc.). When such anomalies are detected, Telltale is able to compute correlations with similar metrics emitted from either upstream or downstream services. In addition, it computes correlations between SLI metrics and custom metrics like the log derived metrics mentioned above. This has proven valuable towards reducing Mean Time to Recover (MTTR). For example, we are now able to correlate increased error rates with an increased rate of specific exceptions occurring in logs and even point to an exemplar stacktrace.
Our logs pipeline fingerprints every log message and attaches a (very high cardinality) fingerprint tag to a log events counter that is then emitted to Atlas Streaming. Telltale consumes this metric in a streaming fashion to identify fingerprints that correlate with anomalies seen in SLI metrics. Once an anomaly is found, we query the logs backend with the fingerprint hash to obtain the exemplar stacktrace. What's more, we are now able to identify correlated anomalies (and exceptions) occurring in services that may be N hops away from the affected service. A system like Telltale becomes more effective as more services are onboarded (ideally the full service graph), because otherwise it becomes difficult to root cause the problem, especially in a microservices-based architecture. A few years ago, as noted in this blog, only a couple of hundred services were using Telltale; thanks to Atlas Streaming we have now managed to onboard thousands of other services at Netflix.
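As a purely illustrative sketch of the fingerprinting idea (the actual pipeline, normalization rules, and client APIs are not shown here, and all names below are our own assumptions), one could hash a normalized form of each log message and count occurrences per fingerprint.

// Hypothetical sketch: fingerprint log messages and count occurrences per fingerprint.
// The real pipeline emits these counters to Atlas Streaming; here we only accumulate in memory.
import scala.collection.mutable

object LogFingerprintSketch {
  // Normalize away volatile parts (numbers, hex ids) so similar messages share a fingerprint.
  private def normalize(message: String): String =
    message
      .replaceAll("0x[0-9a-fA-F]+", "<hex>")
      .replaceAll("\\d+", "<num>")

  def fingerprint(message: String): String =
    Integer.toHexString(normalize(message).hashCode)

  // In-memory stand-in for the "log events" counter keyed by the fingerprint tag.
  private val counters = mutable.Map.empty[String, Long].withDefaultValue(0L)

  def record(message: String): Unit = {
    val fp = fingerprint(message)
    counters(fp) = counters(fp) + 1L
  }

  def main(args: Array[String]): Unit = {
    record("Timeout after 250 ms calling service-a")
    record("Timeout after 512 ms calling service-a")
    record("NullPointerException at handler 0x7f3a")
    println(counters) // two fingerprints: the timeout messages collapse into one
  }
}

Because volatile parts of the message are normalized away before hashing, similar log lines collapse into the same fingerprint tag, which stays very high cardinality but remains countable on the streaming path.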
Finally, we realized that once you remove limits on the number of monitored queries, and start supporting much higher metric dimensionality/cardinality without impacting the cost/performance profile of the system, it opens doors to many exciting new possibilities. For example, to make alerts more actionable, we may now be able to compute correlations between SLI anomalies and custom metrics with high cardinality dimensions; an alert on elevated HTTP error rates, for instance, may be able to point to impacted customer cohorts by linking to precisely correlated exemplars. This would help developers with reproducibility.
Transitioning to the streaming path has been a long journey for us. One of the challenges was the difficulty of debugging scenarios where the streaming path did not agree with what is returned by querying the Atlas database. This is especially true when either the data is not available in Atlas or the query is not supported because of (say) cardinality constraints. This is one of the reasons it has taken us years to get here. That said, early signs indicate that the streaming paradigm may help tackle a cardinal problem in observability: effective correlation between the metrics & events verticals (logs, and potentially traces in the future), and we are excited to explore the opportunities that this presents for Observability in general.