Improved Alerting with Atlas Streaming Eval

Ruchir Jha, Brian Harrington, Yingwu Zhao

TL;DR

  • Streaming alert evaluation scales much better than the traditional approach of polling time-series databases.
  • It allows us to overcome high dimensionality/cardinality limitations of the time-series database.
  • It opens doors to supporting more exciting use cases.

Engineers want their alerting system to be realtime, reliable, and actionable. While actionability is subjective and may vary by use case, reliability is non-negotiable. In other words, false positives are bad, but false negatives are the absolute worst!

A few years ago, we were paged by our SRE team because our Metrics Alerting System was falling behind: critical application health alerts were reaching engineers 45 minutes late! As we investigated the alerting delay, we found that the number of configured alerts had recently increased dramatically, by 5 times. The alerting system queried Atlas, our time series database, on a cron for each configured alert query, and was seeing an increased throttle rate and excessive retries with backoffs. This, in turn, increased the time between two consecutive checks for an alert, causing a global slowdown for all alerts. On further investigation, we discovered that one user had programmatically created tens of thousands of new alerts. This user represented a platform team at Netflix, and their goal was to build alerting automation for their users.

While we were able to put out the immediate fire by disabling the newly created alerts, the incident raised serious concerns about the scalability of our alerting system. We also heard from other platform teams at Netflix who wanted to build similar automation for their users but, given our state at the time, could not have done so without impacting Mean Time To Detect (MTTD) for everyone else. Rather, we were looking at an order-of-magnitude increase in the number of alert queries over just the next 6 months!

Since querying Atlas was the bottleneck, our first instinct was to scale it up to meet the increased alert query demand; however, we soon realized that would increase the cost of Atlas prohibitively. Atlas is an in-memory time-series database that ingests multiple billions of time series per day and retains the last two weeks of data. It is already one of the largest services at Netflix, both in size and cost. While Atlas is architected around compute and storage separation, and we could theoretically scale just the query layer to meet the increased query demand, every query, regardless of its type, has a data component that needs to be pushed down to the storage layer. To serve the growing number of push-down queries, the in-memory storage layer would need to scale up as well, and it became clear that this would push the already expensive storage costs far higher. Moreover, common database optimizations like caching recently queried data don't really work for alerting queries because, generally speaking, the most recently received datapoint is required for correctness. Take, for example, this alert query that checks whether errors as a percentage of total RPS exceed a threshold of 50% for 4 out of the last 5 minutes:

name,errors,:eq,:sum,
name,rps,:eq,:sum,
:div,
100,:mul,
50,:gt,
5,:rolling-count,4,:gt,

If the datapoint received for the last time interval would lead to a positive evaluation of this query, then relying on stale or cached data would either increase MTTD or result in the perception of a false negative, at least until the missing data is fetched and evaluated. It became clear to us that we needed to solve the scalability problem with a fundamentally different approach. Hence, we started down the path of alert evaluation via real-time streaming metrics.
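To make the role of that last datapoint concrete, here is a minimal sketch in plain Scala (with made-up values and a hypothetical alertFires helper, not our actual evaluation code) of the "4 out of the last 5 minutes" check. Evaluating against a cached window that is missing the latest datapoint fails to fire, which reads as a false negative:

object RollingCountSketch extends App {
  // Hypothetical error percentages (errors / rps * 100) per one-minute interval.
  // The last element is the most recent datapoint.
  val history = Seq(30.0, 62.0, 40.0, 71.0, 55.0, 90.0)

  // Mirrors the "above 50% for 4 out of the last 5 minutes" condition described above.
  def alertFires(window: Seq[Double]): Boolean =
    window.count(_ > 50.0) >= 4

  val freshWindow  = history.takeRight(5)               // includes the latest datapoint (90.0)
  val cachedWindow = history.dropRight(1).takeRight(5)  // stale cache missing the latest point

  println(s"fresh eval fires:  ${alertFires(freshWindow)}")   // true  -> alert triggers
  println(s"cached eval fires: ${alertFires(cachedWindow)}")  // false -> perceived false negative
}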

High Level Architecture

The idea, at a high level, was to avoid the need to query the Atlas database almost entirely and to transition most alert queries to streaming evaluation.

Alert queries are submitted either via our Alerting UI or by API clients, and are then saved to a custom config database that supports streaming config updates (full snapshot + update notifications). The Alerting Service receives these config updates and hashes every new or updated alert query for evaluation to one of its nodes by leveraging Edda Slots. The node responsible for evaluating a query starts by breaking it down into a set of "data expressions" and with them subscribes to an upstream "broker" service. Data expressions define what data needs to be sourced in order to evaluate a query. For the example query listed above, the data expressions are name,errors,:eq,:sum and name,rps,:eq,:sum. The broker service acts as a subscription manager that maps a data expression to a set of subscriptions. In addition, it maintains a Query Index of all active data expressions, which is consulted to discern whether an incoming datapoint is of interest to an active subscriber. The internals here are outside the scope of this blog post.
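As a rough illustration of those two roles, the sketch below (assumed names such as DataExpr and matchingExprs; the real broker and Query Index are far more sophisticated) shows a subscription manager mapping data expressions to subscribers, and an index lookup that decides whether an incoming datapoint is of interest to anyone:

object BrokerSketch extends App {
  // A data expression, e.g. "name,errors,:eq,:sum", reduced here to the tag
  // values it requires. The real query index is much richer than this.
  final case class DataExpr(id: String, requiredTags: Map[String, String])
  final case class Datapoint(tags: Map[String, String], value: Double)

  // Subscription manager: data expression -> subscriber node ids.
  private var subscriptions = Map.empty[DataExpr, Set[String]]

  def subscribe(expr: DataExpr, subscriber: String): Unit =
    subscriptions = subscriptions + (expr -> (subscriptions.getOrElse(expr, Set.empty) + subscriber))

  // "Query index" lookup: which active data expressions want this datapoint?
  def matchingExprs(dp: Datapoint): Set[DataExpr] =
    subscriptions.keySet.filter(_.requiredTags.forall { case (k, v) => dp.tags.get(k).contains(v) })

  val errorsExpr = DataExpr("name,errors,:eq,:sum", Map("name" -> "errors"))
  val rpsExpr    = DataExpr("name,rps,:eq,:sum",    Map("name" -> "rps"))
  subscribe(errorsExpr, "alert-node-7")
  subscribe(rpsExpr,    "alert-node-7")

  val dp = Datapoint(Map("name" -> "errors", "app" -> "api"), 12.0)
  println(matchingExprs(dp).map(_.id))  // Set(name,errors,:eq,:sum)
}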

Next, the Alerting Service (via the atlas-eval library) maps the received data points for a data expression to the alert query that needs them. For alert queries that resolve to more than one data expression, we align the incoming data points for each of those data expressions on the same time boundary before emitting the accumulated values to the final eval step. For the example above, the final eval step is responsible for computing the ratio and maintaining the rolling-count, which keeps track of the number of intervals in which the ratio crossed the threshold, as shown below:
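The following is a minimal sketch of that final eval step under assumed names (onDatapoint, AlignedValues); the real logic lives in the atlas-eval library. It aligns datapoints from the two data expressions on one-minute boundaries, computes the error ratio once both sides have reported, and maintains the rolling count:

object FinalEvalSketch extends App {
  import scala.collection.mutable

  final case class AlignedValues(errors: Option[Double] = None, rps: Option[Double] = None)

  private val byBoundary   = mutable.Map.empty[Long, AlignedValues]
  private val rollingFlags = mutable.Queue.empty[Boolean] // was each of the last 5 intervals above threshold?

  def onDatapoint(expr: String, timestampMs: Long, value: Double): Unit = {
    val boundary = timestampMs / 60000 * 60000 // align to the one-minute boundary
    val current  = byBoundary.getOrElse(boundary, AlignedValues())
    val updated  = expr match {
      case "errors" => current.copy(errors = Some(value))
      case "rps"    => current.copy(rps = Some(value))
      case _        => current
    }
    byBoundary(boundary) = updated

    // Once both data expressions have reported for this boundary, run the final eval.
    for (e <- updated.errors; r <- updated.rps if r > 0) {
      val ratioPct = e / r * 100
      rollingFlags.enqueue(ratioPct > 50)
      if (rollingFlags.size > 5) rollingFlags.dequeue()
      val fires = rollingFlags.count(identity) >= 4 // "4 out of the last 5 minutes"
      println(s"boundary=$boundary ratio=$ratioPct fires=$fires")
      byBoundary.remove(boundary)
    }
  }

  // Simulated stream: errors and rps for the same minute arrive separately.
  onDatapoint("rps", 1700000040000L, 200.0)
  onDatapoint("errors", 1700000070000L, 130.0) // same boundary -> eval runs: 65.0 > 50
}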

The atlas-eval library supports streaming evaluation for most, if not all, Query, Data, Math, and Stateful operators supported by Atlas today. Certain operators, such as offset, integral, and des, are not supported on the streaming path.

OK, Results?

First and foremost, we have successfully alleviated the initial scalability problem with the polling-based architecture. Today, we run 20X the number of queries we ran a few years ago, with ease and at a fraction of what it would have cost to scale up the Atlas storage layer to serve the same volume. Several platform teams at Netflix programmatically generate and maintain alerts on behalf of their users without having to worry about impacting other users of the system. We are able to maintain strong SLAs around Mean Time To Detect (MTTD) regardless of the number of alerts being evaluated by the system.

Additionally, streaming evaluation allowed us to relax the high-cardinality restrictions that our users were previously running into: alert queries that were once rejected by the Atlas backend due to cardinality constraints are now evaluated correctly on the streaming path. In addition, we are able to use Atlas Streaming to monitor and alert on some very high-cardinality use cases, such as metrics derived from free-form log data.

Furthermore, we switched Telltale, our holistic application health monitoring system, from polling a metrics cache to using realtime Atlas Streaming. The fundamental idea behind Telltale is to detect anomalies on SLI metrics (for example, latency, error rates, etc.). When such anomalies are detected, Telltale is able to compute correlations with similar metrics emitted by either upstream or downstream services. In addition, it computes correlations between SLI metrics and custom metrics such as the log-derived metrics mentioned above. This has proven valuable for reducing Mean Time to Recover (MTTR). For example, we can now correlate increased error rates with an increased rate of specific exceptions occurring in logs and even point to an exemplar stacktrace, as shown below:

Our logs pipeline fingerprints every log message and attaches a (very high cardinality) fingerprint tag to a log events counter that is then emitted to Atlas Streaming. Telltale consumes this metric in a streaming fashion to identify fingerprints that correlate with anomalies seen in SLI metrics. Once an anomaly is found, we query the logs backend with the fingerprint hash to obtain the exemplar stacktrace. What's more, we are now able to identify correlated anomalies (and exceptions) occurring in services that may be N hops away from the affected service. A system like Telltale becomes more effective as more services are onboarded (and, for that matter, the full service graph), because otherwise it becomes difficult to root-cause the problem, especially in a microservices-based architecture. A few years ago, as noted in this blog, only a couple of hundred services were using Telltale; thanks to Atlas Streaming we have now managed to onboard thousands of other services at Netflix.
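The following sketch illustrates the fingerprinting idea with assumed names (normalize, fingerprintOf, incrementLogEvents) rather than our actual logs-pipeline code: volatile parts of a message are normalized away, the result is hashed into a fingerprint, and a counter carrying the fingerprint as a tag is incremented:

object LogFingerprintSketch extends App {
  import java.security.MessageDigest
  import scala.collection.mutable

  // Replace volatile parts (hex ids, numbers) so the same exception shape
  // always maps to the same fingerprint.
  def normalize(message: String): String =
    message.replaceAll("0x[0-9a-fA-F]+", "<hex>").replaceAll("\\d+", "<num>")

  def fingerprintOf(message: String): String = {
    val digest = MessageDigest.getInstance("SHA-256").digest(normalize(message).getBytes("UTF-8"))
    digest.take(8).map("%02x".format(_)).mkString
  }

  // Stand-in for the tagged counter that is emitted to Atlas Streaming.
  private val counters = mutable.Map.empty[Map[String, String], Long]

  def incrementLogEvents(app: String, fingerprint: String): Unit = {
    val tags = Map("name" -> "log.events", "app" -> app, "fingerprint" -> fingerprint)
    counters(tags) = counters.getOrElse(tags, 0L) + 1
  }

  incrementLogEvents("api", fingerprintOf("Timeout after 500 ms calling service 0x7f3a"))
  incrementLogEvents("api", fingerprintOf("Timeout after 750 ms calling service 0x99bc"))
  counters.foreach { case (tags, count) => println(s"$tags -> $count") } // both messages map to one fingerprint
}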

Finally, we realized that once you remove limits on the number of monitored queries and start supporting much higher metric dimensionality/cardinality without impacting the cost/performance profile of the system, it opens doors to many exciting new possibilities. For example, to make alerts more actionable, we may now be able to compute correlations between SLI anomalies and custom metrics with high-cardinality dimensions: an alert on increased HTTP error rates may be able to point to impacted customer cohorts by linking to precisely correlated exemplars. This would help developers with reproducibility.
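As a hypothetical illustration of that kind of correlation (made-up series, a plain Pearson score, and assumed names; the actual scoring is more involved), the sketch below ranks customer cohorts by how strongly their error counters track an SLI error-rate anomaly:

object CorrelationSketch extends App {
  def pearson(xs: Seq[Double], ys: Seq[Double]): Double = {
    require(xs.size == ys.size && xs.nonEmpty)
    val n     = xs.size.toDouble
    val meanX = xs.sum / n
    val meanY = ys.sum / n
    val cov   = xs.zip(ys).map { case (x, y) => (x - meanX) * (y - meanY) }.sum
    val stdX  = math.sqrt(xs.map(x => math.pow(x - meanX, 2)).sum)
    val stdY  = math.sqrt(ys.map(y => math.pow(y - meanY, 2)).sum)
    if (stdX == 0 || stdY == 0) 0.0 else cov / (stdX * stdY)
  }

  // Hypothetical aligned one-minute series during an incident.
  val sliErrorRate = Seq(0.5, 0.6, 0.4, 8.0, 12.0, 11.0)
  val perCohortErrors = Map(
    "cohort=device-A" -> Seq(1.0, 2.0, 1.0, 90.0, 140.0, 120.0),
    "cohort=device-B" -> Seq(3.0, 2.0, 4.0, 3.0, 2.0, 3.0)
  )

  val ranked = perCohortErrors.view.mapValues(pearson(sliErrorRate, _)).toSeq.sortBy(-_._2)
  ranked.foreach { case (cohort, score) => println(f"$cohort%-16s correlation=$score%.2f") }
  // device-A scores close to 1.0, pointing at the impacted customer cohort.
}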

Transitioning to the streaming path has been a long journey for us. One of the challenges was the difficulty of debugging scenarios where the streaming path didn't agree with what was returned by querying the Atlas database. This is especially true when either the data is not available in Atlas or the query is not supported because of (say) cardinality constraints. This is one of the reasons it has taken us years to get here. That said, early signs indicate that the streaming paradigm may help tackle a cardinal problem in observability, namely effective correlation between the metrics and events verticals (logs, and potentially traces in the future), and we are excited to explore the opportunities this presents for observability in general.

