Rebuilding Netflix Video Processing Pipeline with Microservices | by Netflix Technology Blog

Liwei Guo, Anush Moorthy, Li-Heng Chen, Vinicius Carvalho, Aditya Mavlankar, Agata Opalach, Adithya Prakash, Kyle Swanson, Jessica Tweneboah, Subbu Venkatrav, Lishan Zhu

This is the first blog in a multi-part series on how Netflix rebuilt its video processing pipeline with microservices, so we can maintain our rapid pace of innovation and continuously improve the system for member streaming and studio operations. This introductory blog focuses on an overview of our journey. Future blogs will provide deeper dives into each service, sharing insights and lessons learned from this process.

The Netflix video processing pipeline went live with the launch of our streaming service in 2007. Since then, the video pipeline has undergone substantial improvements and broad expansions:

  • Starting with Standard Dynamic Range (SDR) at standard definitions, we expanded the encoding pipeline to 4K and High Dynamic Range (HDR), which enabled support for our premium offering.
  • We moved from centralized linear encoding to distributed chunk-based encoding. This architecture shift greatly reduced processing latency and increased system resiliency.
  • Moving away from the use of dedicated instances that were constrained in quantity, we tapped into Netflix's internal trough created by autoscaling microservices, leading to significant improvements in computation elasticity as well as resource utilization efficiency.
  • We rolled out encoding innovations such as per-title and per-shot optimizations, which provided significant quality-of-experience (QoE) improvement to Netflix members.
  • By integrating with studio content systems, we enabled the pipeline to leverage rich metadata from the creative side and create more engaging member experiences like interactive storytelling.
  • We expanded pipeline support to serve our studio/content-development use cases, which had different latency and resiliency requirements compared to the traditional streaming use case.

Our experience over the last decade and a half has reinforced our conviction that an efficient, flexible video processing pipeline that allows us to innovate and support our streaming service, as well as our studio partners, is critical to the continued success of Netflix. To that end, the Video and Image Encoding team in Encoding Technologies (ET) has spent the last few years rebuilding the video processing pipeline on our next-generation microservice-based computing platform Cosmos.


Starting in 2014, we developed and operated the video processing pipeline on our third-generation platform Reloaded. Reloaded was well-architected, providing good stability, scalability, and a reasonable level of flexibility. It served as the foundation for numerous encoding innovations developed by our team.

When Reloaded was designed, we focused on a single use case: converting high-quality media files (also known as mezzanines) received from studios into compressed assets for Netflix streaming. Reloaded was created as a single monolithic system, where developers from various media teams in ET and our platform partner team Content Infrastructure and Solutions (CIS)¹ worked on the same codebase, building a single system that handled all media assets. Over the years, the system expanded to support various new use cases. This led to a significant increase in system complexity, and the limitations of Reloaded began to show:

  • Coupled functionality: Reloaded was composed of a number of worker modules and an orchestration module. The setup of a new Reloaded module and its integration with the orchestration required a non-trivial amount of effort, which led to a bias towards augmentation rather than creation when developing new functionalities. For example, in Reloaded the video quality calculation was implemented inside the video encoder module. With this implementation, it was extremely difficult to recalculate video quality without re-encoding.
  • Monolithic structure: Since Reloaded modules were often co-located in the same repository, it was easy to overlook code-isolation rules, and there was quite a bit of unintended reuse of code across what should have been strong boundaries. Such reuse created tight coupling and reduced development velocity. The tight coupling among modules further forced us to deploy all modules together.
  • Long release cycles: The joint deployment meant that there was increased fear of unintended production outages, as debugging and rollback can be difficult for a deployment of this size. This drove the approach of the "release train". Every two weeks, a "snapshot" of all modules was taken and promoted to be a "release candidate". This release candidate then went through exhaustive testing, which attempted to cover as large a surface area as possible. This testing stage took about two weeks. Thus, depending on when the code change was merged, it could take anywhere between two and four weeks to reach production.

As time progressed and functionalities grew, the rate of new feature contributions in Reloaded dropped. Several promising ideas were abandoned owing to the outsized work needed to overcome architectural limitations. The platform that had once served us well was now becoming a drag on development.


In response, in 2018 the CIS and ET teams started developing the next-generation platform, Cosmos. In addition to the scalability and stability that developers already enjoyed in Reloaded, Cosmos aimed to significantly increase system flexibility and feature development velocity. To achieve this, Cosmos was developed as a computing platform for workflow-driven, media-centric microservices.

The microservice architecture provides strong decoupling between services. Per-microservice workflow support eases the burden of implementing complex media workflow logic. Finally, relevant abstractions allow media algorithm developers to focus on the manipulation of video and audio signals rather than on infrastructural concerns. A comprehensive list of benefits offered by Cosmos can be found in the linked blog.

Service Boundaries

In a microservice architecture, a system is composed of a number of fine-grained services, with each service focusing on a single functionality. So the first (and arguably the most important) step is to identify boundaries and define services.

In our pipeline, as media assets travel from creation to ingest to delivery, they go through a number of processing steps such as analyses and transformations. We analyzed these processing steps to identify "boundaries" and grouped them into different domains, which in turn became the building blocks of the microservices we engineered.

As an example, in Reloaded, the video encoding module bundles five steps:

1. divide the input video into small chunks

2. encode each chunk independently

3. calculate the quality score (VMAF) of each chunk

4. assemble all the encoded chunks into a single encoded video

5. aggregate quality scores from all chunks
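The five bundled steps can be sketched as a single function, mirroring how they were coupled inside Reloaded's encoder module. This is purely illustrative: the function, data shapes, and scoring are stand-ins, not Netflix's actual code, and the "encoding" and "VMAF" here are trivial placeholders.

```python
# Hypothetical sketch of Reloaded's bundled encoding steps.
# All names, data shapes, and scores are illustrative placeholders.

def encode_video(mezzanine: list[bytes], chunk_size: int = 2) -> tuple[bytes, float]:
    """Run the five bundled steps and return (encoded video, mean quality score)."""
    # 1. Divide the input video into small chunks.
    chunks = [mezzanine[i:i + chunk_size] for i in range(0, len(mezzanine), chunk_size)]
    # 2. Encode each chunk independently (stand-in: concatenate the frames).
    encoded_chunks = [b"".join(chunk) for chunk in chunks]
    # 3. Calculate a quality score (VMAF stand-in) for each chunk.
    scores = [min(100.0, len(enc) * 10.0) for enc in encoded_chunks]
    # 4. Assemble all the encoded chunks into a single encoded video.
    assembled = b"".join(encoded_chunks)
    # 5. Aggregate quality scores from all chunks.
    mean_score = sum(scores) / len(scores)
    return assembled, mean_score

video, vmaf = encode_video([b"f0", b"f1", b"f2", b"f3", b"f4"])
```

Because steps 3 and 5 live inside the same function as the encode, recomputing quality means re-running the whole pipeline, which is exactly the coupling problem described above.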

From a system perspective, the assembled encoded video is of primary concern, while the internal chunking and separate chunk encodings exist in order to fulfill certain latency and resiliency requirements. Further, as alluded to above, the video quality calculation provides an entirely separate functionality compared to the encoding service.

Thus, in Cosmos, we created two independent microservices: Video Encoding Service (VES) and Video Quality Service (VQS), each of which serves a clear, decoupled function. As implementation details, the chunked encoding and the assembling were abstracted away into the VES.
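A minimal sketch of this decoupling, with stand-in classes and placeholder logic (none of this is Netflix's actual API): once encoding and quality measurement are separate services, quality can be remeasured against a stored encode without triggering a new encode.

```python
# Illustrative stand-ins for VES and VQS; the class names follow the blog,
# but the methods and logic are invented placeholders.

class VideoEncodingService:
    """Stand-in VES: chunking and assembly are hidden behind one call."""
    def encode(self, mezzanine: bytes, recipe: str) -> bytes:
        return f"{recipe}:".encode() + mezzanine  # placeholder transform

class VideoQualityService:
    """Stand-in VQS: scores an encoded video against its mezzanine."""
    def measure(self, mezzanine: bytes, encoded: bytes) -> float:
        # Placeholder score; a real VQS would compute VMAF.
        return min(1.0, len(encoded) / max(1, len(mezzanine)))

ves, vqs = VideoEncodingService(), VideoQualityService()
mezz = b"raw-frames"
encoded = ves.encode(mezz, "h264-1080p")   # encode once
score_v1 = vqs.measure(mezz, encoded)      # measure quality
score_v2 = vqs.measure(mezz, encoded)      # remeasure later: no re-encode needed
```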

Video Services

The approach outlined above was applied to the rest of the video processing pipeline to identify functionalities and hence service boundaries, leading to the creation of the following video services².

  1. Video Inspection Service (VIS): This service takes a mezzanine as the input and performs various inspections. It extracts metadata from different layers of the mezzanine for downstream services. In addition, the inspection service flags issues if invalid or unexpected metadata is observed and provides actionable feedback to the upstream team.
  2. Complexity Analysis Service (CAS): The optimal encoding recipe is highly content-dependent. This service takes a mezzanine as the input and performs analysis to understand the content complexity. It calls the Video Encoding Service for pre-encoding and the Video Quality Service for quality evaluation. The results are saved to a database so they can be reused.
  3. Ladder Generation Service (LGS): This service creates an entire bitrate ladder for a given encoding family (H.264, AV1, etc.). It fetches the complexity data from CAS and runs the optimization algorithm to create encoding recipes. The CAS and LGS cover much of the innovation that we have previously presented in our tech blogs (per-title, mobile encodes, per-shot, optimized 4K encoding, etc.). By wrapping ladder generation into a separate microservice (LGS), we decouple the ladder optimization algorithms from the creation and management of complexity analysis data (which resides in CAS). We expect this to give us greater freedom for experimentation and a faster rate of innovation.
  4. Video Encoding Service (VES): This service takes a mezzanine and an encoding recipe and creates an encoded video. The recipe includes the desired encoding format and properties of the output, such as resolution, bitrate, etc. The service also provides options that allow fine-tuning of latency, throughput, etc., depending on the use case.
  5. Video Validation Service (VVS): This service takes an encoded video and a list of expectations about the encode. These expectations include attributes specified in the encoding recipe as well as conformance requirements from the codec specification. VVS analyzes the encoded video and compares the results against the indicated expectations. Any discrepancy is flagged in the response to alert the caller.
  6. Video Quality Service (VQS): This service takes the mezzanine and the encoded video as input, and calculates the quality score (VMAF) of the encoded video.
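The inputs and outputs of the six services above can be summarized in a small type sketch. The service names follow the blog; every signature, field, and return value here is an invented placeholder for illustration only.

```python
# Hypothetical type sketch of the six video services' contracts.
from dataclasses import dataclass

@dataclass
class Mezzanine:
    path: str

@dataclass
class EncodingRecipe:
    codec: str        # e.g. "H.264" or "AV1"
    resolution: str   # e.g. "1920x1080"
    bitrate_kbps: int

@dataclass
class EncodedVideo:
    path: str
    recipe: EncodingRecipe

def inspect(mezz: Mezzanine) -> dict:                    # VIS: metadata + validity
    return {"source": mezz.path, "valid": True}

def analyze_complexity(mezz: Mezzanine) -> float:        # CAS: content complexity
    return 0.5  # placeholder complexity score

def generate_ladder(complexity: float, family: str) -> list[EncodingRecipe]:  # LGS
    # Placeholder "optimization": scale bitrate with complexity.
    return [EncodingRecipe(family, "1920x1080", int(4000 * complexity + 1000))]

def encode(mezz: Mezzanine, recipe: EncodingRecipe) -> EncodedVideo:          # VES
    return EncodedVideo(mezz.path + ".enc", recipe)

def validate(video: EncodedVideo, expectations: dict) -> list[str]:           # VVS
    return [] if video.recipe.codec == expectations.get("codec") else ["codec mismatch"]

def measure_quality(mezz: Mezzanine, video: EncodedVideo) -> float:           # VQS
    return 95.0  # placeholder VMAF
```

Keeping each contract this narrow is what lets an orchestrator compose them freely per use case, as described next.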

Service Orchestration

Each video service provides a dedicated functionality, and they work together to generate the needed video assets. Currently, the two main use cases of the Netflix video pipeline are producing assets for member streaming and for studio operations. For each use case, we created a dedicated workflow orchestrator so the service orchestration can be customized to best meet the corresponding business needs.

For the streaming use case, the generated videos are deployed to our content delivery network (CDN) for Netflix members to consume. These videos can easily be watched millions of times. The Streaming Workflow Orchestrator utilizes almost all video services to create streams for an impeccable member experience. It leverages VIS to detect and reject non-conformant or low-quality mezzanines, invokes LGS for encoding recipe optimization, encodes video using VES, and calls VQS for quality measurement, where the quality data is further fed to Netflix's data pipeline for analytics and monitoring purposes. In addition to video services, the Streaming Workflow Orchestrator uses audio and timed text services to generate audio and text assets, and packaging services to "containerize" assets for streaming.
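A hedged sketch of that call sequence, using trivial local stubs in place of the real VIS, LGS, VES, and VQS services (all function names, recipes, and scores are invented for illustration):

```python
# Illustrative Streaming Workflow Orchestrator: inspect, build a ladder,
# encode each rung, and measure quality. Stubs stand in for real services.

def vis_inspect(mezz: str) -> bool:
    return not mezz.startswith("bad")          # stand-in conformance check

def lgs_ladder(mezz: str) -> list[str]:
    return ["h264-540p", "h264-1080p", "av1-4k"]  # stand-in optimized recipes

def ves_encode(mezz: str, recipe: str) -> str:
    return f"{mezz}@{recipe}"                  # stand-in encode artifact

def vqs_measure(mezz: str, encoded: str) -> float:
    return 90.0 + len(encoded) % 10            # stand-in VMAF score

def streaming_workflow(mezz: str) -> list[tuple[str, float]]:
    if not vis_inspect(mezz):                  # reject bad mezzanines early
        raise ValueError(f"mezzanine rejected by VIS: {mezz}")
    results = []
    for recipe in lgs_ladder(mezz):            # one encode per ladder rung
        encoded = ves_encode(mezz, recipe)
        score = vqs_measure(mezz, encoded)     # quality data fed on to analytics
        results.append((encoded, score))
    return results

streams = streaming_workflow("title-123.mov")
```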

For the studio use case, some example video assets are marketing clips and daily production editorial proxies. The requests from the studio side are generally latency-sensitive. For example, someone from the production team may be waiting for the video in order to review it so they can decide the shooting plan for the next day. Because of this, the Studio Workflow Orchestrator optimizes for fast turnaround and focuses on core media processing services. At this time, the Studio Workflow Orchestrator calls VIS to extract metadata of the ingested assets and calls VES with predefined recipes. Compared to member streaming, studio operations have different and unique requirements for video processing. Therefore, the Studio Workflow Orchestrator is the exclusive user of some encoding features like forensic watermarking and timecode/text burn-in.
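The contrast with the streaming path can be sketched the same way: no ladder optimization, a predefined recipe, and the studio-only encoding options. As before, every name and value below is an illustrative stand-in, not the actual orchestrator.

```python
# Illustrative Studio Workflow Orchestrator: VIS for metadata, then VES
# with a predefined recipe plus the studio-only options mentioned above
# (forensic watermarking, timecode burn-in). All logic is a placeholder.

def vis_metadata(asset: str) -> dict:
    return {"asset": asset, "frame_rate": 24}  # stand-in metadata extraction

def ves_encode(asset: str, recipe: str, *, watermark: bool, burn_in: bool) -> str:
    tags = [recipe] + (["wm"] if watermark else []) + (["tc"] if burn_in else [])
    return f"{asset}@{'+'.join(tags)}"         # stand-in encode artifact

def studio_workflow(asset: str) -> str:
    meta = vis_metadata(asset)                 # lightweight inspection only
    recipe = "proxy-540p"                      # predefined recipe, no LGS call
    return ves_encode(meta["asset"], recipe, watermark=True, burn_in=True)

proxy = studio_workflow("dailies-ep01.mxf")
```

Skipping complexity analysis and ladder generation is what buys the fast turnaround the studio use case demands.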