The Real Lesson from Prime Video’s Microservices → Monolith Story
Amazon Prime Video’s Video Quality Analysis (VQA) team built a tool to monitor each video stream for quality issues. A member of the team wrote a blog post highlighting the cloud architecture decisions that allowed them to scale their service while reducing costs by 90%. The TL;DR: they switched from an initial serverless, microservice architecture to a more monolithic cloud architecture.
As AWS has been pitching serverless and microservice architectures for years, the internet enjoyed sounding off on this article. While it would be fun to debate whether serverless/microservice architectures are inherently good or bad, we find that such a discussion misses the tangible, immediately applicable lessons for designing our own infrastructure.
The Real Lesson
The real takeaway is that any potential technical solution must be evaluated against your specific use case, needs, and constraints. No solution is perfect; the tradeoffs and benefits of each option should be weighed.
By building a cloud architecture optimized for their use case, and not for checking off buzzwords, the Prime Video team saved money while delivering a better service. This takeaway is so simple that you may be asking what the point of this post is, but as always, execution itself is the tricky part. Furthermore, it’s worth emphasizing this key lesson when so much commentary has been devoted to debating the merits of microservices.
It is easy to imagine the Prime Video team going through a planning exercise while evaluating their cloud architecture, and asking:
1. What is the customer problem we are solving? We need robust, always-on quality monitoring across our video streams so that we can identify issues and improve the customer’s streaming experience.
2. What are our target metrics for this solution? Service uptime, coverage of available streams, and cloud computing spend.
3. What restrictions exist for potential solutions? Technical knowledge, budget, time, etc.
All three of these questions apply to any engineering team, and help guide solution development. Furthermore, when evaluating cloud architecture patterns, we can ask more specific questions applicable to virtually any organization:
Architecture Evaluation Checklist
- 1. What is the usage level and scale for this service? If the service runs at an enormous scale and is utilized constantly, it might make sense to be wary of serverless. Why? Because the major cloud providers charge a slight premium on compute for the added value of abstracting away servers. For smaller applications and use cases with highly variable consumption, this is an excellent trade-off vs. the cost of managing and maintaining servers or a K8s cluster. At Prime Video’s scale, however, the serverless premium was enormous.
- 2. How are individual services communicating/sharing data? Are there significant reads and writes of data going on? If so, your bill may get hit with orchestration and networking fees, as we saw in the Prime Video case study. On the other hand, if communication is through message queues or pub/sub mechanisms, then this may be a less significant cost component of a potential serverless architecture.
- 3. How do you balance system complexity vs. server complexity? Coordinating a wide array of serverless compute in a microservices architecture can lead to a higher level of complexity than a more monolithic design. Of course, the latter has servers that are more difficult and expensive to manage and scale. Every organization needs to decide for their own use case which of these costs is more tolerable.
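The usage-level question above can be made concrete with a back-of-the-envelope model: serverless cost scales roughly linearly with invocations, while provisioned compute is roughly flat until it saturates. The sketch below uses purely illustrative prices (not quoted cloud rates) to find the crossover point; the specific numbers and function names are assumptions for the example.

```python
# Illustrative cost model: at what request volume does always-on provisioned
# compute become cheaper than per-invocation serverless? Prices are assumed,
# not real cloud rates.

SERVERLESS_COST_PER_MILLION_INVOCATIONS = 5.00  # assumed, bundles compute time
PROVISIONED_MONTHLY_COST = 300.00               # assumed flat cost of a few instances


def monthly_serverless_cost(invocations_per_month: float) -> float:
    """Serverless spend grows linearly with usage."""
    return invocations_per_month / 1_000_000 * SERVERLESS_COST_PER_MILLION_INVOCATIONS


def monthly_provisioned_cost(invocations_per_month: float) -> float:
    """Provisioned spend is flat (ignoring saturation/scale-out for simplicity)."""
    return PROVISIONED_MONTHLY_COST


def break_even_invocations() -> float:
    """Monthly volume at which the two cost curves cross."""
    return PROVISIONED_MONTHLY_COST / SERVERLESS_COST_PER_MILLION_INVOCATIONS * 1_000_000


if __name__ == "__main__":
    for volume in (1_000_000, 60_000_000, 500_000_000):
        s = monthly_serverless_cost(volume)
        p = monthly_provisioned_cost(volume)
        cheaper = "serverless" if s < p else "provisioned"
        print(f"{volume:>12,} invocations/mo: serverless ${s:,.2f} vs provisioned ${p:,.2f} -> {cheaper}")
    print(f"break-even at roughly {break_even_invocations():,.0f} invocations/mo")
```

Under these assumed prices, a low-traffic service is far cheaper serverless, while a constantly-busy one favors provisioned compute; a real analysis would also fold in the networking and orchestration fees from question 2 and the operational overhead from question 3.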
The debate over serverless microservices versus a more monolithic structure misses the forest for the trees. Any potential technical solution must be evaluated against your specific use case, needs, and constraints. The best framework, programming language, cloud computing resource, etc. for your use case…is the best one for your use case, not the one that is most “in vogue”. The Prime Video team simply chose an architecture that works quite well for their use case. For teams with lower usage demands than Prime Video, a serverless architecture’s benefits may greatly outweigh the costs.
dragondrop.cloud’s mission is to automate developer best practices while working with Infrastructure as Code. Our flagship OSS product, cloud-concierge, allows developers to codify their cloud, detect drift, estimate cloud costs and security risks, and more — while delivering the results via a Pull Request. For enterprises running cloud-concierge at scale, we provide a management platform. To learn more, schedule a demo or get started today!