For Amazon to survive, they needed the cloud. But they had to invent it — and creating the cloud meant overcoming obstacles fundamental to the nature of software development at the time.

The main obstacle was what developers lovingly referred to as “The Monolith.” In Monolith architecture, it was like all elements of a software system were plugged into the same outlet, and if you wanted to replace or update one, you had to unplug the whole thing — not a sustainable structure for the kind of global-scale business Amazon wanted to create.

In other words, to create the cloud, Amazon had to first reinvent the wheel — redefining standards for building and running software.

Stream Episode Two:

Listen on: Apple Podcasts | Spotify | Amazon Music | Deezer


Episode Transcript:

Cloud Atlas is brought to you by CloudZero, the cost intelligence platform that offers advanced visibility and optimization for your entire cloud environment. Eliminate wasteful spending, ship efficient code, and innovate profitably — all in one platform.  

In episode 1, we looked at the business pressures that necessitated something like the cloud. In this episode, we’ll look at the technological innovations that made it possible — and which amounted to a total reconfiguration of how web applications were built.

Andy Jassy: In the first 10 years of Amazon, we had entangled a bunch of pieces of our platform that we wished we hadn’t.

Amazon was founded in 1994. Its first 10 years were nearly the first 10 years of the web itself — and thus the first 10 years of website construction as we know it.

Erik Peterson: When people started building the very first websites in the world, nobody really knew what they were doing. And the technology was really horrible.

That’s CloudZero’s co-founder and CTO, Erik Peterson. He’s talking about the wild wild west days of the world wide web, when the first web designers were trying to use the internet to sell stuff. Like any first-timers, they made some big mistakes — which are easy to see in hindsight, but were impossible to see in the moment.

Erik Peterson: Think about it. There are no senior engineers. There are no, you know, web developers for this stuff. People are just making stuff up on the fly, trying to figure it out.

Erik Peterson: The idea was, all right. Well, I’ve built some functionality. And then, as we need to build more functionality, build more functionality on top of that, and then I’ll build more functionality on top of that. And it became this pattern that everyone lovingly called the monolith, which was just a huge giant application that services all the needs of the customer or customers.

Erik is talking about one of the most integral — and, to the untrained ear, most trivial-sounding — events in software engineering history: the transition from monolith to microservice architecture. Again, this is part of why more people don’t understand the cloud — it takes software engineering expertise to understand the terms themselves, let alone the innovations they describe. But the point is: the shift from monoliths to microservices is why we can put a ridesharing, photo-posting, or video-streaming app in our pocket. Here’s why.

Monolith architecture is incredibly limited. To understand why, let’s go back to the metaphor of building a house. Imagine if, when you built a house, every single part depended on every single other part — if one failed, the whole house failed. The kitchen sink depended on the bathroom toilet, the living room light switches depended on the HVAC unit, the storage closet depended on the garage door, and so on. In practical terms: If you ever wanted to replace something — like an old, leaky pipe — you’d risk bringing the whole house down. Every time.

Plus, at the time, every company in the world was building its own monolith. So, unlike today, when we can integrate Slack and Zoom and Google Calendar, Figma and Atlassian and Salesforce, monoliths could only operate on their own. They couldn’t communicate. As Michael Skok told us, the cloud would fix all of that.

Michael Skok: There are three things going on there. Number one is, the cloud is connecting everybody. Number two is, it’s connecting everything, meaning all the applications. And finally, the third thing is, it’s enabled people, in real time, to actually collaborate within the document on the process that they’re working on.

What you’ve got to remember is, if you go back to the world of data centers, every data center was built like a snowflake, you know, with individual services and capabilities, and there was no commonality between them. And so it would be a great idea.

I remember, in the late eighties, as an entrepreneur, trying to solve something for British Telecom in their customer services center. They wanted to be able to take faxes in and perform Optical Character Recognition (OCR) on those faxes, so they could break down what problems needed to be solved and give them to the right people in the help center, to be able to then get more organized with it.

That was seven different applications. I mean, it was just unbelievable how difficult it was to integrate this. It was a nightmare — a multi-million dollar project by the way. So we were all thrilled. But the reality was, what a pain to deliver it. Virtually impossible.

So, circa 2000, the “entanglement” issue of software development made individual monoliths cumbersome, and it prevented separate monoliths from communicating. Getting back to Amazon’s specific goals, whenever they wanted to add another feature, they had to be sure that it was configured to the exact specifications of their “monolith,” or it could crash the entire platform.

Allan Vermeulen: So, for the semi-technical or the more technical audience, the classic problem teams have is that they share a database, right, which is just an absolute disaster.

Because some team wants to change the schema of that column that represents some new idea, because they’re building a feature, gift cards or something, and they want to tag users as having a gift card. But if they change the schema, it’s going to break every other team’s software that’s using that same database.

In the monolith configuration, giving someone a gift card the wrong way could make it impossible to actually buy anything with that gift card. Or, you know, use Amazon at all.
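For the more technical listener, here’s a minimal sketch of that failure mode, assuming a toy schema (Python, with an in-memory SQLite database standing in for the shared database; the table and column names are hypothetical):

```python
import sqlite3

# One shared database that every team reads and writes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, credit REAL)")
db.execute("INSERT INTO users VALUES (1, 'Alice', 25.0)")

# The checkout team's code, written against the original schema.
def checkout_balance(user_id):
    row = db.execute("SELECT credit FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0]

print(checkout_balance(1))  # 25.0 -- works fine today

# The gift card team reshapes the shared table for its new feature...
db.executescript("""
    DROP TABLE users;
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, gift_card_balance REAL);
    INSERT INTO users VALUES (1, 'Alice', 25.0);
""")

# ...and the checkout team's untouched code now breaks:
try:
    checkout_balance(1)
except sqlite3.OperationalError as error:
    print("Checkout is broken:", error)  # no such column: credit
```

One team’s perfectly reasonable change breaks code it has never even seen. That’s the entanglement Allan is describing.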

So, the first thing companies like Amazon did was break their monolithic platforms into microservices. 

Allan Vermeulen: What we actually did is, okay, we need projects where we can take these databases and break them apart, so every team can have their own database. And yeah, I know it’s going to cost more, and there’s going to be inefficiencies there, and Oracle’s going to charge us more licenses, because that’s what they do. But we have to do it, and those are the kind of projects we took on.

Think of it like a building’s brick wall: Each brick is a microservice within their software platform, and they could add, remove, or replace bricks in the wall whenever it was necessary. And in between the bricks, holding them together and helping them communicate, was a new connective tissue called APIs. That’s short for Application Programming Interface, and all you need to know about APIs is that they’re rules for how different parts of a platform interact.
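Here’s an equally minimal sketch of the same gift card example after the split, assuming hypothetical service and function names (Python, with plain function calls standing in for what would really be network requests between services):

```python
# Gift card service: owns its own data store and exposes a small, stable API.
_gift_card_balances = {"alice": 25.0}  # private to this service; no other team touches it

def get_gift_card_balance(user_id: str) -> float:
    """The published API: the only way other services may read gift card data."""
    return _gift_card_balances.get(user_id, 0.0)

# Checkout service: never reaches into the gift card team's database directly.
def checkout(user_id: str, cart_total: float) -> float:
    """Returns what the customer still owes after applying their gift card."""
    credit = get_gift_card_balance(user_id)  # go through the API, not the database
    return max(cart_total - credit, 0.0)

print(checkout("alice", 40.0))  # 15.0
```

As long as get_gift_card_balance keeps honoring its contract, the gift card team can rename columns, swap databases, or rewrite its whole service, and checkout never notices.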

But even microservice architecture had its limitations. 

Erik Peterson: IBM and HP and all these companies in the early 2000s are leading the charge with building distributed service-oriented architectures, and all this other great stuff. But the best that they could come up with was, you know, just buy more of our hardware, and build your distributed systems on top of a monolithic hardware platform [chuckle].

In other words, while the software had evolved to a microservice model, the hardware hadn’t. Companies were still powering their software with their own monolithic hardware. To understand the issue with that, imagine if, in order to use electricity in your home, you first needed to build your own power plant. And so did your neighbor, and so did everyone else in your neighborhood, and so did everyone anywhere who wanted to watch “Love Island” or charge their phone. 

It could happen, but it would be super hard, super slow, and super expensive. And for pre-cloud software engineers, that’s how life was. If you wanted to run a software product, you had to build it and run it on your own physical servers. A server was that big, bulky thing you used to have under your desk with the fan and the disc drive and everything — the computer’s engine, basically. Remember how we used to buy CDs for software like AOL and Microsoft? We would install them on our own machines, and then run them with our own computing power whenever we opened them.

For the world’s largest businesses, imagine that dynamic, but multiplied by a million, ten million, a hundred million. As people like Michael Skok learned, if a company like Deutsche Bank wanted to deploy a new software solution, they’d have to spend an outrageous amount of money and time buying and configuring new servers. Michael faced this exact scenario when he was CEO of AlphaBlox, a software company he founded in 1996.

Michael Skok: So, one of my big customers was Deutsche Bank. We’d done a $33 million partnership with them at one stage for them to use AlphaBlox to do real-time in-line analytics, to literally look at the risk before they made a trade, which was pretty fundamental. So it made a huge amount of sense to them, because they could put, you know, not an exaggeration, billions of dollars at risk. And this is back in the late nineties. So imagine what it would be today.

It’s a great idea. It’s a great concept. But before they could even put our application into practice, they’d have to spend triple that amount with IBM — another partner of ours, who ended up buying us actually. So it’s like, it’s great that you’ve got this fantastic application, but we can’t run it on anything until we spend, you know, another $100 million on the rest of the infrastructure.

Joe Kinsella, a software engineer who would become a cloud entrepreneur, had also faced this issue.

Joe Kinsella: I would engage in annual planning where I would sit down with my finance team. I would sit down with my operations team. I would propose a multi-million dollar capital expenditure for physical infrastructure that I wanted, for data centers to support my applications for the course of a year.

We would discuss it. We would agree to it. We would then, over the next several months, purchase and provision and deploy all of this hardware infrastructure. We did this all in the hope that, at some point during the year, I might reach the peak volume I needed for my software application, which, to be perfectly honest, was almost never achieved by me or almost anyone else.

Joe Kinsella: It was a very inefficient, heavyweight, bureaucratic, capital-intensive approach to actually bringing your software to market.

For these software engineers, it took an enormous amount of time and money to get all the engines they’d need to build and run their software projects. And months and months later, who knew whether their ideas would still be viable? Who knew whether customers would already have a better solution that would make those servers irrelevant?

Microservices addressed one part of the problem: They let developers work quickly without worrying that they’d destroy the whole company. Which was great. But without addressing the hardware problem, their speed, and the company’s overall time to market, were still limited by the time it took to buy and configure physical servers.

What if, instead of buying their own physical servers, they could buy virtual servers, which they could configure in a heartbeat and which the seller would run? And what if, to go with those virtual servers, they had access to a Home Depot-like repository of infrastructure that they could buy and deploy in seconds?
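To make the idea concrete, here is a rough sketch of what renting a virtual server would eventually look like, written against AWS’s boto3 SDK (an illustration of the concept, not anything Amazon had at the time; the machine image ID is a placeholder, and you would need cloud credentials configured for it to actually run):

```python
import boto3

# Connect to the virtual-server service instead of buying hardware.
ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call replaces months of purchasing, racking, and configuring servers.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image, not a real AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Virtual server running:", instance_id)

# When you no longer need it, one more call gives it back (and stops the bill).
ec2.terminate_instances(InstanceIds=[instance_id])
```

A single API call, a few seconds of waiting, and the “server” exists: no capital expenditure, no data center.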

To go back to our building metaphor, it would be like if, instead of building a building from the ground up, you could copy and paste parts of other buildings to build your own. Don’t want to spend the time digging a foundation for your home? Copy and paste a foundation from your neighbor’s home. Don’t want to hire a team to build custom walls and doors? Copy and paste them from a home you saw in Architectural Digest.

If you could copy and paste elements from other homes, you could literally build a home in minutes. Or cafes, or apartment buildings, or sports arenas, or skyscrapers. Imagine how convenient that would be, how many other builders would want that tool, and, for the sellers of that tool, how much money they could make. 

The first step? Getting those buildings electricity.

The death of the monolith obliterated some key software engineering obstacles. But it would take much more to create the cloud — including millions of dollars of capital investment, a major bet on the Amazonian vision of the future.

Cloud Atlas is written, hosted, and produced by me, Dustin Lowman, with invaluable assistance from Natalie Jones, Greg Barrette, and many others at CloudZero. Credit also to Tim O’Keefe, our sound designer, composer, and associate producer. He made all those pretty sounds you hear in the background.

Thank you to Erik Peterson, Michael Skok, Allan Vermeulen, and Joe Kinsella for their contributions to this episode. Thanks also to CloudZero for trusting me to turn Cloud Atlas into a reality. And, of course, thank you for listening. Until next time, this is Dustin Lowman reminding you to keep your feet on the ground, and your head in the cloud.

PREVIOUS: Episode 1, The Cloud Gathers

NEXT: Episode 3, The Big Bang
