At my first tech industry job, I built servers. Ordered the chassis, mainboard, hard drives, RAM and CPU. After I’d built a few, the job was straightforward: analyse the requirements of the software a server would run, match them to hardware. Order. Assemble. I had the whole process down to just a couple of weeks. Then HP and Dell came along with their rack-mounted server lines. Commodity hardware? But this hardware isn't matched nearly as well to the application! And it's more expensive! Shit works, why change?
Soon enough I was running my own Dell servers out of a local ISP. Cheap rack rental, and I bought data directly from the ISP. But soon
enough, bigger players in the data-center space like Pipe and Equinix came along offering rentable servers with redundant power and data. Commodity servers? But I have to pay monthly for it! And I don't even own it! It's more expensive, and why would I pay for redundancy that barely matters! People understand that things go down now and then, right? Shit works, why change?
A few years later, I was running a large e-commerce application at scale out of a European colo data center. We had managed to automate a lot, and we deployed at a reasonable frequency: once or twice a week. The data center staff were fantastic, and would provision new hardware with just a few days' lead time. Then AWS came along. VM instances running on shared hardware? The performance is terrible compared to bare metal! Plus it's expensive. OK, so you get fast provisioning, but we only really need to scale up hard for the Christmas sales anyway. Shit works, why change?
Fast-forward. I’m at 99designs, matching designers and clients around the world, and deploying to EC2 daily. We even have automated testing with Jenkins! Nice, this is it, right? Then Docker comes along. Run your app in a container, free yourself from OS dependencies. So the OS is commoditised? OK, this really makes CI/CD easier, which sounds nice in theory, but do we really need to be deploying more frequently? Containers add indirection and another layer of complexity, and increase the security surface area! Shit works, why change?
Jump to the present. Today at 99designs we have 25 microservices on ECS, we deploy about 50 times per day, and a new design is uploaded to our platform every 2 seconds. Amazing! All the shiny toys: Docker, CI/CD, CloudFormation, ECR, S3, CloudFront! Run the same containers in dev, CI AND production! But wait, now I’m hearing about Lambda. A commodity runtime? Right, so this means no more servers to manage. And no containers. But do we really need this? Vendor lock-in! Cold starts! Shit works, why change?
Lambda is more than just a new service. Yet again it's the beginning of a new paradigm, and the same forces that have caused disruption regularly over the last 20 years are at work. The key commonality running through these disruptive changes is that more and more of the underlying stack becomes commoditised.
The FaaS model is more secure. It scales automatically. There are no servers to manage. There is unprecedented transparency into costs. It enables new tooling and new workflows with less code and standardised storage services. It frees us from worrying about which "service" a particular piece of code should live in. And it does all this in a consistent, repeatable way, at scale.
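To make the "less code" point concrete, here's a minimal sketch of what a function looks like in this model. It's a hypothetical AWS Lambda-style handler in Python; the function name, event shape, and response format are illustrative assumptions, not our production code:

```python
import json

def handler(event, context):
    """Hypothetical FaaS handler: the platform invokes this function
    per request, so there is no server process for us to manage."""
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    # An API-Gateway-proxy-style response shape (illustrative).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally it's callable like any function; in production the FaaS
# runtime supplies the event and context arguments.
print(handler({"name": "99designs"}, None))
```

The whole deployable unit is a function plus its dependencies; scaling, patching the host, and process supervision are the platform's problem, not ours.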
At every step in this chain we've had to upskill people, build new tools, and adapt. And at every point of disruption, the next step ahead felt uncomfortable.
The reality is that a new startup today will use FaaS. That also means they will launch into production today. They will move faster, they will be more productive. They won't need complicated tooling like Kubernetes or even today’s concept of DevOps.
How does 99designs as a company stay competitive? We adapt and change as the technology evolves. Those who don't make these leaps are eventually left behind.