This is how the architecture behind web applications has grown more complex over the last 26 years



Twenty-six years ago, in 1995, an event took place that revolutionized the nascent web development sector: the birth of the LAMP stack; that is, the combined use of Linux systems equipped with the Apache server, MySQL databases and the PHP interpreter.



The problem is that this stack required high memory and CPU usage in an age when both resources were still measured in megahertz and megabytes. Ben Johnson, creator of Litestream, recalls this era and the sector's subsequent evolution on his blog, and it leads him to ask a question:




"But, if Moore's Law promised us a better future in which computers became exponentially faster [...], Why do we need more computers than ever?".








The complexity starts to grow



What Johnson is talking about is the rise of n-tier architecture, which led to the physical separation of layers between the client, the application server and the data server, seeking to ensure that the slow processing of PHP or Ruby did not affect the performance of the SQL databases.



This n-tier architecture sounds simple at first, but it contains hidden complexity:




"On a single machine, we could cache our server to speed up requests, but now the data is shared between multiple machines, so we need to add a server memcached or Redis to share cached data. "



"Database servers easily get overloaded with multiple simultaneous connections, so we have to add intermediate services like PgBouncer".



"If we have events in our system that must be communicated to all of our nodes, then we need a cluster of machines Kafka".




Definitely, "we find a whole fleet of machines to manage".



Then virtualization and containers arrive



But while everyone was panicking over the looming Y2K bug, a new revolution was taking shape in the web architecture industry: virtualization.



According to Lawrence Edmonson, Squarespace's chief engineering officer, "virtualization solved many of the dotcom bubble scaling problems and spawned an entirely new Amazon business unit (Amazon Web Services) in 2006."



Thus arrived various virtualization technologies (VMware, VirtualBox) and tools that complement them (such as Chef and Vagrant). Then came containers, with Docker, and orchestrators such as Kubernetes.












These are all useful tools that have moved web development forward, but Johnson points out their downside:




"All of these layers slow us down from our work of writing software systems that solve real problems."



"What started as a simple two-tier software system has grown into a giant with a dozen layers of complexity."




Add to that another problem, one that has precisely helped fuel the rise of Kubernetes... the pursuit of uptime as a goal in itself:




"Solutions like Kubernetes tout the benefits of zero-downtime deployments, but ignore that their inherent complexity causes availability issues."








Complexity has generated complexity



And he sums it up like this: "Complexity begets complexity." A user commenting on Johnson's post on Hacker News delves into that idea:




"I see that the complexity is due to the replication of the many operating system services that applications require. As a result, a containerized ecosystem will only reinvent the wheel but complicate it even more.



And this is only spreading like a cancer: non-Kubernetes infrastructures are now abandonware."




The conclusion is that a series of architectural decisions made over the last quarter of a century explains why, despite the huge increase in computing power, our applications need more machines behind them than ever in order to function.