The evolution of computing has been marked by a series of paradigm shifts, from mainframes to personal computers and now the cloud. Currently, the most common and reliable way to deploy web infrastructure is to become a paying customer of a large company like Amazon or Google and write infrastructure as code to launch a constellation of interoperable services on their machines. This approach ensures that the infrastructure you create is secure, scalable and efficient, and is the preferred choice for many organizations around the world.
The current dominance of the cloud hyperscalers, which hold the lion's share of global computing, is not a natural monopoly but a temporary state. The open source community is working tirelessly on a decentralized alternative that could revolutionize the industry once again. A decentralized cloud would let users move freely between service providers, breaking the chains of vendor lock-in. That would increase price competition and give companies real leverage to negotiate terms with their cloud providers. Today's deep entanglement with, and dependence on, a single entity would become a thing of the past.
Linux lessons
The potential of a decentralized cloud is best understood by examining the history of the Linux operating system. It took roughly a decade for the ideas of Unix, developed at Bell Labs, to reach hobbyists through Minix, and another decade for Minix to inspire Linux. And finally, after one more decade, Linux was widely adopted by businesses, becoming the obvious choice for developers.
There is no denying that open source Linux currently dominates the market, running on roughly 80% of public servers and effectively 100% of the world's top supercomputers. It is the default choice for developers launching an EC2 (Elastic Compute Cloud) instance, Amazon's service for running applications in the AWS public cloud.
Linux's success can be attributed to the power of cumulative gains achieved through shared and open development practices. Developing an operating system from scratch is a monumental and complex task that requires millions of hours of specialized work, research and testing. However, after years of incubation fueled by government funding and hobbyist contributions, Linux finally reached a usable state and the benefits began to multiply.
Open source operating systems have become an indispensable part of modern technology. The Linux kernel, the heart of the operating system, serves as a common foundation that users collaborate on and continually improve. Every user of the Linux kernel can identify and report the bugs they find, and some even contribute patches to fix them. It's safe to say that open source codebases are incredibly resilient and almost impossible to eliminate.
Supporting open source software is undeniably cheaper than building proprietary alternatives, because sharing a common codebase costs far less than creating and maintaining a custom operating system. The cost of Linux to an enterprise is minimal: it can be expressed as a few developers contributing upstream, or as a support plan, which is particularly beneficial for companies without in-house kernel expertise.
On the other hand, the cost of creating and maintaining a custom operating system is exorbitant, and the controlling entity must cover it alone over the system's entire lifetime. That requires high-priced consumption to make the business economically viable. Multiple closed operating systems competing with each other must each finance their own development and find sufficient market share to recoup their substantial investments, which largely explains the current state of the market.
The cloud is essentially another operating system
The cloud is more than just a tool for abstracting away when and where tasks run, what resources they use, and how they interact: it is another operating system in itself. Just like operating systems, clouds are very sophisticated resource managers, schedulers, and security providers. This is not just running a program on your computer or standing up an open source Kubernetes cluster; the cloud operates at a much larger scale. AWS, GCP, Azure and DigitalOcean are the mainframes of our time, each managed independently within its own company. This is the reality of the cloud: a technology that has revolutionized the way we think about computing and data management.
It's essential to remember that open source codebases are nearly indestructible because their knowledge and power is stored in git repositories rather than in human processes that are prone to failure. OpenStack, developed by Rackspace and NASA in the 2010s as an open source cloud stack capable of turning any data center into its own AWS, is still under active development by companies, primarily in China. While some claim it is dead, that is far from the case. As a result, open source competitors have a significant advantage and a high probability of success, since they can always be reborn even after their original business model has collapsed.
Correcting the mistakes of our previous attempt
The failure of OpenStack was driven by the selfish interests of companies that insisted on shipping their own software distributions, producing a fragmented, competitive market rather than a collaborative one. Each major vendor released its own OpenStack distribution, deepening the fragmentation and fierce competition, which hampered progress and development. The lack of standardization across distributions made cloud deployment and management inconsistent and difficult for users. On top of this, OpenStack's complexity made it hard to install and operate, especially compared to the polished, user-friendly public cloud offerings from Amazon, Microsoft, and Google.
Without a strong central governing body, it becomes daunting to promote a cohesive vision and rally the community around shared priorities. Unfortunately, many companies that initially supported OpenStack reduced their investments or pulled out completely. This time, however, we have a secret weapon: nodes, consensus, block rewards, public goods, and ecosystem-alignment research. After 15 years of blockchain experience, node distribution is a largely solved problem, and standardization is a problem we can overcome.
We need to determine the right public goods mechanisms and align users to unite people around protocols. This would put an end to endless forks, competition and defection. Governance and shared roadmaps are issues we are continually working to tackle, and we are making real progress. The shared state and shared value of blockchains bind communities together and push them to collaborate in ways that freely forkable open source repositories simply cannot. With financial primitives and collaboration tools almost ready to go, we already have a promising start. But we must keep working hard and moving forward to achieve our goals.
A journey to a successful outcome
Decentralized cloud infrastructure is not a question of if, but when. We are closer than we think to decentralized cloud commercialization, which could bring hundreds of billions of dollars to the ecosystem if we capture the market the way Linux did. Cloud markets are growing at an annual rate of 11%, and AI adoption can only accelerate this trend. Next to that, Bitcoin ETFs look like lunch money.
However, some gaps are holding us back from winning right now, and the area where we are lagging most is smart product decisions. The current state of Web3's attempts at decentralized cloud services is questionable, with limited product-market fit. Most people use these protocols only to play with their token incentives, which is unfortunate.
We need to step up and compete with a mature and incredibly well-funded industry. Merely guaranteeing storage, compute, or delivery at CDN speed, in technical or theoretical terms, is not enough. That alone will not attract paying users, and when token inflation runs out, the project fails.
We need to design hybrid centralized/decentralized SLAs. We need to secure them with trustworthiness incentives, security attestations, zero-knowledge proofs, fully homomorphic encryption, fraud proofs, governance protocols, and more. It is crucial to combine our native incentive structures with the monitoring, compliance and security technologies of Web2 and the cloud hyperscalers.
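One small building block of such a hybrid SLA could be signed availability attestations that a client verifies before releasing payment. The sketch below is a toy illustration under invented names: HMAC with a shared secret stands in for the public-key signatures, fraud proofs, or zero-knowledge proofs a real protocol would use.

```python
import hmac
import hashlib
import json

# Toy attestation scheme (all names hypothetical): a provider signs an
# uptime report, and the client verifies the tag before paying out.
SECRET = b"demo-shared-key"  # placeholder only; never hardcode real keys

def sign_report(report: dict) -> str:
    """Provider side: produce an authentication tag over a canonical report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_report(report: dict, tag: str) -> bool:
    """Client side: constant-time check that the report was not tampered with."""
    return hmac.compare_digest(sign_report(report), tag)

report = {"provider": "node-42", "epoch": 1024, "uptime_pct": 99.95}
tag = sign_report(report)
print(verify_report(report, tag))                           # True: untampered
print(verify_report({**report, "uptime_pct": 100.0}, tag))  # False: altered
```

The design point is that the verification logic, not a trusted dashboard, decides whether the SLA was met; everything else in the paragraph above (attestations, proofs, governance) hardens this same check against a dishonest signer.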
Most of us in this segment have prototypes and MVPs, but now we need to do the hard part: compete with incumbents by working with users and carefully adapting to their needs. What will attract paying users is the product iteration cycle, evolving our offerings to meet their needs and marketing them the way a Web2 cloud company would.
We need to listen to user feedback and use reward mechanisms carefully. Fortunately, the Web3 community is almost endlessly patient with testing and feedback, because we all want (need?) this to work. Take a close look at projects like Spheron Network, Estuary, and ArDrive and ask yourself if you would choose them over Dropbox, especially if your job depended on them. Now think about how much work still needs to be done to get there.