With Google Anthos, My Other Cloud Is Whatever I Want

SADA Says | Cloud Computing Blog

By Simon Margolis | Associate CTO, AI & ML

I remember the first few development projects I did in school. It was amazing when my code compiled and ran for the first time. Of course, there were logic errors, but that’s a story for another day...

The thrill quickly faded when I wanted to keep working on the same project on my personal computer at home. I copied the source files over, so why wouldn’t anything work at all? It got even worse when I sent the source to a buddy to help troubleshoot: “This thing won’t even compile!”

We’ve come a long way from the days of “well it works on my box,” and we’re never going back. With standard libraries, standard runtimes, and standard operating systems, the community has done a lot to help prevent these annoying blockers of productivity and innovation.

We then saw the rise of shared infrastructure and distributed computing, and the blocker to our success became a race for resources. This, not so different from time-sharing on early academic computing systems, called for yet another round of standardization, this time in the form of cgroups. But cgroups were just the start.
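For anyone who never poked at cgroups directly, here’s a minimal sketch of the idea, assuming a Linux host with the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup and root privileges; the group name and limits are arbitrary examples, not anything prescriptive.

```python
# Minimal sketch: capping a workload's memory and CPU with cgroup v2.
# Assumes Linux, the unified hierarchy at /sys/fs/cgroup, root privileges,
# and that the memory and cpu controllers are enabled in the parent's
# cgroup.subtree_control.
import os
from pathlib import Path

group = Path("/sys/fs/cgroup/demo-sandbox")       # arbitrary example group name

group.mkdir(exist_ok=True)                        # create the control group
(group / "memory.max").write_text("256M\n")       # hard memory cap
(group / "cpu.max").write_text("50000 100000\n")  # 50 ms of CPU per 100 ms period (~half a core)

# Move the current process into the group; from here on, its resource usage
# is accounted for and limited by the controllers configured above.
(group / "cgroup.procs").write_text(str(os.getpid()))
```

That tiny filesystem interface is the shared contract that made it possible to pack many isolated workloads onto one machine without them racing each other for resources.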

The Rise of Containerization

With heavyweights like Google, RedHat, Amazon, Microsoft, and VMware contributing to the Kubernetes project, I knew early on that containerization was the future and that container orchestration would be a key element of that future. Sure, at first only organizations operating at the scale of those behemoths needed the power, scale, flexibility, and efficiencies that these technologies brought, but it was only a matter of time before the democratization of data, the falling cost of computing (and storage!), and the ubiquity of machine learning encouraged all organizations to operate in a similar manner.

Just as standardization helped us jump from “pets” to “cattle,” containerization is not going away. Today’s developers have to justify the greater management overhead and reduced automation that come with forgoing abstracted services such as container orchestration platforms. We see every major public cloud provider offering various flavors of container-orchestration-as-a-service, and portability is just one of the myriad benefits.

Many in the community have been exposed to the benefits of containerization at this point, and most have had a chance for hands-on experience. Portability is key, scale is important, and my personal interest centers on microservices-oriented architecture, which enables development at breakneck speed. Happier developers building better applications while making more efficient use of their time, and of the underlying compute resources, is a win-win-win as far as I see it.
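To make the microservices point concrete, here’s a minimal sketch of the kind of small, single-purpose service I have in mind, using nothing but Python’s standard library; the endpoint and port are arbitrary examples, and in practice you’d package something like this into a container image and hand it to an orchestration platform rather than run it by hand.

```python
# A tiny, self-contained "microservice": one endpoint, no shared state,
# trivially packaged into a container image and scaled horizontally.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":                  # arbitrary example endpoint
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


if __name__ == "__main__":
    # Listen on all interfaces so the container runtime can map the port.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Services this small are exactly why the orchestration layer matters: one of them is trivial, but hundreds of them only stay manageable when the platform handles the scheduling, scaling, and networking for you.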

And even better, we’re seeing fantastic innovation in tools and services that simplify managing and deploying these microservices-oriented applications en masse, such as Google’s Cloud Run and App Engine Flexible Environment. The focus now includes, but extends beyond, the developer. These tools are making everyone more efficient, and thus, more enabled.

There Will Never Be "One Cloud to Rule Them All"

I was exposed early in my career to the power of cloud computing and immediately recognized how it enabled me to try new technologies and, most importantly, new ideas. Today’s developers are likewise able to push far beyond the familiar and experiment with ideas enabled by these ever-emerging technologies. The barriers to innovation and experimentation are dropping at an unprecedented pace.

That said, it’s important to recognize that, in this seeming computing renaissance, there are still paths that may eventually lead to undesirable outcomes for the future of this community-turned-industry.

Many of the clients I work with have, with good reason, a fear of vendor lock-in. We’ve seen this thread run through the technology space again and again, as recently as the early “cloud wars.” Similarly, there is the fear that getting locked into a given technology means a rapid accrual of technical debt, a concept as loathed as any in today’s landscape. Admittedly, I have always been, and still am, a proponent of technologies like Google’s App Engine and AWS Lambda. This is what initially drew me to cloud computing: I don’t have to pay any mind to any infrastructure component at all. This is the ideal, right?

As it turns out, not always. Today’s cloud world is a multi-cloud world. Much like the paradigm shifts before it, this too is here to stay. There will never be “One Cloud to Rule Them All,” regardless of how badly some providers would like there to be. In this multi-cloud environment, I’ve had to accept that proprietary, truly serverless platforms may not be the endgame for all of the world’s computing. I may have a data lake in GCP because of the vendor-specific data tools I employ, while my application backend lives in Lambda. I’m now stuck paying egress fees and dealing with potential performance implications simply because I want to leverage the best possible technology for each task. I could, of course, replatform my data lake on AWS or my backend on GCP, but that would be a significant undertaking, and it would be done in spite of, not because of, the best technology.

Google Anthos: Embracing the Paradigm Shift to Multi-Cloud

This is why I’ve become so excited about Anthos. It has the potential to solve the “it works on my cloud” problem for the future of multi-cloud computing. Just as my Docker container will run anywhere I can find a Docker runtime, my entire infrastructure-application ecosystem can run anywhere I have an Anthos “runtime.” I’ll have the portability to run my environment wherever it makes the most sense, whether that is on my own hardware (for compliance, performance, data residency, etc.), GCP, AWS, or Azure. The fear of vendor lock-in goes away in multiple regards. I can deploy the Anthos environment wherever I want, so I’m not dependent on a single provider. Furthermore, should I wish to leave Anthos entirely, I can take my Docker and Kubernetes constructs and rebuild them elsewhere with minimal effort.
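As an illustration of that portability, here’s a sketch using the official Kubernetes Python client to push one and the same Deployment to clusters running in different clouds, selected only by kubeconfig context. The context names, image, and namespace are hypothetical placeholders, and I’m assuming the clusters, Anthos-managed or otherwise, are already reachable through your kubeconfig.

```python
# Illustrative sketch: the same Kubernetes Deployment applied to clusters in
# different clouds, selected only by kubeconfig context.
# Assumes `pip install kubernetes` and that the contexts below already exist.
from kubernetes import client, config

CONTEXTS = ["gke-prod", "eks-prod"]  # hypothetical kubeconfig context names

# One Deployment definition, reused verbatim for every target cluster.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-api"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-api",
                        image="gcr.io/my-project/hello-api:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

for ctx in CONTEXTS:
    # Each context may point at GKE, EKS, AKS, or an on-prem cluster;
    # the manifest itself never changes.
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed hello-api to context {ctx}")
```

The point isn’t the specific client library; it’s that nothing in the manifest knows or cares which cloud it lands on.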

This significantly expands the options and tools available to developers, CREs, and operations teams. Scaling can span clouds, high availability can take on new meaning, data can reside where it’s most needed or required, and specific services can sit exactly where they need to be, following the ever-expanding “edge.” Disaster recovery strategies can be reconsidered with lower cost and greater reliability. Machine learning can even be applied to the services themselves, determining how to run them in the most cost-efficient, performant, and reliable way without sacrifice or compromise.

Sound interesting?

The thing is, this isn’t even what excites me the most. What makes me most optimistic for the future of this community is the prospect of building services that can be “installed” on a cloud-ready “operating system” such as Anthos. When I consider that Google leverages containers and container orchestration to deliver applications like Gmail, YouTube, BigQuery, and Compute Engine, I start to dream about what the future could hold. Can we one day deploy BigQuery on Anthos on AWS so that it can query the data stored in Redshift with no egress cost and no latency hit? Could I launch GCE inside a client’s datacenter to give them the flexibility and power of Google innovations like Live Migration without concern for data residency, compliance, or capitalization?

I believe that in the near term we will see a future where a team can run the best service, leveraging the best technology, with the lowest cost and highest performance, wherever it makes sense to do so at any given moment. If you love Google BigTable but need to take advantage of your unused AWS RIs, simply deploy Anthos, install BigTable, and start loading data.

We’re not there today, but with the pace of innovation we’ve seen, and the exponential leaps that standardization has brought to our community, I’m extremely optimistic about what the future holds, and what the next generation’s products and services will solve for when efficiency, portability, and innovative technology are simply the defaults.

