The Rise of Heterogeneous Deployments: Hybrid and Multi-Cloud Solutions in GCP

SADA Says | Cloud Computing Blog

By Simon Margolis | Associate CTO, AI & ML

Until recently, the idea of a multi-cloud architecture was more of a pipe dream than a viable option for your workloads. In theory, it sounds nice to spread risk across more than one infrastructure for business continuity, avoid vendor lock-in in the wild west of IaaS and PaaS speculation, and take tactical advantage of the strengths of one platform over another. In practice, however, monolithic application architectures don’t lend themselves well to being split across infrastructures, and VMs are too heavy and clunky to port easily from one cloud provider to another.

With the maturation of microservices, the increasing adoption of containerization, and general public cloud sentiment moving toward an infrastructure-agnostic future, heterogeneous deployments will be increasingly in vogue in 2018. Google Cloud Platform is leading the way. GCP, along with partner services, is embracing a future in which companies can split workloads across on-premises, competitor cloud, and Google Cloud infrastructures.

Primitive Heterogeneous Environments: Isolated Applications and Business Processes

Running a multi-cloud or hybrid cloud environment is not quite as daring as it sounds. In fact, 82% of organizations are already running some type of heterogeneous environment. Think of the old-school company that knows about the public cloud but is reluctant to give up its on-premises or private cloud environment, or perhaps lacks the IT support for a full-scale migration. It might house all highly sensitive workloads privately but place a few departments (HR or R&D, for example) in a public cloud. While the business is categorized as hybrid cloud, the workloads are running in parallel rather than converged.

Google made a huge statement in 2017 by partnering with Cisco to target on-premises companies straddling the future and the past with a truly hybrid cloud service suite. Cisco, while out of the game in the public cloud space, has the distinction of being a stalwart in most private data centers. With this partnership, Google hopes to bring the best of container orchestration and distributed workloads to companies dead set on keeping sensitive workloads close to home.

Limitations to Multi-Cloud Portability: The Airbnb Analogy of Cloud Infrastructure Demand

The demand for multi-cloud solutions (not to be confused with hybrid cloud, which typically refers to a split between public and private cloud) can be explained with a simple analogy. Say you planned a trip to Madrid with friends, and you booked an Airbnb in the center of the city. There will be some days when you travel in traffic to the opposite end of the city and your housing arrangement is very inconvenient. There might also be days when certain, way cooler Airbnbs are available at a cheaper price, but only for a short window. Economically, it would make sense to book the best, cheapest, most convenient Airbnb for each day.

But the hassle of booking several places, packing and unpacking your bags, and communicating with hosts in your broken middle-school Spanish makes all of this unfeasible.

This is the situation facing most IaaS customers today. There are certain AWS workloads, for instance, that would run more cheaply and faster on GCP. You might want to take advantage of a PaaS service only available through GCP, such as Google Cloud Functions, Google Cloud TPUs, or Google’s Machine Learning Engine. You may also simply want a backup plan for service outages. As previously mentioned, though, porting your application environment from one provider to the next can be laborious and confusing. But as standards solidify around container orchestration, microservices architecture patterns, and service discovery, the promise of true portability has arrived.
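To make the GCP-only PaaS point concrete, here is a minimal sketch of an HTTP-triggered Cloud Function in Python. The function name, runtime version, and response text are illustrative placeholders, not details from this article.

```python
# main.py - a minimal HTTP-triggered Google Cloud Function (Python runtime).
# Deployed with something like:
#   gcloud functions deploy hello_gcp --runtime python39 --trigger-http
# The function name and response are hypothetical examples.

def hello_gcp(request):
    """Handle an HTTP request; `request` is a Flask Request object."""
    name = request.args.get("name", "world")
    return f"Hello, {name}, from a GCP-only serverless runtime!"
```

The appeal is that the platform handles provisioning and scaling; the catch, as the paragraph above notes, is that the service only exists on GCP, which is exactly the kind of pull that drives multi-cloud demand.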

Containers and Orchestration: Making Hybrid and Multi-Cloud Manageable

The primary modern purpose of containers is to package microservice applications into infrastructure-agnostic deployment units. Containers are lightweight and portable (the software ones, anyway; have you ever tried to pick up a shipping container? Takes work). In theory, this all sounds great. In practice, however, most applications that span hundreds of containers and thousands of workloads require an orchestration layer, complex load balancing, and service discovery. Google is betting on Istio, an open source, secure service mesh that works across clouds, as well as upcoming GCP integrations with Pivotal Cloud Foundry (PCF, which lets you move workloads across clouds in a matter of minutes), as a way of telling naysayers: “You can actually distribute your workloads now, control and secure them all in one place, and it’s actually not that hard.”
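As a rough illustration of the “infrastructure-agnostic deployment unit” idea, the sketch below uses the official Kubernetes Python client to push the same container Deployment to two clusters, one on GKE and one elsewhere, simply by switching kubeconfig contexts. The context names, namespace, and image are hypothetical placeholders, and this only covers the deployment step, not the Istio or PCF layers mentioned above.

```python
# Sketch: deploy one container image to two Kubernetes clusters (e.g., a GKE
# cluster and an on-prem or other-cloud cluster) by switching kubeconfig
# contexts. Context names, namespace, and image are hypothetical.
from kubernetes import client, config


def build_deployment(name: str, image: str) -> client.V1Deployment:
    """Build a simple Deployment object that is identical on every cluster."""
    labels = {"app": name}
    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels=labels),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )


def deploy_to_context(context_name: str, deployment: client.V1Deployment,
                      namespace: str = "default") -> None:
    # Load credentials for the target cluster from the local kubeconfig.
    config.load_kube_config(context=context_name)
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)
    print(f"Created {deployment.metadata.name} in context {context_name}")


if __name__ == "__main__":
    deployment = build_deployment("hello-app", "gcr.io/my-project/hello-app:1.0")
    # The same deployment unit lands on both clusters unchanged.
    for ctx in ["gke_my-project_us-central1_prod", "onprem-cluster"]:
        deploy_to_context(ctx, deployment)
```

The point of the sketch is that the Deployment object itself carries no cloud-specific details; only the kubeconfig context changes per cluster, which is what makes containers the portable unit in a heterogeneous setup.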

But wait: why would one cloud provider like Google embrace a multi-cloud future? Wouldn’t that be like a restaurant telling customers it’s cool if they just order a side salad and leave instead of a whole meal? (I mean, you remember how reluctant Apple was to make anything work with Windows, right?) There are two main reasons. First, Google monetizes GCP through a plethora of service offerings that each have their own revenue streams. Second, Google is confident that giving new clients a foot in the door to GCP will open more doors and revenue opportunities down the line.

Plus, we’re talking about the same company that open sourced Kubernetes and TensorFlow. Google just likes sharing.

