Why Google Cloud is perfect for GenAI developers

SADA Says | Cloud Computing Blog

By Simon Margolis | Associate CTO, AI & ML

In my role as Associate CTO at SADA, I help developers from a wide range of businesses navigate the latest AI and ML advancements. The ecosystem has been in constant flux, but the breakthroughs continue to impress. Just yesterday, I had the uncanny experience of listening to a podcast about an article I wrote, a podcast generated entirely by Google’s NotebookLM tool.

Despite these paradigm-shattering breakthroughs, some suggest that Google’s AI efforts and the broader Google Cloud Platform have fallen short of the competition. I’d like to offer a counterargument to these claims based on my experiences working with a wide range of customers.

Google Cloud: a tested platform for AI development

Comparing model-builders like Anthropic, AI21 Labs, and OpenAI to platform providers like Google Cloud is not an apples-to-apples comparison. While Google Cloud also produces first-party models like gemini-1.5-pro and code-bison-32k, it is not just a model developer.

Rather, Google Cloud has produced a massive platform to facilitate the development of AI-based applications and workflows across numerous modalities, ranging from bespoke model creation to low- and no-code platforms for creating cutting-edge generative AI applications.

AI flexibility 

While Google’s models continue to be second to none for their use cases, it’s also important to note that Google Cloud customers have the freedom to use cutting-edge platforms like Vertex AI and Agent Builder with non-Google models from the Model Garden.

This is a win-win: customers can make use of Google’s game-changing platforms for managing, using, fine-tuning, and constraining models while retaining the choice and flexibility to select the model that best suits their needs.

I can build an application on Google Cloud Platform that uses Meta’s Llama 3.1 for some tasks, Google’s Gemini 1.5 Pro for others, and AI21 Labs’ Jamba model for others still. I can do this all in one place, with one console and one set of credentials, and receive one consolidated bill at the end of the month. I’m not dealing with authentication across platforms and vendors (or the security implications of doing so) and can focus on the core logic of my application.
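
To make that concrete, here’s a minimal sketch of what the pattern looks like. It assumes a Google Cloud project with the Vertex AI API enabled and Application Default Credentials already configured; the project ID, region, Model Garden endpoint, and Llama model ID below are illustrative placeholders, not guaranteed values.

```python
# Minimal sketch: two models, one project, one set of credentials.
# PROJECT, REGION, and the Llama model ID are assumptions for illustration.
import vertexai
from vertexai.generative_models import GenerativeModel

import google.auth
import google.auth.transport.requests
import requests

PROJECT = "my-project"      # hypothetical project ID
REGION = "us-central1"

vertexai.init(project=PROJECT, location=REGION)

# 1) A first-party Google model, called through the Vertex AI SDK.
gemini = GenerativeModel("gemini-1.5-pro")
summary = gemini.generate_content("Summarize our Q3 release notes in three bullets.")
print(summary.text)

# 2) A third-party Model Garden model (Llama 3.1 served as a managed API),
#    called with the same project credentials -- no separate vendor account.
creds, _ = google.auth.default()
creds.refresh(google.auth.transport.requests.Request())

resp = requests.post(
    f"https://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}"
    f"/locations/{REGION}/endpoints/openapi/chat/completions",
    headers={"Authorization": f"Bearer {creds.token}"},
    json={
        "model": "meta/llama-3.1-405b-instruct-maas",  # assumed Model Garden ID
        "messages": [{"role": "user", "content": "Classify this support ticket."}],
    },
)
print(resp.json())
```

Both calls bill to the same project, which is the point: the model choice changes, but the operational surface does not.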

A strong case for security

Speaking of security, billing, and the other important but often overlooked factors involved in launching commercial-grade applications: this is another area where Google’s approach to AI stands alone. Google launched its Cloud Platform over a decade ago and has built a robust, enterprise-grade platform that covers everything from MLOps to FinOps.

This foundation is critical for a few reasons, but especially when it comes to security and reliability. Google Cloud has more than a decade of experience when it comes to partitioning and securing client data at global scale in a shared-service environment. 

I’ve never gone to Google.com and found it to be non-operational. Google has applied the same level of SRE practices to their cloud platform that they have to their other products serving over a billion users, like Gmail and YouTube.

The importance of data privacy

When it comes to generative AI applications sold to businesses or consumers, data privacy is no small consideration. Not only can failures erode public trust, they can also create significant legal and regulatory exposure. Fast-moving, venture-backed model creators like OpenAI have the potential to cut corners, leading to data leaks with serious consequences for the users who build applications on their services.

Another major consideration is that these AI-based applications must consume data from somewhere and must run somewhere. That means robust application-serving environments and databases, all of which are found at hyperscale cloud providers but often not at model-building vendors.

Again, this means that developers must navigate shifting contexts, disparate authentication environments, and complex billing and cost models. When developing within Google Cloud, these concerns are allayed. I can select a model of my choosing, point the model at data I want to leverage for RAG, and create an API for that service via a hosting environment like Google’s Cloud Run. I can do all of this without ever worrying about authentication (the services all talk securely to each other without intervention), and I know I’ll get a single, easy-to-digest bill at the end of it all.
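
Here’s a rough sketch of that workflow: a small Cloud Run-ready service that retrieves some context, asks Gemini to answer from it, and exposes the whole thing as an API. The retrieval step is stubbed out, and the project variable and model name are assumptions; a production service would swap the stub for a vector store or Vertex AI Search query.

```python
# Sketch of a RAG-style API on Cloud Run. No API keys appear in the code:
# on Cloud Run, the service account's Application Default Credentials
# authenticate the call to Vertex AI automatically.
import os

from flask import Flask, jsonify, request

import vertexai
from vertexai.generative_models import GenerativeModel

# GOOGLE_CLOUD_PROJECT is assumed to be set on the service at deploy time.
vertexai.init(project=os.environ["GOOGLE_CLOUD_PROJECT"], location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

app = Flask(__name__)


def retrieve_context(question: str) -> str:
    # Placeholder for the retrieval step: a real service would query a
    # vector store or Vertex AI Search here instead of returning a stub.
    return "Internal documentation snippet relevant to: " + question


@app.route("/ask", methods=["POST"])
def ask():
    question = request.get_json()["question"]
    context = retrieve_context(question)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    answer = model.generate_content(prompt)
    return jsonify({"answer": answer.text})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```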

Assessing Gemini

How is Google’s Gemini family of models performing, comparatively speaking? While the race for the “best” LLM continues, Google’s state-of-the-art experimental Gemini models score at or above the competition on many benchmarks.

More importantly, not all Gemini models are created equal. While gemini.google.com can be a fun tool to play with, it’s fundamentally different from gemini-1.5-pro. For coding tasks like code generation, a purpose-built model like code-bison significantly outperforms general-purpose Gemini models. And when it comes to context windows, Gemini 1.5 Pro’s 2-million-token window is a big deal: an hour or more of video can be fed into the “memory” of the model, unlocking use cases that are simply impossible on models with smaller context windows.
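
As a quick illustration of what that context window enables, here’s a hedged sketch of passing an entire video into a single Gemini 1.5 Pro request through Vertex AI. The Cloud Storage path and prompt are hypothetical; the setup mirrors the earlier snippets.

```python
# Sketch: long-context multimodal prompting with Gemini 1.5 Pro on Vertex AI.
# The bucket path is a hypothetical placeholder.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

video = Part.from_uri("gs://my-bucket/all-hands-recording.mp4", mime_type="video/mp4")

response = model.generate_content([
    video,
    "List every product decision mentioned in this recording, with timestamps.",
])
print(response.text)
```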

In my experience, when developers see poor results from any LLM, from any vendor, the issue tends to stem from forcing models to behave in ways they were not designed to. This is where working with a partner like SADA can be especially helpful. We’ve seen a wide range of use cases and learned what works and why, so our customers can spend more time on the task at hand.

Google’s history of innovation

Some may wonder if Google has been late to the AI party. The same criticism was leveled when Google first entered the public cloud space. While Google Cloud came to market after AWS, the infrastructure it was built on was significantly older and, by many accounts, the first “cloud.”

It’s a tribute to Google’s thoroughness and caution that it took the time to develop, test, and fine-tune its cloud offering to ensure the most stringent security on what is also the greenest cloud available. Google simply felt (and in some cases was correct) that the market was not yet ready to make use of this type of technology.

The same can be said about Google’s AI prowess. Google has been working on and releasing AI-based services for many years. Whether it’s Gmail’s intelligent spam filtering, YouTube’s captioning systems, or Google Photos’ ability to identify family members as they age, Google has a robust pedigree of AI development. 

Ease of development

And that brings me to my last point: developer friendliness. Google Cloud has long differentiated itself from competitors by being the most developer-friendly cloud environment. We constantly hear from cloud developers that Google is their choice for ease of development. While other model vendors also make it easy to call their models, Google is not some lumbering behemoth that requires a complex implementation.

And you don’t have to take my word for it. A full 90% of generative AI “unicorns” and 60% of funded generative AI startups choose to build with Gemini and Google Cloud.

To double-check my thinking, I went to Google AI Studio, wrote a quick prompt, and clicked the “get code” button on the home screen. This took about fifteen seconds and gave me code that I could copy and paste into my application to call the Gemini 1.5 Pro model. 

I should also mention that this was completely free of charge. I did not provide any payment information or other personal details. And while, yes, like with all other generative AI model developers, I still needed to provide an API key to my code, doing so took about another five seconds. I simply clicked “get API key” and got a key, still without providing payment information.
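
For reference, the generated snippet looks roughly like the following (the exact code AI Studio emits may differ); the only setup is installing the google-generativeai package and exporting that free API key.

```python
# Roughly what AI Studio's "get code" output looks like; details may differ.
# The API key is read from an environment variable rather than pasted inline.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Explain what a context window is in two sentences.")
print(response.text)
```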

Google Cloud’s winning approach to AI

So, TL;DR: I’d love to help anyone who’s on the fence about AI platforms understand that not all players in the AI ecosystem are created equal.

Based on my experience across the thousands of clients I’ve had the opportunity to serve, Google Cloud’s approach to AI represents the best of all worlds: customers gain state-of-the-art performance, a holistic application development environment, low- and no-code platforms for AI development, tools for traditional ML model development, freedom of choice across many model vendors, security, privacy, and consolidated billing.

I’d be remiss if I asked you to take me at my word on any of this. Reach out today and we’d be happy to jump on a screen share and show you how all of this works. No slide shows, no marketing material, just real development consoles. It’s no exaggeration to say that we can not only help clients start working with models in minutes, but also launch full-blown generative agents, grounded in their own private organizational data, in an enterprise-grade, scalable environment in under an hour.

LET'S TALK

Our expert teams of consultants, architects, and solutions engineers are ready to help with your bold ambitions, provide you with more information on our services, and answer your technical questions. Contact us today to get started.
