I think I watched a different Google I/O than most of you did.
Generative AI, a foldy phone, Generative AI, Chloe’s Rainbows, also Generative AI, and in addition Generative AI, and certainly also Generative AI were front and center on stage at Google I/O 2023. But you know me, I’m easily distractible and often thinking about context, implications, and what this stuff means for the rest of us.
I love these new tools, and my mind is blown by how much value is going to be created not just with this wave of launches, but of course the second order of creations born in a world where these tools exist. When will the first startup founded by creative, empathetic, patient, and insightful business folks ship a SaaS product without any human-only software development? Lemme tell you, that company is going to go faster than yours.
But where does all this stuff live?
I keep coming back to just how few technology systems are actually running in the cloud today. The numbers keep creeping up, reflecting the growth rate of each of the hyperscalers and their surrounding ecosystem, but it’s certainly less than 1/5th of all the infrastructure. Roughly 80% of the gear is still in data centers, broom closets, and under desks.
Look, the hyperscalers have better procurement departments, supply chain management, capital positions, and deeper pockets than most, typically by several orders of magnitude. Behind each of this week’s launches is a global system of incredible complexity pulling ML-optimized silicon into production faster and cheaper than is possible when you’re not buying in units of billions.
Ever tried to buy a rack of GPUs? How about a TPU? Check this out: we have it on good authority that the complete output of Nvidia’s upcoming production runs is literally all spoken for. All sold. The availability crunch will get worse before it gets better.
Not to mention–those Googlers have a few developers who are pretty sharp! Much of the foundational research, both narrowly in AI, as well as more broadly in the infrastructure systems required to efficiently vend AI, depends on billions in R&D investment putting tens of thousands of engineers into building what’s next. These are long bets that require deep, patient pockets. Does that sound like your CFO?
Hybrid, hybrid, hybrid
So, you, there, with the two datacenters, buying appliances from vendors with none of the above going for them, operating software by hand, waiting months for delivery of out-of-date equipment. It was time for you to move to the cloud a while ago, and many of your peers already have. Now it’s utterly irresponsible not to divest of legacy* colo and datacenter investments as fast as possible, and here’s the kicker: that holds even if it looks like cloud costs more on paper.
(Okay, so why does legacy* get an asterisk, you ask? There are still lots of reasons (latency, jurisdiction, and licensing remain the top three) why some on-premises equipment isn’t going anywhere, but that doesn’t mean it can stay manual and static! Hybrid! Read here, here, and here!)
Just add lightness
As with exhaust, tobacco, or sugary drinks, the downsides aren’t advertised on the label of the really slick 4U server you just bought. And those downsides are huge, namely your inability to benefit from all the innovations of the public cloud. That gap is only expanding as a result of the GenAI wave.
I read the “moat” doc too. There is an incredible OSS community around AI: the LLaMA leak, the communities working out how to run these models smaller and more efficiently and figuring out how to fit their magic onto a laptop. OSS teams have often been on the efficiency vanguard because they’re working with constraints the hyperscalers don’t have, and that’s a great thing. It doesn’t mean that your production systems should be hand-managed! All that innovation is poured back into the hyperscale systems, too, so the benefit flows to both ways of working, and the balance of value still tilts to cloud customers.
I remember watching a documentary about the Lotus race car engineers, and the idea of “adding lightness” as a feature. The fastest customers I talk to are always looking to manage, operate, orchestrate, optimize, and yes, own as little of their systems as they possibly can, so that they can go faster. They want a lighter, more focused, faster way of delivering value. Often they end up operating some critical pieces because their bar for performance/reliability/availability is extremely high. But otherwise? They let it go. They have to.
I remember at AWS, for a while, we talked about not having salespeople at all. I mean, Amazon.com didn’t, right? What we found (nearly instantly) was that all other enterprise infrastructure systems did have them, and that they were an accelerant on the path to value for customers. If one team has gas and the other doesn’t, we can all see who’s gonna win. So, we invested, and it worked! Now AWS is nearly a $100b business. GCP is over $30b now, about a third of that, and I remember when it was 1/1000th. I wonder how this season will play out 😉
Cloud is the prerequisite for Generative AI
AI, and Generative AI in particular, is going to be an accelerant to nearly every task, for every function, in every role, in every department, in every vertical and type of company everywhere, to at least a noticeable degree. Even if you think this is all very early and immature, the use cases unclear, and the ROI not explicit, it’s pretty hard not to see where this is heading. Being in an environment that prevents access to a change this substantial… I can’t see how that works out well.
THAT is the I/O that I watched, and maybe I had that view in common with other engineering and technology leaders in companies without $100B in revenue and a quarter million employees. You know, like, actual normal companies.
At this I/O, what took over the stage was cloud as a prerequisite for what’s next.
I came to SADA because this is obvious to me, and I felt a responsibility to help as many companies as I can make the move to cloud, to “digitally transform,” and get ready to transform even more. I’ve done as much as I can in these last four years to make us capable of helping companies of all shapes and sizes make this leap, and to help them across the technical, fiscal, legal, educational, cultural, and logistical hurdles that stand in the way. We are changing internally as a result of these innovations, and I’m confident we’re going to be even better and faster at this tomorrow than we are today.
But today we’re pretty good! Best in class, as they say. So, if this adds up for you like it did for me, reach out, let us know when you’ve got a chance to talk shop and think about what’s to come.