Approaches for CTOs to drive innovation
Miles Ward
CTO
THE GREAT AI RECKONING HITS THE P&L
The era of the "AI science fair" is officially over. In 2026, the C-suite will stop funding anything that doesn't have a direct, measurable line to revenue or cost reduction. The conversation has moved from "what's possible" to "what's profitable." This isn't about “magic” anymore. It's about building and—critically—evaluating systems that show a hard return on investment. If you can't measure it, you won't get the budget for it.
Leaders choose growth, not just "less"
We're at a fork in the road. The lazy executive will see AI as a simple tool to cut headcount—a trap that just leads to smaller, more brittle companies. The smart leaders will use AI to do what's harder but ultimately better: find growth. We'll see a surge in using generative AI to create "McKinsey in a box" style opportunity generators. These tools will analyze a company's unique data and market position to identify and quantify new growth opportunities across the enterprise, not just shave points off OpEx.
"Service-as-a-Software" goes from theory to production
Last year, I talked about "Service-as-a-Software," and in 2026, we'll see it in production. Categories of service work that companies used to staff with large teams will be replaced by agentic software. This isn't just a chatbot. Think autonomous agents built on Google's Vertex ADK and Gemini Enterprise that execute complex, multi-step business processes—like routing non-emergency calls, processing insurance claims, or identifying the perfect video keyframes for a media company. The competitive advantage will shift to companies that are best at evaluating and managing these non-deterministic, agent-based systems.
The commoditization and democratization of AI and its tools
Simon Margolis
ASSOCIATE CTO of AI/ML
LLMS ARE GOING TO BECOME COMMODITIES LIKE CPUs
Today, there's still a lot of hype about the latest LLM. Whenever Google, OpenAI, Anthropic, etc., release a new model, it's newsworthy. Many years ago, the same could be said about CPUs. Intel, AMD, ARM, etc., were in a constant fight to ship better, faster, smaller, and more capable chips. Getting your hands on the latest was a serious advantage. Today, however, cloud native customers couldn't care less about the underlying silicon. The chipset sits on top of multiple layers of abstraction, and the performance or efficiency of the system is almost never bottlenecked by the CPU. The same will be true for LLMs. Customers will care about the performance and efficiency of the AI system, not the specific model that powers it. The emphasis will be on the platform (Vertex AI, Bedrock, etc.), not on the underlying models.
AI will be expected
We've become accustomed to web pages loading quickly. If a click on a web app takes more than a second to respond, we worry the app doesn't work, lose patience, or move on. The leading web technologies that enable this rapid responsiveness are table stakes for web developers. Likewise, the consumer of the future will expect AI capabilities to be a part of every workflow. If a user is asked to book a hotel stay by manually putting in search requirements, scrolling results, etc., as opposed to talking to a virtual agent who can help navigate the available options based on the user's preferences, the user will simply look elsewhere.
Application development will be significantly more accessible
A bit over a decade ago, getting experience with scalable systems meant working at a company that had the resources to provide scaled servers, networking gear, storage, and access to a datacenter. There was an extremely high barrier to entry for folks who wanted to gain experience with these systems and their inner workings. The advent of the public cloud radically changed this, making state-of-the-art gear accessible to anyone with a credit card and a few dollars. With the rise of agentic AI in the software development lifecycle, we're due to see increased accessibility and democratization of application development. No longer will young founders need to shop for development resources to build their prototype. Likewise, developers who want to skill up will not be barred by expensive courses, boot camps, or learning on the job. AI tools will enable folks to quickly get prototypes and POCs off the ground, enabling far more people to put the proverbial pen to paper.
The maturation and operationalization of artificial intelligence
Fabian Duarte
Associate CTO
SHIFT TO AUTONOMOUS OPERATIONS (AIOps)
Agentic AI will begin to evolve into autonomous AI and imperative AI. Businesses will extend their AI initiatives, moving from solutions that augment business value to ones that transform the business. AI agents will see wider adoption in our daily lives. As functional AI solutions take hold, AI will be consumed across the board, driving up the need for cloud native solutions like Vertex AI for rapid time-to-value. Hybrid cloud will also dominate, driven by the need to secure AI resources and burst on-demand.
AI and cloud: the era of financial accountability
Robin Roacho
Lead FinOps Financial Analyst
PROVING AI ROI
It’s been challenging for companies that’ve adopted AI to prove the ROI, primarily because the benefits can be abstract, intangible, and highly subjective. Companies will need to get creative with how they want to narrate their AI investments to shareholders.
Cost management commoditization
As the cost management tool market gets saturated and new features are quickly replicated across most, if not all, of those tools, the products will become a commodity. The cheapest will win the business.
AI cost optimization
Companies will focus heavily on optimization for Vertex AI and the rest of the AI-related products from Google Cloud, as the costs could get out of control quickly. Perhaps Google Cloud will introduce features in the products to avoid anomalous spending.
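Until platform-native safeguards arrive, teams can approximate anomalous-spend detection themselves. The sketch below is a minimal, illustrative example in plain Python—the seven-day window and z-score threshold are assumptions for the example, not Google Cloud features or recommended values:

```python
from statistics import mean, stdev

def flag_spend_anomalies(daily_costs, window=7, threshold=3.0):
    """Flag days whose spend deviates more than `threshold` standard
    deviations from the trailing `window`-day baseline."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = daily_costs[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; skip to avoid division by zero
        z = (daily_costs[i] - mu) / sigma
        if z > threshold:
            anomalies.append((i, daily_costs[i], round(z, 1)))
    return anomalies

# Example: steady ~$100/day of AI spend, then a runaway job on day 10.
costs = [100, 102, 98, 101, 99, 103, 100, 97, 102, 100, 450]
print(flag_spend_anomalies(costs))  # flags only day 10
```

In practice the input would come from billing-export data (e.g., daily Vertex AI line items), and the flag would feed an alerting channel rather than a print statement.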
The maturation and governance of AI systems in the enterprise
Chris Hendrich
Associate CTO
AI AGENTS GO CORPORATE
In 2026, we’ll move past the initial "wow" factor of AI agents and get down to business. Companies will be juggling dozens, if not hundreds, of these agents, and they'll be scrambling for a way to keep them all in line. Having a solid plan for managing their entire lifecycle—from creation to retirement—will be crucial for keeping up with security patches and new tech. This is why we'll see a bigger push for "app store"-like marketplaces, where employees can grab trusted, pre-vetted AI tools to safely use in their daily work.
Figuring out the AI bill
Once AI systems are running in the wild, companies will hit a new set of growing pains. It's one thing to build a cool AI tool, but it's another to run it efficiently and prove its worth. The old ways of tracking IT costs just won't cut it for AI, where expenses can swing wildly based on usage. A new field, "FinOps for AI," will take shape to tackle this, helping businesses figure out if their AI investments are actually paying off and making sense of the new, complex bills.
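The core mechanics of "FinOps for AI" can be surprisingly simple: attribute usage-driven spend back to the teams generating it. The sketch below is illustrative plain Python—the model names and per-1K-token prices are made-up assumptions, not real list prices:

```python
from collections import defaultdict

# Illustrative per-1K-token prices (hypothetical models, not real rates).
PRICE_PER_1K = {
    "model-a": {"input": 0.003, "output": 0.015},
    "model-b": {"input": 0.0005, "output": 0.002},
}

def attribute_costs(usage_events):
    """Roll up LLM spend by team from per-request token counts.
    Each event: (team, model, input_tokens, output_tokens)."""
    totals = defaultdict(float)
    for team, model, in_tok, out_tok in usage_events:
        price = PRICE_PER_1K[model]
        totals[team] += (in_tok / 1000) * price["input"] \
                      + (out_tok / 1000) * price["output"]
    return dict(totals)

events = [
    ("support", "model-b", 12000, 3000),   # high volume, cheap model
    ("research", "model-a", 4000, 8000),   # fewer calls, pricier model
]
print(attribute_costs(events))
```

The point of the exercise is the shape of the data, not the numbers: once every request is tagged with a team (or product, or customer), the "new, complex bill" decomposes into answerable questions about which AI investments are paying off.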
AIOps that actually know your systems
The tools we use to manage our tech are going to get a lot smarter and more personal. Imagine an AIOps tool that doesn't just give you generic advice but knows your company's specific Kubernetes clusters and network layout inside and out. You'll stop getting textbook commands that you have to tweak just to make them work. Instead, the AI will give you guidance that's ready to go or even just fix the problem for you, making it a true partner for your IT team.
The cultural imperative for AI success
Veronica Raulin
Senior Director, Advisory
THE 'LEAPFROG EFFECT' WILL DEMAND RADICAL EMPATHY AND CHANGE LEADERSHIP
In 2026, companies that have been slow to embrace cloud-native collaboration and tooling will feel intense pressure to "leapfrog" to realize the full benefits of built-in AI functionality. While this unlocks incredible speed and efficiency, it creates a massive cultural whiplash for employees suddenly forced to abandon old, familiar ways of working. As change management leaders, our toughest job won't be teaching new tools, but rather providing the radical empathy needed to help people unlearn deeply rooted behaviors. We must ensure employees feel supported, not overwhelmed, as they make this huge jump, acknowledging that a fast technical rollout requires an even faster human connection to succeed.
We will need to use AI to master AI
The continuous, rapid evolution of AI capabilities has made large, static training programs obsolete. By 2026, the only way to scale employee proficiency—in everything from onboarding into a new role, to technical deep dives, to preparing meaningful talking points for an upcoming customer call—will be by fully embracing AI as the core learning infrastructure. Tools that support Deep Research and structured knowledge capture, like NotebookLM, will become the norm, enabling personalized, just-in-time learning directly within the workflow. For employees, this is a huge win: they get the knowledge they need, instantly. For our training experts, their value pivots to mentorship and validation, ensuring the AI-generated learning is accurate, ethical, and aligned with strategic goals. This creates a powerful human-in-the-loop learning system that scales knowledge faster while keeping our best thinkers at the center of the learning journey.
A focus on success metrics related to AI-augmented outcomes
We began by measuring the value of AI tools, focusing on time saved and user satisfaction. Now that we know what AI can do for different user personas and departments, the metrics for success will focus on AI-augmented outcomes. We’ll stop asking if people used the tool, and instead ask questions like “how much faster did AI help us make a high-stakes decision?” or “how many more customers were we able to add to the pipeline?” This shift, which requires a new layer of data visibility and an understanding of how we get work done today, fundamentally redefines the purpose of our collaboration tools from just connecting people to accelerating business outcomes.
AI's shift: from hype to secure, internal ROI
John Veltri
Managing Director, Solution Sales
AI SPENDING SLOWS, CREATING AN INFLECTION POINT FOR ECOSYSTEM/PARTNERS
The “open checkbook” for AI has ended, creating a necessary inflection point in the market. With increased scrutiny on adoption and security, C-level executives are now driving a market correction where ROI becomes the primary focus. This is moving AI from "hype" to practical implementation. Robust solutions like Gemini for Enterprise are perfectly positioned for this new era, delivering core business value through employee optimization and ensuring that AI is a governed business driver that delivers clear success factors for 2026 and beyond.
Increased scrutiny on AI governance
As AI spending is scrutinized, there’ll be a market correction in which ROI becomes the primary focus, moving AI from "hype" to practical implementation—in Google-speak, Google Workspace and Gemini for Enterprise. This scrutiny will also yield greater oversight from government and technology leaders alike to rein in unmanaged technology growth. We’re already seeing this unfold through the EU Artificial Intelligence Act and prominent technology companies pushing back on the development of superintelligence. Security practices and platforms will move to the forefront in 2026 to protect AI development and to ensure that the governance of enterprise models and agents is built on security from the start.
Focus on internal consumption and employee optimization
Executives will see AI as an opportunity to rapidly change the culture and behaviors of the employee base. They are meeting market demand by providing new technologies; the resulting empowerment through AI productivity tools will improve company culture and drive operational efficiencies. With a more structured and scrutinized focus on the horizon, along with greater global and governmental concerns and regulations, we’re going to see AI adoption retreat toward enterprises' internal workflows in 2026. Google is driving large-scale enterprise partnerships by providing branded SaaS offerings, merging its AI end-user platforms with Vertex, and focusing on end-user optimization through search and agentic workflows. Google is primed not only to weather this turbulent time but to win it, and we’re already seeing these wins publicly (GM). Google’s “chip to employee” AI framework is unchallenged in the hyperscaler market, and through our focus on end-user adoption and efficiencies, Insight is primed to deliver this win for Google to our end users.
LET'S TALK
Our expert teams of consultants, architects, and solutions engineers are ready to help with your bold ambitions, provide you with more information on our services, and answer your technical questions. Contact us today to get started.