Wow! Today’s Google I/O keynote speakers did not disappoint! This year’s premier event showcasing Google’s latest developer solutions, products, and technology gave us a holistic view of their overall AI strategy and the role these emerging technologies are playing across their portfolios. From consumer tools like Magic Editor in Google Photos to compelling foundation models in Vertex AI, it’s clear that AI investments are paying off.
New AI features integrated with Google tools we all use
I was excited to see various generative AI offerings that were built on technologies released last year, now integrated across so many Google products. I was especially thrilled about the announcement of PaLM 2, a major improvement in Google’s LLM offering. While we at SADA have been having a lot of fun helping our customers build bespoke generative AI solutions for unique use cases, it was equally rewarding to see these powerful technologies applied to well-established Google products such as Google Workspace and Photos.
Google has embedded a steady stream of AI-powered features such as Magic Eraser and Smart Compose in their products over the years, but today’s announcements are more impactful. Auto-complete in Gmail has been helpful, but the ability to “help me write” an email in response to an existing thread, complete with all the contextual information, is going to make a big difference in the time spent replying to email. Better yet, the “help me write” feature is available in Google Docs as well as in Gmail, functioning as an assistant to a human user.
The concept of collaborating with AI, as demonstrated by Duet AI for Google Workspace, represents Google’s unique approach to bringing generative AI to the forefront. Google Workspace has always been the best place to collaborate with other humans, and now humans will have the ability to collaborate with AI, too. Whether it’s creating speaker notes and graphics for a slide presentation or leveraging Bard to help get started on a new document, these AI-driven aids will deliver massive improvements in human productivity and creativity.
This blog post itself is an excellent example of humans and AI working together! While these ideas came from me, PaLM 2 helped me organize my thoughts and develop a catchy, search-engine-optimized headline. Not too shabby!
Google Search, developer tools, and tools for domain experts
Beyond these productivity boosters in Workspace, Google also announced significant improvements to the traditional Google Search experience. By incorporating LLM-powered generative AI into Google Search’s world-class experience, Google lets users get the information they’re looking for more efficiently and holistically, as opposed to searching for each piece of information independently. This enhanced contextual awareness is a game changer for democratizing access to information.
And the democratization of AI was the second major theme of today’s keynote. While machine learning tools have existed for some time, they’ve often remained in the hands of experts who can take advantage of frameworks like Google’s TensorFlow and who know how to push GPU and TPU hardware to their limits. With today’s announcements regarding Google’s foundation models and other generative AI tools in Vertex AI, that democratization continues, putting powerful generative AI capabilities within reach of people without machine learning experience.
With today’s announcements regarding Vertex AI Model Garden and Generative AI Studio, domain experts such as marketing specialists, sales leaders, and finance professionals are able to easily and quickly create bespoke LLM chatbots, enterprise search experiences, and more without writing code or becoming experts in AI or machine learning.
Better yet, Google’s tools under the Vertex umbrella put the power of choice in the user’s hands. If the out-of-the-box experience for leveraging foundation models such as Codey (for programming languages), Chirp (for speech recognition), and Imagen (for text-to-image generation) fails to meet the specific needs of an organization, you can create a custom model from scratch, all without writing any code. Vertex also allows AI and ML specialists to further fine-tune models and leverage the tools, programming languages, and technologies they’ve become accustomed to.
This means that organizations can balance an easy-to-use, out-of-the-box experience with traditional machine learning development practices, finding the right mix of control and effort to produce custom, AI-driven results.
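For teams that do want to go hands-on, here’s a minimal sketch of what calling one of these foundation models looks like with the Vertex AI Python SDK. The project ID, region, prompt, and parameter values below are placeholders, and the exact model names and versions available will vary by region and SDK release.

```python
# Minimal sketch: calling a hosted Vertex AI text foundation model (PaLM 2 family)
# via the Vertex AI Python SDK. The project ID, region, prompt, and parameters
# are placeholders, not values from this post.
import vertexai
from vertexai.language_models import TextGenerationModel

# Authenticate with Application Default Credentials, then point the SDK
# at your project and region.
vertexai.init(project="your-project-id", location="us-central1")

# Load a hosted foundation model; no infrastructure to provision.
model = TextGenerationModel.from_pretrained("text-bison@001")

# Ask the model for a draft, adjusting generation parameters as needed.
response = model.predict(
    "Draft a short, friendly reply to a customer asking about delivery times.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```

The same SDK also supports tuning foundation models on your own data for specialists who want tighter control, while Generative AI Studio offers the no-code path described above.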
Privacy, security, and social responsibility
All this power comes with Google’s consistent commitment to privacy and security. The data used to train models, as well as the outputs from those models, remains strictly private for enterprise customers. This means that Google will not use private data to train their models or for any other purpose, which gives enterprises the freedom to train custom models on their own proprietary data without worrying about exposing their IP to competitors or to the public.
Google has also extended fine-grained IAM controls to their various AI offerings, ensuring that even users within a given organization are only able to see and interact with the data they have permission to work with.
Google’s announcements today represent a continuation of their holistic AI strategy. This means that the dramatic productivity and ingenuity improvements that stem from generative AI will be accessible to more people, within more departments, across all industries.
By combining cutting-edge AI technology, democratization, privacy, and social responsibility, Google’s announcements at I/O 2023 give end users tremendous power while providing businesses with the guardrails they need to adopt these technologies in a safe and controlled manner. I certainly can’t wait to see what comes next for Google’s AI technologies and what SADA’s diverse array of customers will continue to build with them!
Have a generative AI use case you’d like to develop? Contact SADA’s AI/ML experts and get started on your custom solution today.