Google I/O was an AI evolution, not a revolution
At Google’s I/O developer conference, the company made its case to developers, and to some extent consumers, for why its AI bets put it ahead of rivals. At the event, the company unveiled a revamped AI-powered search engine, an AI model with an expanded context window of 2 million tokens, AI helpers across its suite of Workspace apps like Gmail, Drive and Docs, tools to integrate its AI into developers’ apps, and even a future vision for AI, codenamed Project Astra, that can respond to sights, sounds, voice and text combined.
While each advance was promising on its own, the onslaught of AI news was overwhelming. Though these big events are obviously aimed at developers, they’re also an opportunity to wow end users with the technology. But after the flood of news, even somewhat tech-savvy consumers may be asking themselves: Wait, what’s Astra again? Is it the thing powering Gemini Live? Is Gemini Live sort of like Google Lens? How is it different from Gemini Flash? Is Google actually making AI glasses, or is that vaporware? What’s Gemma? What’s LearnLM? And what are Gems? When is Gemini coming to your inbox and your docs? How do I use these things?
If you know the answers to those, congratulations, you’re a TechCrunch reader. (If you don’t, click the links to get caught up.)
What was missing from the overall presentation, despite the enthusiasm of the individual presenters and the whooping cheers from the Google employees in the crowd, was a sense of a coming AI revolution. If AI ultimately leads to a product that profoundly impacts the direction of technology the way the iPhone impacted personal computing, this was not the event where that product debuted.
Instead, the takeaway was that we’re still very much in the early days of AI development.
On the sidelines of the event, there was a sense that even Googlers knew the work was unfinished. During a demo of how AI could compile a student’s study guide and quiz within moments of uploading a multi-hundred-page document (an impressive feat), we noticed that the quiz answers weren’t annotated with the sources they cited. When asked about accuracy, an employee admitted that the AI gets things mostly right and that a future version would point to sources so people could fact-check its answers. But if you have to fact-check the answers, how reliable is an AI study guide at preparing you for a test in the first place?
In the Astra demo, a camera mounted over a table and linked to a large touchscreen let you play Pictionary with the AI, show it objects, ask questions about those objects, have it tell a story and more. But how these abilities will apply to everyday life wasn’t readily apparent, even though the technical advances are, on their own, impressive.
For example, you could ask the AI to describe objects using alliteration. In the livestreamed keynote, Astra saw a set of crayons and responded “creative crayons colored cheerfully.” Neat party trick.
When I challenged Astra in a private demo to guess the object in a scribbled drawing, it correctly identified the flower and house I drew on the touchscreen right away. But when I drew a bug (one bigger circle for the body, one smaller circle for the head, little legs off the sides of the big circle), the AI stumbled. Is it a flower? No. Is it the sun? No. The employee guided the AI to guess something that was alive. I added two more legs, for a total of eight. Is it a spider? Yes. A human would have seen the bug immediately, despite my lack of artistic ability.