Google is ‘reimagining’ Android to be all-in on AI – and it looks truly impressive
Gemini AI will be at the core of Android
Google is looking to “reimagine” Android with its Gemini AI, touting this as a “once in a generation event to reimagine what phones can do”.
At Google I/O 2024, the search giant said it would bake AI into Android in three ways: putting AI search into Android, making Gemini the new AI assistant, and harnessing on-device AI.
Translated into everyday speech, this means more AI search tools such as Circle to Search being front and center in Android. The AI-powered tool, which can identify physically circled objects and text in photos and onscreen, will be boosted to tackle more complex problems like graphs and formulae later this year.
The Gemini AI, which can be found in the Google Pixel 8a right now, will become the AI foundation for Android, bringing multimodal AI – the tech to process, analyze and learn from information and inputs from multiple sources and sensors – to the mobile operating system. All of which makes this one of the bigger AI announcements from Google I/O 2024.
In practice, this’ll mean Gemini will work in all manner of apps and tools to provide context-aware suggestions, answers and prompts. One example of this was using the AI in the Android Messages app to produce AI-made images to share in chats. Another is the ability to answer questions about a YouTube video a person is watching, and pull data from sources like PDFs to answer very specific queries, such as a particular rule in a sport.
What’s more, Gemini can learn from all this and use that information to predict what a person might want. For instance, if it knows the user has shown an interest in tennis from their chats about the sport, it could serve up (pun intended) options to find nearby tennis courts.
The third aspect of AI-ing Android is to ensure a lot of the smart processing can happen on the phone, rather than needing an internet connection. So, Gemini Nano provides a low-latency foundational model for onboard AI processing, with multimodal capabilities; this effectively lets the AI understand more about the context of what’s being asked of it and what’s going on.
An example of this in action was how Gemini can detect a fraudulent call looking to scam a person out of their bank details, and alert them before fraud can take place. And as this processing takes place on the phone, there’s no concern about a remote AI listening in on private conversations.
Equally, the AI tech can use its contextual understanding to help provide accurate descriptions of what a person with visual impairments is looking at, be it in real life or online.
In short, Google intends to make an AI-centric Android more helpful and more powerful when it comes to finding things and getting things done. And with Gemini Nano bringing multimodal capabilities to Pixel devices later this year, we can surely expect to see the Google Pixel 9 series be the first phones out of the gate with the reimagined Android.
Roland Moore-Colyer is Managing Editor at TechRadar with a focus on phones and tablets, but a general interest in all things tech, especially those with a good story behind them. He can also be found writing about games, computers, and cars when the occasion arrives, and supports with the day-to-day running of TechRadar. When not at his desk Roland can be found wandering around London, often with a look of curiosity on his face and a nose for food markets.