
Google on what on-device AI is good at, more Android apps that use Gemini Nano coming


On-device AI is a big priority for Android going forward, and Google shared more developer resources at I/O 2024.

The "Android on-device AI under the hood" I/O 2024 session provided "good use cases" for on-device generative AI:

In general, benefits include secure local processing, offline availability, reduced latency, and no additional (cloud) costs. The limitations are a smaller parameter count of 2-3 billion, or "almost an order of magnitude smaller than cloud-based equivalents," a smaller context window, and a less generalized model. As such, "fine-tuning is critical in order to get good accuracy."

Gemini Nano is Android's "foundational model of choice for building on-device GenAI applications," but you can also run Gemma and other open models.
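
For developers who want to experiment with an open model like Gemma on-device today, the MediaPipe LLM Inference API is the documented route. The sketch below is a minimal, hedged example: it assumes the Android tasks-genai artifact roughly as published around I/O 2024 (option names may differ in later releases), and the model path, token limit, and sampling values are illustrative assumptions rather than values from the session.

```kotlin
// Minimal sketch: running a sideloaded Gemma model on-device with the
// MediaPipe LLM Inference API (com.google.mediapipe:tasks-genai).
// The model path and generation parameters are illustrative assumptions.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun summarizeOnDevice(context: Context, note: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Placeholder path; in practice the model is downloaded or sideloaded first.
        .setModelPath("/data/local/tmp/llm/gemma-2b-it-gpu-int4.bin")
        .setMaxTokens(512)      // modest context window, per the session's limitations
        .setTopK(40)
        .setTemperature(0.8f)
        .setRandomSeed(0)
        .build()

    // Everything runs locally: no network call, no per-request cloud cost.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse("Summarize the following note:\n$note")
}
```

Because a blocking generateResponse call can take a while on lower-end hardware, the API's asynchronous variant with a result listener is generally the better fit for production UI code.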

So far, only Google apps — Summarize in Pixel Recorder, Magic Compose in Google Messages, and Gboard Smart Reply — leverage it, but Google has been "actively collaborating with developers who have compelling on-device Gemini use cases" through an early access program, with those apps expected to launch in 2024.

Meanwhile, Google will soon be using Gemini Nano for TalkBack captions, Gemini dynamic suggestions, and spam alerts, while a multimodality update is coming later this year "starting with Pixel."

Google also recapped the state of on-device generative AI a year ago and the improvements made since then, such as hardware acceleration.
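
The hardware-acceleration point is about running inference on the GPU (or another accelerator) instead of the CPU. As a rough illustration of what that looks like in app code, here is a small sketch using the standard TensorFlow Lite GPU delegate; the function name and model buffer are placeholders, and this is generic TFLite usage rather than anything shown in the session.

```kotlin
// Sketch: opting a TensorFlow Lite model into GPU acceleration when the
// device supports it, falling back to multithreaded CPU otherwise.
// buildInterpreter() and the thread count are illustrative placeholders.
import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate

fun buildInterpreter(model: MappedByteBuffer): Interpreter {
    val compatList = CompatibilityList()
    val options = Interpreter.Options().apply {
        if (compatList.isDelegateSupportedOnThisDevice) {
            // Run the graph on the GPU with device-tuned delegate options.
            addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
        } else {
            // CPU fallback: use a few threads instead.
            setNumThreads(4)
        }
    }
    return Interpreter(model, options)
}
```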


