Author: Thomas Ezan, Developer Relations
Over the past year, Android has made significant improvements to on-device machine learning, enabling developers to build smarter apps that can process sound, images, and text. At this year's Google I/O, Android Developer Relations Engineer Thomas Ezan and ML Kit Product Manager David Miro-Llopis presented new Android APIs and solutions, showcasing ways to use on-device machine learning in apps.
On-device machine learning enables low-latency inference, increases data privacy, facilitates offline support, and can reduce cloud costs. Features such as Lens AR Translate and document scanning in the Files app in India benefit from these advantages.
Developers have two options for incorporating on-device machine learning into their Android apps:
- ML Kit: This framework provides pre-built ML solutions for common user flows, with easy-to-use APIs.
- Android’s custom ML stack: Built on TensorFlow Lite, this framework gives developers control over the inference process and the user experience.
New APIs and Improved Features with ML Kit
The ML Kit team has launched new APIs for face mesh detection and document scanning, and has improved existing APIs such as barcode detection (a 17% improvement), text recognition, digital ink recognition, pose detection, translation, and smart reply. The document scanner API, planned for Q3 2023, will provide a consistent scanning experience across Android apps; it takes only a few lines of code to integrate and does not require camera permission. Similarly, Google code scanner, now generally available through Google Play services, lets apps scan barcodes with a consistent experience, also without requesting camera permission.
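As a rough sketch of what that integration looks like, the snippet below uses the Google code scanner API from Google Play services to scan a QR code; the format restriction and handlers are illustrative:

```kotlin
import android.content.Context
import com.google.mlkit.vision.barcode.common.Barcode
import com.google.mlkit.vision.codescanner.GmsBarcodeScannerOptions
import com.google.mlkit.vision.codescanner.GmsBarcodeScanning

fun scanCode(context: Context) {
    // Optionally restrict formats to speed up detection (QR-only here, as an example).
    val options = GmsBarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
        .build()

    // Google Play services provides the scanning UI, so the app itself
    // does not need to request camera permission.
    val scanner = GmsBarcodeScanning.getClient(context, options)
    scanner.startScan()
        .addOnSuccessListener { barcode ->
            // The decoded payload, e.g. a URL embedded in the QR code.
            val rawValue: String? = barcode.rawValue
        }
        .addOnCanceledListener { /* User backed out of the scan UI */ }
        .addOnFailureListener { e -> /* Handle errors */ }
}
```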
Developers can easily integrate machine learning into their apps with ML Kit. For example, WPS uses ML Kit to translate text across 43 languages, saving $65M a year.
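To illustrate, here is a minimal sketch of on-device translation with ML Kit's Translation API; the language pair and download conditions are illustrative choices:

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Create an English -> Spanish translator (language pair is illustrative).
val options = TranslatorOptions.Builder()
    .setSourceLanguage(TranslateLanguage.ENGLISH)
    .setTargetLanguage(TranslateLanguage.SPANISH)
    .build()
val translator = Translation.getClient(options)

// Download the translation model (here, only over Wi-Fi),
// then translate entirely on-device.
val conditions = DownloadConditions.Builder().requireWifi().build()
translator.downloadModelIfNeeded(conditions)
    .addOnSuccessListener {
        translator.translate("Hello, world!")
            .addOnSuccessListener { translated -> println(translated) }
    }
    .addOnFailureListener { e -> e.printStackTrace() }
```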
Public Beta of Acceleration Service in Android’s Custom ML Stack
The Android ML team is actively developing the custom ML stack for apps that need custom models. TensorFlow Lite and its GPU delegate are now available through Google Play services, letting developers use TensorFlow Lite without bundling it in their app and benefit from automatic updates. Hardware acceleration can significantly speed up inference and enable better user experiences in ML-enabled Android apps, but picking the right configuration for each device is hard. The Acceleration Service API, launching in public beta, lets developers select the optimal hardware acceleration configuration at runtime. To learn more and get started, developers can refer to the official TensorFlow documentation.
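As a sketch of what running inference with the Play services TensorFlow Lite runtime looks like (model loading is elided, and the parameters passed in are placeholders):

```kotlin
import android.content.Context
import com.google.android.gms.tflite.java.TfLite
import java.nio.ByteBuffer
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime

fun runInference(
    context: Context,
    modelBuffer: ByteBuffer,      // a .tflite model loaded by the app
    input: FloatArray,            // shapes depend on the model
    output: Array<FloatArray>
) {
    // Initialize the TensorFlow Lite runtime shipped in Google Play services.
    TfLite.initialize(context).addOnSuccessListener {
        // FROM_SYSTEM_ONLY uses the Play services runtime
        // instead of one bundled with the app.
        val interpreter = InterpreterApi.create(
            modelBuffer,
            InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
        )
        interpreter.run(input, output)
        interpreter.close()
    }
}
```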
For more information, check out this video: