In a previous blog post, we explained how developers can use the Barcode Scanning module of Google's ML Kit to scan QR codes via the camera or from images imported from the gallery. In this blog post, we will compare the functionality and implementation of ML Kit with that of Mobile Vision.
Functional comparison
To the end user, there's likely no discernible difference between the two technologies. Even the ML Kit documentation states they are essentially the same: “The Barcode scanning, Text recognition and Face detection APIs provide the same functionality and capabilities as their Mobile Vision counterparts”. However, this doesn’t mean there are no functional benefits to choosing ML Kit. It’s important to read the documentation further, where it goes on to state: “Migrating to ML Kit ensures your application benefits from the latest bug fixes and improvements to the APIs, including updated ML models and hardware acceleration”.
Improvements of ML Kit over Mobile Vision
Let’s go over some of the improvements.
- Improved recall. Recall measures the proportion of barcodes actually present in an image that are correctly detected, across all supported formats. For example, ML Kit 16.0.0 added support for detecting PDF417 barcodes with broken start/stop patterns.
- Tolerance for low-quality images. Barcode detection is highly sensitive to image quality, so improved tolerance directly benefits users with lower-quality cameras.
- Long-tail latency. The latency experienced in the slowest 1% of detections has been reduced. This matters because those worst-case delays often leave a lasting negative impression on the users who hit them.
- Bounding box stability. This refers to minimizing frame-to-frame fluctuations of the bounding box drawn around a detected object, commonly known as jitter.
- Integration with CameraX and Camera2. Android offers these two camera libraries, each suited to different use cases; ML Kit works with both, so you are not locked into one or the other, which gives us more flexibility.
- Support for Android Jetpack Lifecycle. Lifecycle awareness simplifies handling configuration changes such as screen rotations, which are always a pain to deal with when there is no lifecycle support.
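The last two points can be illustrated together. Below is a minimal sketch (not code from the original post) of an ML Kit barcode analyzer plugged into a CameraX `ImageAnalysis` use case; the class name `QrAnalyzer` and the `onResult` callback are illustrative assumptions.

```kotlin
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.barcode.common.Barcode
import com.google.mlkit.vision.common.InputImage

// Hypothetical analyzer that feeds CameraX frames to the ML Kit barcode scanner.
class QrAnalyzer(private val onResult: (String) -> Unit) : ImageAnalysis.Analyzer {

    // Restrict the scanner to QR codes to speed up detection.
    private val scanner = BarcodeScanning.getClient(
        BarcodeScannerOptions.Builder()
            .setBarcodeFormats(Barcode.FORMAT_QR_CODE)
            .build()
    )

    @ExperimentalGetImage
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        scanner.process(image)
            .addOnSuccessListener { barcodes ->
                barcodes.firstOrNull()?.rawValue?.let(onResult)
            }
            // Always close the frame so CameraX can deliver the next one.
            .addOnCompleteListener { imageProxy.close() }
    }
}
```

Because CameraX use cases are bound with `cameraProvider.bindToLifecycle(viewLifecycleOwner, …)`, the camera is started and stopped automatically across lifecycle events such as rotation; this is the Jetpack Lifecycle support mentioned above.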
Code comparison
The differences between Mobile Vision and ML Kit aren’t limited to functionality. In this section we’ll walk through some key differences in the code used to implement them.
- Embedding the camera into the fragment.
In ML Kit we do it as follows:
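The original code listing is not reproduced in this excerpt; the following is a minimal sketch of how embedding the camera in a fragment typically looks with CameraX and ML Kit. It assumes the fragment's layout contains a CameraX `PreviewView` (passed in as `previewView`) and that an `ImageAnalysis.Analyzer` running the ML Kit scanner is supplied by the caller; these names are assumptions, not the post's actual code.

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.fragment.app.Fragment

// Hypothetical helper: wires the camera feed and an ML Kit analyzer into a fragment.
private fun Fragment.startCamera(previewView: PreviewView, analyzer: ImageAnalysis.Analyzer) {
    val providerFuture = ProcessCameraProvider.getInstance(requireContext())
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        // Preview use case renders the camera feed into the PreviewView.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // Analysis use case hands frames to the barcode analyzer;
        // dropping stale frames keeps scanning responsive.
        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()
            .also { it.setAnalyzer(ContextCompat.getMainExecutor(requireContext()), analyzer) }

        // Binding to the view lifecycle lets CameraX start and stop the camera automatically.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            viewLifecycleOwner,
            CameraSelector.DEFAULT_BACK_CAMERA,
            preview,
            analysis
        )
    }, ContextCompat.getMainExecutor(requireContext()))
}
```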