A previous post, Machine Learning on Desktop, iOS and Android with Tensorflow, Qt and Felgo, explored how to integrate TensorFlow with Qt and Felgo through an example app that combined two Google pre-trained neural networks for image classification and object detection.
This post focuses on developing the same app, but this time using TensorFlow Lite. TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices, currently at a preview stage. It applies several techniques to achieve low latency in mobile apps, such as smaller and faster neural network models. Besides that, TensorFlow Lite is easier and faster to compile.
Have a look at our previous post, since this post only covers the differences between that app and the one developed here; all the code and explanations given there also apply to this case.
Image Classification and Object Detection Example
The goal is to develop an app where the user can select one of two Google pre-trained neural network models. The following steps describe how to build such an app.
Object detection with Tensorflow Lite on iOS and Android
Clone Repository
To clone this repository, execute the following command. Clone it recursively, since the TensorFlow repository is included inside it as a submodule.
git clone --recursive https://github.com/MechatronicsBlog/TensorFlowLiteQtVPlay.git
How to build TensorFlow Lite for Qt
We need to build TensorFlow Lite for our target platforms: iOS and Android. In this example, make is used to build TensorFlow Lite for iOS (and Linux), whereas bazel is used for Android. The shell commands in the following sections must be executed inside the main TensorFlow folder.
Building for Linux
Download the TensorFlow Lite dependencies for the make build and compile as follows.
tensorflow/lite/tools/make/download_dependencies.sh
make -f ./tensorflow/lite/tools/make/Makefile
Building for Android (on Linux)
First, we need to set our Android SDK and NDK paths in the WORKSPACE file located in the main TensorFlow folder. We just have to add the following configuration, setting the appropriate path and api_level values. If you need further information about how to download and install the NDK, have a look at our previous post. For further information about how to download and install the Android SDK, have a look at the Android Studio website.
android_sdk_repository(
    name = "androidsdk",
    path = "/home/user/Android/sdk"
)

android_ndk_repository(
    name = "androidndk",
    path = "/home/user/Android/android-ndk-r14b",
    api_level = 21,
)
Then, execute the following command to compile TensorFlow Lite for ARM using the bazel build system.
bazel build --cxxopt='--std=c++11' -c opt --config=android_arm tensorflow/lite/java:libtensorflowlite_jni
Building for iOS (on macOS)
Download the TensorFlow Lite dependencies for the make build and compile by executing the following scripts.
tensorflow/lite/tools/make/download_dependencies.sh
tensorflow/lite/tools/make/build_ios_universal_lib.sh
How to Use TensorFlow Lite in Your Qt Mobile App
The source code of this app is in a GitHub repository. This section highlights only the main differences with respect to the previous app version.
Link TensorFlow Lite in Your Project
The following code shows the lines added to our qmake project file in order to include the TensorFlow Lite header files and link against the TensorFlow Lite libraries, depending on the target platform.
# TensorFlow Lite - Global
TENSORFLOW_PATH = $$PWD/tensorflow/
TFLITE_MAKE_PATH = $$TENSORFLOW_PATH/tensorflow/lite/tools/make

INCLUDEPATH += $$TENSORFLOW_PATH \
               $$TFLITE_MAKE_PATH/downloads/ \
               $$TFLITE_MAKE_PATH/downloads/eigen \
               $$TFLITE_MAKE_PATH/downloads/gemmlowp \
               $$TFLITE_MAKE_PATH/downloads/neon_2_sse \
               $$TFLITE_MAKE_PATH/downloads/farmhash/src \
               $$TFLITE_MAKE_PATH/downloads/flatbuffers/include

# TensorFlow Lite - Linux
linux:!android {
    INCLUDEPATH += $$TFLITE_MAKE_PATH/gen/linux_x86_64/obj
    LIBS += -L$$TFLITE_MAKE_PATH/gen/linux_x86_64/lib/ \
            -ltensorflow-lite -ldl
}

# TensorFlow Lite - Android - armv7a
android {
    QT += androidextras
    LIBS += -L$$TENSORFLOW_PATH/bazel-bin/tensorflow/lite \
            -L$$TENSORFLOW_PATH/bazel-bin/tensorflow/lite/c \
            -L$$TENSORFLOW_PATH/bazel-bin/tensorflow/lite/core/api \
            -L$$TENSORFLOW_PATH/bazel-bin/tensorflow/lite/kernels \
            -L$$TENSORFLOW_PATH/bazel-bin/tensorflow/lite/kernels/internal \
            -L$$TENSORFLOW_PATH/bazel-bin/external/androidndk \
            -L$$TENSORFLOW_PATH/bazel-bin/external/farmhash_archive \
            -L$$TENSORFLOW_PATH/bazel-bin/external/fft2d \
            -L$$TENSORFLOW_PATH/bazel-bin/external/flatbuffers \
            -lframework -larena_planner -lsimple_memory_arena -lutil -lapi \
            -lc_api_internal -lbuiltin_ops -lbuiltin_op_kernels -lkernel_util \
            -leigen_support -lgemm_support -laudio_utils -lkernel_utils \
            -ltensor_utils -lneon_tensor_utils -lquantization_util -llstm_eval \
            -lstring_util -lcpufeatures -lfarmhash -lfft2d -lflatbuffers
}

# TensorFlow Lite - iOS - Universal library
ios {
    LIBS += -L$$TFLITE_MAKE_PATH/gen/lib/ \
            -framework Accelerate \
            -ltensorflow-lite
}
Create the GUI with QML
The GUI is the same as for the TensorFlow app version. The only difference is that two TensorFlow Lite configuration options are added to the AppSettingsPage.
App Settings Page
A screenshot of this page on iOS is shown below. The two new options are the number of threads and Android Neural Networks API (NNAPI) support.
The number of threads setting establishes how many threads TensorFlow Lite will use for inference. The app limits the maximum number of threads to the number of cores in the mobile device. The number of threads is configured by means of a slider, and its value is stored in the nThreads property.
AppSlider {
    id: sThreads
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(15)
    from: 1
    to: auxUtils.numberThreads()
    enabled: to > 1
    live: true
    snapMode: AppSlider.SnapAlways
    stepSize: 1
    value: nThreads
    onValueChanged: nThreads = value
}
The C++ numberThreads function returns the number of processor cores by means of the idealThreadCount function, which queries the number of processor cores, both real and logical, in the system. That function returns 1 if the number of processor cores cannot be detected.
int AuxUtils::numberThreads()
{
    return QThread::idealThreadCount();
}
NNAPI is an Android C API designed to run computationally intensive machine learning operations on mobile devices, and it is available on devices running Android 8.1 (API level 27) or higher. A switch enables or disables the use of NNAPI. This switch is only enabled on Android and only takes effect if the Android version is 8.1 or higher. The switch's Boolean value is stored in the acceleration property.
AppSwitch {
    id: sAcceleration
    anchors.verticalCenter: parent.verticalCenter
    enabled: Qt.platform.os === "android"
    checked: enabled ? acceleration : false
    onToggled: acceleration = checked
}
In main.qml, the nThreads and acceleration property values are read from and stored in a Storage component (default values are provided for the first app run), which is a singleton type for reading and writing to SQLite databases. Those values are passed to VideoPage, then to objectsRecognitionFilter, and finally to TensorFlow Lite.
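On the C++ side, these two settings are ultimately applied to the TensorFlow Lite interpreter. The snippet below is a minimal sketch of that last step, assuming a hypothetical applySettings helper; SetNumThreads and UseNNAPI are the TensorFlow Lite C++ API calls involved.

#include "tensorflow/lite/interpreter.h"

// Hypothetical helper: applies the app settings to an already built interpreter.
// 'threads' comes from the nThreads property, 'nnapi' from the acceleration property.
void applySettings(tflite::Interpreter &interpreter, int threads, bool nnapi)
{
    interpreter.SetNumThreads(threads); // number of threads used for inference
    interpreter.UseNNAPI(nnapi);        // only effective on Android 8.1+ devices
}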
C++ Tensorflow Lite Interface and Video Frame Filter
Two main tasks are programmed in C++.
- Managing video frames.
- Interfacing with TensorFlow Lite.
Video frames are managed in the same way as in the previous app version.
Interfacing with Tensorflow Lite
The TensorflowLite C++ class is a wrapper that interfaces with the TensorFlow Lite library. Have a look at the code for a detailed description of this class; you can also check the TensorFlow Lite C++ API documentation for further information.
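As a rough sketch of what such a wrapper does internally, the snippet below loads a .tflite model, builds an interpreter with the built-in operators, allocates tensors and runs inference on an RGB frame. The names initTFLite and runInference are illustrative, not the actual class members; the tflite:: calls are taken from the TensorFlow Lite C++ API.

#include <cstdint>
#include <cstring>
#include <memory>
#include <string>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Illustrative initialization: load a .tflite model file and build an interpreter.
bool initTFLite(const std::string &modelPath,
                std::unique_ptr<tflite::FlatBufferModel> &model,
                std::unique_ptr<tflite::Interpreter> &interpreter)
{
    model = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());
    if (!model) return false;

    tflite::ops::builtin::BuiltinOpResolver resolver;
    if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk)
        return false;

    return interpreter->AllocateTensors() == kTfLiteOk;
}

// Illustrative inference step for a quantized model: copy an RGB frame
// (already resized to the model input size, uint8, H x W x 3) into the
// input tensor and run the network. Outputs are read afterwards.
bool runInference(tflite::Interpreter &interpreter, const uint8_t *rgb, int numBytes)
{
    std::memcpy(interpreter.typed_input_tensor<uint8_t>(0), rgb, numBytes);
    return interpreter.Invoke() == kTfLiteOk;
}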
Neural Network Models for Image Classification and Object Detection
Google provides a set of pre-trained neural network models to perform image classification and object detection tasks. The file extension for TensorFlow Lite neural network models is .tflite.
This example already includes MobileNet models: MobileNet V1 224 for image classification and SSD Mobilenet v1 Quantized 300 for object detection. MobileNet is a class of efficient neural network models for mobile and embedded vision applications.
To get an idea of how the pre-trained neural networks perform on different platforms and devices, or to evaluate your own networks, have a look at TensorFlow Lite Performance.
Image Classification Models
Image classification models can be downloaded from the TensorFlow Lite – List of Hosted Models. Our example code is designed for MobileNet neural networks. For example, download mobilenet_v1_1.0_224.tgz, uncompress it, and copy the mobilenet_v1_1.0_224.tflite file into our assets folder as image_classification.tflite. Labels for these models are already provided in the image_classification_labels.txt file and correspond to the ImageNet classes.
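As an illustration of how these labels and the model output can be used together, the sketch below reads the labels file and picks the top-scoring class. The names readLabels and topClass are illustrative, and the uint8 output layout assumes the quantized MobileNet model mentioned above.

#include <cstdint>
#include <fstream>
#include <string>
#include <utility>
#include <vector>
#include "tensorflow/lite/interpreter.h"

// Illustrative: read one label per line, e.g. from image_classification_labels.txt.
std::vector<std::string> readLabels(const std::string &path)
{
    std::vector<std::string> labels;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line))
        labels.push_back(line);
    return labels;
}

// Illustrative: return the index and confidence of the best class for a
// quantized MobileNet, whose output is one uint8 score (0..255) per label.
std::pair<int, double> topClass(tflite::Interpreter &interpreter, int numLabels)
{
    const uint8_t *scores = interpreter.typed_output_tensor<uint8_t>(0);
    int best = 0;
    for (int i = 1; i < numLabels; ++i)
        if (scores[i] > scores[best])
            best = i;
    return {best, scores[best] / 255.0};
}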
Object Detection Models
Currently, there are no official object detection models listed on the TensorFlow Lite – List of Hosted Models. Nevertheless, you can generate .tflite models from quantized models in the TensorFlow Model Zoo using the toco tool, as described in Training and serving a realtime mobile object detector in 30 minutes with Cloud TPUs. You can also train your own custom models as described in the TensorFlow Lite Developer Guide. In this example, the SSD MobileNet V1 Quantized COCO model from the TensorFlow Model Zoo has been converted to .tflite using the toco tool.
Any SSD MobileNet model can be used in the given example. This kind of model provides a caption, a confidence value and a bounding box for each detected object. The .tflite file must be named object_detection.tflite and placed in our assets folder. Labels for this kind of model are already provided in the object_detection_labels.txt file and correspond to the COCO labels.
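To illustrate how the detection results can be read back from such a model, the sketch below extracts bounding boxes, class indices and scores from the interpreter outputs. The name readDetections is illustrative, and the output tensor ordering (boxes, classes, scores, detection count) is the layout commonly produced when converting SSD models.

#include <vector>
#include "tensorflow/lite/interpreter.h"

// One detected object: normalized bounding box, class index and confidence.
struct Detection {
    float ymin, xmin, ymax, xmax;
    int classIndex;
    float score;
};

// Illustrative: read the four output tensors of a converted SSD model.
// Output order is typically: 0 = boxes [1,N,4], 1 = classes [1,N],
// 2 = scores [1,N], 3 = number of detections [1].
std::vector<Detection> readDetections(tflite::Interpreter &interpreter,
                                      float minScore = 0.5f)
{
    const float *boxes   = interpreter.typed_output_tensor<float>(0);
    const float *classes = interpreter.typed_output_tensor<float>(1);
    const float *scores  = interpreter.typed_output_tensor<float>(2);
    const int count      = static_cast<int>(*interpreter.typed_output_tensor<float>(3));

    std::vector<Detection> result;
    for (int i = 0; i < count; ++i) {
        if (scores[i] < minScore) continue;
        result.push_back({boxes[4*i], boxes[4*i + 1], boxes[4*i + 2], boxes[4*i + 3],
                          static_cast<int>(classes[i]), scores[i]});
    }
    return result;
}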