# tflite-react-native

A React Native library for accessing the TensorFlow Lite API. Supports Classification, Object Detection, Deeplab and PoseNet on both iOS and Android.
## Installation

```shell
$ npm install tflite-react-native --save
```
### iOS (only)

TensorFlow Lite is installed using CocoaPods:

1. Initialize Pod:

   ```shell
   cd ios
   pod init
   ```

2. Open Podfile and add:

   ```ruby
   target '[your project's name]' do
     pod 'TensorFlowLite', '1.12.0'
   end
   ```

3. Install:

   ```shell
   pod install
   ```
### Automatic link

```shell
$ react-native link tflite-react-native
```
### Manual link

#### iOS

- In XCode, in the project navigator, right click `Libraries` ➜ `Add Files to [your project's name]`
- Go to `node_modules` ➜ `tflite-react-native` and add `TfliteReactNative.xcodeproj`
- In XCode, in the project navigator, select your project. Add `libTfliteReactNative.a` to your project's `Build Phases` ➜ `Link Binary With Libraries`
- Run your project (`Cmd+R`)
#### Android

- Open up `android/app/src/main/java/[...]/MainApplication.java`
- Add `import com.reactlibrary.TfliteReactNativePackage;` to the imports at the top of the file
- Add `new TfliteReactNativePackage()` to the list returned by the `getPackages()` method
- Append the following lines to `android/settings.gradle`:

  ```gradle
  include ':tflite-react-native'
  project(':tflite-react-native').projectDir = new File(rootProject.projectDir, '../node_modules/tflite-react-native/android')
  ```

- Insert the following lines inside the dependencies block in `android/app/build.gradle`:

  ```gradle
  compile project(':tflite-react-native')
  ```
## Add models to the project

### iOS

In XCode, right click on the project folder, click `Add Files to "xxx"...`, select the model and label files.
### Android

1. In Android Studio (1.0 & above), right-click on the `app` folder and go to New > Folder > Assets Folder. Click Finish to create the assets folder.
2. Place the model and label files at `app/src/main/assets`.
3. In `android/app/build.gradle`, add the following setting in the `android` block:

   ```gradle
   aaptOptions {
     noCompress 'tflite'
   }
   ```
## Usage

```js
import Tflite from 'tflite-react-native';

let tflite = new Tflite();
```
Load model:

```js
tflite.loadModel({
  model: 'models/mobilenet_v1_1.0_224.tflite', // required
  labels: 'models/mobilenet_v1_1.0_224.txt',   // required
  numThreads: 1,
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
Image classification:

```js
tflite.runModelOnImage({
  path: imagePath, // required
  imageMean: 128.0,
  imageStd: 128.0,
  numResults: 3,
  threshold: 0.05,
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
- Output format:

  ```
  {
    index: 0,
    label: "person",
    confidence: 0.629
  }
  ```
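The classification call returns a list of such objects. As an illustrative sketch (plain JavaScript, not part of the library; the array shape is an assumption based on the output format above), picking the label with the highest confidence might look like this:

```javascript
// Pick the highest-confidence entry from classification results shaped
// like the output above (assumed here to arrive as a plain array).
function topResult(results) {
  return results.reduce((best, r) => (r.confidence > best.confidence ? r : best));
}

const results = [
  { index: 0, label: 'person', confidence: 0.629 },
  { index: 5, label: 'bicycle', confidence: 0.121 },
];
console.log(topResult(results).label); // person
```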
Object detection:

SSD MobileNet:

```js
tflite.detectObjectOnImage({
  path: imagePath,
  model: 'SSDMobileNet',
  imageMean: 127.5,
  imageStd: 127.5,
  threshold: 0.3,
  numResultsPerClass: 2,
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
Tiny YOLOv2:

```js
tflite.detectObjectOnImage({
  path: imagePath,
  model: 'YOLO',
  imageMean: 0.0,
  imageStd: 255.0,
  threshold: 0.3,
  numResultsPerClass: 2,
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
- Output format:

  `x, y, w, h` are between [0, 1]. You can scale `x, w` by the width and `y, h` by the height of the image.

  ```
  {
    detectedClass: "hot dog",
    confidenceInClass: 0.123,
    rect: {
      x: 0.15,
      y: 0.33,
      w: 0.80,
      h: 0.27
    }
  }
  ```
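The scaling described above can be sketched in plain JavaScript (not part of the library's API; the 640x480 image size is a hypothetical example):

```javascript
// Convert a detection rect with [0, 1] coordinates (shaped like the
// output above) to pixel coordinates of the source image.
function rectToPixels(rect, imageWidth, imageHeight) {
  return {
    left: Math.round(rect.x * imageWidth),
    top: Math.round(rect.y * imageHeight),
    width: Math.round(rect.w * imageWidth),
    height: Math.round(rect.h * imageHeight),
  };
}

const detection = {
  detectedClass: 'hot dog',
  confidenceInClass: 0.123,
  rect: { x: 0.15, y: 0.33, w: 0.80, h: 0.27 },
};
console.log(rectToPixels(detection.rect, 640, 480));
// { left: 96, top: 158, width: 512, height: 130 }
```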
Deeplab

```js
tflite.runSegmentationOnImage({
  path: imagePath,
  imageMean: 127.5,
  imageStd: 127.5,
  outputType: 'png',
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
- Output format:

  The output of Deeplab inference is of `Uint8List` type. Depending on the `outputType` used, the output is:

  - (if outputType is png) byte array of a png image
  - (otherwise) byte array of r, g, b, a values of the pixels
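For the non-png case, the bytes form a flat r, g, b, a sequence, one 4-byte group per pixel. A minimal sketch of indexing into such an array, assuming row-major pixel order and a known image width (both assumptions for illustration, not documented API):

```javascript
// Read the r, g, b, a values of the pixel at (x, y) from a flat RGBA
// byte array, assuming row-major layout and 4 bytes per pixel.
function pixelAt(bytes, width, x, y) {
  const i = (y * width + x) * 4;
  return { r: bytes[i], g: bytes[i + 1], b: bytes[i + 2], a: bytes[i + 3] };
}

// Toy 2x2 image: pixel (1, 0) is opaque red.
const bytes = Uint8Array.from([
  0, 0, 0, 255,  255, 0, 0, 255,
  0, 0, 0, 255,  0, 0, 0, 255,
]);
console.log(pixelAt(bytes, 2, 1, 0)); // { r: 255, g: 0, b: 0, a: 255 }
```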
PoseNet

The model is from a StackOverflow thread.

```js
tflite.runPoseNetOnImage({
  path: imagePath,
  imageMean: 127.5,
  imageStd: 127.5,
  numResults: 2,
  threshold: 0.8,
  nmsRadius: 20,
},
(err, res) => {
  if (err)
    console.log(err);
  else
    console.log(res);
});
```
- Output format:

  `x, y` are between [0, 1]. You can scale `x` by the width and `y` by the height of the image.
```
[ // array of poses/persons
  { // pose #1
    score: 0.6324902,
    keypoints: {
      0: {
        x: 0.250,
        y: 0.125,
        part: nose,
        score: 0.9971070
      },
      1: {
        x: 0.230,
        y: 0.105,
        part: leftEye,
        score: 0.9978438
      },
      ......
    }
  },
  { // pose #2
    score: 0.32534285,
    keypoints: {
      0: {
        x: 0.402,
        y: 0.538,
        part: nose,
        score: 0.8798978
      },
      1: {
        x: 0.380,
        y: 0.513,
        part: leftEye,
        score: 0.7090239
      },
      ......
    }
  },
  ......
]
```
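The keypoint scaling described above can be sketched like this (plain JavaScript, not part of the library; the image size is a hypothetical example):

```javascript
// Scale a pose's normalized keypoints ([0, 1] range, shaped like the
// output above) to pixel coordinates of the source image.
function keypointsToPixels(keypoints, imageWidth, imageHeight) {
  const scaled = {};
  for (const [id, kp] of Object.entries(keypoints)) {
    scaled[id] = { ...kp, x: kp.x * imageWidth, y: kp.y * imageHeight };
  }
  return scaled;
}

const pose = {
  score: 0.6324902,
  keypoints: {
    0: { x: 0.250, y: 0.125, part: 'nose', score: 0.9971070 },
    1: { x: 0.230, y: 0.105, part: 'leftEye', score: 0.9978438 },
  },
};
const px = keypointsToPixels(pose.keypoints, 640, 480);
console.log(px[0].x, px[0].y); // 160 60
```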
Release resources:

```js
tflite.close();
```
## Example

Refer to the example.