Getting Started with the Google Coral Edge TPU USB Accelerator

Coral Beta
The TPU (Tensor Processing Unit) is mainly used in Google's data centers. General users can access it through Google Cloud Platform (GCP), or use it for free through Google Colab.
Google first demonstrated its Edge TPU at CES 2019 (and again at this year's TensorFlow Dev Summit), and then released the Coral beta in March.
The beta lineup includes a development board and a USB accelerator, as well as preview versions of a PCI-E accelerator and a system-on-module (SOM) for production use.
USB Accelerator
The Edge TPU USB Accelerator plugs in like any other USB device and is similar to Intel's Movidius Myriad VPU, but more powerful. Next, let's unpack it and take a look.
Unpacking
The box contains:
Getting started guide
USB Accelerator
USB Type-C data cable
Getting started
The getting started guide describes the installation steps so that you can complete the setup quickly. All required files, including model files, are downloaded from the official website together with the installation package. The installation process does not require TensorFlow or OpenCV as dependencies.
Tip: You must use Python 3.5, otherwise the installation cannot be completed. You also need to change the last line of install.sh from python3.5 setup.py develop --user to python3 setup.py develop --user.
Demo program
The Coral Edge TPU API documentation includes overviews of image classification and object detection, as well as demo programs.
Edge TPU API
Before working through the tutorial, note the following about the Edge TPU API:
You need to install the Python edgetpu module to run TensorFlow Lite models on the Edge TPU. It is a high-level library that wraps model inference in a few simple calls.
These APIs come pre-installed on the Dev Board, but if you use the USB Accelerator you need to install them yourself; refer to the setup guide for details.
The inference process uses the following key APIs: ClassificationEngine for image classification, DetectionEngine for object detection, and ImprintingEngine for transfer learning.
Image classification
The demo that implements image classification is very simple.
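The classification demo boils down to a few lines. The sketch below is an illustration, not the official sample: the model and label file names are placeholders, and the ClassifyWithImage signature follows the beta edgetpu Python API, which may change in later releases. The edgetpu import is kept inside the function so the helper code runs even on a machine without a Coral device.

```python
# Sketch of the image-classification demo (beta edgetpu API assumed).
MODEL = "mobilenet_v2_1.0_224_quant_edgetpu.tflite"  # placeholder file name
LABELS = "imagenet_labels.txt"                       # placeholder file name

def load_labels(path):
    """Parse 'id  name' lines from a label file into an {id: name} dict."""
    labels = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) == 2:
                labels[int(parts[0])] = parts[1]
    return labels

def classify(image_path, model=MODEL, labels_path=LABELS):
    """Run an image through the Edge TPU and return (name, score) pairs."""
    from PIL import Image
    from edgetpu.classification.engine import ClassificationEngine

    engine = ClassificationEngine(model)
    labels = load_labels(labels_path)
    img = Image.open(image_path)
    # Returns [(label_id, confidence), ...] for the best top_k classes.
    results = engine.ClassifyWithImage(img, threshold=0.1, top_k=2)
    return [(labels.get(label_id, str(label_id)), score)
            for label_id, score in results]
```

Calling classify("parrot.jpg") would return something like [("macaw", 0.99), ("toucan", 0.01)], matching the top-2 output described later in this article.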
Object Detection
As with image classification, we only need to call the DetectionEngine interface to detect objects in the input image and mark them with bounding boxes.
Since the default configuration produces false positives, we can raise the threshold in the default sample program from 0.05 to 0.5 and widen the bounding-box outline to 5 pixels.
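The adjustments above can be sketched as follows. This is a hedged reconstruction, not the official sample: the model file name is a placeholder and the DetectWithImage signature follows the beta edgetpu API. The threshold-filtering logic is pulled into a plain helper so it can be exercised without Coral hardware.

```python
# Sketch of the object-detection demo with the adjusted settings:
# detection threshold raised from 0.05 to 0.5, box outline widened to 5 px.
def keep_confident(candidates, threshold=0.5):
    """Keep only (score, box) candidates whose score meets the threshold."""
    return [c for c in candidates if c[0] >= threshold]

def detect_and_draw(image_path,
                    model="mobilenet_ssd_v2_coco_quant_edgetpu.tflite"):
    """Detect objects and outline each one with a 5-pixel-wide rectangle."""
    from PIL import Image, ImageDraw
    from edgetpu.detection.engine import DetectionEngine

    engine = DetectionEngine(model)
    img = Image.open(image_path)
    # threshold=0.5 drops the low-confidence candidates that the default
    # value of 0.05 lets through as false positives.
    candidates = engine.DetectWithImage(img, threshold=0.5, top_k=10,
                                        keep_aspect_ratio=True,
                                        relative_coord=False)
    draw = ImageDraw.Draw(img)
    for obj in candidates:
        (x1, y1), (x2, y2) = obj.bounding_box
        draw.rectangle([x1, y1, x2, y2], outline="red", width=5)
    return img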
Since Coral is still in beta, the API documentation is not yet complete, but what is documented so far is sufficient for the examples above.
Precautions
All the code, models and label files for the demos above are downloaded from the official website together with the library files included in the installation package. With the models and label files provided so far, we can complete the classification and detection tasks.
For classification tasks, the result returns the top two predicted categories with their confidence scores. For object detection tasks, the result returns the confidence scores and the corner coordinates of the bounding boxes. If a label file is supplied as input, the result also contains the category names.
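The result shapes described above can be illustrated in plain Python, so this snippet runs without a Coral device. The tuple and dict layouts are assumptions modeled on the demo output, not the exact objects the edgetpu engines return.

```python
# Illustration of the result formats: top-2 classification output and
# detection output with scores, box corners and optional category names.
def top_k(scored, k=2):
    """Return the k (name, confidence) pairs with the highest confidence."""
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Classification: top-2 categories with confidence scores.
classified = [("tabby cat", 0.77), ("tiger cat", 0.12), ("remote", 0.01)]
print(top_k(classified))  # [('tabby cat', 0.77), ('tiger cat', 0.12)]

# Detection: confidence plus the corner coordinates of each box, and a
# category name when a label file was supplied.
detections = [
    {"label": "person", "score": 0.91, "box": (48, 12, 220, 310)},
    {"label": "dog",    "score": 0.63, "box": (230, 150, 400, 330)},
]
for d in detections:
    print(d["label"], d["score"], d["box"])
```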
Further development
With the help of Edge TPU, what other products can Coral offer?
Dev Board
For the development board, Google chose the NXP i.MX 8M SoC (quad-core Cortex-A53 plus Cortex-M4F).
For experimentation, especially when only the Edge TPU is needed, we recommend the USB Accelerator.
Follow-up development
What if you have already made a good prototype using the Dev Board or USB Accelerator, but you need to apply the same code to a large-scale production environment in the future?
Google has thought about this in advance. As the product list shows, the following modules are intended for enterprise support and are marked as coming soon.
System-on-module (SOM)
This is a fully integrated system (including CPU, GPU, Edge TPU, Wi-Fi, Bluetooth and a secure element) on a pluggable 40 mm x 40 mm module.
This module is intended for large-scale production. Manufacturers can build their own I/O boards according to the guidelines provided for the module. Even the Dev Board mentioned above contains this detachable module; in theory, it can be removed and used on its own.
PCI-E accelerator
There is very little information about the PCI-E accelerator, but as the name suggests, it is a module that connects over PCI-E (Peripheral Component Interconnect Express). It comes in two variants and is similar to the USB Accelerator, except that the USB interface is replaced by PCI-E, so it plugs in like a memory stick or a network card.
With the birth of various peripheral modules, it is expected that some enterprise-level projects will also be born. Google Coral thinks so too, with the following statement on their website:
Flexible and easy to use, precisely tailored for both startups and large enterprises.
TensorFlow and the Coral project
Most of Google's products are related to TensorFlow. At present, the Edge TPU only supports traditional TensorFlow Lite models, and the stable version of TensorFlow Lite has only just been released.
Currently, you need to convert a .tflite model into an Edge TPU-compatible .tflite model through a web compiler. Don't worry if you use PyTorch or another framework: you can use ONNX to convert your model to a TensorFlow model.
