Google Coral, Intel Movidius
You might consider using Edge AI accelerators like the Google Coral (Edge TPU) Dev Board or the Intel Movidius stick. If you have to use a Raspberry Pi and want to add AI functions, a USB-type Coral or an Intel Movidius stick might be your choice. But if you are just entering Edge AI, consider other SBCs like the Jetson Nano. The Raspberry Pi, Google Coral Dev Board, and Jetson Nano all use ARM Cortex CPUs and Linux distros like Debian and Ubuntu, so it's very easy to migrate between boards.
<Coral Dev Board | Raspberry Pi + Movidius Stick | Jetson Nano>
Compatibility comparison
Nvidia has provided some performance data (https://devblogs.nvidia.com/jetson-nano-ai-computing/) on these products. Of course, Nvidia may be presenting material that favors them. Look closely at the orange boxes: DNR (did not run) results occurred frequently due to limited memory capacity, unsupported network layers, or other hardware/software limitations. The Jetson Nano runs TensorFlow, Caffe, PyTorch, MXNet, and Darknet (YOLO) successfully, because the Jetson series has a CUDA GPU that most AI frameworks support. In terms of compatibility, the Google Coral and Intel Movidius stick can't catch up with the Jetson series.
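As a quick illustration (this snippet is my own sketch, assuming NVIDIA's pre-built CUDA-enabled PyTorch wheel for Jetson is installed), you can check that a framework actually sees the Jetson's GPU:

```python
import torch

# True only when the framework is built against CUDA and a GPU is present.
# On a Coral board or a Movidius stick this code path simply does not exist.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the Nano's integrated Maxwell (Tegra X1) GPU
```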
Performance of Jetson Nano
Nvidia shows the performance of the Jetson Nano in this graph. Wow! This is an amazing result: the graph shows OpenPose running at 14 FPS. You can get the OpenPose source code from https://github.com/CMU-Perceptual-Computing-Lab/openpose and run the Python sample code, roughly as sketched below.
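Here is a minimal sketch of that sample usage, assuming OpenPose was built with its Python bindings enabled; the model folder and image paths are placeholders, and the exact binding call differs slightly between OpenPose versions:

```python
import cv2
import pyopenpose as op  # available after building OpenPose with the Python API enabled

# Point OpenPose at its downloaded model files (path is a placeholder).
opWrapper = op.WrapperPython()
opWrapper.configure({"model_folder": "/path/to/openpose/models/"})
opWrapper.start()

# Run pose estimation on a single image.
datum = op.Datum()
datum.cvInputData = cv2.imread("person.jpg")
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

print(datum.poseKeypoints)  # array of (people, keypoints, x/y/score)
cv2.imwrite("person_rendered.jpg", datum.cvOutputData)
```

If you time a loop like this on a stock Jetson Nano, you will probably never achieve the 14 FPS shown above. Why?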
The performance in the graph above is highly optimized using TensorRT and C/C++ code. https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/ explains this in detail.
Keep in mind that the graph above is not false, but it is not something you can easily reproduce with the sample source code.
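To give a feel for what that TensorRT optimization path looks like, below is a minimal sketch using NVIDIA's torch2trt wrapper (https://github.com/NVIDIA-AI-IOT/torch2trt, installed separately); ResNet-18 here is just a stand-in for whatever network you want to accelerate, not the network from the benchmark:

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Any PyTorch model in eval mode on the GPU; ResNet-18 is only a stand-in.
model = resnet18(pretrained=True).eval().cuda()

# An example input fixes the engine's input shape.
x = torch.ones((1, 3, 224, 224)).cuda()

# Build a TensorRT engine; FP16 mode gives a large speedup on the Nano's GPU.
model_trt = torch2trt(model, [x], fp16_mode=True)

# The converted module is called like a regular PyTorch model.
y = model_trt(x)
print(torch.max(torch.abs(y - model(x))))  # rough check of the conversion error
```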
This graph compares the results above with those of other Edge AI products.
Even though the Coral Dev Board has the raw performance, it doesn't support many models, because the current Coral supports only 8-bit integer (INT8) quantized models in order to save memory and speed up computation. Many deep learning network models are trained in FP32 (32-bit floating point), and FP32 models can be converted to FP16 without much accuracy loss; converting to INT8, however, requires a separate quantization step and can cost accuracy. You can see more detailed information about the Google Coral here.
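For reference, this is roughly what preparing a model for the Edge TPU looks like: a minimal sketch of TensorFlow Lite post-training full-integer quantization. The saved-model path and the random representative dataset are placeholders (real calibration should feed a few hundred actual input samples), and the resulting .tflite file must still be compiled with Google's edgetpu_compiler:

```python
import tensorflow as tf

# Load a trained FP32 model (path is a placeholder).
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# The converter calibrates INT8 value ranges from sample inputs;
# random tensors are only for illustration -- feed real data here.
def representative_data_gen():
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3))]

converter.representative_dataset = representative_data_gen

# Force full-integer quantization, as the Edge TPU requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```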