Tuesday, October 1, 2019

Why am I interested in Jetson Series?

I have been a big fan of the Raspberry Pi for a long time. The Raspberry Pi is a very stable system with a wide community, making it suitable for beginners. But now that artificial intelligence has become popular, demand is growing for SBCs (Single Board Computers) with more computing power. AI computing power depends heavily on the GPU. In particular, most machine learning frameworks support compute acceleration through NVIDIA CUDA. So without a proper GPU (or TPU), it is very hard to develop and run AI-related programs on an SBC like the Raspberry Pi.

Google Coral, Intel Movidius

You might consider Edge AI accelerators like the Google Coral (Edge TPU) Dev Board or the Intel Movidius stick. If you have to use a Raspberry Pi and want to add AI functions, a USB-type Coral or Intel Movidius stick might be your choice. But if you want to enter Edge AI development, consider other SBCs like the Jetson Nano. The Raspberry Pi, Google Coral Dev Board, and Jetson Nano all use ARM Cortex CPUs and Linux distros like Debian and Ubuntu, so migrating between these boards is very easy.


[Images: Coral Dev Board · Raspberry Pi + Movidius Stick · Jetson Nano]


Compatibility comparison

NVIDIA has published performance data (https://devblogs.nvidia.com/jetson-nano-ai-computing/) for these products. Of course, NVIDIA may present data that favors its own products.



Look closely at the orange box. DNR (did not run) results occurred frequently due to limited memory capacity, unsupported network layers, or hardware/software limitations. The Jetson Nano successfully runs TensorFlow, Caffe, PyTorch, MXNet, and Darknet (YOLO), because the Jetson series has a CUDA GPU that most AI frameworks support. In terms of compatibility, the Google Coral and Intel Movidius stick can't match the Jetson series.


Performance of Jetson Nano

NVIDIA shows the performance of the Jetson Nano in this graph.


Wow! This is an amazing result. The graph shows OpenPose running at 14 FPS. You can build OpenPose from the sources at https://github.com/CMU-Perceptual-Computing-Lab/openpose and run the Python sample code, but you will probably never reach the performance shown above. Why?
The numbers in the graph come from a highly optimized C/C++ implementation using TensorRT. https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/ explains this in detail.
Keep in mind that the graph above is not false, but it is not something you can easily reproduce with the sample source code.

This graph compares the results above with other Edge AI products.
Even though the Coral Dev Board has competitive performance, it doesn't support many models, because the current Coral only runs 8-bit integer (INT8) quantized models, which save memory and speed up computation. Many deep learning models are trained in FP32 (32-bit floating point), and FP32 models can be converted to FP16 without much accuracy loss, but running them on Coral requires an extra INT8 quantization step. You can see more detailed information about the Google Coral here.
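The precision difference can be sketched numerically. The snippet below is a minimal illustration using made-up weight values and a simple symmetric quantization scheme, not Coral's or TensorRT's actual conversion pipeline; it shows why an FP16 cast is nearly lossless while INT8 is much coarser:

```python
import numpy as np

# Hypothetical FP32 "weights", roughly the scale of a trained layer.
rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.1, size=1000).astype(np.float32)

# FP16 conversion: just a cast, usually near-lossless for inference.
w_fp16 = w_fp32.astype(np.float16).astype(np.float32)

# Simple symmetric INT8 quantization: map the FP32 range to 255 levels.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

fp16_err = np.abs(w_fp32 - w_fp16).max()
int8_err = np.abs(w_fp32 - w_dequant).max()
print(f"max FP16 round-trip error: {fp16_err:.6f}")
print(f"max INT8 round-trip error: {int8_err:.6f}")
assert fp16_err < int8_err  # INT8 is much coarser than FP16
```

Real INT8 deployment (as on Coral) additionally needs calibration or quantization-aware training to keep accuracy, which is why only models prepared for INT8 run there, while a CUDA GPU can simply execute the FP32 or FP16 model as-is.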

Conclusion

If you are interested in Edge AI development and want to run various AI frameworks, the Jetson series may be the best choice until new and more capable Edge AI products come out.









