Tuesday, December 10, 2019

Jetson Nano - Hello AI World (NVIDIA DNN vision library) - 1. Introduction

So far, I have used frameworks such as TensorFlow, PyTorch, and OpenPose for tasks like pose estimation and object recognition. I focused on TensorFlow, OpenPose, and PyTorch because they are popular, and many people who use them want to port their solutions to the Jetson series. In particular, I looked at how to convert TensorFlow models to TensorRT and compared the inference speed of the original and converted models. However, except in a few cases, TensorRT did not deliver the performance improvement I expected. Personally, I think a model needs to run at more than 10 FPS to be usable in a real project. Having covered some of the popular frameworks, let's now take a look at the high-performance framework NVIDIA offers for the Jetson series.

Hello AI World



"Hello AI WORLD" is provided by NVIDIA to help Jetson series users develop real-time vision processing with deep learning. In particular, this repo uses TensorRT for high performance in the Jetson series. Regardless of PyTorch, Tensorflow, etc., it is made with NVIDIA's own framework, so you can expect performance improvement.

Be careful: NVIDIA updates this repository's content and sample files from time to time, so it is helpful to check back occasionally for changes.

 


The latest examples have been updated to allow development in Python as well as C++.


Prerequisites


Prepare a Jetson Nano SD card image (JetPack installed). See my blog.
If you are using a Jetson TX2, see my other blog.


Building the Project from Source

As always, I will use Python 3, so install the development packages for Python 3.



$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ cd /usr/local/src
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j4
$ sudo make install
$ sudo ldconfig

While "cmake ../ " process, you may encounter this screen.
Choose the models you want to install. You can download them later from this site(https://github.com/dusty-nv/jetson-inference/releases).



Tip: You can run this model download tool later. It can be found in the "jetson-inference/tools" directory.
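
For example, you could run it again later like this (the script name download-models.sh is an assumption based on the repo layout at the time of writing and may differ between releases):

$ cd jetson-inference/tools
$ ./download-models.sh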

Next, choose the PyTorch version. Since we are using Python 3, choose the PyTorch package for Python 3.6. If you have already installed PyTorch, just click the Quit button.


Tip: You can run this PyTorch installation tool later. It can be found in the "jetson-inference/tools" directory.

$ cd jetson-inference/tools
$ ./install-pytorch.sh


The project will be built to jetson-inference/build/aarch64, with the following directory structure:



|-build
   \aarch64
      \bin             where the sample binaries are built to
         \networks     where the network models are stored
         \images       where the test images are stored
      \include         where the headers reside
      \lib             where the libraries are built to

In the build tree, you can find the binaries residing in build/aarch64/bin/, headers in build/aarch64/include/, and libraries in build/aarch64/lib/. These also get installed under /usr/local/ during the sudo make install step.
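As a quick sanity check, you can run one of the sample binaries against a bundled test image. The sample name and image below (imagenet-console, orange_0.jpg) match the release current at the time of writing and may differ in newer versions:

$ cd jetson-inference/build/aarch64/bin
$ ./imagenet-console images/orange_0.jpg output_0.jpg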

The Python bindings for the jetson.inference and jetson.utils modules also get installed during the sudo make install step under /usr/lib/python*/dist-packages/. If you update the code, remember to run sudo make install again.
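
A minimal check that the bindings are visible to Python 3 (assuming the install step completed without errors):

$ python3 -c "import jetson.inference, jetson.utils; print('bindings OK')"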



