Thursday, October 3, 2019

JetsonNano - Human Pose estimation using OpenPose

Last updated 2021.02.11: updated for JetPack 4.5
 

 
OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose) is one of the most popular pose estimation frameworks. You can install OpenPose on the Jetson Nano.

As you can see from the article below, OpenPose 1.5 could not be properly installed on the JetPack 4.4 Production Release. However, the installation manual of OpenPose 1.7.0, released in November 2020, notes that Caffe has been partially modified so that it can be built with CUDA 10.
So I tried installing the latest version, OpenPose 1.7.0, on JetPack 4.5. It was a success. And I quickly realized that JetPack 4.4 and 4.5 do not differ significantly in CUDA. Therefore, I thought it might also be possible to install the latest OpenPose on the JetPack 4.4 Production Release. So I tried installing OpenPose 1.7.0 once again on a Jetson Nano running JetPack 4.4. That was a success too.

JetPack 4.5 Users

If you use JetPack 4.5 and want to install the newest OpenPose (1.7.0), please see:

JetPack 4.4 Users

If you use JetPack 4.4 and want to install the newest OpenPose (1.7.0), please see:
Although the article in the link above describes JetPack 4.5, the same procedure works on JetPack 4.4.

JetPack 4.3 Users

I used a Jetson Nano with the official Ubuntu 18.04 image and a root account.
If you are using a Jetson Xavier NX, please refer to https://spyjetson.blogspot.com/2020/07/jetson-xavier-nx-human-pose-estimation.html instead.

Prerequisites

Before you build OpenPose, you must pre-install these packages. If you use JetPack 4.3 or higher, OpenCV 4.1.1 is already included; if you use JetPack 4.2 or below, see the URL to install OpenCV 4.1.1.
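A quick way to confirm which OpenCV your Python environment sees (a minimal check; on JetPack 4.3 or higher it should print 4.1.1):

python3 -c "import cv2; print(cv2.__version__)"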

cmake version check

To build OpenPose on the Jetson Nano, you need cmake version 3.12.2 or higher. First, check the cmake version.
cmake --version

If your Jetson Nano's cmake version is lower than 3.12.2, remove the old cmake and build a newer one from source. Check the latest version at https://github.com/Kitware/CMake/releases . At the time of writing (2020.05.01), 3.17.2 is the latest cmake version.

apt-get install libssl-dev libcurl4-openssl-dev
apt-get remove cmake
cd /usr/local/src
wget https://github.com/Kitware/CMake/releases/download/v3.17.2/cmake-3.17.2.tar.gz
tar -xvzf cmake-3.17.2.tar.gz
cd cmake-3.17.2
./bootstrap
make -j4
make install


Then restart your ssh session.
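After the new session starts, verify that the rebuilt cmake is the one on your PATH (the exact output depends on the version you built):

which cmake          # typically /usr/local/bin/cmake after a source build
cmake --version      # should now report 3.12.2 or higher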

Install OpenPose for JetPack

Install OpenPose for JetPack 4.3

Installation is not difficult; follow these steps. OpenPose uses Caffe as its deep learning framework, and these steps install the Caffe framework too. Don't forget the final steps needed to use Python. JetPack 4.3 will use OpenPose 1.5.1: at the time of the JetPack 4.3 release, OpenPose was at version 1.5.1, and I have not tested higher versions of OpenPose on it.


cd /usr/local/src
wget https://github.com/CMU-Perceptual-Computing-Lab/openpose/archive/v1.5.1.tar.gz
tar -xvzf v1.5.1.tar.gz
cd openpose-1.5.1
bash ./scripts/ubuntu/install_deps.sh
mkdir build
cd build
cmake -D BUILD_EXAMPLES=ON  -D BUILD_PYTHON=ON  -D USE_OPENCV=ON  ..
make -j4
make install
#==== python build ====
# don't run "make install" here, because it installs the openpose module into the Python 2.7 directory
cd python
make -j4

Install OpenPose for JetPack 4.4(Developer Preview)

If you built OpenPose on JetPack 4.3 as in the previous article, there is no problem, but on JetPack 4.4 the build fails during the cmake step: cmake cannot find the cuDNN version. This is because the cuDNN upgrade changed which header file must be checked.

Warning: In previous versions, the CUDNN_MAJOR value was defined in the /usr/include/cudnn.h file. In JetPack 4.4, this definition has moved to /usr/include/cudnn_version.h, which is why the error occurs. OpenPose should fix this upstream, but for now we have to patch it ourselves.
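You can confirm where the version macro lives on your system before patching (the value printed depends on your JetPack release):

grep CUDNN_MAJOR /usr/include/cudnn_version.h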

The cmake command below sets CUDA_ARCH_BIN=7.2, which is the value for the Jetson Xavier NX; on the Jetson Nano, set it to 5.3 instead. The compute capability of each GPU is listed at https://developer.nvidia.com/cuda-gpus. JetPack 4.4 will use OpenPose 1.6.0; I have not tested higher versions of OpenPose on it.


cd /usr/local/src
wget https://github.com/CMU-Perceptual-Computing-Lab/openpose/archive/v1.6.0.tar.gz
tar -xvzf v1.6.0.tar.gz
cd openpose-1.6.0
bash ./scripts/ubuntu/install_deps.sh
mkdir build
cd build
sudo cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
-D CUDA_HOST_COMPILER=/usr/bin/cc \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
-D CUDA_USE_STATIC_CUDA_RUNTIME=ON \
-D CUDA_rt_LIBRARY=/usr/lib/aarch64-linux-gnu/librt.so \
-D CUDA_ARCH_BIN=7.2 \
-D GPU_MODE=CUDA \
-D DOWNLOAD_FACE_MODEL=ON \
-D DOWNLOAD_COCO_MODEL=ON \
-D USE_OPENCV=ON \
-D BUILD_PYTHON=ON \
-D BUILD_EXAMPLES=ON \
-D BUILD_DOCS=OFF \
-D DOWNLOAD_HAND_MODEL=ON ..

...
...
-- Building with CUDA.
-- CUDA detected: 10.2
-- Found cuDNN: ver. ??? found (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libcudnn.so)
CMake Error at cmake/Cuda.cmake:263 (message):
  cuDNN version >3 is required.
Call Stack (most recent call first):
  cmake/Cuda.cmake:291 (detect_cuDNN)
  CMakeLists.txt:422 (include)

Replace the string "cudnn.h" with "cudnn_version.h" in the files Cuda.cmake and FindCuDNN.cmake, using the following sed commands:

sed -i -e 's/cudnn.h/cudnn_version.h/g' ../cmake/Cuda.cmake
sed -i -e 's/cudnn.h/cudnn_version.h/g' ../cmake/Modules/FindCuDNN.cmake

Now run the cmake command again. This time it works.


cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
-D CUDA_HOST_COMPILER=/usr/bin/cc \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
-D CUDA_USE_STATIC_CUDA_RUNTIME=ON \
-D CUDA_rt_LIBRARY=/usr/lib/aarch64-linux-gnu/librt.so \
-D GPU_MODE=CUDA \
-D DOWNLOAD_FACE_MODEL=ON \
-D DOWNLOAD_COCO_MODEL=ON \
-D USE_OPENCV=ON \
-D BUILD_PYTHON=ON \
-D BUILD_EXAMPLES=ON \
-D BUILD_DOCS=OFF \
-D DOWNLOAD_HAND_MODEL=ON ..
...
...

-- Building with CUDA.
-- CUDA detected: 10.2
-- Found cuDNN: ver. 8.0.0 found (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libcudnn.so)
-- Added CUDA NVCC flags for: sm_72
-- Found cuDNN: ver. 8.0.0 found (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libcudnn.so)
-- Found GFlags: /usr/include
-- Found gflags (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libgflags.so)
-- Found Glog: /usr/include
-- Found glog (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libglog.so)
-- Found Protobuf: /usr/lib/aarch64-linux-gnu/libprotobuf.so;-lpthread (found version "3.0.0")
-- Found OpenCV: /usr (found version "4.1.1")
...
...
-- Models Downloaded.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/spypiggy/src/openpose/build


Caffe's bundled cmake script has the same cudnn.h problem, so patch it too, then build and install:

sed -i -e 's/cudnn.h/cudnn_version.h/g' ../3rdparty/caffe/cmake/Cuda.cmake
sudo make -j4
sudo make install
#==== python build ====
# don't run "make install" here, because it installs the openpose module into the Python 2.7 directory
cd python
make -j4


Install OpenPose for JetPack 4.4(Production Release)

With the update to the JetPack 4.4 Production Release, cuDNN comes in version 8.0.0. In this cuDNN release, some of the existing functions and definitions are no longer supported. This was not abrupt; it had already been announced by NVIDIA. The Caffe framework, which is installed during the OpenPose build, uses some functions and definitions from older cuDNN versions. Therefore, if you follow the JetPack 4.4 DP instructions, an error like the one below occurs. JetPack 4.4 DP and the Production Release both report cuDNN version 8.0.0, but the actual cuDNN contents seem to differ.

/home/spypiggy/src/openpose/3rdparty/caffe/src/caffe/layers/cudnn_conv_layer.cpp:136:7: error: CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT was not declared in this scope
       CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT,
/home/spypiggy/src/openpose/3rdparty/caffe/src/caffe/layers/cudnn_conv_layer.cpp:131:17: error: there are no arguments to cudnnGetConvolutionForwardAlgorithm that depend on a template parameter, so a declaration of cudnnGetConvolutionForwardAlgorithm must be available [-fpermissive]
     CUDNN_CHECK(cudnnGetConvolutionForwardAlgorithm(handle_[0],
/home/spypiggy/src/openpose/3rdparty/caffe/src/caffe/layers/cudnn_conv_layer.cpp:151:11: error: CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT was not declared in this scope
           CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT,
......
<Error messages when building OpenPose on the JetPack 4.4 Production Release>

This is described at https://forums.developer.nvidia.com/t/jetpack-4-4-l4t-r32-4-3-production-release/140870/4. Therefore, until Caffe supports the cuDNN 8.0 shipped with the JetPack 4.4 Production Release, you have to build OpenPose on the JetPack 4.4 DP version. If you want to build OpenPose on the JetPack 4.4 Production Release, you may need to downgrade its cuDNN or use a Caffe fork that supports it. Neither is easy.

Under the hood

Now let's dig deeper.
Be careful: The following Python code works fine with OpenPose 1.5.x but does not work with OpenPose 1.7.0 (released 17 Nov 2020), because OpenPose's Python interface has been partially changed.
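If you do build OpenPose 1.7.0, the main break is the emplaceAndPop() call. Based on the 1.7.0 Python examples, it now takes an op.VectorDatum instead of a plain Python list; a minimal sketch of the change (assuming the 1.7.0 bindings):

datum = op.Datum()
datum.cvInputData = frame                         # frame: a BGR image from cv2
# OpenPose 1.5.x style (used in the code below):
#   opWrapper.emplaceAndPop([datum])
# OpenPose 1.7.0 style:
opWrapper.emplaceAndPop(op.VectorDatum([datum]))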

Run a sample program

Let's run a sample program to check whether OpenPose is properly installed.


root@spypiggy-desktop:/usr/local/src/openpose# ./build/examples/openpose/openpose.bin --video ./examples/media/video.avi
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.

If you see this output, the installation was successful.

This sample program achieves about 0.6 fps on the video.avi clip, which is very poor.
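One knob worth knowing before moving on is the network input resolution, which trades accuracy for speed. A hedged example using OpenPose's --net_resolution flag (the default is -1x368; values must be multiples of 16, and I have not benchmarked this exact setting):

./build/examples/openpose/openpose.bin --video ./examples/media/video.avi --net_resolution 256x144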

Python import path Problem

Before you run the Python samples, first locate the openpose Python module. It's under the "openpose installation directory/build/python/openpose" directory.


spypiggy@spypiggy-desktop:/usr/local/src/openpose/build/python/openpose$ pwd
/usr/local/src/openpose/build/python/openpose
spypiggy@spypiggy-desktop:/usr/local/src/openpose/build/python/openpose$ ls -al
total 332
drwxr-xr-x 4 root root   4096 Oct  3 22:20 .
drwxr-xr-x 4 root root   4096 Oct  3 22:20 ..
drwxr-xr-x 3 root root   4096 Oct  3 22:20 CMakeFiles
-rw-r--r-- 1 root root   3029 Sep 26 22:19 cmake_install.cmake
-rw-r--r-- 1 root root     39 Sep 26 22:19 __init__.py
-rw-r--r-- 1 root root   7793 Sep 26 22:19 Makefile
drwxr-xr-x 2 root root   4096 Sep 26 22:43 __pycache__
-rwxr-xr-x 1 root root 303672 Sep 26 22:36 pyopenpose.cpython-36m-aarch64-linux-gnu.so



You can check the Python import path with sys.path. As you can see, the openpose directory is not on the path, so you must add it to sys.path or copy the module into a directory that is already on the path.


spypiggy@spypiggy-desktop:/usr/local/src$ python3
Python 3.6.8 (default, Aug 20 2019, 17:12:48) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import openpose
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'openpose'
>>> import sys
>>> sys.path
['', '/usr/lib/python36.zip', '/usr/lib/python3.6', '/usr/lib/python3.6/lib-dynload', '/home/spypiggy/.local/lib/python3.6/site-packages', '/usr/local/lib/python3.6/dist-packages', '/usr/local/lib/python3.6/dist-packages/torchvision-0.4.0a0+a1ed206-py3.6-linux-aarch64.egg', '/usr/local/lib/python3.6/dist-packages/Pillow-6.1.0-py3.6-linux-aarch64.egg', '/usr/lib/python3/dist-packages', '/usr/lib/python3.6/dist-packages']
>>> 


It's time to copy our OpenPose Python module into a directory on the Python 3.6 path (/usr/lib/python3.6/dist-packages). Then you can import openpose without the path problem.


cp -r /usr/local/src/openpose/build/python/openpose/ /usr/lib/python3.6/dist-packages
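If you prefer not to copy files into the system directory, an alternative (my own sketch, assuming the default build location) is to extend sys.path at runtime before the import:

import sys
sys.path.append('/usr/local/src/openpose/build/python')  # parent of the openpose package directory
from openpose import pyopenpose as op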


Let's test your openpose python library.


spypiggy@spypiggy-desktop:/$ python3
Python 3.6.8 (default, Aug 20 2019, 17:12:48) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import openpose
>>> 


Webcam test

Let's test with a webcam to check the framework's performance. After connecting the webcam, make sure it is recognized properly: run the lsusb command and look for the webcam in the list. I'm using a Logitech webcam. You can check whether your webcam is properly installed here, or with the commands below.
You can find Python sample code in the /usr/local/src/openpose/examples/tutorial_api_python directory, but there is no webcam sample. So I made run_webcam.py to test OpenPose webcam fps on the Jetson Nano. You can download this code from my github.
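These are the checks I usually suggest (see also the comments at the end of this post); "video0" means index 0 for cv2.VideoCapture:

lsusb                    # the webcam should appear in this list
ls /dev/ | grep video    # "video0" -> cv2.VideoCapture(0), "video1" -> index 1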



import logging
import sys
import time
import math
import cv2
import numpy as np
from openpose import pyopenpose as op

if __name__ == '__main__':
    fps_time = 0

    params = dict()
    params["model_folder"] = "../../models/"

    # Starting OpenPose
    opWrapper = op.WrapperPython()
    opWrapper.configure(params)
    opWrapper.start()


    print("OpenPose start")
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    ret_val, img = cap.read()
    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
    fps_in = cap.get(cv2.CAP_PROP_FPS)
    if fps_in <= 0:     # some webcams report 0 fps, which makes VideoWriter fail
        fps_in = 30
    out_video = cv2.VideoWriter('/tmp/output.mp4', fourcc, fps_in, (640, 480))

    count = 0

    if not cap.isOpened():     # VideoCapture never returns None; check isOpened() instead
        print("Camera Open Error")
        sys.exit(0)
    while cap.isOpened() and count < 30:
        ret_val, dst = cap.read()
        if ret_val == False:
            print("Camera read Error")
            break
        #dst = cv2.resize(image, dsize=(320, 240), interpolation=cv2.INTER_AREA)
        #cv2.imshow("OpenPose 1.5.1 - Tutorial Python API", dst)
        #continue

        datum = op.Datum()
        datum.cvInputData = dst
        opWrapper.emplaceAndPop([datum])
        fps = 1.0 / (time.time() - fps_time)
        fps_time = time.time()
        newImage = datum.cvOutputData[:, :, :]
        cv2.putText(newImage , "FPS: %f" % (fps), (20, 40),  cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        out_video.write(newImage)

        print("captured fps %f"%(fps))
        cv2.imshow("OpenPose 1.5.1 - Tutorial Python API", newImage)
        count += 1


    cv2.destroyAllWindows()
    out_video.release()
    cap.release()

Be careful: the inference image format is OpenCV's default (BGR), not RGB.
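So if your frames come from an RGB pipeline (for example PIL), convert them first. A minimal sketch, where rgb_frame is a hypothetical RGB image:

import cv2
bgr_frame = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2BGR)  # rgb_frame: hypothetical RGB input
datum.cvInputData = bgr_frame                           # OpenPose expects BGR, like cv2.imread()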

As you can see, no absolute import path is needed when importing openpose. The input video size is set to 640x480 (VGA), and the output video is saved in the /tmp directory. The resulting fps is about 0.8 to 0.9.
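If you need more speed from Python, the params dict passed to opWrapper.configure() mirrors the command line flags, so the same net_resolution trade-off should apply here too (a hedged sketch; I have not benchmarked these exact values):

params = dict()
params["model_folder"] = "../../models/"
params["net_resolution"] = "256x144"  # default is "-1x368"; smaller is faster but less accurate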




Using Keypoints

If you want to utilize this framework, you must know each keypoint's position and name (left shoulder, left eye, right knee, ...). I explained the COCO keypoint numbers and names in another article. OpenPose keypoints are different from COCO keypoints.
This is the OpenPose keypoint numbering.


And each keypoint number represents these parts.
// Result for BODY_25 (25 body parts consisting of COCO + foot)
// const std::map<unsigned int, std::string> POSE_BODY_25_BODY_PARTS {
//     {0,  "Nose"},
//     {1,  "Neck"},
//     {2,  "RShoulder"},
//     {3,  "RElbow"},
//     {4,  "RWrist"},
//     {5,  "LShoulder"},
//     {6,  "LElbow"},
//     {7,  "LWrist"},
//     {8,  "MidHip"},
//     {9,  "RHip"},
//     {10, "RKnee"},
//     {11, "RAnkle"},
//     {12, "LHip"},
//     {13, "LKnee"},
//     {14, "LAnkle"},
//     {15, "REye"},
//     {16, "LEye"},
//     {17, "REar"},
//     {18, "LEar"},
//     {19, "LBigToe"},
//     {20, "LSmallToe"},
//     {21, "LHeel"},
//     {22, "RBigToe"},
//     {23, "RSmallToe"},
//     {24, "RHeel"},
//     {25, "Background"}
// };
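For convenience in Python code, you can mirror that table as a list so keypoints can be looked up by name. This is my own helper sketch, not part of the OpenPose API (index 25, "Background", is omitted because poseKeypoints rows contain only the 25 body parts):

# BODY_25 keypoint names, indexed to match the table above
BODY_25 = ["Nose", "Neck", "RShoulder", "RElbow", "RWrist",
           "LShoulder", "LElbow", "LWrist", "MidHip",
           "RHip", "RKnee", "RAnkle", "LHip", "LKnee", "LAnkle",
           "REye", "LEye", "REar", "LEar",
           "LBigToe", "LSmallToe", "LHeel", "RBigToe", "RSmallToe", "RHeel"]

def get_keypoint(human, name):
    # human: one row of datum.poseKeypoints; returns (x, y, score)
    return human[BODY_25.index(name)]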



Finding humans and keypoints in an image

Look at the Python code above. The most important piece of code is this:


datum = op.Datum()
datum.cvInputData = dst
opWrapper.emplaceAndPop([datum])


datum.cvInputData accepts the inference image; then the opWrapper.emplaceAndPop function does the pose estimation job. All of the results are stored in the datum variable.
You can see the structure of the Datum class here.




First, let's find how many humans are in the picture. The datum.poseKeypoints variable is a multidimensional array.

You can find the human count using the len() function.

human_count = len(datum.poseKeypoints)

Next, let's iterate over all keypoints of all humans:


for i in range(human_count):
    print(datum.poseKeypoints[i])

If there's one person, you might see a result like this:


[[1.4762344e+02 1.3347028e+02 8.2816368e-01]
 [1.3464059e+02 2.0289427e+02 2.7535394e-01]
 [9.6994888e+01 1.8996812e+02 3.8270497e-01]
 [2.3700981e+02 2.7823151e+02 7.6166058e-01]
 [2.8759573e+02 2.0168320e+02 8.1591970e-01]
 [1.7230042e+02 2.2052582e+02 1.9583331e-01]
 [2.1466417e+02 3.4412646e+02 3.8592777e-01]
 [3.2411404e+02 3.3472128e+02 7.6674712e-01]
 [1.1935361e+02 4.0530789e+02 5.8029477e-02]
 [7.4650200e+01 4.0530975e+02 5.7644092e-02]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [1.6407520e+02 4.0413968e+02 5.1864248e-02]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [1.3464485e+02 1.1823448e+02 9.0272474e-01]
 [1.6290562e+02 1.1824193e+02 9.0591764e-01]
 [1.0873835e+02 1.3114075e+02 8.7346315e-01]
 [1.7936464e+02 1.3112959e+02 8.0011082e-01]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00]]

The first element (index 0) is the nose position and score: (147, 133) is the nose coordinate, and 0.828 is the score. The next row holds the Neck values, and so on.
So if you want to get the Neck (index 1) position and score, get the values like this:


#int type position (x, y)
pos = (int(datum.poseKeypoints[0][1][0]), int(datum.poseKeypoints[0][1][1]))
#float type score 
score = datum.poseKeypoints[0][1][2]
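Note that undetected keypoints come back as (0.0, 0.0, 0.0), as in the sample output above, so it is safer to check the score before using a position (a small sketch; the 0.1 threshold is my arbitrary choice):

neck = datum.poseKeypoints[0][1]
if neck[2] > 0.1:  # score check; (0, 0, 0) means "not detected"
    pos = (int(neck[0]), int(neck[1]))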


I made a sample Python program. You can download the code from my github.


import argparse
import logging
import sys
import time
import math
import cv2
import numpy as np
from openpose import pyopenpose as op

def angle_between_points( p0, p1, p2 ):
  a = (p1[0]-p0[0])**2 + (p1[1]-p0[1])**2
  b = (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2
  c = (p2[0]-p0[0])**2 + (p2[1]-p0[1])**2
  if a * b == 0:
      return -1.0 
  return  math.acos( (a+b-c) / math.sqrt(4*a*b) ) * 180 /math.pi
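  # (For reference: this is the law of cosines on squared lengths. With
  #  a = |p1-p0|^2, b = |p1-p2|^2, c = |p2-p0|^2, the angle at p1 satisfies
  #  cos(theta) = (a + b - c) / (2*sqrt(a*b)); the result is converted to degrees.)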

def length_between_points(p0, p1):
    return math.hypot(p1[0]- p0[0], p1[1]-p0[1])


def get_angle_point(human, pos):
    pnts = []

    if pos == 'left_elbow':
        pos_list = (5,6,7)
    elif pos == 'left_hand':
        pos_list = (1,5,7)
    elif pos == 'left_knee':
        pos_list = (12,13,14)
    elif pos == 'left_ankle':
        pos_list = (5,12,14)
    elif pos == 'right_elbow':
        pos_list = (2,3,4)
    elif pos == 'right_hand':
        pos_list = (1,2,4)
    elif pos == 'right_knee':
        pos_list = (9,10,11)
    elif pos == 'right_ankle':
        pos_list = (2,9,11)
    else:
        logger.error('Unknown  [%s]', pos)
        return pnts

    for i in range(3):
        if human[pos_list[i]][2] <= 0.1:
            print('component [%d] incomplete'%(pos_list[i]))
            return pnts

        pnts.append((int( human[pos_list[i]][0]), int( human[pos_list[i]][1])))
    return pnts



def angle_left_hand(human):
    pnts = get_angle_point(human, 'left_hand')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return -1

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('left hand angle:%f'%(angle))
    return angle


def angle_left_elbow(human):
    pnts = get_angle_point(human, 'left_elbow')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('left elbow angle:%f'%(angle))
    return angle

def angle_left_knee(human):
    pnts = get_angle_point(human, 'left_knee')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('left knee angle:%f'%(angle))
    return angle

def angle_left_ankle(human):
    pnts = get_angle_point(human, 'left_ankle')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('left ankle angle:%f'%(angle))
    return angle

def angle_right_hand(human):
    pnts = get_angle_point(human, 'right_hand')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('right hand angle:%f'%(angle))
    return angle


def angle_right_elbow(human):
    pnts = get_angle_point(human, 'right_elbow')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('right elbow angle:%f'%(angle))
    return angle

def angle_right_knee(human):
    pnts = get_angle_point(human, 'right_knee')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('right knee angle:%f'%(angle))
    return angle

def angle_right_ankle(human):
    pnts = get_angle_point(human, 'right_ankle')
    if len(pnts) != 3:
        logger.info('component incomplete')
        return

    angle = 0
    if pnts is not None:
        angle = angle_between_points(pnts[0], pnts[1], pnts[2])
        logger.info('right ankle angle:%f'%(angle))
    return angle



logger = logging.getLogger('TfPoseEstimatorRun')
logger.handlers.clear()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] [%(levelname)s] %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)



if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='openpose pose estimation run')
    parser.add_argument('--image', type=str, default='/usr/local/src/openpose/examples/media/COCO_val2014_000000000294.jpg')
    args = parser.parse_args()

    fps_time = 0

    params = dict()
    params["model_folder"] = "../../models/"

    # Starting OpenPose
    opWrapper = op.WrapperPython()
    opWrapper.configure(params)
    opWrapper.start()

    #imagePath = '/usr/local/src/openpose/examples/media/COCO_val2014_000000000474.jpg'
    #imagePath = '/usr/local/src/openpose/examples/media/COCO_val2014_000000000536.jpg'
    imagePath = args.image

    print("OpenPose start")
    dst = cv2.imread(imagePath)
    img_name = '/tmp/openpose_keypoint.png'

    #dst = cv2.resize(image, dsize=(320, 240), interpolation=cv2.INTER_AREA)
    #cv2.imshow("OpenPose 1.5.1 - Tutorial Python API", dst)
    #continue
    datum = op.Datum()
    datum.cvInputData = dst
    opWrapper.emplaceAndPop([datum])
    newImage = datum.cvOutputData[:, :, :]
    cv2.imshow("OpenPose 1.5.1 - Tutorial Python API", newImage)
    human_count = len(datum.poseKeypoints)
    font = cv2.FONT_HERSHEY_SIMPLEX
    for i in range(human_count):
        for j in range(25):
            if datum.poseKeypoints[i][j][2] > 0.01:
                cv2.putText(newImage,str(j),  ( int(datum.poseKeypoints[i][j][0]) + 10,  int(datum.poseKeypoints[i][j][1])), font, 0.5, (0,255,0), 2) 
        print(datum.poseKeypoints[i])

    cv2.imwrite(img_name, newImage)
    cv2.destroyAllWindows()        

    for i in range(human_count):
        print('=================================')
        angle_left_hand(datum.poseKeypoints[i] )
        angle_left_elbow(datum.poseKeypoints[i] )
        angle_left_knee(datum.poseKeypoints[i] )
        angle_left_ankle(datum.poseKeypoints[i] )
        angle_right_hand(datum.poseKeypoints[i] )
        angle_right_elbow(datum.poseKeypoints[i] )
        angle_right_knee(datum.poseKeypoints[i] )
        angle_right_ankle(datum.poseKeypoints[i] )

If you run this Python code, you get a result image at /tmp/openpose_keypoint.png.


And you also see console output like this. Both arms' angles are successfully calculated.


=================================
[2019-10-05 14:11:51,916] [TfPoseEstimatorRun] [INFO] left hand angle:168.476278
[2019-10-05 14:11:51,930] [TfPoseEstimatorRun] [INFO] left elbow angle:103.517309
component [12] incomplete
[2019-10-05 14:11:51,931] [TfPoseEstimatorRun] [INFO] component incomplete
component [12] incomplete
[2019-10-05 14:11:51,931] [TfPoseEstimatorRun] [INFO] component incomplete
[2019-10-05 14:11:51,931] [TfPoseEstimatorRun] [INFO] right hand angle:15.291078
[2019-10-05 14:11:51,932] [TfPoseEstimatorRun] [INFO] right elbow angle:90.737352
component [9] incomplete
[2019-10-05 14:11:51,932] [TfPoseEstimatorRun] [INFO] component incomplete
component [9] incomplete
[2019-10-05 14:11:51,932] [TfPoseEstimatorRun] [INFO] component incomplete


Wrapping Up

OpenPose is a very powerful framework for pose estimation. As you can see in the image above, OpenPose also estimates hidden body parts. But the problem is its poor performance on the Jetson Nano.

If you want to enhance the performance, please see my other TensorFlow-related article, which uses TensorFlow to speed up the fps.
If you want the most satisfactory human pose estimation performance on the Jetson Nano, see the following article: https://spyjetson.blogspot.com/2019/12/jetsonnano-human-pose-estimation-using.html. There, the NVIDIA team introduces human pose estimation using models optimized for TensorRT.

If you are using a Xavier NX and want to implement keypoint detection using TensorFlow on it, refer to the next page.


12 comments:

  1. This comment was removed by the author.

  2. Thank you very much for sharing! It took me a long time to make python API work on Jetson. Your instruction is clear and it works!

    1. I am happy that my writing was helpful to you.

  3. Thank you very much for the article, really helpful.
    What was the final max fps you could achieve on the Jetson Nano? And does the 4 fps output look good on a webcam?
    Is it smooth enough for realtime?

    1. Hi flowingcowfx.
      I think 4 fps is not enough; 10 fps is the minimum requirement.
      The original OpenPose is not optimized for the Jetson series. Jetson Nano optimization is highly related to TensorRT.
      I wrote another pose estimation post: https://spyjetson.blogspot.com/2019/12/jetsonnano-human-pose-estimation-using.html.
      In that post, I explained high-performance pose estimation using TensorRT. Please read it.

  4. Thank you for this article. I am trying to run the Python webcam code but it fails with this error:
    Starting OpenPose Python Wrapper...
    Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
    OpenPose start
    [ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
    [ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
    [ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
    VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
    VIDEOIO ERROR: V4L: can't open camera by index 0
    [ERROR:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap.cpp (392) open VIDEOIO(GSTREAMER): raised OpenCV exception:

    OpenCV(4.1.1) /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp:1392: error: (-215:Assertion failed) fps > 0 in function 'open'


    Any help would be appreciated.

    1. This comment was removed by the author.

    2. Generally the webcam index is 0 for the first webcam (or the only webcam),
      So cv2.VideoCapture(0) works well.
      But it seems that in your case the index number is wrong or the webcam is not recognized correctly.
      Please check the webcam connection using "lsusb" to check whether the webcam is recognized correctly.
      And use "ls /dev/ | grep video" command to find the index number.
      Output "video0" means that the index is 0, and "video1" means index1. If you find both of them, then you have 2 webcams, if nothing, there's no webcam.

  5. This comment was removed by the author.

  6. Hey there. Thank you very much for making this example. It's helping me a lot!
    I made some small changes to make it more efficient:

    self.joints = {'lshoulder': (1, 5, 6), 'rshoulder': (1, 2, 3), 'lelbow': (5, 6, 7), 'relbow': (2, 3, 4),
                   'lknee': (12, 13, 14), 'rknee': (9, 10, 11), 'lankle': (13, 14, 19), 'rankle': (10, 11, 22)}

    def __length_between_points__(self, p0, p1):
        return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

    def __angle_between_points__(self, p0, p1, p2):
        a = (p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2
        b = (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2
        c = (p2[0] - p0[0]) ** 2 + (p2[1] - p0[1]) ** 2
        if a * b == 0:
            return -1.0
        return math.acos((a + b - c) / math.sqrt(4 * a * b)) * 180 / math.pi

    def __get_angle_point__(self, human, pos):
        pos_list = self.joints[pos]
        pnts = []

        for i in range(3):
            if human[pos_list[i]][2] <= 0.1:
                print('component [%d] incomplete' % (pos_list[i]))
                return pnts
            pnts.append((int(human[pos_list[i]][0]), int(human[pos_list[i]][1])))
        return pnts

    def get_joint_angle(self, human, joint):
        pnts = self.__get_angle_point__(human, joint)
        if len(pnts) != 3:
            self.logger.info('component incomplete')
            return -1

        angle = 0
        if pnts is not None:
            angle = self.__angle_between_points__(pnts[0], pnts[1], pnts[2])
            self.logger.info('{} angle: {}'.format(joint, angle))
        return angle

    self.get_joint_angle(poseKeypoints[person], 'lshoulder')
    self.get_joint_angle(poseKeypoints[person], 'rshoulder')
    self.get_joint_angle(poseKeypoints[person], 'lelbow')
    self.get_joint_angle(poseKeypoints[person], 'relbow')

  7. Hello. Thank you for your helpful article. I followed your installation instructions and successfully installed openpose for python3. But when I run your code in run_detect_keypoint.py, I get this error. I used a Jetson Nano with JetPack 4.4, cuDNN 8.0, CUDA 10.2. Can you help me find out what's wrong? Thank you very much!

    Starting OpenPose Python Wrapper...
    Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
    OpenPose start
    Traceback (most recent call last):
    File "run_detect_keypoint.py", line 194, in
    opWrapper.emplaceAndPop([datum])
    TypeError: emplaceAndPop(): incompatible function arguments. The following argument types are supported:
    1. (self: openpose.pyopenpose.WrapperPython, arg0: std::vector, std::allocator > >) -> bool

    Invoked with: , []

    Did you forget to `#include `? Or ,
    , , etc. Some automatic
    conversions are optional and require extra headers to be included
    when compiling your pybind11 module.

    1. Did you use the JetPack 4.4 DP version or the Production Release?
