This article is based on JetPack 4.5 and OpenPose 1.7.0 (released on November 17, 2020). I tested on the Jetson Nano and Xavier NX, but the steps should work the same on the Xavier and TX2.
Prerequisites
This article assumes you have already installed OpenPose 1.7.0 and are familiar with its basic usage. See these earlier posts:
- Jetpack 4.5 - Install the latest version of OpenPose on Jetson Nano
- OpenPose 1.7 Programming on Jetson Series using CommandLine tools
Under the hood
When you install OpenPose, there are Python sample files in the openpose-1.7.0/examples/tutorial_api_python directory. These files are good resources for OpenPose programming with Python.
Let's start by working through these files.
root@spypiggy-nx:/usr/local/src/openpose-1.7.0/examples/tutorial_api_python# ls -al
total 60
drwxrwxr-x  2 root root 4096 Nov 17 14:48 .
drwxrwxr-x 11 root root 4096 Nov 17 14:48 ..
-rw-rw-r--  1 root root 2900 Nov 17 14:48 01_body_from_image.py
-rw-rw-r--  1 root root 3146 Nov 17 14:48 02_whole_body_from_image.py
-rw-rw-r--  1 root root 3362 Nov 17 14:48 04_keypoints_from_images.py
-rw-rw-r--  1 root root 4276 Nov 17 14:48 05_keypoints_from_images_multi_gpu.py
-rw-rw-r--  1 root root 3330 Nov 17 14:48 06_face_from_image.py
-rw-rw-r--  1 root root 3751 Nov 17 14:48 07_hand_from_image.py
-rw-rw-r--  1 root root 3675 Nov 17 14:48 08_heatmaps_from_image.py
-rw-rw-r--  1 root root 3377 Nov 17 14:48 09_keypoints_from_heatmaps.py
-rw-rw-r--  1 root root 3345 Nov 17 14:48 12_asynchronous_custom_output.py
-rw-rw-r--  1 root root  634 Nov 17 14:48 CMakeLists.txt
-rw-rw-r--  1 root root 2572 Nov 17 14:48 openpose_python.py
-rw-rw-r--  1 root root  174 Nov 17 14:48 README.md
Python import path Problem
All of the above examples contain the following code.
# Import Openpose (Windows/Ubuntu/OSX)
dir_path = os.path.dirname(os.path.realpath(__file__))
try:
    # Windows Import
    if platform == "win32":
        # Change these variables to point to the correct folder (Release/x64 etc.)
        sys.path.append(dir_path + '/../../python/openpose/Release');
        os.environ['PATH'] = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' + dir_path + '/../../bin;'
        import pyopenpose as op
    else:
        # Change these variables to point to the correct folder (Release/x64 etc.)
        sys.path.append('../../python');
        # If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there.
        # This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
        # sys.path.append('/usr/local/python')
        from openpose import pyopenpose as op
except ImportError as e:
    print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
    raise e
This code exists to work around Python's import path problem: it adds the OpenPose build directory to sys.path so pyopenpose can be found. However, since we already copied the openpose Python package to the /usr/lib/python3.6/dist-packages directory during installation, this boilerplate is unnecessary, and it is safe to delete it.
Image path, model path problem
The example code uses the following image path and model path:
parser = argparse.ArgumentParser()
parser.add_argument("--image_path", default="../../../examples/media/COCO_val2014_000000000192.jpg",
                    help="Process an image. Read all standard formats (jpg, png, bmp, etc.).")
args = parser.parse_known_args()

# Custom Params (refer to include/openpose/flags.hpp for more parameters)
params = dict()
params["model_folder"] = "../../../models/"
You can see that these relative paths do not match our directory layout: "../../../examples/..." should be "../../examples/...", and "../../../models/" should be "../../models/".
However, I recommend absolute paths over relative ones. If you write new Python code in a different directory, you have to adjust these relative paths every time, so I will use absolute path names from here on.
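One way to keep absolute paths manageable is to define the install root once and build everything else with os.path.join. This is just a sketch of that idea; the root below is the install path used in this article, so adjust it if your OpenPose lives elsewhere:

```python
import os

# Install root used throughout this article; change it if your
# OpenPose build is in a different location.
OPENPOSE_ROOT = "/usr/local/src/openpose-1.7.0"

MODEL_FOLDER = os.path.join(OPENPOSE_ROOT, "models")
DEFAULT_IMAGE = os.path.join(OPENPOSE_ROOT, "examples", "media",
                             "COCO_val2014_000000000192.jpg")

# These absolute paths can now be dropped into the params dict
# regardless of which directory the script is run from.
params = {"model_folder": MODEL_FOLDER}
print(DEFAULT_IMAGE)
```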
Memory Problem
Although the Xavier NX has 8GB of memory, that is far less than a typical machine-learning desktop, and the Jetson Nano is even more constrained. OpenPose requires more memory than you might expect, so it is necessary to tune the network input size accordingly.
The OpenPose option that controls this is net_resolution. Since OpenPose operates on images, net_resolution takes a "WxH" format, where W and H must be multiples of 16. Keeping the W:H ratio close to the aspect ratio of the input image helps produce good results.
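Since W and H must be multiples of 16 and should roughly match the input aspect ratio, a small helper can derive a suitable value. This is my own sketch, not an OpenPose utility, and the default target width of 320 is simply an assumption that fits the memory budget discussed above:

```python
def make_net_resolution(img_w, img_h, target_w=320):
    """Return an OpenPose net_resolution string "WxH" whose width is
    target_w and whose height keeps the image aspect ratio, with both
    values rounded to the multiples of 16 that net_resolution requires."""
    w = max(16, round(target_w / 16) * 16)
    h = max(16, round(target_w * img_h / img_w / 16) * 16)
    return "%dx%d" % (w, h)

print(make_net_resolution(640, 512))    # landscape source -> "320x256"
print(make_net_resolution(640, 480))    # 4:3 source       -> "320x240"
print(make_net_resolution(1920, 1080))  # 16:9 source      -> "320x176"
```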
The example code doesn't use the net_resolution value, but I'll use it appropriately.
# Flags
parser = argparse.ArgumentParser()
parser.add_argument("--image_dir", default="/usr/local/src/openpose-1.7.0/examples/media/",
                    help="Process a directory of images. Read all standard formats (jpg, png, bmp, etc.).")
parser.add_argument("--no_display", default=False, help="Enable to disable the visual display.")
args = parser.parse_known_args()

# Custom Params (refer to include/openpose/flags.hpp for more parameters)
params = dict()
params["model_folder"] = "/usr/local/src/openpose-1.7.0/models/"
params["net_resolution"] = "320x256"
To specify the net_resolution option in OpenPose Python, simply add it to the params dictionary.
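The tutorial scripts also forward any unrecognized command-line flags into the params dict, so options like net_resolution can be passed on the command line as well. Here is a standalone sketch of that forwarding logic, simplified from the loop in the example files:

```python
def extra_args_to_params(extra):
    """Convert leftover argv items from parse_known_args(), e.g.
    ['--face', '--net_resolution', '320x256'], into an OpenPose
    params dict (simplified from the tutorial scripts' loop)."""
    params = {}
    for i, curr in enumerate(extra):
        # A flag followed by another flag is treated as boolean "1";
        # otherwise the next item is taken as its value.
        nxt = extra[i + 1] if i != len(extra) - 1 else "1"
        if curr.startswith("--"):
            key = curr.replace('-', '')
            if key not in params:
                params[key] = "1" if nxt.startswith("--") else nxt
    return params

print(extra_args_to_params(["--face", "--net_resolution", "320x256"]))
```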
I will copy the example files to /usr/local/src/study and edit them one by one.
mkdir -p /usr/local/src/study
cp /usr/local/src/openpose-1.7.0/examples/tutorial_api_python/* /usr/local/src/study/
01_body_from_image.py
The file below was modified as described above. The final cv2.waitKey(0) loop was changed only to make screen capture easier, so that change is optional.
# From Python
# It requires OpenCV installed for Python
import sys
import cv2
import os
from sys import platform
import argparse
from openpose import pyopenpose as op

try:
    # Flags
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_path", default="/usr/local/src/openpose-1.7.0/examples/media/COCO_val2014_000000000192.jpg",
                        help="Process an image. Read all standard formats (jpg, png, bmp, etc.).")
    args = parser.parse_known_args()

    # Custom Params (refer to include/openpose/flags.hpp for more parameters)
    params = dict()
    params["model_folder"] = "/usr/local/src/openpose-1.7.0/models/"
    params["net_resolution"] = "320x256"  # COCO_val2014_000000000192.jpg is landscape mode, so 320x256 is a good choice

    # Add others in path?
    for i in range(0, len(args[1])):
        curr_item = args[1][i]
        if i != len(args[1]) - 1:
            next_item = args[1][i + 1]
        else:
            next_item = "1"
        if "--" in curr_item and "--" in next_item:
            key = curr_item.replace('-', '')
            if key not in params:
                params[key] = "1"
        elif "--" in curr_item and "--" not in next_item:
            key = curr_item.replace('-', '')
            if key not in params:
                params[key] = next_item

    # Starting OpenPose
    opWrapper = op.WrapperPython()
    opWrapper.configure(params)
    opWrapper.start()

    # Process Image
    datum = op.Datum()
    imageToProcess = cv2.imread(args[0].image_path)
    datum.cvInputData = imageToProcess
    opWrapper.emplaceAndPop(op.VectorDatum([datum]))
    human_count = len(datum.poseKeypoints)

    # Display Image
    for human in range(human_count):
        print(datum.poseKeypoints[human])
    print("Total %d human detected" % human_count)
    cv2.imshow("OpenPose 1.7.0 - Tutorial Python API", datum.cvOutputData)
    k = 0
    while k != 27:
        k = cv2.waitKey(0) & 0xFF
except Exception as e:
    print(e)
    sys.exit(-1)
<01_body_from_image.py>
Now let's run the code. As the file name suggests, it extracts the human keypoints from the image, prints the keypoint information to the shell, and displays the annotated image on screen.
root@spypiggy-nx:/usr/local/src/study# python3 01_body_from_image.py
Starting OpenPose Python Wrapper...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
Body keypoints:
[[[ 3.29139008e+02  2.12978333e+02  7.70797849e-01]
  [ 3.25197052e+02  2.16946854e+02  9.44534898e-01]
  [ 2.97181213e+02  2.20953629e+02  8.90540242e-01]
  [ 2.79063293e+02  2.47102036e+02  9.40675259e-01]
  [ 2.95104279e+02  2.65098389e+02  8.69051158e-01]
Keypoint analysis
As you can see from the Python code, all the important data is in datum.poseKeypoints.
This value is a nested array of shape (number of people × 25 keypoints × 3), where each keypoint is an (x, y, confidence) triple. The 25 body-part indices follow OpenPose's BODY_25 model, defined in the C++ sources as shown below.
// Result for BODY_25 (25 body parts consisting of COCO + foot)
// const std::map<unsigned int, std::string> POSE_BODY_25_BODY_PARTS {
//     {0,  "Nose"},
//     {1,  "Neck"},
//     {2,  "RShoulder"},
//     {3,  "RElbow"},
//     {4,  "RWrist"},
//     {5,  "LShoulder"},
//     {6,  "LElbow"},
//     {7,  "LWrist"},
//     {8,  "MidHip"},
//     {9,  "RHip"},
//     {10, "RKnee"},
//     {11, "RAnkle"},
//     {12, "LHip"},
//     {13, "LKnee"},
//     {14, "LAnkle"},
//     {15, "REye"},
//     {16, "LEye"},
//     {17, "REar"},
//     {18, "LEar"},
//     {19, "LBigToe"},
//     {20, "LSmallToe"},
//     {21, "LHeel"},
//     {22, "RBigToe"},
//     {23, "RSmallToe"},
//     {24, "RHeel"},
//     {25, "Background"}
// };
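For labeling keypoints in Python it is handy to have the same mapping as a plain dict. This is just a transcription of the C++ map above, not part of OpenPose's Python API:

```python
# BODY_25 keypoint index -> body-part name, transcribed from OpenPose's
# POSE_BODY_25_BODY_PARTS map (index 25, "Background", is not a keypoint).
BODY_25_PARTS = {
    0: "Nose", 1: "Neck", 2: "RShoulder", 3: "RElbow", 4: "RWrist",
    5: "LShoulder", 6: "LElbow", 7: "LWrist", 8: "MidHip", 9: "RHip",
    10: "RKnee", 11: "RAnkle", 12: "LHip", 13: "LKnee", 14: "LAnkle",
    15: "REye", 16: "LEye", 17: "REar", 18: "LEar", 19: "LBigToe",
    20: "LSmallToe", 21: "LHeel", 22: "RBigToe", 23: "RSmallToe",
    24: "RHeel", 25: "Background",
}

print(BODY_25_PARTS[4])   # RWrist
print(BODY_25_PARTS[14])  # LAnkle
```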
You can get the number of detected humans with the len() function. (If no person is detected, poseKeypoints may be empty, so real code should guard before calling len().)
human_count = len(datum.poseKeypoints)
Next, let's iterate over all keypoints of all detected humans:
for human in range(human_count): print(datum.poseKeypoints[human])
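To illustrate this loop without a Jetson at hand, here is a sketch that uses a mock poseKeypoints array with the same (people × 25 × 3) shape OpenPose returns; the coordinate values and the 0.01 confidence threshold are my own choices for the example:

```python
import numpy as np

# Mock result: 2 people x 25 keypoints x (x, y, confidence).
# Real data comes from datum.poseKeypoints after emplaceAndPop().
pose_keypoints = np.zeros((2, 25, 3), dtype=np.float32)
pose_keypoints[0, 0] = [329.1, 213.0, 0.77]  # person 0, Nose
pose_keypoints[0, 4] = [295.1, 265.1, 0.87]  # person 0, RWrist
pose_keypoints[1, 1] = [120.0, 180.5, 0.91]  # person 1, Neck

human_count = len(pose_keypoints)
print("Total %d human detected" % human_count)

# Keypoints OpenPose did not detect come back with confidence 0.0,
# so skip anything below a small threshold.
for human in range(human_count):
    for j in range(25):
        x, y, conf = pose_keypoints[human][j]
        if conf > 0.01:
            print("human %d, part %d: (%.1f, %.1f) conf=%.2f"
                  % (human, j, x, y, conf))
```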
Improved 01_body_from_image.py
Using the first Python example, we simply printed the body keypoints. This time, we will improve the code so that it draws each keypoint's index number on the image.
Modify part of the code as follows: after copying the output image created by OpenPose, it draws the body-part number next to each of the 25 keypoint coordinates.
# Process Image
datum = op.Datum()
imageToProcess = cv2.imread(args[0].image_path)
datum.cvInputData = imageToProcess
opWrapper.emplaceAndPop(op.VectorDatum([datum]))
newImage = datum.cvOutputData[:, :, :]
human_count = len(datum.poseKeypoints)
font = cv2.FONT_HERSHEY_SIMPLEX
for human in range(human_count):
    for j in range(25):
        if datum.poseKeypoints[human][j][2] > 0.01:
            cv2.putText(newImage, str(j),
                        (int(datum.poseKeypoints[human][j][0]) + 10, int(datum.poseKeypoints[human][j][1])),
                        font, 0.5, (0, 255, 0), 2)
    print(datum.poseKeypoints[human])

# Display Image
for human in range(human_count):
    print(datum.poseKeypoints[human])
print("Total %d human detected" % human_count)
cv2.imshow("OpenPose 1.7.0 - Tutorial Python API", newImage)
<01_1_body_from_image.py part of the modified code>
Now let's run the code.
root@spypiggy-nx:/usr/local/src/study# python3 01_1_body_from_image.py
You will get the same screen output as before, but with the body-part numbers drawn next to each detected keypoint.
Wrapping up