In fact, what I'm introducing today is more useful on a desktop with better memory and GPU performance than on a Jetson Nano. Today's example comes from https://github.com/Yijunmaverick/CartoonGAN-Test-Pytorch-Torch.
Prerequisites
This example requires PyTorch. To install PyTorch, see https://spyjetson.blogspot.com/2019/12/jetsonnano-human-pose-estimation-using.html. That post explains how to install the latest PyTorch 1.4 on JetPack 4.3. TorchVision depends on PIL, but the error "cannot import name 'PILLOW_VERSION'" occurs with the latest Pillow 7.0, because the PILLOW_VERSION definition was removed in that release. Sooner or later PyTorch will probably release a version that fixes this, but for now we need to avoid the error by installing a Pillow 6.x version. I already had 7.0 installed, but reinstalled the lower version with the command below.
root@spypiggy-nano:/usr/local/src/CartoonGAN-Test-Pytorch-Torch# pip3 install "pillow<7"
Collecting pillow<7
  Downloading https://files.pythonhosted.org/packages/b3/d0/a20d8440b71adfbf133452d4f6e0fe80de2df7c2578c9b498fb812083383/Pillow-6.2.2.tar.gz (37.8MB)
    100% |████████████████████████████████| 37.8MB 14kB/s
Building wheels for collected packages: pillow
  Running setup.py bdist_wheel for pillow ... done
  Stored in directory: /root/.cache/pip/wheels/f6/0a/7c/5e6567101a10388b915c4ebf73edb849f73908ad154e9eb9bc
Successfully built pillow
Installing collected packages: pillow
  Found existing installation: Pillow 7.0.0
    Uninstalling Pillow-7.0.0:
      Successfully uninstalled Pillow-7.0.0
Successfully installed pillow-6.2.2
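After the downgrade, a quick sanity check (my addition, not from the original post) confirms that torchvision imports cleanly again:

# sanity check: Pillow must be a 6.x release for this torchvision build to import
import PIL
print(PIL.__version__)        # expect something like 6.2.2

# on Pillow 7.0 this import raises:
#   ImportError: cannot import name 'PILLOW_VERSION'
import torchvision
print(torchvision.__version__)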
Install CartoonGAN
And download the source code.
cd /usr/local/src
git clone https://github.com/Yijunmaverick/CartoonGAN-Test-Pytorch-Torch.git
Then download the pre-trained models using the script files.
cd /usr/local/src/CartoonGAN-Test-Pytorch-Torch
sh pretrained_model/download_pth.sh
Test
I modified some of the example source code provided in the repository to work with the latest PyTorch. Also, the Jetson Nano's 4GB of memory isn't really enough for PyTorch: with the default load_size of 450, the process may suddenly be killed due to insufficient memory while processing some images. If this happens, reduce the value and test again.
import torch
import os
import numpy as np
import argparse
from PIL import Image
import torchvision.transforms as transforms
from torch.autograd import Variable
import torchvision.utils as vutils
from network.Transformer import Transformer
import gc

parser = argparse.ArgumentParser()
parser.add_argument('--input_dir', type=str, default = 'test_img')
parser.add_argument('--load_size', type=int, default = 450)
parser.add_argument('--model_path', type=str, default = './pretrained_model')
parser.add_argument('--style', type=str, default = 'Hayao')
parser.add_argument('--output_dir', type=str, default = 'test_output')
parser.add_argument('--gpu', type=int, default = 0)

opt = parser.parse_args()

valid_ext = ['.jpg', '.png']

if not os.path.exists(opt.output_dir):
    os.mkdir(opt.output_dir)

# load pretrained model
model = Transformer()
model.load_state_dict(torch.load(os.path.join(opt.model_path, opt.style + '_net_G_float.pth')))
model.eval()

if opt.gpu > -1:
    print('GPU mode')
    model.cuda()
else:
    print('CPU mode')
    model.float()

for files in os.listdir(opt.input_dir):
    torch.cuda.empty_cache()
    gc.collect()
    ext = os.path.splitext(files)[1]
    if ext not in valid_ext:
        continue
    print('process file:' + files)
    # load image
    input_image = Image.open(os.path.join(opt.input_dir, files)).convert("RGB")
    # resize image, keep aspect ratio
    h = input_image.size[0]
    w = input_image.size[1]
    ratio = h * 1.0 / w
    if ratio > 1:
        h = opt.load_size
        w = int(h * 1.0 / ratio)
    else:
        w = opt.load_size
        h = int(w * ratio)
    input_image = input_image.resize((h, w), Image.BICUBIC)
    input_image = np.asarray(input_image)
    print(input_image.shape)
    # RGB -> BGR
    input_image = input_image[:, :, [2, 1, 0]]
    input_image = transforms.ToTensor()(input_image).unsqueeze(0)
    # preprocess, (-1, 1)
    input_image = -1 + 2 * input_image
    with torch.no_grad():
        if opt.gpu > -1:
            input_image = Variable(input_image).cuda()
        else:
            input_image = Variable(input_image).float()
        # forward
        output_image = model(input_image)
        output_image = output_image[0]
    # BGR -> RGB
    output_image = output_image[[2, 1, 0], :, :]
    # deprocess, (0, 1)
    output_image = output_image.data.cpu().float() * 0.5 + 0.5
    # save
    vutils.save_image(output_image, os.path.join(opt.output_dir, files[:-4] + '_' + opt.style + '.jpg'))
    print(files + ' save success')

print('Done!')
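The resize block above keeps the aspect ratio while clamping the longer side to load_size (note that PIL's Image.size is (width, height), so the script's h is actually the width). Here is a standalone trace of that logic for a hypothetical 1920x1080 input:

# standalone trace of the aspect-ratio resize in test.py (hypothetical 1920x1080 input)
load_size = 350
h, w = 1920, 1080            # script's names: h = PIL width, w = PIL height
ratio = h * 1.0 / w          # ~1.78 -> landscape image
if ratio > 1:
    h = load_size            # longer side clamped to 350
    w = int(h * 1.0 / ratio) # shorter side scaled down to 196
else:
    w = load_size
    h = int(w * ratio)
print(h, w)                  # -> 350 196, passed to input_image.resize((h, w))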
Run the code! The default image directory is "test_img".
cd /usr/local/src/CartoonGAN-Test-Pytorch-Torch
mkdir out
python3 test.py --output_dir=out --load_size=350
Be careful: I used load_size=350. With larger values, the process was killed because of the Jetson Nano's lack of memory.
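If the process still gets killed, one common workaround on the Jetson Nano (not covered in the original repository) is to add a swap file so the kernel has somewhere to page to. A minimal sketch using standard Linux commands, assuming a few GB of free space on the SD card:

# create and enable a 4GB swap file
sudo fallocate -l 4G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
# verify the new swap space
free -h

Swap on an SD card is slow, so this only prevents out-of-memory kills; it won't make inference faster.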
<input images in the test_img directory>
<output images in the out directory>
Wrapping up
There are four models in the pretrained_model directory. The names of these models are:
root@spypiggy-nano:/usr/local/src/CartoonGAN-Test-Pytorch-Torch# ls -al pretrained_model/
total 173968
drwxr-xr-x 2 root root     4096 Apr 24 21:57 .
drwxr-xr-x 9 root root     4096 Apr 24 23:33 ..
-rw-r--r-- 1 root root      382 Apr 24 21:54 download_pth.sh
-rw-r--r-- 1 root root      366 Apr 24 21:54 download_t7.sh
-rw-r--r-- 1 root root 44529096 Jun 16  2018 Hayao_net_G_float.pth
-rw-r--r-- 1 root root 44529096 Jun 16  2018 Hosoda_net_G_float.pth
-rw-r--r-- 1 root root 44529096 Jun 16  2018 Paprika_net_G_float.pth
-rw-r--r-- 1 root root 44529096 Jun 16  2018 Shinkai_net_G_float.pth
Hayao is the default style. If you pass --style=Hosoda, --style=Paprika, or --style=Shinkai to the above Python command, you can test the other models.
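For example, to run the same test with the Hosoda model:

python3 test.py --output_dir=out --load_size=350 --style=Hosoda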
You can download the source code at https://github.com/raspberry-pi-maker/NVIDIA-Jetson