Closed robmarkcole closed 3 years ago
The Jetson does include PyTorch with access to the GPU:
dlinano@jetson-nano:~$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
CUDA available: True
>>> import torchvision
>>> print(torchvision.__version__)
0.2.2
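The REPL check above can be extended into a quick smoke test that actually exercises the GPU when present. This is a minimal sketch; it falls back to the CPU, so it also runs on machines without CUDA:

```python
import torch

# Pick the GPU if CUDA is available (as on the Jetson), else fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# A tiny matrix multiply to confirm tensors actually run on the chosen device.
a = torch.ones(2, 3, device=device)
b = torch.ones(3, 4, device=device)
c = (a @ b).cpu()

print(c.shape)  # torch.Size([2, 4]); every entry is 3.0
```

On a correctly set up Jetson this should report `cuda`; if it reports `cpu`, the PyTorch wheel was likely built without CUDA support.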
Hi @robmarkcole, this is top of mind for us, and so far the Jetson is the best option for DeepStack on embedded devices. And yes, PyTorch GPU is fully supported. We will review the Balena platform. The core timeline I see for DeepStack is: we go open source with the desktop CPU and GPU versions, then add support for custom object detection models, and the Jetson build follows after. What are your thoughts on this order?
Sounds good to me!
User @stevemac00 might be able to assist with this task
Article on running YOLOv5 on a Jetson -> https://blog.roboflow.com/deploy-yolov5-to-jetson-nx/
Also a guide for installation on the Jetson Xavier, which is a very capable platform:
https://forums.fast.ai/t/platform-nvidia-jetson-xavier-nx/72119
done
I mentioned before a desire to get DeepStack running on a Jetson, and I think this could be a reference platform for home users. There is also an opportunity to make DeepStack even more accessible by providing a disk image that can be flashed to an SD card and then presents a nice UI for setting up the server. One tool for this is Balena, and there is an example of its use here. One very appealing aspect of the Jetson is that training can also be performed on this platform. Any thoughts on this?
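For comparison, DeepStack is typically launched today as a Docker container on desktop hardware; a flashable Jetson image could wrap an equivalent setup behind the proposed UI. A sketch based on the documented CPU invocation (the `VISION-DETECTION` flag, volume, and port mapping follow the DeepStack docs):

```shell
# Pull and run the DeepStack CPU image, enabling object detection
# and exposing the API on host port 80.
docker pull deepquestai/deepstack
docker run -d \
  -e VISION-DETECTION=True \
  -v localstorage:/datastore \
  -p 80:5000 \
  deepquestai/deepstack
```

The detection endpoint is then available at `http://localhost:80/v1/vision/detection`; a Jetson build would need a GPU-enabled image and the NVIDIA container runtime on top of this.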