Closed tdurand closed 5 years ago
Ok + thanks! What about the non-AWS cloud GPU providers, e.g. "G0 Pay-as-you-go": https://www.paperspace.com/pricing
Paperspace cheapest GPU seems to be 0.51 USD / h.
Then would need to look into Google Cloud, Azure, etc., but they should be in the same range as AWS / Paperspace...
Looking into more modest GPUs, I found this provider: https://www.gpueater.com/ which seems to have less powerful GPUs starting at 0.1 USD / h .... but would need to see if it's enough to run YOLO..
Anyway, in order to run opendatacam on the cloud for 24h for example, it seems we are looking at a cost somewhere between 3 and 20 USD..
Thanks. That seems rather too pricey ... however the gpueater plan "n1.p400 | 0.641 | 256 | 2GB | NVIDIA Pascal | $0.0992/h" for $71/m looks interesting. The Jetson TX2 is $500 ... so 500 / 71 ≈ 7, which gives you about 7 months of GPU cloud hosting, or?
Yes, would need to see if it's powerful enough to run Opendatacam..
Low prio for now also, but specifying tasks:
[x] Create a docker-nvidia (https://github.com/NVIDIA/nvidia-docker) image from the work on the jetson docker images
[x] Test it on a laptop running Ubuntu with CUDA installed (as nvidia-docker isn't compatible with either macOS or Windows)
[x] Test on cloud and see minimal machine requirement..
[x] Have a version that can run from an IP Cam
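For the laptop/cloud test above, a quick way to verify the nvidia-docker runtime is wired up before trying the opendatacam image is the sanity check from NVIDIA's own README (command shown for nvidia-docker 2.x; requires an NVIDIA driver on the host):

```shell
# Should print the host GPU table from inside a CUDA container.
# If this fails, the opendatacam image won't see the GPU either.
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```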
Worked today on making a Docker image for non-Jetson devices using nvidia-docker: https://github.com/NVIDIA/nvidia-docker , and after some pain I managed to get an image that runs from a file! Yay 🙌️ !
Some early documentation of this : https://github.com/moovel/lab-opendatacam/blob/v2/doc/DOCKER_CLOUDANDDESKTOP.md
More on this, after a bit of pain I successfully:
Deployed and ran Opendatacam on a Paperspace instance (on a file) 👌️ ! Turns out the NVIDIA Quadro P4000 minimum setup at 0.51 USD / h is very powerful and I can run full-weight YOLOv3 at 30 FPS...
And more than this, I also successfully ran it on any IP-accessible webcam out of the box, just by changing a setting... 🤯️
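For reference, the setting change is along these lines in `config.json` (key names as I remember them from the v2 docs; the camera URL is a placeholder, double-check against the repo's documentation):

```json
{
  "VIDEO_INPUT": "remote_cam",
  "VIDEO_INPUTS_PARAMS": {
    "remote_cam": "http://<CAM_IP>:<PORT>/video"
  }
}
```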
The pain was related to the JSON stream of detections sent by darknet: for some reason, on a cloud deployment it behaves differently and splits the output into more chunks, which my JSON stream receiver wasn't handling properly... But it is fixed now and the code is much more reliable.
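The fix boils down to buffering chunks until a complete top-level JSON object has arrived, instead of assuming one chunk = one object. A minimal sketch of that idea (a hypothetical helper, not OpenDataCam's actual receiver; it matches braces naively, so it assumes the detection payload contains no `{`/`}` inside string values):

```javascript
// Chunk-tolerant JSON stream receiver: darknet may split one JSON
// object across several network chunks, so we accumulate text and
// only emit once a balanced top-level {...} is present.
class JSONStreamBuffer {
  constructor(onObject) {
    this.buffer = '';
    this.onObject = onObject; // callback for each complete object
  }

  // Feed a raw chunk; emits every complete top-level object found.
  push(chunk) {
    this.buffer += chunk;
    let depth = 0;
    let start = -1;
    for (let i = 0; i < this.buffer.length; i++) {
      const c = this.buffer[i];
      if (c === '{') {
        if (depth === 0) start = i;
        depth++;
      } else if (c === '}') {
        depth--;
        if (depth === 0 && start !== -1) {
          this.onObject(JSON.parse(this.buffer.slice(start, i + 1)));
          this.buffer = this.buffer.slice(i + 1); // keep the remainder
          i = -1; // restart the scan on the shortened buffer
          start = -1;
        }
      }
    }
  }
}
```

Usage: feed whatever chunking the transport produces, and the callback still fires once per detection frame:

```javascript
const frames = [];
const rx = new JSONStreamBuffer((obj) => frames.push(obj));
rx.push('{"frame_id": 1, "obj'); // object split mid-key: no emit yet
rx.push('ects": []}{"frame_id": 2');
rx.push(', "objects": []}');     // frames now holds both objects
```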
I believe we could run YOLO for cheaper on some GTX 1050 / 1070 series card (like my laptop), but I have a hard time finding a provider. https://www.gpueater.com/ sounds like a good option but doesn't take my credit card, so I asked them about PayPal... And I tried an "Airbnb of GPUs" but it was broken and I couldn't start a machine: https://vast.ai/
Todo on this issue:
Ok, this is done: integrated into the master docs, and rc.1 released for the nvidia-docker container as well. I'll also try to do a tutorial for deploying on the cloud (example for Paperspace): https://github.com/opendatacam/opendatacam/blob/master/documentation/nvidia-docker/DEPLOY_CLOUD.md
Hey, does this still work on a Google VM? I don't have any CUDA products, so I can only use free clouds.
Had a quick look at this: it seems the smallest GPU instance on AWS is p2.xlarge, which costs 0.9 USD / hour...
But this seems overkill in terms of performance just to run YOLO .. those instances are made for training or running big neural nets...
I saw there is this elastic graphics thing: https://aws.amazon.com/ec2/elastic-graphics/pricing/ which is cheaper (0.1 to 0.4 USD / h), but it's unclear to me if we can run nvidia-docker on it and what the performance would be.
I couldn't figure out what the cloud equivalent of a Jetson TX2's performance would be.