Closed — topherbuckley closed this issue 8 months ago
@topherbuckley yeah, it's not on pip (sorry — this project started as C++/CUDA and predates much of the popular Python ML tooling, hence the non-standard install method), but building it from source installs the Python module so that it's importable elsewhere. If you're using your own containers, you can build it inside your own container, or base yours off mine. The jetson-inference container isn't that much bigger than l4t-jetpack/l4t-pytorch (and you can drop the PyTorch part if you aren't doing training).
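For the "build it in your own container" route, a sketch of a Dockerfile might look like the following. This is untested and makes several assumptions: the base image tag, the package list, and the clone path should all be matched to your JetPack release; the steps mirror the from-source build described in the project README.

```dockerfile
# Sketch (untested): build jetson-inference from source on a slimmer base
# instead of pulling the full dustynv/jetson-inference image.
FROM nvcr.io/nvidia/l4t-base:r32.4.3

# Build prerequisites -- adjust for your L4T/JetPack version.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git cmake build-essential libpython3-dev python3-numpy \
    && rm -rf /var/lib/apt/lists/*

# Build and install the C++ library plus the jetson_inference /
# jetson_utils Python bindings (same flow as the project's README).
RUN git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference /opt/jetson-inference \
    && mkdir /opt/jetson-inference/build \
    && cd /opt/jetson-inference/build \
    && cmake ../ \
    && make -j"$(nproc)" \
    && make install \
    && ldconfig
```

One caveat: the stock CMake configure step can launch interactive model/PyTorch download tools, so a non-interactive container build may need those prompts pre-seeded or skipped.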
I see. Thanks for the confirmation! I'll use your container for the time being, and we can work on slimming it down later for our purposes.
Hello,
I'd like to use the Python libraries in an external project, but I'm not seeing the package on pip. Is there a recommended way to import just the Python libraries somewhere external to this project tree?
I saw a similar issue here for a C++ project, but I'm wondering if there is a similar build script for the Python libs somewhere, or if they are already built and hosted somewhere other than pip.
I see in the root README you mention:
but this would assume I still have to clone/download this whole project and either build it or download the full Docker container, right? I'm trying to run on a smaller Docker image (comparing the base nvcr.io/nvidia/l4t-base:r32.4.3 @ 631 MB vs dustynv/jetson-inference:r35.4.1 @ 12.3 GB), and we don't need all the amazing features inside the project — we just want to use our own object detection model through your easy-to-understand API.
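For reference, the kind of usage I have in mind is just a minimal capture-and-detect loop. A sketch (assuming the `jetson_inference`/`jetson_utils` modules were installed by the from-source build; the `"ssd-mobilenet-v2"` model name and `"csi://0"` camera URI are example values):

```python
# Minimal single-shot detection sketch against jetson-inference's Python API.
# Guarded so it degrades gracefully on machines where the module isn't built.
import importlib.util

def detect_once(model="ssd-mobilenet-v2", source="csi://0"):
    """Run one capture+detect cycle, or report that the module is missing."""
    if importlib.util.find_spec("jetson_inference") is None:
        return "jetson_inference not installed -- build from source first"
    import jetson_inference
    import jetson_utils
    net = jetson_inference.detectNet(model, threshold=0.5)  # TensorRT-optimized detector
    camera = jetson_utils.videoSource(source)               # open camera / video stream
    img = camera.Capture()
    detections = net.Detect(img)
    return f"{len(detections)} objects detected"

print(detect_once())
```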
If not, do you know of a more minimalist Python API for the Jetson and camera interface? The Nvidia docs point here, so... :D