w3sip opened 6 months ago
No, our official platform is Ubuntu 20.04. All of our Docker images are based on it, as are our development VMs. The Development Environment Guide covers more than you need if you're just doing component development, but the steps for installing OpenCV and ActiveMQ C++ are useful. Here's how you build the OpenMPF C++ SDK.
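As a rough sketch, the SDK build might look like the following Dockerfile stage. This is an assumption-laden illustration, not the project's official build: the package list and the standard CMake flow are guesses, and prerequisites such as OpenCV and ActiveMQ C++ would need to be installed first per the Development Environment Guide.

```dockerfile
# Hypothetical sketch: building the OpenMPF C++ SDK on Ubuntu 20.04.
# Package names and the cmake/make flow are assumptions; follow the
# Development Environment Guide for the authoritative steps.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        git cmake make g++ ca-certificates && \
    rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/openmpf/openmpf-cpp-component-sdk.git && \
    cd openmpf-cpp-component-sdk && \
    mkdir build && cd build && \
    cmake .. && make install
```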
Thanks for the quick response, makes sense. While I have your attention (and please do point me to a better place to ask questions if there is one):
From what I see, all the components are derived from this container - https://github.com/openmpf/openmpf-docker/blob/master/components/cpp_executor/Dockerfile ... but what if you want the component to be derived from, say, an NVIDIA image? Is there an example of a component deployed on an arbitrary base that still satisfies the framework requirements?
Again, thanks for the prompt response!
A better place for discussions would be https://github.com/openmpf/openmpf/discussions. Just something to keep in mind for the future. This is fine for now.
Yes, all of the C++ component Docker images are derived from the C++ executor base Docker image. That takes care of all of the logic for registering the component with the Workflow Manager and interacting with ActiveMQ. We do not have an example of a component that starts with an arbitrary base image, but it's not impossible to develop one. As long as your component performs those two behaviors, it will work in an OpenMPF Docker deployment. Those behaviors are executed using the two .py scripts found here.
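The typical route described above can be sketched as a minimal component Dockerfile. Treat this as an illustration only: the `BUILD_REGISTRY` variable, image name, and plugin path are assumptions, so check an existing OpenMPF component Dockerfile for the exact conventions.

```dockerfile
# Hypothetical minimal component Dockerfile. Deriving from the executor
# base image provides Workflow Manager registration and ActiveMQ handling
# for free. The registry variable and plugin path are assumptions.
ARG BUILD_REGISTRY
ARG BUILD_TAG=latest
FROM ${BUILD_REGISTRY}openmpf_cpp_executor:${BUILD_TAG}

# Copy your built plugin package into the image (destination path is a
# placeholder; see a real component Dockerfile for the expected location).
COPY plugin-package/ /opt/mpf/plugins/MyComponent/
```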
FYI though, to get NVIDIA CUDA support, you just need to copy the relevant libraries into your Docker image. Here are two examples:
```dockerfile
COPY --from=build_component /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8 /usr/lib/x86_64-linux-gnu/
COPY --from=build_component /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8 /usr/lib/x86_64-linux-gnu/
```

```dockerfile
COPY --from=build_component /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8 /usr/lib/x86_64-linux-gnu/
COPY --from=build_component /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8 /usr/lib/x86_64-linux-gnu/
```
Those components use the same CUDA libs.
You can see here in the C++ component build Docker image that we install CUDA 11.4. The components copy the relevant libs from that base image using `--from=build_component`.
Also, we install CUDA dependencies here in the C++ component executor base image.
The way we install CUDA is based on the official NVIDIA Docker images. We strip stuff out that we don't use to save space.
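A trimmed install along those lines might look like the fragment below. This is a sketch modeled on NVIDIA's published apt repo layout, not the project's actual Dockerfile; the keyring URL and exact package names are assumptions to verify against the official NVIDIA images.

```dockerfile
# Hypothetical sketch of a trimmed CUDA runtime install, modeled on the
# official NVIDIA Docker images. The keyring URL and package names are
# assumptions; verify against NVIDIA's CUDA installation documentation.
RUN wget -q https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb && \
    dpkg -i cuda-keyring_1.0-1_all.deb && \
    apt-get update && \
    apt-get install -y --no-install-recommends cuda-cudart-11-4 && \
    rm -rf /var/lib/apt/lists/* cuda-keyring_1.0-1_all.deb
```

Installing only `cuda-cudart-11-4` (rather than a full `cuda-11-4` meta-package) is what keeps the image small.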
To be clear, NVIDIA libraries like libcudnn and libcublas might be required by your component; it depends on what you're doing. Strictly speaking, though, they are not part of the base CUDA runtime install (cudart). We install cuda-cudart-11-4 in the C++ component executor base image, so there's nothing you need to do to use base CUDA.
Is there an easy way to enumerate what makes an image an OpenMPF component container? E.g., if I have a functional Docker image whose service I also want to provide as an OpenMPF plugin, is there a way to deploy a specific set of libraries/configuration files in a certain way to accomplish that? Something like:
I think there is an issue regarding the terminology. An "OpenMPF component" is a C++, Java, or Python class that implements the language-specific OpenMPF base class. The component API is specified as programming-language-level constructs, not as a Docker image. You can't just take an existing Docker image and copy in a few files to make it an OpenMPF component; you need to write actual C++, Python, or Java code using our libraries.
Right now you essentially want to move the OpenMPF-specific resources into your existing Docker image. Just do the reverse: move the resources in your existing Docker image into an OpenMPF base image. As Jeff mentioned, using your own base image is "not impossible", but it is certainly not recommended. You would need to access internal APIs that can and do change frequently.
Figuring out how to install your own code in a new environment is generally going to be easier than trying to install someone else's code in a new environment, because you are more familiar with your own code than ours.
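The "do the reverse" advice can be sketched as a multi-stage build: keep the OpenMPF executor image as the base and copy your existing image's resources into it. Everything here except the multi-stage pattern itself is a placeholder; `my_existing_image` and all paths are assumptions you would replace with your own.

```dockerfile
# Hypothetical sketch: keep the OpenMPF executor image as the base and
# pull resources out of your existing image via a multi-stage build.
# "my_existing_image" and every path below are placeholders.
FROM my_existing_image:latest AS my_resources

ARG BUILD_REGISTRY
FROM ${BUILD_REGISTRY}openmpf_cpp_executor:latest

# Copy models, shared libraries, and binaries out of your existing image.
COPY --from=my_resources /opt/my_app/models /opt/my_app/models
COPY --from=my_resources /usr/local/lib/libmylib.so.1 /usr/local/lib/
RUN ldconfig
```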
Is the framework supposed to build, or be deployed, under Windows?
I'm attempting to build it under Windows, and a variety of environments seem to fail. Here's the output of the last attempt (VS 2022 Developer Command Prompt, cmake 3.29.2):
The output is attached. buildLog.txt