You need the host-side tools as well. I haven't created nativesdk versions of the host-side tools or spent any time verifying that even the target-side packages really work for SDKs. That's mainly because NVIDIA already provides the packages needed for doing CUDA cross-development, and repackaging those binaries for use in an OE-style SDK seemed like a lot of work to put in for not much benefit.
I understand your point. What I need is a self-contained SDK image with all library dependencies, CUDA being one of them, to share across developers and the build system. We use different CUDA versions for the TK1, TX1, and TX2.
I saw how you install the Debian packages for BitBake in cuda-tools-native_8.0.84-1.bb. I understand that I should implement do_populate_sdk in this file, or in an append to it.
Do you have any idea how I could do it?
To start, you'd have to create a separate nativesdk-cuda-tools_8.0.84-1.bb recipe and have it inherit nativesdk. You might be able to reuse some of the contents of the existing -native recipe and the .inc file it uses, but you'll need to have everything in the .deb package under /usr/local get installed under ${prefix}/local instead.

You may run into some issues with the SDK creator or extractor trying to munge the ELF headers of the binaries. Hopefully nothing major. And hopefully the programs themselves will work OK even if they aren't installed under /usr/local/cuda-8.0 (or whatever version).
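A minimal sketch of what such a recipe could look like, assuming the existing -native recipe's shared content lives in a cuda-tools.inc include (the include name and the relocation step below are illustrative assumptions, not the layer's actual code):

```
# nativesdk-cuda-tools_8.0.84-1.bb -- illustrative sketch only.
require cuda-tools.inc

inherit nativesdk

# The .deb contents are extracted under /usr/local; relocate them so they
# end up under ${prefix}/local inside the SDK's host sysroot instead.
# (Adjust if the shared install steps already populate ${prefix}.)
do_install_append() {
    if [ -d ${D}/usr/local ]; then
        install -d ${D}${prefix}
        mv ${D}/usr/local ${D}${prefix}/local
    fi
}

FILES_${PN} += "${prefix}/local"
```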
They make the SDKs, yes, but the problem there is that they're not integrated with the rest of the toolchain (i.e., you'd have to hand-assemble something from their packages and the resulting output of this meta layer).
Heh... Guess there's some call for it if you're needing to go there. Remains to be seen if my customer wants to do that or not. X-D
Well, the answer there is "yes". I basically took the native packaging recipe for the tools and adjusted it to be more in line with a nativesdk pass. We'll see; I'll keep people updated.
Well, it seems to have packaged right, but it looks like CMake and maybe Autotools support isn't working right for compilation right now. I'm...disinclined...to package it differently for this, so I guess I'll be fixing the CMake support accordingly. I'll keep people updated.
@madscientist42 I have encountered the same requirement. How is your SDK going? The CMake issue may be resolved by using your own FindCuda.cmake, just like the OpenCV project does.
It's coming. Been tied up with stuff involved with the other team's project (Trying to come up with a platform core for my client...)
I determined that "packaged" right and what needs to be for Yocto Classic SDKs (possibly even Extensibles) is not 100% correct. Where the macros look appears to differ from what we're doing in this project. It doesn't like/work with /usr/local/cuda-x.y/ But if you move the cross-tools up two levels (i.e. /usr) the CMake macros find the cross tools correctly. I still have a problem with the tools in question or the CMake macros finding the headers/libs for things, which are in the current Jetpack place of /usr/local/cuda-x.y/, on the target side of the SDK,
This means placement should likely be the same way for all of it, maybe with a few minor mods of the macros to understand the split sysroot (gRPC needs fixes...protobufs needs fixes...) .
A question I have: has anyone actually succeeded in adding a package on top of this that uses the native components correctly? It seems...odd...that CMake's support works the way it does while NVIDIA's JetPack arrangement is so different. It's almost as if everyone is working from the common x86 assumptions instead of the JetPack modifications for Tegra.
As for "not much benefit"... Heh... Sorry, but no. You're making a production image build system for this; you don't want to be post-processing or manually pouring binaries into the target image here.
Because of audit considerations (DO-178B, FIPS 140-2, etc.), you want to do this very thing. The build needs to produce more than just a "target" image here.
So, I'll help if that's what's needing to be done here. :D
On re-reading the thread and the above response to our project host, I suspect I've answered my own question: the answer is "no". So the next question is: are PRs welcome for me reworking part of this against the WIP for Thud and JetPack 4.2?
I'd prefer PRs for master or thud, but if you need to target JetPack 4.2 first, the wip-l4t-r32.1 branch would be OK, too.
I'm still not convinced that this will be easier/better/less error-prone than simply rolling your own build system as a combination of the SDK and the NVIDIA deb package and scripting the installation of both of them, but I'm willing to change my mind.
"Challenge Accepted."
I don't want you to think wrong of what I'm aiming for or what I've said up to this point. Your project rocks so far, but the client has a specific workflow that the JetPacks cause problems with: they require explicit versions of Ubuntu, and only Ubuntu, etc. If you're doing CI work, the current way of doing things requires you to frame in the whole recipe first and then try (and hope) to get it all working before you can even thread it into the CI system.
I'm aiming to make it work more like the CMake configs do on an x86 system, so you can pretty much scoop up and deploy anything from anyone without many customizations to the CMakeLists.txt being needed. I've already done this with Ambarella's S5L "SDK" (did I call it an "SDK"? More like a janky buildroot that you can't extend... and it has survived three transitions of their screwball patch/upgrade process to boot), and it would be nice to have a similar thing here for Tegra that everyone can benefit from. I don't think it will be too hard to accomplish, based on what I have working so far with the nativesdk recipe I did. It's going to be mostly busy work, albeit slightly invasive initially compared to what's there now.
Looking forward to working with you on things. :-D
So you know who you've got on the other end...some links for the projects I'm working on right now, either for a client or on my own recognizance:
https://github.com/madscientist42/meta-runit
https://github.com/madscientist42/runit
https://github.com/madscientist42/meta-pha
https://github.com/madscientist42/meta-rtlwifi
As an update, I just got back to this. (Lovely fire drills all throughout then and now... X-D)
I'm starting from Warrior and JetPack 4.2.1 support once I get that semi-squared away with the build system.
Any chance that you can post the "in progress" recipe that was half-working above? Perhaps given a starting point, others reading this thread can then fix things up / suggest ways forward?
My preference is also to build a single SDK to give to the other developers rather than having to manually install NVIDIA's tools, since that is our current practice with the other Yocto-based targets we build for. It would simplify setup considerably if this can be made to "just work".
Is there any update on this? Our company is new to Yocto and we would also like to have a complete, self-contained SDK with CUDA tools for cross-compiling on dev hosts, without every developer having to install the whole NVIDIA sdkmanager and run through all the steps.
@MJLHThomassen-Sorama give this patch a try: 0001-cuda-native-sdk.txt, against branch warrior-l4t-r32.2.
It works for me, but just so you know, I'm not using CMake-based projects and didn't test that part.
@lfdmn Thanks, I'll try it out when I get around to it, hopefully sometime this week!
I finally had time to look into this. I've reworked the CUDA recipes to make them extendable to nativesdk builds, and put together some SDK environment setup files that should make it relatively easy to create CUDA projects (at least CMake-based ones) that are cross-buildable using the CUDA toolchain in the SDK.
You'll need to add nativesdk-packagegroup-cuda-sdk-host to TOOLCHAIN_HOST_TASK to get the CUDA compiler into your SDK.
The one gotcha here is that unless you're using GCC 7.x or clang 8.x, nvcc won't work for you.
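For reference, pulling that packagegroup in could look like the following in local.conf (or in the image recipe you run populate_sdk against); the _append spelling matches the override syntax of this Yocto era:

```
# Add the CUDA cross compiler and host-side tools to the SDK toolchain.
TOOLCHAIN_HOST_TASK_append = " nativesdk-packagegroup-cuda-sdk-host"
```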
There's a mode that is officially supported but broken, where you use clang itself to generate CUDA apps. I'll keep people updated on this, as I can't imagine we're the only ones needing to try it that way. NVIDIA now knows there's an issue and there's a ticket open for it; I'll let everyone know what comes of that.
(It seems Matt beat me to getting it all working... Heh. I blame work...they took me off of this one for a long time...I just got back to it a couple of weeks ago...)
The SDK setup is for cross builds, so if you have gcc 7 for CUDA in your Yocto setup, the SDK will include the gcc 7 cross-compiler and nvcc will be happy.
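If the build otherwise defaults to a newer compiler, one way to select gcc 7 is OE-core's toolchain-version knob; treat this local.conf fragment as an assumption to check against your release (it only works if a gcc 7 recipe is actually available in your layers):

```
# Use the gcc 7 series as the default (cross) compiler so nvcc accepts it.
GCCVERSION = "7.%"
```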
Hi,
What is the proper way to add cuda-toolkit to the SDK?
I added to my SDK image recipe: TOOLCHAIN_TARGET_TASK += "packagegroup-stack-libraries"
And packagegroup-stack-libraries contains: inherit packagegroup
I get this error:
If I add the toolkit dependencies from cuda-toolkit manually to packagegroup-stack-libraries, I get the files in the image, but CMake does not find CUDA, so obviously I'm missing something.
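For anyone comparing notes later, the target-side piece described above boils down to a packagegroup along these lines; the package list is an assumption based on the cuda-toolkit name mentioned in the question, so adjust it to the packages your meta-tegra branch actually provides:

```
# packagegroup-stack-libraries.bb -- sketch of the packagegroup described above.
SUMMARY = "Target-side libraries (including CUDA) for the image and SDK"

inherit packagegroup

# Assumed package names; replace with the CUDA runtime/development
# packages your branch actually builds.
RDEPENDS_${PN} = " \
    cuda-toolkit \
"
```

Per the earlier comments in this thread, the host side of the SDK also needs nativesdk-packagegroup-cuda-sdk-host added to TOOLCHAIN_HOST_TASK; without the cross nvcc in the toolchain, CMake may well have nothing to find.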