microsoft / ELL

Embedded Learning Library
https://microsoft.github.io/ELL

Compiling models for other targets #63

Closed h4gen closed 7 years ago

h4gen commented 7 years ago

Hi everybody!

I was able to go through all the examples without problems. But when I tried the examples in ELL/examples/compiled_code, I had neither a Release directory nor a compile command. Do I have to install any further dependencies to make it work, or am I overlooking something? Thanks for your help! Great work you guys are doing here!

lisaong commented 7 years ago

Hi,

Thanks for bringing this up, and glad the tutorial examples worked for you! We've been focusing more on the tutorials because they demonstrate a real use case.

As a result, the examples/compiled_code content is out of date and probably needs a revamp.

h4gen commented 7 years ago

Thanks for the update! But how can I compile a model that is not one of the examples? Can you tell me where the entry point is for figuring out how the translation from model to LLVM IR works? I don't know if I am missing something, but apart from the examples, isn't this essentially what ELL is all about?

lisaong commented 7 years ago

Ah, I get your question now.

If you look at cntkDemo.py and darknetDemo.py, you'll see the models being imported from the Python scripts. Basically, the idea is that you pass the model files (from CNTK or darknet) you want to try to those scripts. It should be pretty clear from the script where you can pass in your own model files; just look for these lines:

    # Pick the model you want to work with
    helper = mh.ModelHelper(sys.argv, "darknetReference", ["darknet.cfg", "darknet.weights"], "darknetImageNetLabels.txt")

    # Import the model
    model = get_ell_predictor(helper)
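
For example, to try your own darknet model you would only change that line (the file and model names below are hypothetical placeholders, not files shipped with ELL):

    # Hypothetical example: substitute your own .cfg/.weights pair and labels file
    helper = mh.ModelHelper(sys.argv, "myModel", ["myModel.cfg", "myModel.weights"], "myLabels.txt")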

At this time we've only verified darknetReference and vgg16, but you can always pass in a different model from either framework.

Keep in mind, though, that some of the other models (e.g. tiny-yolo) are huge, which makes them harder to compile and link against. We're aware of this issue with off-the-shelf models and are working to resolve it. For now, I recommend choosing models similar in size to darknet.weights for best results.

h4gen commented 7 years ago

Thanks for your kind answer. Sorry, maybe the explanation of my problem was too short. Just using the script does not help me. I am particularly interested in the build process, because I want to deploy a model on an ARM Cortex M4. For this I have to understand the build process and the cross-compilation process. Right now, as it seems to me, the really interesting part, namely the build process, is hidden within huge makefiles. That is why I asked about the compile command, which does not work, although it seemed like it should. I want to understand how I can compile a small model, as in ELL/examples/compiled_code, for a target other than the host or a Raspberry Pi 3. I assume this is possible, since you have an example for the ARM Cortex M0 and state that models can be deployed to other architectures. I just don't see where this happens, or where I can start to do it myself.

Thanks for your effort.

Edit: So the changed topic is also wrong. It's not about the tutorial; it's about cross-compiling and controlling the build process to build for other targets.

lisaong commented 7 years ago

(Updated the title, thanks for the feedback)

The compile and llc commands are documented in compilingAdvanced. The commands are written for a Windows host machine, but you should be able to adapt them for a Linux or Mac host machine.
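
For orientation, the second half of that documented pipeline is a plain llc invocation. A minimal sketch, assuming compile has already emitted a model.ll file, with a 32-bit Raspberry Pi 3 as one example target (the file name, triple and cpu here are illustrative, not taken from the docs):

    # Lower the LLVM IR emitted by ELL's compile tool to a target object file
    # (example target: 32-bit Raspberry Pi 3; adjust -mtriple/-mcpu for your device)
    llc -mtriple=armv7-linux-gnueabihf -mcpu=cortex-a53 -filetype=obj model.ll -o model.o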

h4gen commented 7 years ago

Thank you for your answer. This article has the same problem as the other: it uses the compile command, but this command does not work. And I have no chance of fixing this error because it is explained nowhere. Is it a dependency? Is it defined during the make of ELL? WHAT is the compile command? Also, there is no Release folder. So I think something is going wrong when building ELL?

h4gen commented 7 years ago

Okay, I found it. It seems that the compile executable is not located in ELL/build/bin/Release, but in the ELL/build/bin folder. Now it makes sense that you have to add this directory to the PATH before you can use it.

That answered my question.

lisaong commented 7 years ago

Correct, for non-Windows (i.e. Linux and Mac), the compile executable is in the build/bin folder.

For Windows, you add the "Release" or "Debug" subfolders because of the way Visual Studio generates the configurations.
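
In shell terms, the PATH setup differs only in that subfolder (the checkout locations ~/ELL and C:\ELL below are illustrative):

    # Linux / macOS: the tools land directly in build/bin
    export PATH=$PATH:~/ELL/build/bin

    # Windows (cmd), where Visual Studio adds a per-configuration subfolder:
    #   set PATH=%PATH%;C:\ELL\build\bin\Release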

vrbala commented 7 years ago

@h4gen Hi, I am also interested in using the ELL library on Cortex-series microcontrollers. I am barely a starter, would like to know more about this, and am looking for ways to collaborate. Could you provide any pointers on how to start, please? Currently I am spending most of my time learning the Cortex ecosystem.

Many thanks!

h4gen commented 7 years ago

@vrbala Hi! Sure, I can explain what I know so far. Regarding the Cortex ecosystem: this is quite a complex topic. Currently I am using the Bosch XDK with the XDK Workbench as the IDE. Right now I am looking for a way to inject the object files generated by ELL into my project. What I can tell you so far:

The Compiling Readme works for me on Mac (or Linux) if I make the ELL compiler known to my system by:

    export PATH=$PATH:pathtoELL/ELL/build/bin

After this I can use the compile command to create the LLVM IR, and then llc to create an object file. Basically I am using:

    llc -mtriple=armv6m-unknown-none-eabi -march=thumb -mcpu=cortex-m4 -float-abi=soft -mattr=+armv6-m,+v6m -filetype=obj ../../../examples/compiled_code/identity.ll

This is the same as in the example, except that I changed the filetype from asm to obj and changed the target to cortex-m4. With a working ARM Cortex toolchain you should be able to make the generated object file known to the linker, and so inject your model into your embedded code. At least this is how I understand it :) This is the step where I am currently stuck (an XDK Workbench problem), but in general it should work like this.
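
To sketch that linking step with the GNU ARM toolchain (this is an assumption about a generic bare-metal setup, not something from the ELL docs; startup.o, board.ld and main.c are board-specific placeholders):

    # Build the application code with the same ABI the model object was compiled for
    # (-mfloat-abi=soft matches the -float-abi=soft passed to llc above)
    arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=soft -c main.c -o main.o

    # Link the ELL-generated object into the firmware image; startup.o and
    # board.ld stand in for your board's startup code and linker script
    arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=soft \
        startup.o main.o identity.o -T board.ld -o firmware.elf

In main.c you would declare the model's exported function as extern and call it; the exact symbol name can be read from the generated identity.ll.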

vrbala commented 7 years ago

Thanks @h4gen! This is a very good starting point for me.