Closed fnoop closed 5 years ago
Hi Fnoop, ARM architecture is currently not supported by OpenVINO.
Hi @SeverineH - is this by design or just not got round to it yet? Will ARM support be added in the future?
Hi, No, this will be fixed. We have a fix, but would like to do some checks on it first. But just to make sure we are talking about same thing. It will not be possible to do inference on ARM CPU, just use ARM as a host for accelerator. There are no primitives to run on ARM itself.
@yury-gorbachev, will your fix allow us to do inference with models in the Intermediate Representation on a Raspberry Pi with a Movidius Neural Compute Stick?
I am also interested in this feature.
@rsippl This might be also an option in the future: https://aiyprojects.withgoogle.com/edge-tpu
> Hi, No, this will be fixed. We have a fix, but would like to do some checks on it first. But just to make sure we are talking about same thing. It will not be possible to do inference on ARM CPU, just use ARM as a host for accelerator. There are no primitives to run on ARM itself.
@yury-gorbachev Will this be available soon (before the end of 2018)?
@yury-gorbachev I've been following this for a month now. I am also very interested, will this be released anytime soon, thanks.
Yes, it should be available before new year. I'm not able to say more exact date though, sorry guys.
Chiming in here to say that this is excellent: I'm exploring some use cases with a Raspberry Pi 3B+ as the host and would like to use an NCS2 to run inference on.
Sounds like this would make that possible (since the NCS2 requires OpenVINO rather than the NCSDK)?
So the NCS2 should be compatible with the Raspberry Pi 3 B+? Only then can Intel say that AI now comes from the cloud to the edge.
It’s great to hear that support is in the works: I’ll work on other parts of my project and leave the NCS1 in place for now. Thanks for the transparency, it’s a big help in scheduling.
@raygeeknyc So it means the NCS2 I purchased won't be a waste of money, and Raspberry Pi support will definitely come?
@kakubotics read the thread
> Hi, No, this will be fixed. We have a fix, but would like to do some checks on it first. But just to make sure we are talking about same thing. It will not be possible to do inference on ARM CPU, just use ARM as a host for accelerator. There are no primitives to run on ARM itself.
Does this mean inference will take place on the NCS2 stick only?
Guys just let me know NCS2 will work with raspberry pi or not? in future ?
> Hi, No, this will be fixed. We have a fix, but would like to do some checks on it first. But just to make sure we are talking about same thing. It will not be possible to do inference on ARM CPU, just use ARM as a host for accelerator. There are no primitives to run on ARM itself.

> Does this mean inference will take place on the NCS2 stick only?
Yes
> Guys just let me know NCS2 will work with raspberry pi or not? in future ?
NCS definitely works on Raspberry Pi (we tried without OpenVINO/DLDT). Per this thread, NCS will soon work there with DLDT as well. Theoretically the NCS2 should be a similar case despite its significant performance improvement. However, I am not sure anyone has ever tried it on a Raspberry Pi, given it launched less than two months ago.
> Yes, it should be available before new year. I'm not able to say more exact date though, sorry guys.
Hi @yury-gorbachev, do you know if we can now use Raspberry Pi 3 B+ with NCS2 already? If not, is the estimate still no later than the end of this year please?
Cannot wait! Go NCS2 on arm!
R5 released!!!! It supports Raspberry Pi.
Where?
What is R5?
An install guide: https://software.intel.com/articles/OpenVINO-Install-RaspberryPI
Guys, does this mean we can now run the NCS2 on a Raspberry Pi? Right? Please say yes.
https://software.intel.com/articles/OpenVINO-Install-RaspberryPI
@kakubotics - yeah!
This is from the install guide:
> Raspberry Pi* board with ARMv7-A CPU architecture
The Raspberry Pi 3 B+ has an ARMv8 CPU, but I hope v7 is just the minimum, right?
@mpeniak, please provide the output of `cat /proc/cpuinfo`
Thanks to all of you and Intel
Great, thanks, guys!
I've followed these instructions and everything works OK.
One minor thing I noticed is an error in the Python code example: line 13,

blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U) net.setInput(blob)

has to be split into two lines, as in the corrected listing below:
```python
import cv2 as cv

# Load the model
net = cv.dnn.readNet('face-detection-adas-0001.xml', 'face-detection-adas-0001.bin')
# Specify target device
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
# Read an image
frame = cv.imread('/path/to/image')
# Prepare input blob and perform an inference
blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()
# Draw detected faces on the frame
for detection in out.reshape(-1, 7):
    confidence = float(detection[2])
    xmin = int(detection[3] * frame.shape[1])
    ymin = int(detection[4] * frame.shape[0])
    xmax = int(detection[5] * frame.shape[1])
    ymax = int(detection[6] * frame.shape[0])
    if confidence > 0.5:
        cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
# Save the frame to an image file
cv.imwrite('out.png', frame)
```
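For anyone adapting the example above: each row of `out.reshape(-1, 7)` follows the SSD detection layout `[image_id, class_label, confidence, xmin, ymin, xmax, ymax]`, with coordinates normalized to [0, 1]. A minimal sketch of decoding one row in plain Python (the helper name and the sample row values are made up for illustration):

```python
def decode_detection(row, frame_w, frame_h, threshold=0.5):
    """Scale a normalized SSD detection row to pixel coordinates.

    Returns (confidence, top-left, bottom-right), or None if the
    detection falls below the confidence threshold.
    """
    confidence = float(row[2])
    if confidence <= threshold:
        return None
    xmin = int(row[3] * frame_w)
    ymin = int(row[4] * frame_h)
    xmax = int(row[5] * frame_w)
    ymax = int(row[6] * frame_h)
    return confidence, (xmin, ymin), (xmax, ymax)

# A hypothetical high-confidence detection on a 620x349 image
row = [0, 1, 0.98, 0.40, 0.05, 0.48, 0.24]
print(decode_detection(row, 620, 349))  # -> (0.98, (248, 17), (297, 83))
```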
Hi @nikogamulin, what HW did you test this on? Do you have inference times as well? I am going to run this on a Raspberry Pi 3 B+ with multiple NCS 2.
@mpeniak This is the output from first example:
```
~/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/build $ ./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i ~/faces.jpg
[ INFO ] InferenceEngine:
        API version ............ 1.4
        Build .................. 19154
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/pi/faces.jpg
[ INFO ] Loading plugin
        API version ............ 1.5
        Build .................. 19154
        Description ....... myriadPlugin
[ INFO ] Loading network files:
        face-detection-adas-0001.xml
        face-detection-adas-0001.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ WARNING ] Image is resized from (620, 349) to (672, 384)
[ INFO ] Start inference (1 iterations)
[ INFO ] Processing output blobs
[0,1] element, prob = 1    (248.848,17.4883)-(297.891,82.7341) batch id : 0 WILL BE PRINTED!
[1,1] element, prob = 1    (381.748,41.5801)-(424.736,97.0486) batch id : 0 WILL BE PRINTED!
[2,1] element, prob = 1    (119.656,53.6366)-(166.807,122.184) batch id : 0 WILL BE PRINTED!
[3,1] element, prob = 1    (473.477,74.9379)-(520.098,137.351) batch id : 0 WILL BE PRINTED!
[4,1] element, prob = 0.0467224    (318.174,324.631)-(342.998,348.83) batch id : 0
... (low-probability detections [5,1] through [106,1], all below 0.02, omitted) ...
[ INFO ] Image out_0.bmp created!
total inference time: 155.109
Average running time of one iteration: 155.109 ms
Throughput: 6.44708 FPS
```
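For reference, the reported throughput follows directly from the per-iteration latency, since a single request is processed at a time (the helper name below is just for illustration):

```python
def fps_from_latency_ms(latency_ms):
    """Throughput in frames per second from the average latency of one
    inference iteration, in milliseconds."""
    return 1000.0 / latency_ms

# The 155.109 ms latency from the sample output above
print(round(fps_from_latency_ms(155.109), 5))  # -> 6.44708, as reported
```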
I have used RPi 3 B+ with single NCS 2.
Thanks @nikogamulin! It seems a little slow. I got mobilenet-ssd running on a Myriad X (UP Squared) at 40 FPS. I know the Pi is a slower host, but I would not expect it to be this slow. I wonder if running a different model would yield similar FPS...
@nikogamulin I got very similar results with the same setup / test.
The sad part is an NCS provides ~6.24 FPS whereas an NCS2 provides ~6.44 FPS.
I imagine there is still a bit of work to do on optimisation and properly using the new hardware?
@njern I would suggest trying a different mobilenet-ssd from the Intel models to see if this applies to all of them. They are not all exactly the same type of mobilenet-ssd, I think, so perhaps we would see a difference. Indeed, the Myriad X is said to be 6-10x faster, so this is not good...
Folks, thank you for your interest and experiments with Raspberry Pi and NCS.
The original issue was about a build failure on ARM/Raspberry Pi, and it is not valid because the VPU (Myriad) plugin is not part of the open source distribution. Please read https://01.org/openvinooolkit/faq
I propose to close this issue, any objections?
P.S. The engineering team is right now discussing the potential to open source the Myriad plugin, and if that happens you will see news about it here: https://opencv.org/news.html. Stay tuned! No public commitments or ETA for now.
I do hope the engineering team decides to open source the Myriad plugin. After all, it is just the USB comms protocol between the host and the Myriad firmware, and my guess is it is similar to the one in the original Movidius NCSDK. It's not like we are asking for the source of the firmware itself, although that would be really nice: the community would make it a hundred times better at zero cost to Intel, not to mention that the Myriad solution would instantly become extremely popular.
Just to make sure I understood you correctly @moslex :
The OSS version of OpenVINO (this repository) is missing some proprietary plugins, specifically the "VPU plugin". This plugin is required in order to see the claimed 6-10x performance boost when using an NCS2 rather than an NCS.
Furthermore, the Intel® Distribution of OpenVINO™ will not be made available for ARM, so if someone has purchased an NCS2 with the express purpose of using it on an ARM platform... they are basically SOL?
I am using OpenVINO R5 and an NCS2, but not using OpenCV for inference, so I think I will see the expected speedup.
Hello, @njern and @larrylart. We fully understand the importance of open sourcing the Myriad plugin to unlock the OSS community, and we have it in our internal backlog of feature requests. Sorry, no more comments from my side. This place is to report issues. Thank you for your understanding and patience.
P.S. ALL, you will inspire engineering team if you make cool and creative projects using OpenVINO toolkit with Raspberry Pi + NCS2. Please, surprise us as we surprised you with this package https://software.intel.com/articles/OpenVINO-Install-RaspberryPI. Good luck! :)
I just tried to implement "real-time" face detection on the RPi by modifying the script provided in the instructions. The code is available here
Hope it helps. Above all, hopefully we will be able to achieve speedups in the following days and implement other models for useful edge AI use cases. Thanks again!
Guys, can we run TensorFlow and Caffe models on the RPi, rather than only the face detection model using OpenCV?
@kakubotics, of course OpenVINO is not only about a single face detection network. There is an entire repository of pre-trained Intel models: https://github.com/opencv/open_model_zoo. @nikogamulin, BTW you may be interested in different face detectors (i.e. https://github.com/opencv/open_model_zoo/blob/2018/intel_models/face-detection-retail-0004/description/face-detection-retail-0004.md).
If you have your own network in one of the popular formats, you may convert it to Intermediate Representation (IR) using Model Optimizer, or use OpenCV's `readNet` plus `setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE)`.
Read more about:
- Model Optimizer (MO) + Inference Engine (IE): https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv
- OpenCV + IE: https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
@moslex
Unfortunately I'm not using the Raspberry Pi platform but another ARM variant, an ODROID. And I agree, I would rather spend my time building interesting projects with the NCS2 sticks than trying to hack them to work on my edge device. But for that I first need to be able to compile the toolkit; for now the two NCS2 sticks I bought only decorate my shelf...
@larrylart same here. Working with an ODroid and using the open source DLDT would simplify things...
I’m now happy to say that I’ve used R5 on a Pi 3B+ with two NCS2 sticks running two models in parallel. The Pi has a PoE HAT, and I used it all to create a powerful AI edge camera; the temporary name is CortexiCam, as I work at Cortexica. I’ll publish a blog post about all this today, so check it out later: www.timeless.ninja
> @kakubotics, of course OpenVINO is not only about a single face detection network. There is an entire repository of pre-trained Intel models: https://github.com/opencv/open_model_zoo. @nikogamulin, BTW you may be interested in different face detectors (i.e. https://github.com/opencv/open_model_zoo/blob/2018/intel_models/face-detection-retail-0004/description/face-detection-retail-0004.md).
> If you have your own network in one of the popular formats, you may convert it to Intermediate Representation (IR) using Model Optimizer, or use OpenCV's `readNet` plus `setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE)`.
> Read more about:
> - Model Optimizer (MO) + Inference Engine (IE): https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv
> - OpenCV + IE: https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
thanks
@larrylart, @r0l1, may I ask you to show the output of `cat /proc/cpuinfo`? Is your ODROID 32-bit or 64-bit?
@dkurt
It's a 32-bit ARMv7 Samsung Exynos 5422 processor.
```
processor       : 0
model name      : ARMv7 Processor rev 3 (v7l)
BogoMIPS        : 90.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 3

(processors 1-3 are identical to processor 0)

processor       : 4
model name      : ARMv7 Processor rev 3 (v7l)
BogoMIPS        : 120.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc0f
CPU revision    : 3

(processors 5-7 are identical to processor 4)

Hardware        : ODROID-XU4
Revision        : 0100
Serial          : 0000000000000000
```
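The XU4's big.LITTLE layout is visible in the "CPU part" fields above: 0xc07 and 0xc0f are ARM's published part IDs for the Cortex-A7 and Cortex-A15. A small sketch that summarizes core types from cpuinfo text (the function name and sample text are illustrative):

```python
from collections import Counter

# ARM's published "CPU part" IDs for the two core types in this dump
ARM_PARTS = {"0xc07": "Cortex-A7", "0xc0f": "Cortex-A15"}

def summarize_cores(cpuinfo_text):
    """Count cores per type from /proc/cpuinfo text using the CPU part IDs."""
    parts = [line.split(":")[1].strip()
             for line in cpuinfo_text.splitlines()
             if line.startswith("CPU part")]
    return Counter(ARM_PARTS.get(p, p) for p in parts)

sample = """CPU part : 0xc07
CPU part : 0xc07
CPU part : 0xc0f
CPU part : 0xc0f"""
print(summarize_cores(sample))  # -> Counter({'Cortex-A7': 2, 'Cortex-A15': 2})
```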
@r0l1, have you tried to validate the RPi package on it?
Is this project intended for Intel-only architecture, or should/will it work on ARM?