morxa / rosfed

ROS RPM package generator for Fedora

Include aarch64 architecture to COPR repository #7

Closed: martinezjavier closed this issue 3 years ago

martinezjavier commented 3 years ago

@morxa are you planning to include aarch64 packages in your ROS COPR?

I think that could be interesting for Fedora IoT and other embedded use cases.

morxa commented 3 years ago

I haven't tried at all; I'd expect some build failures. But I'll give it a try!

martinezjavier commented 3 years ago

Awesome! I was planning to do the same in the COPR I forked from yours, but since you are the expert on this, it would be much better if you can try it.

Please feel free to file issues and Cc me so I can attempt to fix any issues you may find.

nullr0ute commented 3 years ago

I haven't tried at all; I'd expect some build failures. But I'll give it a try!

I'd personally be surprised if there were many; aarch64 on Fedora has been basically identical to x86_64 in terms of package set since F-26. Of course there are always bugs, but they're generally quick to fix. Let's track any issues here, and I'm happy to assist in quick resolutions.

morxa commented 3 years ago

Do you have a test system where you can check whether the packages actually work? If so, which Fedora release?

morxa commented 3 years ago

I've started an initial set of test builds in ros-testing: https://copr.fedorainfracloud.org/coprs/thofmann/ros-testing/builds/

martinezjavier commented 3 years ago

Do you have a test system where you can check whether the packages actually work? If so, which Fedora release?

I do, yes. I can test it on either a Rockpro64 or a Raspberry Pi 4 board. Rawhide is OK, but I see you went all the way to F33, cool!

nullr0ute commented 3 years ago

Do you have a test system where you can check whether the packages actually work? If so, which Fedora release?

Probably one of the easiest ways is on AWS Graviton aarch64 instances, if you want to test package failures or build issues. You can find Fedora images for that at https://alt.fedoraproject.org/cloud/
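
If you go that route, spinning one up with the AWS CLI looks roughly like this (just a sketch: the AMI ID and key name are placeholders, and t4g instances are Graviton-based; pick the actual aarch64 Fedora Cloud AMI from the link above):

$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t4g.small --key-name my-key --count 1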

morxa commented 3 years ago

It looks like you were right:

0/256/256: Successful build: desktop_full

All 256 packages built successfully on Fedora 34 aarch64.

I'll rebuild in thofmann/ros and also include rawhide and Fedora 33.
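
For reference, a rebuild can also be limited to the new aarch64 chroots with copr-cli, roughly like this (a minimal sketch: the SRPM path is a placeholder, and the chroots have to be enabled in the COPR project settings first):

$ copr-cli build thofmann/ros path/to/package.src.rpm --chroot fedora-34-aarch64 --chroot fedora-33-aarch64 --chroot fedora-rawhide-aarch64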

martinezjavier commented 3 years ago

All 256 packages built successfully on Fedora 34 aarch64.

Woah, exciting news! I don't have access to my aarch64 boards right now, but I'll give it a try later today.

morxa commented 3 years ago

The Fedora 34 packages are now in the main ROS COPR (thofmann/ros). There was a build failure on Fedora 33; it may be something trivial, but I currently don't have the time to look into it, so I'll continue there later.

nullr0ute commented 3 years ago

Well there goes my weekend :fireworks:

martinezjavier commented 3 years ago

@morxa I finally had time to test your packages and they do work!

On an F34 server image installed on my Rockpro64, I did the following:

$ dnf copr enable thofmann/ros

$ dnf install ros-ros_core -y

Then I started roscore in a terminal:

$ source /usr/lib64/ros/setup.bash

$ roscore 
... logging to /root/.ros/log/2642a16a-cb00-11eb-b035-f297461bfcdd/roslaunch-rockpro64-10980.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

[ 1176.450452] systemd-journald[627]: Successfully sent stream file descriptor to service manager.
[ 1176.450452] systemd-journald[627]: Successfully sent stream file descriptor to service manager.
started roslaunch server http://rockpro64:43493/
ros_comm version 1.15.11

SUMMARY
========

PARAMETERS
 * /rosdistro: noetic
 * /rosversion: 1.15.11

NODES

auto-starting new master
process[master]: started with pid [11249]
ROS_MASTER_URI=http://rockpro64:11311/

setting /run_id to 2642a16a-cb00-11eb-b035-f297461bfcdd
process[rosout-1]: started with pid [11399]
started core service [/rosout]

And from another terminal, I listed the available ROS topics:

$ source /usr/lib64/ros/setup.bash

$ rostopic list
/rosout
/rosout_agg
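
As an extra sanity check (the topic name and message below are just illustrative, not part of the packages), one can also publish a test topic from one terminal and echo it from the other:

$ rostopic pub /chatter std_msgs/String "data: 'hello from aarch64'" -r 1

$ rostopic echo /chatter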

So I think we can close this issue as resolved, thanks a lot again for this!