Open mhaberler opened 8 years ago
I'm currently attempting to restructure hal_pru_generic to support both PRUs simultaneously, in order to use the currently-unused one to handle one-wire strings with almost no CPU overhead. This means that I have already familiarized myself with hal_pru_generic and with the actual PRU code, so logically I should be able to do this project. Since I have not built a Machinekit or a BBB kernel for over a year, I may need some basic help.
@DanClemmensen - superb, that would be great! happy to help here as needed, BB's available galore
re BB X15: @cdsteinkuehler happens to own one - but I think most of the exercise can be done with a stock BB and only testing/polishing on the X15 when done
one-wire - like the Maxim protocol?
Hi, Michael.
Yes, 1-wire is the TI/Maxim protocol. It can easily be implemented by adding a very inexpensive controller to a cape, using i2c, USB, UART, or simply by bit-banging a GPIO pin. But the kernel drivers are not that great and there appear to be dueling implementations (OWFS and w1), and the protocol consumes more CPU than you might think. My main motivation for using the second PRU is much simpler, though: that PRU is unused in my system. Since it's already paid for, it's by definition cheaper than adding a hardware controller, and it can do the entire low-level one-wire protocol for multiple one-wire devices with no CPU intervention after initialization. This means I consume a single PRU I/O pin to support a large number of devices, or two pins if some are more time-critical than others.
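For context, the bit-level work a PRU (or a bit-banged GPIO) has to reproduce is small and well defined. Below is a minimal sketch in C using the standard-speed timings from the Maxim 1-Wire application notes; the pin_ops interface is invented here purely for illustration, so the same logic could sit behind a kernel GPIO, a PRU pin pair, or a test stub:

```c
/* Sketch of standard-speed 1-Wire bit timing (values from the Maxim
 * application notes).  The pin_ops interface is invented for
 * illustration; it is not a real API. */
#include <stdint.h>

struct pin_ops {
    void (*drive_low)(void *ctx);             /* actively pull the bus low  */
    void (*release)(void *ctx);               /* release; pull-up raises it */
    int  (*sample)(void *ctx);                /* read bus level: 0 or 1     */
    void (*delay_us)(void *ctx, uint32_t us); /* busy-wait in microseconds  */
};

/* Standard-speed slot timings, in microseconds. */
enum {
    OW_RESET_LOW_US     = 480, /* master reset pulse                  */
    OW_PRESENCE_WAIT_US = 70,  /* sample presence after releasing     */
    OW_WRITE1_LOW_US    = 6,   /* write-1: short low, then release    */
    OW_WRITE0_LOW_US    = 60,  /* write-0: hold low most of the slot  */
    OW_READ_INIT_US     = 6,   /* read slot starts with a brief low   */
    OW_READ_SAMPLE_US   = 9,   /* sample within 15 us of slot start   */
    OW_SLOT_US          = 60,  /* nominal total slot width            */
    OW_RECOVERY_US      = 10,  /* idle time between slots             */
};

void ow_write_bit(const struct pin_ops *p, void *ctx, int bit)
{
    p->drive_low(ctx);
    p->delay_us(ctx, bit ? OW_WRITE1_LOW_US : OW_WRITE0_LOW_US);
    p->release(ctx);
    /* pad out the slot, plus inter-slot recovery time */
    p->delay_us(ctx, OW_SLOT_US
                     - (bit ? OW_WRITE1_LOW_US : OW_WRITE0_LOW_US)
                     + OW_RECOVERY_US);
}

int ow_read_bit(const struct pin_ops *p, void *ctx)
{
    p->drive_low(ctx);
    p->delay_us(ctx, OW_READ_INIT_US);
    p->release(ctx);
    p->delay_us(ctx, OW_READ_SAMPLE_US);
    int bit = p->sample(ctx);           /* must happen within 15 us */
    p->delay_us(ctx, OW_SLOT_US - OW_READ_INIT_US - OW_READ_SAMPLE_US
                     + OW_RECOVERY_US);
    return bit;
}
```

The point of the sketch: every step is a fixed delay or a single sample, which is exactly the kind of deterministic busy-wait a PRU does trivially and a Linux CPU does badly.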
I had already e-mailed Charles about my dual-PRU project and sketched out an approach for refactoring hal_pru_generic. My approach did not contemplate the X15, and this changes things. The BBB has a single PRUSS which contains a pair of PRUs and some additional interesting peripherals. The PRUSS must be initialized and managed as a unit (to a first approximation) so simply adding the second PRU there is appropriate. However, it appears that the X15 may have a pair of PRUSS systems, each identical to the single PRUSS on the BBB. If so it is likely that the two PRUSS systems can be managed independently. For example you could use two separate instances of hal_pru_generic, or you could leave one PRUSS available and under control of the kernel or of a completely separate application. If this is the case, then the "only" thing we need to do for the X15 is make hal_pru_generic sensitive to which PRUSS it is using and make sure HAL as a whole can run zero, one, or two instances.
I have not yet looked at the new Linux PRUSS implementation, so I have no idea what it will do to us. I do have one question: must we be able to run the older Linux on the X15? If not, then we do not need to consider a multi-instance hal_pru_generic on the older Linux.
fyi, we do have a uio_pruss version for v4.1.x, to help weed out other issues.
Grab any newest wheezy/jessie image (the wheezy machinekit image would probably work too):
http://elinux.org/Beagleboard:BeagleBoneBlack_Debian#Debian_Image_Testing_Snapshots
4.1.x-bone/rt-bone with uio_pruss (from 3.8.x)
cd /opt/scripts/tools/
git pull
sudo ./update_kernel.sh --bone-kernel --lts
(rt)
sudo ./update_kernel.sh --bone-rt-kernel --lts
4.1.x-ti/rt-ti with ti's latest remoteproc_pruss
cd /opt/scripts/tools/
git pull
sudo ./update_kernel.sh --ti-channel --stable
(rt)
sudo ./update_kernel.sh --ti-rt-channel --stable
OK, I just ordered another BBB to be delivered late today just to play with. My main BBB will remain inside the laser cutter. I can start to get this one running with the latest Linux and Machinekit tomorrow. I will continue my refactoring today. When/if I succeed, I will swap the new one into the laser cutter. It does not look like the uio_pruss --> remoteproc change itself will make a huge difference. Dual-PRUSS is a bigger issue.
Where can I find the driver library and API description for the application side of the remoteproc interface? Must I drag down the TI SDK for this?
@DanClemmensen re older Linux on X15
no, we need a kernel which has decent latency and supports what we need (SoC at hand, device tree, capes, remoteproc), wherever that is found and whatever version. That can be either a Xenomai or RT-PREEMPT kernel. So IMO no point in going for 3.8 support on X15 (if that is possible at all, which I think it is not).
I tried a 4.1-something RT-PREEMPT kernel on an rpi2 and my impression was that RT-PREEMPT has caught up big time - latency was around 40 µs or so, so no reason left for Xenomai in that case
I have no idea what the status of RT-PREEMPT on X15 is; last I saw I think TI went dual-pronged (Xenomai as well as RT-PREEMPT); the person in the know would be @RobertCNelson though
Remoteproc is not a TI thing, it's a standard Linux Kernel feature.
You can find documentation in the kernel tree and other usual places for kernel info:
https://www.kernel.org/doc/Documentation/remoteproc.txt
The two main functions that need to be converted are pru_init() and setup_pru():
All that really needs to happen is to get the code loaded into the PRU instruction memory and start the PRU. There are currently no interrupts or other special handshake mechanisms, the ARM code talks to the PRU directly via shared memory (the PRU data memory).
Ideally, the hal_pru_generic driver can be made to try the remoteproc interface and fall back to using uio_pruss if remoteproc is not found.
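The try-remoteproc-then-fall-back idea could look roughly like this. The paths probed below are placeholders — where the remoteproc and uio_pruss nodes appear varies across kernel versions — so this is a sketch of the probe order, not the real hal_pru_generic logic:

```c
/* Sketch: prefer remoteproc if present, fall back to uio_pruss, else
 * give up.  The filesystem paths are illustrative placeholders; they
 * differ between kernel versions. */
#include <unistd.h>

enum pru_interface { PRU_IF_NONE, PRU_IF_UIO, PRU_IF_REMOTEPROC };

enum pru_interface detect_pru_interface(void)
{
    if (access("/sys/class/remoteproc", F_OK) == 0)
        return PRU_IF_REMOTEPROC;   /* remoteproc framework found   */
    if (access("/dev/uio0", F_OK) == 0)
        return PRU_IF_UIO;          /* uio_pruss device node found  */
    return PRU_IF_NONE;             /* neither: fail loudly at load */
}
```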
Charles Steinkuehler charles@steinkuehler.net
Conversion for remoteproc sounds easy, but we will need to see if (and how) remoteproc handles dual PRUSSes in the 57xx-series processors, and exactly how the memory mapping is done (i.e., how do we ask remoteproc where each PRU's data memory is mapped into the CPU address space). It should all be fairly trivial.
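For reference, here is a sketch of the ARM-side view of the single AM335x PRUSS that the mapping question concerns. The base address and offsets are taken from the AM335x TRM's PRU-ICSS memory map and should be double-checked there; the 57xx-family parts, with two PRUSS instances, will have different bases:

```c
/* ARM-side addresses of the single AM335x PRUSS (sketch -- verify
 * against the AM335x TRM before relying on any of these). */
#include <stdint.h>

#define PRUSS_ARM_BASE        0x4A300000u
#define PRU0_DATA_RAM_OFF     0x00000u   /*  8 KB PRU0 private data RAM */
#define PRU1_DATA_RAM_OFF     0x02000u   /*  8 KB PRU1 private data RAM */
#define PRUSS_SHARED_RAM_OFF  0x10000u   /* 12 KB shared data RAM       */

/* Physical address of a PRU's data RAM as seen from the ARM: this is
 * the shared-memory "mailbox" region the ARM code talks through, so
 * whichever loader (uio_pruss or remoteproc) is used, the driver needs
 * an mmap of this region. */
uint32_t pru_data_ram_phys(int pru_num)
{
    return PRUSS_ARM_BASE +
           (pru_num == 0 ? PRU0_DATA_RAM_OFF : PRU1_DATA_RAM_OFF);
}
```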
@DanClemmensen - since you think it's fairly trivial, let me move the goalpost a bit in case you finish way too early ;)
hal_pru_generic was just a first stab, and is certainly not a grandiose idea when it comes to making use of several PRUs (or other co-processors at hand, like DSPs) - more flexible configurations might employ a varying number of PRUs with different blobs
I could imagine a two-stage method to decouple things a bit and enable a more flexible arrangement.
This would also encapsulate the method (PRUSSv2 vs remoteproc) in a single comp, assuming use of PRUs can be achieved via a normal API and shared memory, which I think is possible
The vtable feature is currently in use and stable - the trajectory planner is a vtable-based component, and motion uses this scheme so TPs can be replaced at load time; happy to help over design/example bumps
Could be hostmot2 has a similar two-stage scheme already which could be employed as well; I'm just not as familiar with that code
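To make the two-stage idea concrete, here is a purely hypothetical sketch — every name below is invented for illustration, this is not an existing Machinekit API. A loader comp would export a vtable like this, and hal_pru_generic (or a one-wire comp) would bind to whichever backend (uio_pruss or remoteproc) the loader instantiated, without ever knowing which mechanism is underneath:

```c
/* Hypothetical two-stage loader contract (all names invented). */
#include <stddef.h>

struct pru_loader_vtable {
    int   (*load)(int pru_num, const char *blob_path); /* load firmware */
    int   (*start)(int pru_num);                       /* run the PRU   */
    int   (*stop)(int pru_num);                        /* halt the PRU  */
    void *(*data_ram)(int pru_num, size_t *size);      /* shared memory */
};

/* A do-nothing backend standing in for a real uio_pruss or remoteproc
 * implementation, to show the consumer side of the contract. */
static int  stub_load(int pru, const char *path) { (void)pru; (void)path; return 0; }
static int  stub_start(int pru) { (void)pru; return 0; }
static int  stub_stop(int pru)  { (void)pru; return 0; }
static char stub_ram[8192];     /* stands in for 8 KB of PRU data RAM */
static void *stub_data_ram(int pru, size_t *size)
{
    (void)pru;
    if (size)
        *size = sizeof(stub_ram);
    return stub_ram;
}

const struct pru_loader_vtable stub_loader = {
    .load     = stub_load,
    .start    = stub_start,
    .stop     = stub_stop,
    .data_ram = stub_data_ram,
};
```

The design point: the consumer only sees load/start/stop plus a pointer to the PRU data RAM, which matches the existing shared-memory handshake and keeps rtmsg-style messaging entirely out of the contract.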
My initial inclination was to politely decline the scope creep: my initial goal is to get temperature monitoring on my laser cutter and the reason I volunteered for the remoteproc work is to protect my time investment in my PRU code (and of course to have fun and contribute to the community). However, I think I can generalize for any co-processor that is supported by remoteproc and which uses a memory-mapped interface, since there is no longer supposed to be any difference in these functions. However, I will still politely decline to extend this to the rtmsg API, because I have no reason to rewrite stepgen, pru_onewire would not use it, and we have no other examples of remoteproc/rtmsg users in machinekit. The first implementor that needs rtmsg can do that part. IMO, message passing is likely to be far too heavy for the PRUs and may even be inefficient for some other coprocessors and the CPU interface code, depending on the functionality to be implemented. On the PRUs, we have code space for maybe 2000 assembly instructions. I prefer to do high-frequency stuff there, not message passing.
I just got my test BBB. Apparently, it is still incapable of writing its own bootable microSD card?! This was IMO its worst lack when I did this 2 years ago. The third step on the "getting started" page is "upgrade to latest", and its first sub-step is essentially "go find a USB flash writer. Some work and some don't. Good luck with that." And this is while the poor newbie user already has a device with a flash writer plugged into his USB port: the BBB itself!
Now I need to drive to Fry's and buy a USB flash writer that might work. I will plug it into a Linux computer: guess which Linux computer? Why yes, the BBB! (with a competent 5VDC PS, of course.)
Or is there an alternative?
@DanClemmensen - re scope creep: I aired what would be nice to have eventually, which is different from what's needed now - do not feel pressured about it
I do not see a need for rtmsg support atm either, and HAL mostly assumes a shared memory model
let's do it this way: once you have hal_pru_generic loading/start/stop working, let's see if I can devise a simple vtable API which covers our current use cases with both mechanisms, and we go from there - or not
re USB SD writing.. my method: I do it on a vanilla PC with a USB SD reader.
I'm assuming you've tried the below... Boot off the on-board memory, then plug in your flash card. Become root in some terminal window and unmount anything that gets automounted. On my PC that looks like this:
/dev/mmcblk0p2 on /media/daren/rootfs type ext4 (rw,nosuid,nodev,uhelper=udisks2)
/dev/mmcblk0p1 on /media/daren/BEAGLEBONE type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks2)
so:
umount /dev/mmc*
works. Use lsblk to find your card:
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 96M 0 part
└─mmcblk0p2 179:2 0 14.8G 0 part
Then write the image. In this case I would use:
xzcat bone-debian-7.8-machinekit-armhf-2015-08-16-4gb.img.xz | dd of=/dev/mmcblk0
Thanks, Daren! I have not yet tried anything: I had forgotten that all flash upgrades need the SD card as an intermediate as it's been over a year since I played with a BBB. I will try your method now.
Those instructions are if you wish to boot from the flash card (eventually), not the on-board memory. I believe that's the way the majority of people work.
saw this fly by from the TI forum:
http://e2e.ti.com/support/arm/sitara_arm/f/791/t/478650 - apparently there is a 4.1.13-ti-rt-r77 kernel which sounds like rt-preempt to me
don't know if the devicetree is in shape for us yet, though
@mhaberler Pretty sure he meant: "4.1.13-ti-rt-r36" with the way the 4.4.x-ti branch is coming along i probably won't hit "r77" anytime soon. ;)
Regards,
@RobertCNelson - would that be a viable base version for us, devicetree and all? or would you recommend something else?
@mhaberler yeap, that's the version i'm shipping by default (well non -rt version)..
In any reasonably current bb.org jessie image, just run:
cd /opt/scripts/tools/
git pull
sudo ./update_kernel.sh --ti-rt-channel --stable
and it'll install "4.1.13-ti-rt-r36"
Regards,
HELP! I haven't yet even gotten to my first build of machinekit.
I installed the latest machinekit distro on a fresh BBB and I'm running from the microSD card. It's Linux beaglebone 3.8.13-xenomai-r78, but it does have remoteproc compiled into the kernel, so it's probably a good enough target to play with initially.
I intend to also use this as the build machine, so I followed the instructions at https://github.com/mhaberler/asciidoc-sandbox/wiki/Machinekit-Build-for-Multiple-RT-Operating-Systems#installation
I continued to install missing packages for each complaint by ./configure, but I'm stuck at libczmq. I did add the directory containing libczmq.pc to PKG_CONFIG_PATH as suggested by ./configure. It did not help. Suggestions?
First remove the mk packages (apt purge machinekit), then follow the instructions on http://www.machinekit.io/docs/building-from-source/ - in particular the mk-build-deps step, which should pull in all dependencies properly. If too many botched packages are installed, consider starting over from a fresh image copy.
@DanClemmensen That image was pre-setup for machinekit to run out of the box...
Thanks, Michael. That seems to be working (make is underway). Sorry for the long turnarounds, but I'm babysitting my 2.5-year-old granddaughter today.
Thanks, Robert. Yes, I knew it was pre-set-up. I therefore hoped that I could avoid a kernel rebuild, etc. and start with a known working kernel for the remoteproc port.
Recall that I am not a Debian expert and have done very little development work in this environment. My big Linux boxes are all Gentoo. Therefore, I will occasionally ask some fairly basic questions. However, once I have an initial build of a working machinekit, the actual modifications should be straightforward.
don't worry about the mechanics - enough people understand this is an important step forward and are able to help over any bumps
gut feeling - it's unlikely you will actually need to build a kernel (modulo hitting a bona-fide bug); ideally we'd get away without any patches, as there's also the drag of getting anything upstream
in case you do - a kernel build on an SD card is an overnight affair; an NFS mounted source repo helps a lot
@DanClemmensen I personally prefer to script such processes as it can give an overview of the whole process and is less error prone, along with the guide.
This is a fresh generic Machinekit jessie install script that works on the coming armhf-soc-fpga platform and should also work on the BBB and PC.
It covers a full fresh install, but you can comment out, at the bottom, the functions you don't need to run, so it can also be used just for re-compiling, etc.
in linux terminal / cli
@mhaberler I have been able to cross-compile the Altera SoC kernel on x86_64 for years with the Linaro 4.9 gnueabihf (or something) toolchain, so swiftly with -j16 that I can't even remember if it takes more or less than 2 minutes or so.
I will follow up on my word as I have a vanilla 4.13 + rt preempt + evt ltsi (rc1) patch mod planned in mind for my script.
http://lists.linuxfoundation.org/pipermail/ltsi-dev/2015-November/005581.html
I guess that would take a few minutes more to complete... but still not hours or nights.
OK, I compiled and ran linuxcnc and selected ARM->beaglebone->CRAMPS
This resulted, as expected, in termination when hal_pru_generic failed to find the uio_pruss. I can now start my development work. Thanks, all.
Holler if you hit any roadblocks. I'm very interested in getting remoteproc support working, and have a bit more spare time than usual to review issues over the holidays.
Charles Steinkuehler charles@steinkuehler.net
@DanClemmensen btw, just in case anything else is causing an issue on the 3.8 -> 4.1 migration, also try
sudo apt-get update
sudo apt-get install linux-image-4.1.15-bone-rt-r17
sudo reboot
As it contains a 3.8-compatible uio_pruss interface..
3.8 -> 4.1 is already a big enough migration, and verifying that "4.1.15-bone-rt-r17" at least works might help debug things. ;)
@the-snowwhite I know - it's just that the torvalds tree has/had an issue when cross-compiling for a long time: the tools built under /usr/src/linux* for generating out-of-tree kernel modules were accidentally built for the host, not the target arch - so it used to be painful to use for out-of-tree kmod work
not sure if that's been fixed yet
Colleagues, are we all aware of this project? https://hackaday.io/project/5837/logs It appears to have already implemented PRU support using remoteproc. Basically, we will need to start with a kernel of 3.14 or newer. remoteproc does not appear to be able to support a processor (i.e., the PRU) by itself: it needs another kernel driver per processor type - pru_remoteproc in our case - and each such driver appears to also interact with user space via its own API: libpru in our case. Or am I confused?
It appears I will need to go to 3.14+ or to a 4.x kernel anyway, so there is no longer an advantage with starting from a machinekit prebuilt image. Any suggestions on a preferred base kernel?
Given this amount of build effort, I will set up an NFS server instead of building in the flash.
@DanClemmensen start with the base "lxqt" image
http://elinux.org/Beagleboard:BeagleBoneBlack_Debian#Debian_Image_Testing_Snapshots
Install the 3.8/uio_pruss compatible kernel:
sudo apt-get update
sudo apt-get install linux-image-4.1.15-bone-rt-r17
sudo reboot
then add the machinekit repo, and install machinekit to see where we are at..
Regards,
Thanks, Robert. I assume that that image has remoteproc and pru_remoteproc in addition to uio_pruss? Is there a location where I can browse its source tree? (just for fun)
@DanClemmensen no, only one or the other; uio_pruss & pru_remoteproc have conflicting dts includes..
https://github.com/RobertCNelson/linux-stable-rcn-ee/tree/4.1.15-bone-rt-r17
Regards,
OK, thanks. I have a uio_pruss kernel already; what I need is a remoteproc/pru_remoteproc kernel. Do you think I should start from your source tree and build for remoteproc?
My problem here is that Michael's main(?) reason to move from uio_pruss to remoteproc is that we want to be able to move to a mainline, unmodified kernel. Perhaps just adding a module is considered less disruptive than actual kernel patches? But if that is the case, there is no compelling reason to shift away from uio_pruss.
I found pruss_remoteproc support in the following tree: https://github.com/beagleboard/linux/tree/4.1/ Is this a good place to start? should I download and build this tree as a point of departure?
@DanClemmensen do whatever you want. ;)
Currently these two branches (aka 4.1.13-ti-r36):
https://github.com/beagleboard/linux/tree/4.1 https://github.com/RobertCNelson/linux-stable-rcn-ee/tree/4.1.13-ti-r36
are the same... (I push them at the same time..)
It's just for debugging it might be easier to jump:
3.8 uio_pruss -> 4.1 uio_pruss -> 4.1 remoteproc
Regards,
It could well be that I did not understand all the dependencies when I wrote this up, and/or I was unclear - sorry about that
First, to explain the current situation: for the 3.8 kernel, Charles and I worked with folks from the Xenomai list to work out the patches for the beaglebone 3.8-bone kernel, and Robert picked that up and kindly builds it for us; occasionally we add a patch there, for instance the RTCAN driver for the AM335x CAN core - so quite specific patches and builds (it's this branch)
my assumptions were:
Frankly I am bewildered by the number of repos, different sources and kernel versions, all of which carry TI specifics; it's safe to say that whatever came from Robert worked for us, but beyond that things got very fuzzy for me. You can tell from my rather soft facts above that I am not particularly certain how to navigate this space..
Wait till I add the 4.4.x-ti variants. ;) (2 more; at least they'll be 100% compatible with 4.1.x-ti for remoteproc)..
OK, I think I understand:
- we are not attempting to converge with the kernel.org mainline in the short term
- we shall create a machinekit kernel based on Robert's 4.x repo; that repo already has remoteproc, pruss_remoteproc, and RT_PREEMPT
- I may need to apply a machinekit-specific patch to this kernel source?
- the initial target for my effort shall be the BBB
- the secondary target shall be the X15
I do need to figure out where to get the build environment for the remoteproc-targeted PRU code. It must already exist somewhere. I know how to use binutils to create tools to mess with ELF files, but no reason to re-invent the wheel.
(Today is another babysitting day, so not a lot of progress.)
The pruss_remoteproc we have in 4.1.x-ti should hit mainline in the next few kernel merge windows.
@RobertCNelson How close is mainline Linux to running on Beagle* without patching? How about how close to running on Beagle* with PRU and other features MK depends on? Pretty exciting to think about TI support + RT_PREEMPT out of the mainline box in some not-too-distant future!
Just a quick update (some of my contacts at TI are now back in the office after winter vacation)..
So remoteproc_pruss is getting more internal changes... I wouldn't trust 4.1.x-ti to be set in stone anymore.. The changes will hit TI's "4.4.x-ti" branch..
I'd recommend anyone working on this switch from 3.8 take a look at either:
4.1/4.4-(rt)bone and uio_pruss interface (just got 4.4.x-rt-bone working today)
cd /opt/scripts/tools/
git pull
sudo ./update_kernel.sh ${options}
Mainline:
4.1.x-bone:
--bone-channel --lts-4_1
4.1.x-rt-bone:
--bone-rt-channel --lts-4_1
4.4.x-bone:
--bone-channel --lts-4_4
4.4.x-rt-bone:
--bone-rt-channel --lts-4_4
Regards,
I have four interrelated goals:
- support 1-wire from the second PRU
- support two PRUs from HAL
- convert to 4.x
- convert to remoteproc
Given the current state of remoteproc, I've decided to attack them in that order. This lets me get my feet wet without stepping into the current churn, although it may slightly delay the ultimate integration.
So far, I have encountered two minor design issues to work around at the HW level:
- The PRU's "GPIO" pins are not I/O; some are I and some are O. This means I will need two pins per 1-wire bus.
- There is no obvious free-running counter/timer register. I must do without or use the IEP's timer, but I do not want to steal this valuable resource from the main PRU or burden it with providing timer service to the 1-wire PRU. Fortunately, 1-wire timing is so non-critical that I can almost certainly use cycle counting with almost no degradation in 1-wire speed.
On 1/4/2016 7:44 PM, DanClemmensen wrote:
I have four interrelated goals:
- support 1-wire from the second PRU
- support two PRUs from HAL
- convert to 4.x
- convert to remoteproc
Given the current state of remoteproc, I've decided to attack them in that order. This lets me get my feet wet without stepping into the current churn, although it may slightly delay the ultimate integration.
Sounds good. I agree with delaying remoteproc, since details still seem to be in flux.
So far, I have encountered two minor design issues to work around at the HW level:
- The PRU's "GPIO" pins are not I/O; some are I and some are O. This means I will need two pins per 1-wire bus.
Yep.
- There is no obvious free-running counter/timer register. I must do without or use the IEP's timer, but I do not want to steal this valuable resource from the main PRU or burden it with providing timer service to the 1-wire PRU. Fortunately, 1-wire timing is so non-critical that I can almost certainly use cycle counting with almost no degradation in 1-wire speed.
There is a hardware eCAP timer in the PRU domain you can use if you want, but I think it's OK to use the IEP timer, particularly if you use both PRUs (what else is going to use it?). You might also find the CYCLE register useful for your needs, if all you need is a free-running counter, but note that it doesn't automatically wrap.
Charles Steinkuehler charles@steinkuehler.net
I'm using one PRU while the other PRU is still in use for the existing "tasklets," namely stepgen and PWM gen. One-wire is a minor luxury that is using an otherwise-idle PRU in the machinekit environment. I wish to leave the eCAP and IEP timers free for tasklets. The CYCLE register sounds interesting. Where is it documented?
Found the CYCLE documentation. That's perfect: you can reset the count, and it runs for about 20 seconds before topping out, so no problem. For example, I need to:
- take action 1
- do a variable-time bunch of stuff
- wait until 45 us after action 1
So, we simply: take action 1, reset the counter, start the counter, do the variable stuff, then read the counter in a loop until it reaches 45 us.
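The reset-and-poll sequence could be sketched in PRU-side C roughly as below. Assumptions to verify against the AM335x TRM: a 200 MHz PRU core clock (5 ns/cycle), PRU1's control block at local address 0x24000 (0x22000 for PRU0), the CONTROL register at offset 0x00 with the counter-enable bit at bit 3, and CYCLE at offset 0x0C:

```c
/* Sketch: "take action 1, do variable-time work, wait until 45 us
 * after action 1" using the PRU CYCLE register.  Register addresses
 * and the 200 MHz clock are assumptions -- verify against the TRM. */
#include <stdint.h>

#define PRU1_CTRL_BASE 0x24000u
#define CTRL_REG  (*(volatile uint32_t *)(PRU1_CTRL_BASE + 0x00u))
#define CYCLE_REG (*(volatile uint32_t *)(PRU1_CTRL_BASE + 0x0Cu))
#define CTR_EN    (1u << 3)             /* cycle-counter enable bit */

#define CYCLES_PER_US 200u              /* 200 MHz core clock       */
#define CYCLES_45US   (45u * CYCLES_PER_US)

void action1_then_wait_45us(void)
{
    CYCLE_REG = 0;                      /* reset the count          */
    CTRL_REG |= CTR_EN;                 /* start counting           */

    /* ... take action 1, then the variable-time bunch of stuff ... */

    while (CYCLE_REG < CYCLES_45US)     /* poll until 45 us elapsed */
        ;
}
```

At 2^32 counts and 5 ns each, the 32-bit counter covers roughly 21 seconds, which matches the "about 20 seconds" figure above.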
@DanClemmensen - yes, one front at a time sounds reasonable
getting things going on the X15 is more important IMO than having remoteproc right away
@RobertCNelson commented 10 minutes ago
@ArcEye here's some more news..
@MarkAYoder is working on a book to help document the PRU: https://markayoder.github.io/PRUCookbook/index.html
@dlech posted a first pass at remoteproc_pruss for upstream: https://www.spinics.net/lists/linux-omap/msg143820.html
Regards,
Thanks for the info, @RobertCNelson
I was in the process of moving Issues around when you commented, so have put this back on the main issue.
The BB PRU support is currently tied to the legacy TI PRUSS driver which has been superseded by the mainstream remoteproc facility
Currently we still use the PRUSS driver on the BB xenomai 3.8 kernel.
As PRUSSv2 is being phased out in favor of remoteproc, we need to adapt the PRU support to remoteproc so we can switch to higher kernel version numbers, for instance to support the BeagleBoard X15 - which we currently cannot, as we are stuck with the 3.8 kernel.
Affected code: mostly hal_pru_generic (stepgen, pwmgen, encoder etc): https://github.com/machinekit/machinekit/tree/master/src/hal/drivers/hal_pru_generic
Rough outline:
Prerequisites:
Effort:
Potential coaches: @cdsteinkuehler @mhaberler (possibly @RobertCNelson, did not ask yet..)