machinekit / mksocfpga

Hostmot2 FPGA code for SoC/FPGA platforms from Altera and Xilinx

Missing Z-Turn Project #20

Closed dkhughes closed 1 year ago

dkhughes commented 8 years ago

I think it's time for an anchor issue to gather data for the Z-turn port since it is a very popular board.

dkhughes commented 8 years ago

@claudiolorini Is this a good board spec for the Z-Turn? We need ps7xxx.h and ps7xxx.c for the u-boot spl port I started here:

https://github.com/JDSquared/u-boot-xlnx/tree/zturn_wip

Those same board specs can be used to define the hm2 project. I have an io carrier coming in the mail - is this a good target or is everyone running a different board?

claudiolorini commented 8 years ago

yes, we're currently using this: https://github.com/francescodiotalevi/projects/tree/master/P2012_07_XZy/zturn - it is our most up-to-date repository. The IO board can be useful, but I don't have one (ask Michael, he should have bought one). I have designed some carriers for the z-turn for different projects with specific functions, and I have 'in the oven' an 8CAN+Ethercat+IO that I'd like to use with MK...

dkhughes commented 8 years ago

Okay, Z-Turn is booting, but a little buggy. I want to get systemd back into the jessie image, and the work done by Robert C. Nelson looks promising. I'm now working on porting the firmware ROM to the zynq side before I finish the boot issues, and then I will take a look at cleaning up the build process. Right now, you just run ./make_bitfile zturn/zturn_carriername/config inside the docker image and it will kick out the linux-ready firmware bitfile.

dkhughes commented 8 years ago

Here is a list of what we need to make the Z-Turn (or any Zynq design) run:

dkhughes commented 8 years ago

Systemd testing is all that remains now. The u-boot port is functional, and I was able to avoid the ugly myir u-boot gpio reset hacks. Instead, the reset toggle is handled by the FSBL (spl) and it's transparent.

I've been working based on a custom carrier card, and now I'm going to make the myir iocarrier a 5ixxx emulator and start submitting PRs. The build is different than the altera side, but I wanted it all functioning and usable by others for testing before I worried too terribly much about making Vivado jump through Altera-style hoops.

The biggest difference is that on the Xilinx side, the input and output io are separated, and it takes an extra ip component to handle the tristate logic so that Vivado can properly synthesize functioning buffers at the fpga pin pads. This lets me add extra logic into the fpga and avoid wasting clock cycles on ANDs, ORs, and input debouncing in the hal layer.

@mhaberler Can you hit me with a really quick checklist of packages you need for the zynq side? I think we need:

1) a zynq firmware package with dtbs and the overlays
2) u-boot and FSBL package
3) kernel package for zynq (we've got this one already from the microzed work)
4) the final image build scripts with the proper uio_gen_irq compatible strings in the rootfs
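As a quick sanity sketch against that package list, something like this can confirm the pieces landed on a target (package names are taken from the apt output later in this thread; the helper itself is purely illustrative):

```shell
#!/bin/sh
# Illustrative check: report whether the zynq packages from this thread
# are installed. On a dev box without them, everything reports "missing".
check_pkg() {
    if dpkg -s "$1" >/dev/null 2>&1; then
        echo "$1: installed"
    else
        echo "$1: missing"
    fi
}

for pkg in u-boot-zturn linux-image-zynq-rt linux-headers-zynq-rt; do
    check_pkg "$pkg"
done
```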

PRs for the zturn projects to come as soon as they pass some tests this morning.

mhaberler commented 8 years ago

1-3 yes; for 4) happy to adapt the omap-image-builder branch I use for the socfpga

just point me to repo/branch/commit and build commands, I'll do the jenkins stuff and packaging

dkhughes commented 8 years ago

U-boot and the kernel are ready for package updates. I am writing a first try at a firmwareid rom that is synthesizable in both altera and xilinx; then the zturn and microzed fpga projects will be ready. I expect to have that done in a few hours.

U-boot with the updates for zturn are on the master branch now here:

https://github.com/JDSquared/u-boot-xlnx.git

I recommend a fresh clone of that repo, as I had to rebase to fix up the history from some buggy commits. To build, change to the root directory and cross compile. The docker image I use to build the kernel and u-boot is here: https://github.com/dkhughes/ubker-docker - it has gcc 5.3 instead of the older one from the debian packages, but I see no reason why we have to use it. I would think we want to match the compiler to the one packaging the kernel.

# u-boot commands from top directory
make zynq_%boardname%_config   # %boardname% should be microzed or zturn based on target
make -jN
# This gives you u-boot.img in root directory, and boot.bin in the spl directory.
# Write both to the first partition of the sd card with names boot.bin and u-boot.img

The linux kernel packager you set up before should work, provided I can figure out why my qemu is throwing git core dumps. All we need for that package is to pull the latest changes from my tree off the zynq_4_4_rt branch, which includes the zturn-relevant dts files. Repo is here: https://github.com/JDSquared/linux-xlnx.git

You have all of the kernel package pieces we need set up, except we have to add the base device tree binaries to the /lib/firmware/zynq folder. There are two we care about right now, and they must be named correctly or u-boot won't find them. The proper files are:

# Both dtb files we care about are in arch/arm/boot/dts after compiling with the kernel's makefiles.
# We need a dtc that supports symbols (-@), i.e. 1.4.1 or newer, or overlays won't work.
# The dtc included in the kernel tree works correctly.
Copy:
arch/arm/boot/dts/zynq-microzed.dtb  to  target's /lib/firmware/zynq/zynq-microzed.dtb
arch/arm/boot/dts/zynq-zturn.dtb     to  target's /lib/firmware/zynq/zynq-zturn.dtb
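Since overlays silently break with a dtc older than 1.4.1 (no -@ symbols support), a small guard like this can help. This is a minimal sketch, not part of any existing script; the version comparison is an assumption and the commented compile line uses hypothetical filenames:

```shell
#!/bin/sh
# Sketch: refuse to build overlays when the dtc version string is older
# than 1.4.1, since the -@ (symbols) flag overlays need arrived in 1.4.1.
dtc_supports_overlays() {
    # $1: a dtc version string like "1.4.1"; succeeds if >= 1.4.1
    [ "$(printf '%s\n1.4.1\n' "$1" | sort -V | head -n1)" = "1.4.1" ]
}

if dtc_supports_overlays "1.4.4"; then
    echo "dtc ok for overlays"
    # dtc -@ -O dtb -o my-overlay.dtbo my-overlay.dts   # hypothetical files
fi
```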

Overlay instructions and bitfile compile instructions coming right after I get fwid working.

dkhughes commented 8 years ago

Oh, I was just thinking: if you want the dtbs that I pointed to in the kernel tree to live in a firmware package along with the bitfiles, I could copy the source into mksocfpga's zynq source and we could compile the base device tree files, the device tree overlays, and the fpga bitfiles in one place. Sounds like a win to me; how would you prefer it?

mhaberler commented 8 years ago

@dkhughes I like the linaro docker image and will build that on mah.priv.at (in fact will try on dockerhub first) the arm gcc's on the mk-builder images are getting a tad rancid these days ;)

I'll see if I can switch the socfpga kernel and uboot build to that docker image as well, checking where the kernel builds are right now, it's been a while

root@cubox-slave:/home/mah# cat /etc/apt/sources.list
deb http://ftp.at.debian.org/debian stretch main
# temp for device-tree-compiler
deb [arch=armhf] http://repos.rcn-ee.com/debian/ jessie main
root@cubox-slave:/home/mah# apt-cache policy device-tree-compiler
device-tree-compiler:
  Installed: 1.4.1-0rcnee1~bpo80+20160224+1
  Candidate: 1.4.1-0rcnee1~bpo80+20160224+1
  Version table:
 *** 1.4.1-0rcnee1~bpo80+20160224+1 100
        100 /var/lib/dpkg/status
     1.4.0+dfsg-2 500
        500 http://ftp.at.debian.org/debian stretch/main armhf Packages

hope this works for you, could you check?

Omap-image-builder works off those packages in the image-building stage.

mhaberler commented 8 years ago

I see you use the kernel's dtc - I hope we can use the RCN one, because that is what I have on the ARM build slave. omap-image-builder does not run with docker, so unfortunately it has some dependencies on the build slave environment

dkhughes commented 8 years ago

https://github.com/JDSquared/u-boot-xlnx.git - which branch?

You want the master branch. In fact, I'm going to delete the other two as they are all merged now.

dtc: I use RCN's dtc from deb [arch=armhf] http://repos.rcn-ee.com/debian/ jessie main:

RCN's dtc should be just fine since it is 1.4.1. I used the kernel's dtc because it was included with source and I'm lazy.

zynq bitfiles: I assume you'll commit this to https://github.com/machinekit/mksocfpga

Yes, finishing firmware_id stuff then I'll submit PRs with all the new zynq work. This gives us microzed with one carrier, and zturn with two carriers one of which is the MYIR IO breakout.

mhaberler commented 8 years ago

ubker-docker: cloned, added dtc 1.4.1 and fpm (jenkins insists on running everything in docker, not just some steps :-/) - works great, just produced a working socfpga u-boot with it!

looking into kernels now

mhaberler commented 8 years ago

ok, zturn uboot already jenkinized, builds fine! still need to package.

we need to decide what to do with all those kernel repos, uboot repos, omap-image-builders, and Dockerfiles - right now they are littered across private repos all over, which is a bit unsatisfactory. Should we move all of those to the machinekit organisation? It's going to be a lot... or we consolidate branches into single repos, like Dockerfiles, kernels, uboots

dkhughes commented 8 years ago

Very cool.

As a side note, this path line https://github.com/mhaberler/ubker-docker/blob/master/Dockerfile.in#L69

Puts a dtc on the path if the kernel is compiled, and adds u-boot tools to the path if the kernel is building for a uImage target. We don't need that anymore for the u-boot compilation since you included RCN's dtc package, and we aren't using uImage format anyway (kernels coming from the deb packages are zImage now). Maybe change it to:

ENV PATH "/opt/gcc-linaro-hf/bin:/opt/bin:$PATH"

mhaberler commented 8 years ago

dtc idea stolen from: http://developer.toradex.com/knowledge-base/build-u-boot-and-linux-kernel-from-source-code#Linux_Image_Flashing_Tools

dkhughes commented 8 years ago

Ha, cool. I manually run that docker image with the run_terminal script, which autopopulated where the kernel and u-boot git trees were, then added the correct paths so builds would work without me updating links inside the image. With the packages, that's all irrelevant now, so the variable substitutions can go away.

mhaberler commented 8 years ago

nice, kernel builds fine with ubker as well after adding fakeroot (not yet uploaded in repo)

mhaberler commented 8 years ago

@dkhughes - if you'd invite @machinekit-ci as a collaborator on the repos we're building from (https://github.com/JDSquared/u-boot-xlnx.git & https://github.com/JDSquared/linux-xlnx.git), then I can set build status from jenkins?

dkhughes commented 8 years ago

Okay, sent invites to machinekit-ci.

mhaberler commented 8 years ago

fine, status should show up sooner or later on 'branches' (green checkmark)

update: here it is - try clicking the checkmark: https://github.com/JDSquared/u-boot-xlnx/branches

dkhughes commented 8 years ago

Do you generate the firmware_id.mif file every build using python? Or, do you just run the script to generate the constant mifs when they get updated and save the generated file to the git tree? I'll need to add python to my docker image if it's generated at build time.

mhaberler commented 8 years ago

yes, in the jenkins job - see 'Build'

yes, had to add a few packages to Charles' Docker images, take clues here: https://github.com/mhaberler/QuartusBuildVMs/commits/master

(notably python-protobuf, which pulls in python, and fpm, which needs ruby and ruby-dev - a pita, but kind of cheesy to work around in jenkins)

mhaberler commented 8 years ago

the u-boot packages are done:

root@links:/home/machinekit# apt search u-boot-zturn
Sorting... Done
Full Text Search... Done
u-boot-zturn/stable,jessie 0.4813 armhf
  u-boot bootloader for the zturn board, https://jenkins.machinekit.io/job/u-boot-xilinx/9/

root@links:/home/machinekit# apt search u-boot-microzed
Sorting... Done
Full Text Search... Done
u-boot-microzed/stable,jessie 0.4813 armhf
  u-boot bootloader for the microzed board, https://jenkins.machinekit.io/job/u-boot-xilinx/9/

Installing them right now only copies the respective u-boot.img and boot.bin files to /boot. If needed, we can add some clever postinstall.sh script later.
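If we ever do want that postinstall step, a minimal sketch might look like this. To be clear, this is a hypothetical script, not the real package's; the copy mirrors the manual step described earlier in this thread, and the overridable paths are purely for illustration:

```shell
#!/bin/sh
# Hypothetical postinstall sketch: copy the installed u-boot pieces from
# /boot onto a mounted boot partition. A real postinst would also have to
# locate and mount the correct partition; that part is omitted here.
BOOT_SRC="${BOOT_SRC:-/boot}"
BOOT_MNT="${BOOT_MNT:-/media/boot}"

copy_uboot() {
    for f in boot.bin u-boot.img; do
        if [ -f "$BOOT_SRC/$f" ]; then
            cp "$BOOT_SRC/$f" "$BOOT_MNT/$f"
        fi
    done
}
```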

mhaberler commented 8 years ago

the kernel from https://github.com/JDSquared/linux-xlnx/tree/zynq_4_4_rt is ready as well:

root@mksocfpga:~# apt search zynq-rt
Sorting... Done
Full Text Search... Done
linux-headers-zynq-rt/stable 4.4.0~rt3-1471367217.gita3d5eca armhf
  Linux kernel headers for 4.4.0-rt3-jd2-ga3d5eca on armhf

linux-image-zynq-rt/stable 4.4.0~rt3-1471367217.gita3d5eca armhf
  Linux kernel, version 4.4.0-rt3-jd2-ga3d5eca

linux-libc-dev-zynq-rt/stable 4.4.0~rt3-1471367217.gita3d5eca armhf
  Linux support headers for userspace development

@dkhughes - hope they don't sink your ship ;)

for now, pushes to both your repos do not trigger a jenkins build, and I wish I understood why... if there's a change, just manually trigger the build by hitting 'Build now' on the respective project page; both projects are now in the https://jenkins.machinekit.io/view/machinekit/ view.

dkhughes commented 8 years ago

That's great! The only thing the u-boot install would maybe need to do is copy the base dtb file, which is the one required to boot the processor. But, since that file can live in a firmware package or in the kernel package I think we're fine without a post_install script.

mhaberler commented 8 years ago

well since the omap-image-builder script writes a uEnv.txt file anyway we could pass the dtb path there

I think a dtb=<path> line in the zynq equivalent of https://github.com/mhaberler/omap-image-builder/blob/a6e0f8d61a4e8133909608f1a5a2eeb789e0d7bb/target/boot/post_machinekit-de0-dtbo.txt does it
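As a sketch of that idea (the dtb= key and the /lib/firmware/zynq path follow this thread's description; the helper function name is made up, and the exact key u-boot reads from uEnv.txt may differ):

```shell
#!/bin/sh
# Sketch: have the image builder drop a dtb= line into uEnv.txt pointing
# at the base dtb installed by the kernel package.
write_uenv() {
    # $1: board name (e.g. zturn or microzed); $2: output file
    printf 'dtb=/lib/firmware/zynq/zynq-%s.dtb\n' "$1" > "$2"
}

write_uenv zturn uEnv.txt
```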

I'll give omap-image-builder a stab after you tell me the u-boots and kernel work for you

dkhughes commented 8 years ago

In a rootfs kernel boot, I check uEnv for the name of the kernel we want. Other than that, you could override the name of the dtb file if you wanted to; by default it uses the configuration name from the u-boot build (zynq-zturn.dtb, zynq-microzed.dtb).

u-boot actually looks for those files to load from the /lib/firmware/zynq folder. I can make changes to this if that's not desirable behavior.

It's been a while, but the omap-image-builder scripts copy the u-boot.img and boot.bin to the boot partition of the sd card after they are installed to /boot by the package?

mhaberler commented 8 years ago

I think it does, it handles a lot of variations; I could even coerce it to do a altera-style boot partition

btw.. one jenkins upgrade later I think a push should trigger a build.. we'll see. edit: nope, not yet.

dkhughes commented 8 years ago

My z-turn is hidden from the internet right now, where can I steal the packages from so I can copy over from my dev pc?

mhaberler commented 8 years ago

uh oh, hold the presses, I think the zynq kernel is built from the wrong config...

mhaberler commented 8 years ago

let me verify the right things happen first..

dkhughes commented 8 years ago

Which config are you using? jd2-mzed_defconfig is the correct config, but I want to rename it to something like mksocfpga-zynq_defconfig. The config is not specific to the microzed at the moment.

mhaberler commented 8 years ago

I accidentally pasted the altera kernel build instructions .. repair in progress

dkhughes commented 8 years ago

fwid is now building correctly in Vivado. Just need to test it with the HAL, and then update the build script to generate the mif files. I made it so that the Xilinx side can read an Altera mif file since you already have all of the generation tools for that in place.

Turns out, text parsing in VHDL is really clunky, though...

mhaberler commented 8 years ago

jeepers, the Job Config History plugin is a godsend ;) all on even keel.. sent instructions by mail

mhaberler commented 8 years ago

config: jd2-mzed_defconfig

dkhughes commented 8 years ago

config: jd2-mzed_defconfig

I think we should rename that before it becomes a permanent piece of the CI system.

mksocfpga-zynq_defconfig sound good to you?

Kernel installed fine and rebooted no problem. Need to check u-boot packages. Did jenkins upload that package somewhere I can get to easily?

mhaberler commented 8 years ago

fine - push the change, and please change the jenkins config yourself, ok?

mhaberler commented 8 years ago

how should I do the same for the altera? Atlas, isn't it?

right now it's socfpga_defconfig :-(

dkhughes commented 8 years ago

I will push the file change and update the jenkins script no problem. Doing that now.

The altera devices in the project so far are all cyclone v devices. Maybe mksocfpga-cv_defconfig?

mhaberler commented 8 years ago

yes, that makes sense, will do

btw rebuilt the cv u-boot and kernel with your dockerfile, works great - and kernel builds are down to 4 minutes!

dkhughes commented 8 years ago

In the jenkins file in the linux kernel you have:

DOCKER_IMAGE=machinekit/mk-builder:wheezy-armhf

Is that the correct docker image?

mhaberler commented 8 years ago

no, I'm not using that anymore - the direct instructions are in the jenkins build step. I should either update or remove that

dkhughes commented 8 years ago

RIP is great, but man, do the builds take a long time on these little arms... Gonna have to look at getting the mkbuild cross compiling for me...

Kernel is running well, and I think the u-boot package will be fine, excellent work @mhaberler.

You mentioned before you use an arm host to run the omap-image-builder scripts? I've been cross compiling them on an amd64 machine. Did you hit problems cross compiling that forces the use of an arm processor? I know docker images won't work (found that out the hard way a couple of months ago), but cross compilation on my dev machine ran pretty quick when I built a stripped down image before.

mhaberler commented 8 years ago

good, will give it a stab tomorrow for an sd image

Ideally I'd love to create SD images on amd64 - cross or qemu ; bonus if possible to run the docker image non-root ;)

I think I ran into problems with loop mounts, or partition mounting? Can't remember. Anyway, I had a cubox-i set up with stretch and that's what I'm using

I had ELBE suggested to me: http://elbe-rfs.org/ - well worth a look: https://github.com/Linutronix/elbe

but for now, I think I'll suffer through the current procedure, so at least the bugs are identical for both platforms ;)

dkhughes commented 8 years ago

Sounds great. I hit problems with the instantiable hm2_soc driver: unknown parameter errors, etc. I'm trying to work through them before I can debug hardware.

cdsteinkuehler commented 8 years ago

On 8/16/2016 3:13 PM, dkhughes wrote:

RIP is great, but man, do the builds take a long time on these little arms... Gonna have to look at getting the mkbuild cross compiling for me...

Only the first build is really slow...subsequent builds are pretty quick. When I'm actively developing ARM code, usually I'll start off building a RIP build before I really get working, then just do incremental builds of the stuff I'm actively coding.

I'll also sometimes run off an NFS mount, so I can edit/debug on the target or on my normal desktop machine (often I'll even have editors up on both systems at the same time).

You mentioned before you use an arm host to run the omap-image-builder scripts? I've been cross compiling them on an amd64 machine. Did you hit problems cross compiling that forces the use of an arm processor?

I haven't tried an actual cross compiler. I have run Machinekit builds via qemu, which works fine but is slow. IIRC, there are two reasons for native builds on the ARM:

1) Some of the uSD image builds need to happen on a native ARM platform or some things break (like node/npm) and won't install.

2) If you don't use a cross compiler (and AFAIK that's only easy to use for the Kernel builds) a decent native ARM board is as fast or faster than a high-powered x86 running arm compiles under qemu.

...but that was a while ago, things may have changed by now.

Charles Steinkuehler charles@steinkuehler.net

dkhughes commented 8 years ago

Only the first build is really slow...subsequent builds are pretty quick.

Yeah, I was just being whiny because on the zynq devices with 1GB of memory that first build takes about 45 minutes. You can't even compile on both cores (-j2) because of memory constraints. -j8 etc. on an amd64 spoils me :).

1) Some of the uSD image builds need to happen on a native ARM platform or some things break (like node/npm) and won't install.

The only images I've created have been with the omap-image-builder scripts cross compiling / chroot on an amd64 host. I have an RPIv3 I could run one on but I haven't seen any trouble yet. I do know that the image scripts do not work in a docker image which would be my favorite solution since setting up a dev environment is practically free that way.

mhaberler commented 8 years ago

I just tried http://elbe-rfs.org with the armhf-ti-beaglebone-black config and booted the image on a BB - zero problems, builds out of the box on an amd64!

I think this is worth exploring - getting rid of the ARM builders would be a boon

maybe we should move the image builder discussion to a mk/mk issue

cdsteinkuehler commented 8 years ago

On 8/16/2016 1:14 PM, dkhughes wrote:

fwid is now building correctly in Vivado. Just need to test it with the HAL, and then update the build script to generate the mif files. I made it so that the Xilinx side can read an Altera mif file since you already have all of the generation tools for that in place.

Turns out, text parsing in VHDL is really clunky, though...

I wouldn't worry too much about parsing mif files. I just used the mif format because it's what the Altera tools supported (and Intel hex files are kind of ambiguous for word lengths that are not 8-bits). If we're inferring a ROM via generic VHDL code the file format can be pretty much anything.

...and it will be a lot easier to write some python code to make a text format VHDL can parse easily than do text processing in VHDL! :)

Charles Steinkuehler charles@steinkuehler.net
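A minimal sketch of that suggestion: emit the firmware-id ROM as one 32-bit binary string per line, which VHDL textio can read straight into a bit_vector with read(), no .mif parser needed. Charles suggests python for the generator; a shell equivalent is shown here to match the rest of this thread, and the two word values are placeholders:

```shell
#!/bin/sh
# Sketch: dump ROM words as plain 32-bit binary strings, one per line,
# for easy consumption by a generic VHDL textio reader.
to_bin32() {
    # print $1 (a non-negative integer) as a 32-bit binary string
    n=$1
    i=31
    bits=""
    while [ "$i" -ge 0 ]; do
        bits="$bits$(( (n >> i) & 1 ))"
        i=$((i - 1))
    done
    echo "$bits"
}

for word in 305419896 2596069104; do   # placeholder words 0x12345678, 0x9ABCDEF0
    to_bin32 "$word"
done
```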