WillNilges opened this issue 1 year ago
Are you building with the toolbelt (the Docker builder) or have you installed the tools locally? If you have a local install, maybe there is a difference in tool versions?
This was a bare-metal install (before I realized that toolbelt was a thing :P). I'm 99% sure it's an issue with version differences; building with a container yields more expected results (timing errors).
```
ERROR: Max frequency for clock 'Core_clk': 47.03 MHz (FAIL at 48.00 MHz)
ERROR: Max frequency for clock 'Slow_clk_$glb_clk': 21.16 MHz (FAIL at 24.00 MHz)
```
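For context on how tight these failures are, an Fmax report can be translated into rough timing slack by comparing clock periods. A small sketch (`worst_slack_ns` is a hypothetical helper of mine, not part of the toolchain):

```python
def worst_slack_ns(fmax_mhz: float, target_mhz: float) -> float:
    """Approximate worst-path slack in ns from an Fmax report.

    A negative value means the critical path is longer than the clock
    period, i.e. the design fails timing at the target frequency.
    """
    return 1e3 / target_mhz - 1e3 / fmax_mhz

# Core_clk from the report above: about -0.43 ns of slack to recover
print(round(worst_slack_ns(47.03, 48.00), 2))
```

So the Core_clk failure is less than half a nanosecond away from passing, which is the kind of margin a different place-and-route seed can sometimes recover.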
For that, I'm just pulling the fpga-builder container and running it in Podman. I chose to do it this way because copy-pasting the toolbelt commands from Docker to Podman didn't immediately work, but that's kind of off topic.
In an attempt to remedy the errors, I tried running search_seed.py, but I didn't have a lighthouse.json file. After mounting my project directory inside the container and running the build, I realized that file is a build artifact. So now I'm running search_seed.py inside the container. How long should it take to find a seed?
OK, it was taking too long, so I parallelized it and ran it on the biggest computer I had access to. After running all 1000 seeds three times for the V6 tag, I got 129, 280, and 577, none of which appear to actually work?
I am not sure what you mean about the found seeds not working. If a seed is found by the script, it should work when building again with that seed in the Makefile.
The current design is really close to the limit. An easy way to ease the pressure a little is to set the 'speedMultipler' parameter of the PulseOffsetFinder to 1: https://github.com/bitcraze/lighthouse-fpga/blob/4f0f6d4dc9a70525837cb89dc40672e456d58102/src/main/scala/lighthouse/Lighthouse.scala#L218.
This will slow down the pulse processing, but it should work fine nonetheless: it was designed to cope with receiving pulses from 8 lighthouse base stations at the same time in the worst case, a requirement that can be relaxed with what we know now.
So setting the multiplier to 1 should work. I have not had time to test it yet, though, which is why it is not yet the default.
Thanks for the reply. What I meant by the found seeds not working was that even with a seed that the script said was good, I was still getting timing failures.
```
ERROR: Max frequency for clock 'Core_clk': 43.99 MHz (FAIL at 48.00 MHz)
ERROR: Max frequency for clock 'Slow_clk_$glb_clk': 22.84 MHz (FAIL at 24.00 MHz)
```
I set the speed multiplier to 1, and it compiles! I'm still pretty far off from properly testing this thing, but I'll let you know if I run into more problems, thank you!
Have you been using the same compiler to find the seed and then to compile again later? A seed is only good for one version of a compiler, and the search_seed.py script by default searches for the seed in a Docker container (this same container is then used in CI). So using the same container later should yield the same result.
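Since a seed is only valid for one exact build of the tools, it can help to record the tool's version banner next to any seed you find, so a later build can confirm it is using the same binary. A minimal sketch (the `tool_version` helper and the seed-file convention are my own, not part of search_seed.py; nextpnr does accept `--version`):

```python
import subprocess

def tool_version(cmd: list) -> str:
    """Return the version banner of a command-line tool.

    Some tools write their banner to stderr rather than stdout,
    so both streams are checked.
    """
    result = subprocess.run(cmd + ["--version"],
                            capture_output=True, text=True)
    return (result.stdout or result.stderr).strip()

# e.g. save alongside a found seed so future builds can verify the match:
# with open("seed_info.txt", "w") as f:
#     f.write(f"seed=577\ntool={tool_version(['nextpnr-ice40'])}\n")
```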
I had reverted to the V6 tag and was using a parallelized version of the search_seed.py script I had found therein. Admittedly, I haven't really messed around with the latest code and the included Docker container, primarily because I don't want to use Docker.
```python
#!/usr/bin/env python3
import subprocess
import sys

from joblib import Parallel, delayed


def pnr(seed: int):
    result = subprocess.run(["nextpnr-ice40", "--seed", str(seed),
                             "--package", "sg48",
                             "--up5k", "--json", "lighthouse.json", "--asc",
                             "lighthouse.asc", "--pcf", "lighthouse4_revB.pcf"])
    if result.returncode == 0:
        with open(f"seed_{seed}.txt", "w") as file:
            file.write("Seed is {}".format(seed))
        print("Seed is {}".format(seed))
        sys.exit()


results = Parallel(n_jobs=48)(delayed(pnr)(i) for i in range(1000))
```
Searching all 1000 seeds on my laptop would have taken hours, maybe a day or two.
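For what it's worth, `sys.exit()` inside a joblib worker may not stop the sibling jobs promptly, so the other seeds can keep burning CPU after a winner is found. A standard-library variant that cancels pending work once a seed passes might look like this (a sketch; `try_seed` stands in for the real nextpnr-ice40 invocation and just simulates a passing seed):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def try_seed(seed: int):
    # Stand-in for the real check: in practice this would run
    # nextpnr-ice40 with --seed and return the seed on returncode 0.
    return seed if seed % 7 == 3 else None


def find_first_seed(max_seed: int = 1000, workers: int = 48):
    # Threads are fine here: the heavy lifting happens in the
    # nextpnr subprocess, not in Python itself.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(try_seed, s) for s in range(max_seed)]
        for fut in as_completed(futures):
            seed = fut.result()
            if seed is not None:
                for f in futures:
                    f.cancel()  # skip seeds that have not started yet
                return seed
    return None
```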
I apologize for making what seem like RTFM mistakes 😅 I suppose I was just impatient and wanted to use Podman.
I should explicitly ask, though: how should I compile this? The readme suggested installing the tools bare-metal, but that didn't really work. Should I use the toolbelt?
You can use either a local toolchain or the toolbelt; the important thing is to use the exact same build when searching for the seed and when building. Docker is quite useful for that, but as long as you use the same system to find the seed and to build new bitstreams, you can use your locally-installed toolchain.
If you find a seed and then it does not work anymore though, you might have found a bug in the router: my understanding is that the seed makes routing repeatable.
I had the same issue (I installed the toolchain myself on a Mac M2), so I tried tools/search_seed.py. I just let it run and finally, after 14 hours of searching, I got the result:
```
Info: Program finished normally.
python3 tools/update_bitstream_comment.py lighthouse.asc "6"
icepack lighthouse.asc lighthouse.bin
Seed is 42
```
I should have known ;-)
Getting the following error when running `make`: It does, indeed, appear that some components of the device are at 100% utilisation.
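To pin down which resources are saturated, the "Device utilisation" section of the nextpnr log can be parsed for `used/total` pairs. A quick sketch (the exact log-line format is an assumption based on typical nextpnr output, and `parse_utilisation` is a hypothetical helper):

```python
import re


def parse_utilisation(log_text: str) -> dict:
    """Extract "NAME: used/total" resource fractions from a nextpnr log."""
    usage = {}
    for m in re.finditer(r"(\w+):\s+(\d+)\s*/\s*(\d+)", log_text):
        name, used, total = m.group(1), int(m.group(2)), int(m.group(3))
        if total:
            usage[name] = used / total
    return usage


sample = """
Info: Device utilisation:
Info:            ICESTORM_LC:  5280/ 5280   100%
Info:           ICESTORM_RAM:    14/   30    46%
"""
full = [name for name, frac in parse_utilisation(sample).items() if frac >= 1.0]
print(full)  # only the saturated resources, e.g. ['ICESTORM_LC'] here
```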
I checked the most recent CI run, and it looks like the numbers are slightly different from mine... That's weird?
This does not seem like the kind of thing that changing the seed (as mentioned in the README) would help with, but I thought I would open this issue while I investigate.