dhand-galois closed this issue 4 years ago.
Currently, I have everything up to and including building AFIs running locally; I am skipping toolchain generation and some software build steps. FireSim builds the kernel modules needed to communicate with the FPGA as part of its setup, but that requires having headers for the specific Linux kernel version available. We could either pre-package these modules and skip the build, or pull in the proper kernel-headers package.
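To make the headers constraint concrete: the devel/headers package has to match the running kernel exactly. The snippet below just computes the required package name, assuming a CentOS/RHEL-style host (the `kernel-devel-<version>` naming is that distro family's convention; it is an assumption about the build host, not something stated above).

```shell
# Compute the package that provides headers matching the running kernel.
# "kernel-devel-<version>" naming assumes a CentOS/RHEL-style host.
PKG="kernel-devel-$(uname -r)"
echo "required package: $PKG"
# To actually install it (requires root):
#   sudo yum install -y "$PKG"
```

On Debian/Ubuntu hosts the equivalent would be `linux-headers-$(uname -r)` via apt.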
Waiting for my image to finish building to see if it'll be able to push to AWS for final encryption/AFI generation.
The builds are pushing to Amazon and being signed/compiled into AFIs. The FireSim "AFI Done" notification e-mail is also working, so we are all set on local generation.
This does require quite a few packages to be installed to get fully up and running, so someone will need to either fix up the setup scripts to work on more OSes or build a docker image. It currently relies on having our bitstream_gen docker image, so someone will have to put some thought into how (and whether) this can be distributed.
Based on the experience getting this much working, launching the actual "run farm" (at least the single instances we'd need) from a non-EC2 box should also be quite feasible, but I'm not planning to work on that at the moment.
On-premise FireSim is now ready to go with PR #97.
This configuration relies on two docker images that have been shared on the Galois artifactory. The changes do not require the use of docker, but it makes getting the build environment properly configured significantly easier.
The on-premise FireSim can build from Chisel to AFI completely on a single box (the AFI blessing still happens at AWS, of course). It also adds a new `firesim buildlocalsw` task that generates the host-side software that runs on the F1 instance to communicate with the FPGA. The build process packages a local `.tgz` file and uploads it to AWS S3.
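The packaging half of that task can be sketched roughly as follows. The directory and file names here are illustrative assumptions, not the actual names `buildlocalsw` uses, and the S3 upload is left as a comment since it requires configured AWS credentials.

```shell
# Rough sketch of the packaging step (names are hypothetical).
SW_DIR="sw-build-output"
mkdir -p "$SW_DIR"
echo "demo payload" > "$SW_DIR/driver.bin"     # stand-in for the real host-side software
tar -czf firesim-local-sw.tgz -C "$SW_DIR" .   # the local .tgz artifact
# Upload step (needs credentials): aws s3 cp firesim-local-sw.tgz s3://<your-bucket>/
echo "packaged: firesim-local-sw.tgz"
```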
Many more details are in the README.
This was merged into `develop` and should be available for general use now.
If CentOS supports DKIM, you can use that to install kernel modules that get built to match the kernel on the instance. If you do on-premise FireSim builds, the instance could run Ubuntu or Debian, which do support dkim.
I think you mean DKMS (rather than authenticated email)?
Yes, dkms.
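For reference, DKMS works off a per-module `dkms.conf`; a minimal one for a hypothetical FPGA-communication module (all names and the version below are made up for illustration) would look like:

```shell
# dkms.conf -- minimal example; "fpga_demo" and its version are hypothetical
PACKAGE_NAME="fpga_demo"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="fpga_demo"
DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
AUTOINSTALL="yes"
```

With the module source registered via `dkms add`, `AUTOINSTALL="yes"` lets DKMS rebuild the module automatically when the instance's kernel changes, which is exactly the headers-matching problem discussed above.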
To reduce cost and complexity, it would be good to run the majority of FireSim locally. This will also aid in scripting AFI creation, building software, and running tests.
This issue will track determining how much to localize. We could potentially run FireSim completely on a local machine, using awscli to start/stop/communicate with running F1 instances, but getting all the plumbing for that working is non-trivial.
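The awscli plumbing in question would look roughly like the calls below. This is a dry-run sketch that only echoes the commands rather than executing them, and the AMI and instance IDs are placeholders, not real values.

```shell
# Dry-run sketch of driving an F1 run farm from a local box with awscli.
# IDs are placeholders; replace run_cmd with direct execution when ready.
run_cmd() { echo "would run: $*"; }

run_cmd aws ec2 run-instances --image-id ami-xxxxxxxx \
    --instance-type f1.2xlarge --count 1
run_cmd aws ec2 describe-instances \
    --filters "Name=instance-type,Values=f1.2xlarge"
run_cmd aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

The hard part isn't these calls themselves but the surrounding plumbing: waiting for the instance to come up, pushing the AFI/software to it, and tearing it down reliably.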