GaloisInc / BESSPIN-CloudGFE

The AWS cloud deployment of the BESSPIN GFE platform.
Apache License 2.0

Where should code of CloudGFE live? #77

Closed: podhrmic closed this issue 4 years ago

podhrmic commented 4 years ago

As you all work on different parts of CloudGFE, what is a good place to store/manage your code?

Should there be a single repo (such as this one) with different branches, or is there more value in having several different repositories, at least until things stabilize?

Where do you currently keep your code? GitHub, gitlab-ext, or somewhere else?

Knowing all that will help us with the CI and, at least indirectly, with FETT-Target.

rtadros125 commented 4 years ago

Currently, all of the FETT-Target code is on GitHub, and all the code it relies upon is in submodules that also live on GitHub. Is this what you are asking about?

podhrmic commented 4 years ago

Yes, although I am more curious about the CloudGFE parts and the HW designs. I tagged you to keep you in the loop about this.

rsnikhil commented 4 years ago

Currently all my new code is in https://github.com/DARPA-SSITH-Demonstrators/BESSPIN-CloudGFE/tree/develop/AWSteria. The Makefiles contain a reference to $FLUTE, which should point at any clone of https://github.com/bluespec/Flute. They also assume you have run the hdk_setup.sh script in a clone of https://github.com/aws/aws-fpga.git. All of this is easily changeable; nothing is written in stone.
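
For reference, a minimal sketch of that setup (the clone locations here are hypothetical; only FLUTE and hdk_setup.sh come from the comment above):

$ git clone https://github.com/bluespec/Flute ~/Flute
$ export FLUTE=~/Flute                               # referenced by the AWSteria Makefiles
$ git clone https://github.com/aws/aws-fpga.git ~/aws-fpga
$ cd ~/aws-fpga && source hdk_setup.sh               # sets up the AWS HDK build environment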

dhand-galois commented 4 years ago

IMO, we should hold off on combining into a top-level repo until we have at least decided on one or more "approved" implementations. We are still actively exploring Connectal, AWSteria, and FireSim, each with their own structure and considerations for building into a repeatable flow.

On the FireSim side, there is simply an absurd number of submodules down the hierarchy; I would not be surprised if it breaks the 100 mark. So I am trying to keep all modifications to a minimum number of custom repos and re-use the rest from the originals on GitHub. The two main ones are firesim (I will be forking this somewhere soon) and chipyard (https://gitlab-ext.galois.com/ssith/chipyard). I have been using gitlab-ext, but the repos can move around as necessary. Existing repos we may have to reuse from gitlab-ext: chisel_processors, riscv-pk, riscv-linux, gfe, and rocket-chip.

jameyhicks commented 4 years ago

All the MIT repos needed are on github.

The top level of the hardware and its host software is: https://github.com/acceleratedtech/ssith-aws-fpga

That repo also contains build targets for Bluespec P2 and CHERI P2 using the same connectal-based host connection.

joestoy commented 4 years ago

My current work is as @jameyhicks says (apart from packages from other repos -- e.g. Flute -- incorporated as submodules by reference). The only exception is small modifications to scripts in the aws-fpga repo, to enable Vivado to meet timing at the speeds we want to run the designs at. Jamey suggested we could create a "fork". I don't know enough git to be confident about setting this up, but if we could keep any of our own modifications to AWS scripts somewhere central, with some mechanism for keeping the unmodified parts (almost all of it) up to date, that would be good.
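
For what it's worth, a sketch of the usual fork-and-upstream workflow (the fork URL is hypothetical; only aws-fpga itself comes from the comment above):

$ git clone https://github.com/<our-org>/aws-fpga.git        # fork holding our script modifications
$ cd aws-fpga
$ git remote add upstream https://github.com/aws/aws-fpga.git
$ git fetch upstream
$ git merge upstream/master                                  # periodically pick up the unmodified upstream parts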

podhrmic commented 4 years ago

Thanks all for the updates!

@dhand-galois do you happen to have a working AFI that, for example, boots Busybox or Debian? I would like to pass it to the CI devs so they have something to test.

joestoy commented 4 years ago

I don't know of any Debian, but we have one for FreeBSD.

podhrmic commented 4 years ago

@joestoy that would be perfect! Can you share it on a Google drive, GitHub, or an S3 bucket (whatever is easiest)?

joestoy commented 4 years ago

Let me just check that the standard bluespec_p2 one still works (I've been working with the CHERI_Flute for a few days). But my S3 bucket is in Virginia (because I signed up before I knew everyone else was on the Oregon system). Is there an easy way to move the AFI around the country, or (once I've checked it) should I send you the tarfile for you to upload yourself?

podhrmic commented 4 years ago

@brian-fivetalent - Joe has the AFI you were asking about. Can you help him get the AFI to you?

jrtc27 commented 4 years ago

See https://docs.aws.amazon.com/cli/latest/reference/ec2/copy-fpga-image.html. The afi- id will change but the same agfi- id will then be made available in Oregon.
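
A sketch of that copy with the AWS CLI (the source AFI ID below is a placeholder; us-east-1 is Virginia, us-west-2 is Oregon):

$ aws ec2 copy-fpga-image \
      --region us-west-2 \
      --source-region us-east-1 \
      --source-fpga-image-id afi-0123456789abcdef0 \
      --name copy-P2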

joestoy commented 4 years ago

Thanks Jessica!

joestoy commented 4 years ago

The boot was successful (but painfully slow -- I hope we can speed it up soon). The image turning up in Oregon should be

 "FpgaImageId": "afi-0fe05c0162ce538c6"

I'll quickly try to put together a tarball containing the other files you'll need.

joestoy commented 4 years ago

@brian-fivetalent What's the best way to get a 400MB (compressed) tarball to you?

brian-fivetalent commented 4 years ago

@joestoy will this work? Fully encrypted transfer. https://send.firefox.com/?utm_source=blog.mozilla.com&utm_medium=referral&utm_campaign=firefox_frontier

joestoy commented 4 years ago

The tarball contains an ELF binary of FreeBSD and two file-system images:

./riscv64.img
./bbl-riscv64.FETT
./minimal-riscv64.img

riscv64.img is a full system, but the minimal one might be sufficient for your purposes (I've never tried it). I recommend making a copy of whichever one you intend to use, as it will be changed by the live system. If you shut the booted system down cleanly (poweroff) you can continue to use the live image; but if it crashes you should start again with a fresh copy.

On AWS, assuming you've installed the Connectal drivers on your instance, after starting the instance I do:

$ fpga-load-local-image -S 0 -I agfi-05ea160cab071da4d -a 97          # load the AFI into FPGA slot 0
$ sudo modprobe pcieportal                                            # Connectal kernel modules
$ sudo modprobe portalmem
$ cd git-repos/ssith-aws-fpga/
$ dtc -I dts -O dtb -o build/devicetree.dtb src/dts/devicetree.dts    # compile the device tree
$ ./build/ssith_aws_fpga --dtb build/devicetree.dtb -B /home/ubuntu/riscv64-live.img --elf /home/ubuntu/bbl-riscv64.FETT --uart
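
Here riscv64-live.img is presumably the working copy mentioned above, i.e. something like:

$ cp riscv64.img riscv64-live.img    # fresh working copy; the live system will modify it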

joestoy commented 4 years ago

@brian-fivetalent Yes. Where shall I send it?

brian-fivetalent commented 4 years ago

once you have the link brian.mccall@fivetalent.com will work

brian-fivetalent commented 4 years ago

@joestoy I see the AFI as available in Oregon.

{
    "FpgaImages": [
        {
            "UpdateTime": "2020-05-12T22:33:06.000Z",
            "Name": "copy-P2",
            "Tags": [],
            "PciId": {
                "SubsystemVendorId": "0xfedd",
                "VendorId": "0x1d0f",
                "DeviceId": "0xf000",
                "SubsystemId": "0x1d51"
            },
            "FpgaImageGlobalId": "agfi-05ea160cab071da4d",
            "Public": false,
            "State": {
                "Code": "available"
            },
            "ShellVersion": "0x04261818",
            "OwnerId": "845509001885",
            "FpgaImageId": "afi-0fe05c0162ce538c6",
            "CreateTime": "2020-05-12T22:33:00.000Z",
            "Description": "20_05_07-084519"
        }
    ]
}
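
(For reference, output like the above comes from the AWS CLI; a sketch, assuming the Oregon region and the AFI ID from this thread:)

$ aws ec2 describe-fpga-images --region us-west-2 --fpga-image-ids afi-0fe05c0162ce538c6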

joestoy commented 4 years ago

I've sent the link to you (Firefox said "something went wrong" when the first attempt to upload it had nearly finished).

jameyhicks commented 4 years ago

MIT and CHERI (Cambridge and Cambridge) are now both at the minimum viable platform stage with the Connectal approach. Should we add it as a submodule here?
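
(If we go that route, a minimal sketch of adding it, using the repo URL mentioned earlier in this thread and a hypothetical path:)

$ git submodule add https://github.com/acceleratedtech/ssith-aws-fpga ssith-aws-fpga
$ git commit -m "Add the Connectal-based CloudGFE platform as a submodule"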

kiniry commented 4 years ago

Yes, as I understand it. Has this been added yet? Can this issue be closed now, given the progress made on all platform variants? CC @joestoy @jameyhicks @dhand-galois @rsnikhil

podhrmic commented 4 years ago

AFAIK the FireSim part is complete; not sure about the others.

jameyhicks commented 4 years ago

I think this issue can be closed. Connectal is going into the CloudGFE repo.

kiniry commented 4 years ago

Acknowledged; closing.