TUD-OS / NRE

NOVA runtime environment (official branch)
GNU General Public License v2.0

Quick Question on Vancouver #50

Closed lonnietc closed 3 years ago

lonnietc commented 3 years ago

Hello,

I am running through the

./b qemu boot/(test examples)

and just noticed that when I am testing:

./b qemu boot/tinycore and ./b qemu boot/linux

that they are hugely slower than the "-native" versions:

./b qemu boot/tinycore-native and ./b qemu boot/linux-native

while things like

boot/disktest and boot/cycleburner

run significantly faster.

The main immediate difference that I can see is that both tinycore and linux use "vancouver" as the VMM, and I am wondering whether you also remember noticing these huge differences in loading and running the OSes in the VMs.

Any thoughts on this?

Nils-TUD commented 3 years ago

Yes, that's to be expected if you run it in QEMU or similar. The reason is that boot/tinycore runs a VM in a VM without any hardware virtualization. That is, QEMU (first VMM) runs on your host and executes NRE. NRE uses Vancouver (second VMM) to run Tinycore-Linux. All that is done without hardware virtualization (cannot be used, because then you can't run another VM inside) and uses dynamic translation instead. Thus, it is significantly slower than directly running Tinycore-Linux in a VM on your host.
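Roughly, the two setups compare like this (just a sketch of the layering described above, leaving the exact contents of the boot scripts aside):

    # boot/tinycore:        host -> QEMU (VMM #1) -> NOVA/NRE -> Vancouver (VMM #2, dynamic translation) -> Tinycore
    # boot/tinycore-native: host -> QEMU (VMM #1) -> Tinycore (only one level of virtualization)
    ./b qemu boot/tinycore
    ./b qemu boot/tinycore-native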

As has been shown in the NOVA paper, running Linux (a kernel-compile benchmark) on NOVA+Vancouver on real hardware has an overhead of only about 1% compared to running Linux natively.

lonnietc commented 3 years ago

Thanks for that info; it will be helpful, too.

I have now been able to run and test all of the "boot/(examples)" and am very impressed with what I have seen.

As a point of clarification: I was looking into the "boot" scripts, and if I am understanding the way that NRE does things, it seems that it allows loading various "modules" for use in the PDs across the hypervisor VMs.

For example, in the "cycleburner" script (below), the flags will be passed to the "novaboot" script:

#!tools/novaboot
QEMU_FLAGS=-m 64 -smp 4
HYPERVISOR_PARAMS=spinner serial
bin/apps/root
bin/apps/acpi provides=acpi
bin/apps/keyboard provides=keyboard
bin/apps/reboot provides=reboot
bin/apps/pcicfg provides=pcicfg
bin/apps/timer provides=timer
bin/apps/console provides=console
bin/apps/sysinfo
bin/apps/cycleburner

From this, it also seems that various NRE component modules like:

acpi keyboard reboot pcicfg timer console

are the "driver" components within the PDs, loaded across the hypervisor, and the apps that use them would, in this case, be:

sysinfo cycleburner

Does this mean that the component nature of NRE will allow me to include just the components that are needed?

Also, in the script, a particular driver might say:

/bin/apps/keyboard provides=keyboard

I am wondering what is the purpose of the "provides=keyboard" in this case?

Finally, I was looking at the directory layout and I can see that:

  1. nre/app is for the applications
  2. nre/services is for the services

But is nre/libs/libseoul used in the apps as well as by the Vancouver VMM, i.e. used throughout?

Basically, I am trying to differentiate what is just for application development and what is needed only for VMM development, as I did not see a "vancouver" directory, and the libs/libseoul directory contains a lot of hardware drivers which I do not know whether they are part of the core NRE framework or only used by the VMM when it is compiled.

The reason is so that I will know what is required for core NRE driver development (e.g. apps/keyboard, apps/network, etc.).

Any information on the directory layout would also be helpful.

Sorry for the long post on this but it should be the last major question for a while. Best Regards

Cheers

Nils-TUD commented 3 years ago

Thanks for that info; it will be helpful, too.

I have now been able to run and test all of the "boot/(examples)" and am very impressed with what I have seen.

Thanks :)

As a point of clarification: I was looking into the "boot" scripts, and if I am understanding the way that NRE does things, it seems that it allows loading various "modules" for use in the PDs across the hypervisor VMs.

Almost. The way it works is that root starts all the components given in the bootscript (passed as boot modules to NOVA and root). And each component is put into a separate PD. That has nothing to do with VMs, though. These only come into play when Vancouver is used.

For example, in the "cycleburner" script (below), the flags will be passed to the "novaboot" script:

#!tools/novaboot
QEMU_FLAGS=-m 64 -smp 4
HYPERVISOR_PARAMS=spinner serial
bin/apps/root
bin/apps/acpi provides=acpi
bin/apps/keyboard provides=keyboard
bin/apps/reboot provides=reboot
bin/apps/pcicfg provides=pcicfg
bin/apps/timer provides=timer
bin/apps/console provides=console
bin/apps/sysinfo
bin/apps/cycleburner

From this, it also seems that various NRE component modules like:

acpi keyboard reboot pcicfg timer console

are the "driver" components within the PDs, loaded across the hypervisor, and the apps that use them would, in this case, be:

sysinfo cycleburner

Right. Many of these are services that are provided to be used by other applications or services.

Does this mean that the component nature of NRE will allow me to include just the components that are needed?

Exactly, you can choose the components you need for every use case. In particular, an application only needs to trust the services that it uses.
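For example, a minimal bootscript for an application that only needs the timer and console services could look roughly like this (just a sketch: "bin/apps/yourapp" is a placeholder for your own application, and the exact set of components each service depends on may differ):

    #!tools/novaboot
    QEMU_FLAGS=-m 64
    HYPERVISOR_PARAMS=serial
    bin/apps/root
    bin/apps/acpi provides=acpi
    bin/apps/timer provides=timer
    bin/apps/console provides=console
    bin/apps/yourapp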

Also, in the script, a particular driver might say:

/bin/apps/keyboard provides=keyboard

I am wondering what is the purpose of the "provides=keyboard" in this case?

That just means that this component will register a service called "keyboard", so root will delay the start of the following components until this service has been registered. So, it's a really simple way to resolve dependencies: a component's dependencies are available before the component is started.
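For example, in this fragment of the cycleburner script, root starts keyboard, waits until the "keyboard" service has been registered, and only then starts console and everything after it (this only illustrates the start ordering, not whether console actually uses keyboard):

    bin/apps/keyboard provides=keyboard
    bin/apps/console provides=console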

Finally, I was looking at the directory layout and I can see that:

  1. nre/app is for the applications
  2. nre/services is for the services

But is nre/libs/libseoul used in the apps as well as by the Vancouver VMM, i.e. used throughout?

libseoul (Vancouver is the old name actually, Seoul is the new one; I should probably stick to Seoul :)) is indeed a bit strange, because the source provides both a library and an application. And I think it's in libs, because it is also used by console to interpret BIOS code.

Basically, I am trying to differentiate what is just for application development and what is needed only for VMM development, as I did not see a "vancouver" directory, and the libs/libseoul directory contains a lot of hardware drivers which I do not know whether they are part of the core NRE framework or only used by the VMM when it is compiled.

The reason is so that I will know what is required for core NRE driver development (e.g. apps/keyboard, apps/network, etc.).

libseoul does not provide drivers, but virtual device models. So, if a VM accesses a disk, for example, Seoul provides a virtual disk model that handles the access.

lonnietc commented 3 years ago

Thanks, your explanation clears up a lot.

My plan is actually to replace the Seoul (formerly Vancouver) VMM with something like TinyEMU (from Fabrice Bellard, the developer of QEMU) to see if I can get it to work, and I basically wanted to see which parts of the old Seoul VMM and libraries are not needed, so that the code will not get confusing with too many directories and unneeded legacy code.

Also, I plan to work on new network "drivers", as there is only one now (NE2K) in the services, and I will need a "network bridge" so that one service can manage all of the VMs' network needs.

Another eventual service driver will be a virtual GPU service driver. ACRN and Xen have this, so that a single service VM can provide vGPU devices to VMs, giving them the impression that a GPU is present in the hardware. This is a bit further out, but on my radar.

For now, I just want to look into what it might take to get TinyEMU set up as the VMM and, if all goes well, then maybe QEMU or Bhyve as the next VMM, as they are more powerful but also much more complicated.

I still have a LOT to learn, but it's slowly coming together, and your wonderful replies make things clearer.

On a side note, I read somewhere at one time, but cannot find it now, about what the main NOVA hypervisor page displays, in particular at the bottom:

[Screenshot from 2021-07-01 14-27-37: the bottom of the main NOVA hypervisor page]

Do you know where that documentation is located?

If not, then that is fine and I will keep looking until I find it again.

I also think that we can close this ticket as well, my friend. Cheers

Nils-TUD commented 3 years ago

Do you know where that documentation is located?

This shows the different events that occur, per CPU core. But this is actually documented. Look into kernel/nova/doc/specification.pdf in Appendix C :)