Closed ionutnechita closed 3 years ago
Is it possible to view the GRUB console/prompt with acrnctl?
Just noticed this on the mailing list (from @jsun26intel)
Hi @gvancuts,
I tried to set this up from this wiki page: https://projectacrn.github.io/latest/tutorials/using_zephyr_as_uos.html But I didn't see anything appear on the console (`acrnctl start launch_zephyr`).
Using the HV ACRN console (`vm console 2`) I saw that it stayed in the EFI Shell. I had to enter commands in the UEFI shell to start the Zephyr application:
```
FS0:
cd efi
cd boot
grub_x86_64.efi
```

After pressing Enter in GRUB for the Zephyr kernel, the application started (`Hello World! acrn`).
From what I described, the wiki is incomplete for starting a Zephyr application from the beginning: the VM stays in the EFI Shell and does not enter GRUB.
Can the issue be fixed in the wiki?
It used to work just fine and start Zephyr automatically. Perhaps there is a regression in a newer OVMF; I'll try to reproduce this on my side.
In the meantime, I noticed that you use `grub_x86_64.efi` (in the `zephyr.img`, under `efi/boot`); can you rename that file to `bootx64.efi` (as in the tutorial)? One possibility for what you're observing is that the `OVMF.fd` firmware is automatically picking up `grubx64.efi`, but not `grub_x86_64.efi`.
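The rename itself is a one-liner once the image's boot partition is accessible. A minimal sketch follows; a scratch directory stands in for the mounted `zephyr.img` here (on the real image you would first loop-mount it with root privileges, and the mount point path is purely illustrative):

```shell
# For the real image (requires root):
#   sudo mount -o loop zephyr.img /mnt/zephyr
# Here a temp directory stands in for that mount point.
MNT=$(mktemp -d)
mkdir -p "$MNT/efi/boot"
: > "$MNT/efi/boot/grub_x86_64.efi"   # placeholder for the Grub binary

# UEFI firmware auto-boots the removable-media path \EFI\BOOT\BOOTX64.EFI,
# so the binary must carry that exact name to start without manual shell input.
mv "$MNT/efi/boot/grub_x86_64.efi" "$MNT/efi/boot/bootx64.efi"
ls "$MNT/efi/boot"                    # prints: bootx64.efi

rm -rf "$MNT"
```

This matches what the tutorial prescribes; the only change to the image is the file name under `efi/boot`.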
> Is it possible to view the GRUB console/prompt with acrnctl?
I guess you're asking about the GRUB prompt from within the VM, aren't you?
I must admit I never use `acrnctl` myself, so I don't know... :-) Which actually makes me wonder: do you use `acrnctl` extensively?
Yes, I mostly use the `acrnctl` command.
For the file-name issue, I'll try again tomorrow: rename `grub_x86_64.efi` to `grubx64.efi` in `zephyr.img`.
If you have time, please try to reproduce this problem on your side too.
Thanks.
I'll try this tomorrow (I need to set up another system from scratch for that... too late for that today ;-)
Hi @ionutnechita , I got around to testing this today. I can reproduce your problem, and it is solved by using `grubx64.efi` instead of `grub_x86_64.efi`. Can you try this on your side and close this ticket if that solves your problem too?
> Yes, I mostly use the `acrnctl` command.
Just a heads-up that this tool may be deprecated in the future. The idea is that ACRN could/would be managed through an API and higher-level libraries and utilities such as `libvirt`. It's still around today and actively used by Kata Containers when using ACRN (it only uses the `blkrescan` function, if I'm not mistaken), but that's also an area where we'd like to change the Kata Containers implementation so that it uses a proper API for this.
The API and libvirt idea is a very good one. I would be interested in such an implementation with an API on ACRN in the future.
But until then, I will test Windows, Linux, Zephyr, and Linux-RT with acrnctl.
Thanks.
> The API and libvirt idea is a very good one. I would be interested in such an implementation with an API on ACRN in the future.
I agree, this feels like the right way to go to me too ;-)
> But until then, I will test Windows, Linux, Zephyr, and Linux-RT with acrnctl.
Yes, I just wanted to make sure you were not building up a huge dependency on `acrnctl` on your side and that we would not be pulling the rug from under your feet ;-)
As mentioned in https://projectacrn.github.io/latest/tutorials/using_zephyr_as_uos.html, `/efi/boot/bootx64.efi` should work.
Hi ACRN Team,
I created an environment with Zephyr, but the VM does not start. The VM flag is Init, not Started or Created.
Boot log for `./launch_zephyr.sh`:
```
cpu1 online=1
cpu2 online=1
cpu3 online=1
SW_LOAD: get ovmf path /usr/share/acrn/bios/OVMF.fd, size 0x200000
vm_create: zephyr_vm1
VHM api version 1.0
vm_setup_memory: size=0x8000000
open hugetlbfs file /run/hugepage/acrn/huge_lv1/zephyr_vm1/D279543825D611E8864ECB7A18B34643
open hugetlbfs file /run/hugepage/acrn/huge_lv2/zephyr_vm1/D279543825D611E8864ECB7A18B34643
level 0 free/need pages:0/65 page size:0x200000
level 1 free/need pages:28/0 page size:0x40000000
to reserve more free pages:
to reserve pages (+orig 0): echo 65 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
now enough free pages are reserved!
try to setup hugepage with:
level 0 - lowmem 0x8000000, biosmem 0x200000, highmem 0x0
level 1 - lowmem 0x0, biosmem 0x0, highmem 0x0
total_size 0x140200000
mmap ptr 0x0x7f4f5cbb3000 -> baseaddr 0x0x7f4f5cc00000
mmap 0x8000000@0x7f4f5cc00000
touch 64 pages with pagesz 0x200000
mmap 0x200000@0x7f505ca00000
touch 1 pages with pagesz 0x200000
really setup hugepage with:
level 0 - lowmem 0x8000000, biosmem 0x200000, highmem 0x0
level 1 - lowmem 0x0, biosmem 0x0, highmem 0x0
vm_init_vdevs
No correct pm notify channel given
pci init hostbridge
pci init lpc
pci init virtio-blk
pci init virtio-console
SW_LOAD: entry[0]: addr 0x0000000000000000, size 0x00000000000a0000, type 0x1
SW_LOAD: entry[1]: addr 0x0000000000100000, size 0x0000000007f00000, type 0x1
SW_LOAD: entry[2]: addr 0x000000003b800000, size 0x0000000004004000, type 0x2
SW_LOAD: entry[3]: addr 0x000000007f800000, size 0x0000000000800000, type 0x2
SW_LOAD: entry[4]: addr 0x00000000e0000000, size 0x0000000020000000, type 0x2
SW_LOAD: entry[5]: addr 0x0000000140000000, size 0x0000000000000000, type 0x2
SW_LOAD: entry[6]: addr 0x0000000000000000, size 0x0000000000000000, type 0x0
SW_LOAD: ovmf_entry 0xfffffff0
add_cpu
```
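As a side note on the `echo 65` figure in that log: the count follows directly from the memory sizes acrn-dm reports. A small sketch of the arithmetic (the sizes are the ones from this log; on another VM configuration the numbers would differ):

```shell
# The log asks for 65 hugepages because the VM needs
#   lowmem  0x8000000 (128 MiB)  ->  64 pages of 0x200000 (2 MiB)
#   biosmem 0x200000  (  2 MiB)  ->   1 page
lowmem=$((0x8000000))
biosmem=$((0x200000))
pagesz=$((0x200000))
echo $(( (lowmem + biosmem) / pagesz ))   # prints 65

# The reservation itself, as suggested by the log (needs root):
# echo 65 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```

The log's "now enough free pages are reserved!" line suggests the hugepage setup succeeded here, so the hang is likely elsewhere.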
I think the problem may be with the virtual BIOS (`/usr/share/acrn/bios/OVMF.fd`). Can you help me with this?
OS: OpenSUSE Tumbleweed

```
acrn-dm --version
DM version is: 2.4-unstable-94a980c9-dirty (daily tag:acrn-2021w05.5-180000p), build by root@2021-02-04 11:08:46
```
```
commit 94a980c923cb235ccdb7bf62c13ea86ff90aca05 (HEAD -> master, origin/master, origin/HEAD)
Author: Li Fei1 <fei1.li@intel.com>
Date:   Mon Feb 1 11:29:14 2021 +0800

commit 0b6840d1be927023d808b798fa6ae1ff8803ec68
Author: Xie, nanlin <nanlin.xie@intel.com>
Date:   Tue Feb 2 23:07:21 2021 +0800
```
```
...
[   62.856294] IRQ 125: no longer affine to CPU1
[   62.857574] smpboot: CPU 1 is now offline
[   63.861164] vhm: try to offline cpu 1 with lapicid 2
[   63.881516] IRQ 123: no longer affine to CPU2
[   63.883231] smpboot: CPU 2 is now offline
[   64.885449] vhm: try to offline cpu 2 with lapicid 1
[   64.903123] IRQ 128: no longer affine to CPU3
[   64.904414] smpboot: CPU 3 is now offline
[   65.907416] vhm: try to offline cpu 3 with lapicid 3
[   65.931156] vhm_dev_open: opening device node
[   65.931660] vhm-ioreq: init request buffer @ 00000000122dae95!
[   65.931663] vhm-ioreq: created ioreq client 1 for ioeventfd-1
[   65.931702] ACRN vhm ioeventfd init done!
[   65.931706] ACRN vhm irqfd init done!
[   65.931708] vhm: VM 1 created
[   66.000111] vhm-ioreq: created ioreq client 2 for acrndm
```