dakota opened this issue 3 years ago
Really hope the PCIe card can be made to work, but I'm going to pass through an M.2 ASMedia USB 3.0 card instead to run the USB Coral device. Passing just the device via ESXi proved to be unreliable: lots of USB bus errors after running Frigate for a while, and it would take down the entire VM and make ESXi unresponsive until I physically unplugged the USB Coral device from the server.
I didn't believe everything I read, but the cable was my problem. You need an excellent-quality cable, and the ones from Amazon did not work. Once I used the one from my M.2-to-USB enclosure, I never had any more problems with ESXi. I also tried a native port and then added a PCI card; both failed because of the cable :-(
@lamw Is there any update? I really don't want to move from ESXi to Proxmox only because of that issue.
@Real-Taz Sorry, for some reason I thought I had responded, but it looks like that was another GH thread. Please see https://github.com/blakeblackshear/frigate/discussions/3604#discussioncomment-6867730. Ultimately, the fix needs to come from the Google/Coral team; I filed a bug back in August (https://github.com/google-coral/libedgetpu/issues/48) with no response :(
Hi, I had the same problem on ESXi. I tried this on two machines.
First I tried the USB Coral. I somehow managed to work around all the issues on ESXi, but after a few weeks it became unstable and crashed, and the whole setup had to be repeated; a few days later, the same thing happened again.
So I ordered the Mini PCIe Coral variant and tried it on the higher-end PC via a Mini PCIe-to-PCIe adapter card, but with no luck. I was able to pass it through to the VM, but inside the VM the drivers wouldn't load, as described in the first comment in this thread.
Then I also tried it on the mini PC (this one: https://www.aliexpress.com/item/1005004848553416.html) because it has a direct Mini PCIe slot, but I ended up in the same place as with the higher-end PC: the same issue.
So I ordered a new 128 GB NVMe drive to try Proxmox instead of ESXi (I didn't want to wipe the old 128 GB drive in case this turned out to be the same dead end).
BUT! It seems I got past this problem. :) The mini PC now runs a Proxmox host with pfSense as the main router, and the Frigate VM already has the Mini PCIe Coral passed through, and the drivers seem to work here!
I will update you guys if I manage to get it working with Frigate. Haven't gotten that far yet 😉
I've been using an M.2 Accelerator B+M key on Proxmox for months now. The only difference from @jakubsuchybio's setup is that I'm not using a VM but an LXC container. I've never tried a VM, but in case that doesn't work, this is a fallback scenario.
I've read about the LXC approach, but as I'm more familiar with VMs, I started that way. Great to know that LXC works fine 👍
The issue occurs when using PCI passthrough to a virtual machine. ESXi is affected, and I wish it were working. Xen seems to have the same issue: https://xcp-ng.org/forum/topic/6304/google-coral-tpu-pcie-passthrough-woes/20. Proxmox via LXC is not impacted, because you install the driver on the host itself, but I don't want to switch to Proxmox, so... :D
The only problem I'm having with LXC is that with every Proxmox kernel update I have to reinstall the kernel headers, because the driver lives on the host (as you stated). I wish I had never started with ESXi; I will never switch back!
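For reference, the post-kernel-update reinstall on a Proxmox host usually looks something like the following. This is a sketch, assuming the `gasket-dkms` driver package from Google's Coral repository is installed on the host:

```shell
# Install headers matching the currently running Proxmox kernel
apt update
apt install -y pve-headers-$(uname -r)

# Rebuild all DKMS modules (including gasket/apex) against the new kernel
dkms autoinstall

# Load the driver and verify the device node is back
modprobe apex
ls -l /dev/apex_0
```

If the rebuild succeeds, the `/dev/apex_0` node should reappear without a further reboot.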
Alright, I can confirm that the mPCIe Coral works in a Proxmox VM.
I jumped ship from ESXi to Proxmox and it's working for me too.
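On Proxmox, passing the Coral PCIe device into a VM is typically done with `qm set`. A sketch, assuming a q35 machine type, that the Coral sits at host address `01:00.0`, and a hypothetical VM ID of 100:

```shell
# Find the Coral's PCI address on the Proxmox host
# (the PCIe Coral identifies as Global Unichip Corp., vendor ID 1ac1)
lspci -nn | grep -i 1ac1

# Pass it through to VM 100 as a PCIe device
qm set 100 -hostpci0 0000:01:00.0,pcie=1

# Inside the guest, the device should then show up in lspci,
# and /dev/apex_0 should appear once the gasket/apex drivers are installed there.
```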
The only problem is that I have an Intel N5105, which has an instruction bug that makes the VM in Proxmox freeze. More about it here: https://forum.proxmox.com/threads/vm-freezes-irregularly.111494/page-1
Nothing I tried from that thread helped, so I'm stuck with the freezing, and it's not usable for my case.
Seems my decision to move from ESXi to Proxmox came just in time: https://kb.vmware.com/s/article/2107518?lang=en_US
I'm using the Mini PCIe version with this Ableconn adapter, which has multiple reports of working. It's in a Dell PowerEdge T420 running ESXi, being passed through to an Ubuntu 20.04 VM. I followed the official guide to install the drivers.
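For context, the official Coral PCIe driver install on Debian/Ubuntu is roughly the following (reproduced from memory of Google's getting-started guide; verify against the current docs):

```shell
# Add Google's Coral package repository and signing key
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
  | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

# PCIe (gasket/apex) kernel driver plus the Edge TPU runtime
sudo apt-get install gasket-dkms libedgetpu1-std

# After a reboot, the device should appear as /dev/apex_0
ls -l /dev/apex_0
```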
The `/dev/apex_0` device is however not showing. Any ideas? (A bunch of debug info below, collected with:)

```shell
lscpu
uname -a
dmesg | grep apex
lspci
lspci -vvv
modinfo gasket
```
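When `/dev/apex_0` is missing, a few extra checks can narrow the problem down. A sketch, using the `gasket`/`apex` module names from the Coral driver package:

```shell
# Are the driver modules actually loaded?
lsmod | grep -E 'gasket|apex'

# Any gasket/apex errors during boot? (grep both names, not just apex)
sudo dmesg | grep -iE 'gasket|apex'

# Is the TPU visible on the bus? (PCIe Coral uses vendor ID 1ac1, Global Unichip)
lspci -nn | grep 1ac1

# If the module exists (modinfo works) but isn't loaded, try loading it
sudo modprobe apex
ls -l /dev/apex_0
```

If `lspci` shows the device but the modules won't load, that points at the driver build (DKMS/kernel headers); if the device is absent from `lspci`, the passthrough itself failed.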