rockchip-linux / kernel

BSP kernel source

PCIe-port not working on RK3399 #116

Open k-a-z-u opened 6 years ago

k-a-z-u commented 6 years ago

We have a RockPro64 board here and tried to get the PCIe port working. Depending on the card that was inserted, the port was either simply disabled, or the kernel panicked during boot.

Dmesg when no card is in the slot (4.4.138-1094):

With cards like the Dell PowerEdge Perc 5i SAS RAID Controller, the kernel seldom boots, ignoring the card and giving the dmesg output from above; most of the time the kernel crashes with a couple of different stack traces.

This is a stack trace with this card using a mainline kernel. Unlike the BSP kernel, mainline continues booting, so we were able to copy it out:

[    7.044117] Hardware name: Pine64 RockPro64 (DT)
[    7.044656] pstate: 60000085 (nZCv daIf -PAN -UAO)
[    7.045223] pc : rockchip_pcie_rd_conf+0x18c/0x1f8 [pcie_rockchip_host]
[    7.045989] lr : rockchip_pcie_rd_conf+0x17c/0x1f8 [pcie_rockchip_host]
[    7.046748] sp : ffff00000e6f3730
[    7.047137] x29: ffff00000e6f3730 x28: 0000000000000001 
[    7.047754] x27: 0000000000000000 x26: 0000000000000000 
[    7.048372] x25: 0000000000000000 x24: ffff00000e6f3854 
[    7.048990] x23: ffff8000f1557800 x22: ffff00000e6f37b4 
[    7.049607] x21: ffff8000f0f5c398 x20: 0000000000000004 
[    7.050225] x19: ffff000010100000 x18: ffffffffffffffff 
[    7.050842] x17: 0000000000000000 x16: 0000000000000000 
[    7.051460] x15: 0000000000000000 x14: 000000000000024e 
[    7.052077] x13: 0000000000000001 x12: 0000000000000000 
[    7.052694] x11: 0000000000000001 x10: 0000000000000960 
[    7.064723] x9 : 0000000000000000 x8 : 0000000000000000 
[    7.076596] x7 : 0000000000000000 x6 : 0000000000000000 
[    7.088340] x5 : 0000000000100000 x4 : 0000000000c00008 
[    7.099979] x3 : ffff000013000000 x2 : 000000000080000a 
[    7.111498] x1 : ffff000013c00008 x0 : ffff000010000000 
[    7.122902] Process systemd-udevd (pid: 2317, stack limit = 0x(____ptrval____))
[    7.134519] Call trace:
[    7.145583]  rockchip_pcie_rd_conf+0x18c/0x1f8 [pcie_rockchip_host]
[    7.157044]  pci_bus_read_config_dword+0x84/0xe0
[    7.168278]  pci_bus_read_dev_vendor_id+0x2c/0x1a0
[    7.179422]  pci_scan_single_device+0x78/0xf8
[    7.190431]  pci_scan_slot+0x34/0xf0
[    7.201243]  pci_scan_child_bus_extend+0x50/0x290
[    7.212087]  pci_scan_bridge_extend+0x2ec/0x4e0
[    7.222814]  pci_scan_child_bus_extend+0x1e4/0x290
[    7.233469]  pci_scan_root_bus_bridge+0x58/0xd8
[    7.244022]  rockchip_pcie_probe+0x60c/0x750 [pcie_rockchip_host]
[    7.254833]  platform_drv_probe+0x50/0xa0
[    7.265419]  driver_probe_device+0x208/0x2e8
[    7.275995]  __driver_attach+0xd4/0xd8
[    7.286409]  bus_for_each_dev+0x74/0xc8
[    7.296708]  driver_attach+0x20/0x28
[    7.306848]  bus_add_driver+0x1ac/0x218
[    7.316892]  driver_register+0x60/0x110
[    7.326831]  __platform_driver_register+0x40/0x48
[    7.336604]  rockchip_pcie_driver_init+0x20/0x1000 [pcie_rockchip_host]
[    7.336621]  do_one_initcall+0x5c/0x178
[    7.355610]  do_init_module+0x58/0x1b0
[    7.364755]  load_module+0x1e14/0x2210
[    7.373778]  sys_finit_module+0xcc/0xe8
[    7.382700]  __sys_trace_return+0x0/0x4
[    7.391247] Code: 7100129f 54fff921 f94002a0 8b130013 (b9400273) 
[    7.399680] ---[ end trace 706cbd252753b386 ]---
foundObjects commented 6 years ago

I'm having exactly the same issue with 3 of the 4 cards I've plugged into my RockPro64. The only card that functions as expected is an Intel I350-T4; the three Mellanox cards I've attempted to use all cause PCIe initialization to fail.

I'll get a serial cable out later and dump a full boot log with the 4.4 kernel and mainline.

foundObjects commented 6 years ago

Bootlogs below.

Kernel 4.4.132-1075 (ayufan 0.7.9):

- Intel I350-T4 -- works perfectly
- Mellanox ConnectX-2 MPNA19-XTR -- crashes, trace included
- Mellanox ConnectX-2 MHQH29C -- "rockchip-pcie: probe of f8000000.pcie failed with error -110", doesn't crash

Kernel 4.18.0-rc8-1060 (ayufan):

- Intel I350-T4 -- works perfectly
- Mellanox ConnectX-2 MPNA19-XTR -- fails with call trace, doesn't crash
- Mellanox ConnectX-2 MHQH29C -- "rockchip-pcie: probe of f8000000.pcie failed with error -110", doesn't crash
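For reference, error -110 here is the kernel returning -ETIMEDOUT from the link-training/probe path; the value comes straight from the upstream errno table:

/* include/uapi/asm-generic/errno.h (upstream Linux) */
#define ETIMEDOUT 110 /* Connection timed out */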

hopkinskong commented 6 years ago

Having the same issue, cross ref: https://github.com/ayufan-rock64/linux-build/issues/254

luckcolors commented 6 years ago

Can we please have some updates on this issue?

foundObjects commented 6 years ago

Anyone? I'm about ready to sell my RockPro64 and just use x86_64 for my project.

I'm 100% willing to supply any debug information you might need, and I've got about a dozen different PCIe network cards here I can test with.

rich0 commented 5 years ago

I'm running into similar issues with an LSI HBA card. It works without issue in a standard x86 motherboard. On the RockPro64, gen1 training times out when the card is inserted and nothing shows up in lspci. If I plug in a USB3 host card it seems to work fine, so the PCIe slot itself is fine (gen2.1 rockpro64 board).

I built 4.4.154-1124-rockchip-ayufan with PCI_DEBUG enabled and captured this dmesg output with the LSI card installed: https://pastebin.com/SAZPFpXD

Now, this card is an 8x card in a 4x slot, so as an experiment I ran it through a 1x PCIe mining adapter. It works just fine in an x86 motherboard in this config. When I use this with the rockpro64 I get a new error: https://pastebin.com/QmyqyNNX

To try something different I rebuilt the kernel, extending the gen1 PCIe training timeout from 500 ms to 5 s (drivers/pci/host/pcie-rockchip.c line 619). The board boots normally without the LSI card, just giving the usual gen1 timeout message. If I boot it with the LSI card installed directly (no 1x adapter) I now get the error again: https://pastebin.com/vixEZKr4

So perhaps, without the 1x adapter, the link simply takes longer to train than the stock timeout allows, which previously prevented it from ever getting to the error. For some reason it trains faster with the 1x adapter.
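For anyone wanting to repeat the experiment: the change is just bumping the timeout constant. A minimal sketch of its shape, assuming the 4.4 driver polls link status in a jiffies-based loop around the quoted line; the register and macro names below are illustrative placeholders, not copied from drivers/pci/host/pcie-rockchip.c:

/* Sketch only: extend the gen1 link-training timeout from 500 ms to 5 s. */
timeout = jiffies + msecs_to_jiffies(5000);   /* was msecs_to_jiffies(500) */
while (time_before(jiffies, timeout)) {
        status = rockchip_pcie_read(rockchip, PCIE_CLIENT_BASIC_STATUS1);
        if (status & PCIE_LINK_UP)            /* link trained: stop waiting */
                return 0;
        msleep(20);                           /* poll every 20 ms */
}
dev_err(dev, "PCIe link training gen1 timeout!\n");
return -ETIMEDOUT;                            /* surfaces as error -110 */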

Apologies if this is an unrelated issue - if so I'm happy to create a new one. I'm willing to test anything at this point.

nuumio commented 5 years ago

Any news about this? I'm having the same problem with an LSI 9201 card. If it's of any help, here are a few logs from my Rockpro64.

With 4.4 kernel (ayufan's), serial console log (3 crashes):

With 4.20-rc6 (ayufan's + patch to disable mmc command queueing), serial console log (3 crashes) and dmesg from last attempt:

Edit: Like @rich0 above, I tested the card in an x86 setup (Ubuntu 18.04, 64-bit). There the card works in both PCIe3 16x and 1x slots and lspci shows (full output pastebin: https://pastebin.com/fuyiB4Dm): 05:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)

wombat commented 5 years ago

I am having the same issue with a Delock PCI Express Card > Mini PCIe adapter connected to a Telit LM960 LTE module: https://pastebin.com/FHMGRgVG

foundObjects commented 5 years ago

Has there been any motion on this at all? I'm sitting on what's effectively a useless board for my application (10GbE routing) at the moment since I can't bring up any of the PCIe NICs I've tested (I've tried around 10 different NICs at this point.) The only NICs I've managed to use successfully are Intel i350 and similar boards.

samex commented 5 years ago

I didn't debug why my rockpro64 is not booting with an LSI 3Ware 9650, but I bet I have the same issues some people are reporting above.

It would be nice to know where we can start to solve this problem.

nuumio commented 5 years ago

Updating my status: it looks like I got my LSI 9201 working, at least with one SSD drive. For some reason the PCIe driver seems to need some delay between training and bus scanning. I built a test kernel with this workaround on top of ayufan's latest 4.4: https://github.com/nuumio/linux-kernel/releases/tag/nuumio-4.4-pcie-scan-sleep-02

The most relevant change is: https://github.com/nuumio/linux-kernel/commit/5a65b17686002dc84d461bffa324a2cb68e67aee (in branch: https://github.com/nuumio/linux-kernel/commits/nuumio-4.4-pcie-scan-sleep).

Last time I tried this with a slightly older kernel, I got the controller up but it kept resetting the connection to the SSD every few seconds. Now, with more patches, it seems somewhat stable. I have no idea about the root cause, but hopefully this gives ideas about where the actual problem is. Curiously, the delay needed here is about the same as was needed earlier for deferring SDIO initialization to get the WiFi/BT module and PCIe working at the same time on the Rockpro64 (that was finally done so that the SDIO driver waits until PCIe is finished).
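The core of the workaround is just a sleep between link training and the bus scan. A minimal sketch of its shape, assuming a probe path like mainline's rockchip_pcie_probe(); the "bus-scan-delay-ms" device-tree property name appears in later logs in this thread, everything else here is illustrative:

/* Sketch: after link training succeeds, optionally wait before scanning. */
u32 delay_ms = 0;

of_property_read_u32(dev->of_node, "bus-scan-delay-ms", &delay_ms);
if (delay_ms) {
        dev_info(dev, "waiting %u ms before bus scan\n", delay_ms);
        msleep(delay_ms);                     /* give the endpoint time to settle */
}
err = pci_scan_root_bus_bridge(bridge);       /* proceed with enumeration */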

My current setup:

The 4.4-development branch seems quite active currently. I hope you get this one resolved too :)

rich0 commented 5 years ago

Just to comment for the record and for the benefit of the many others with this issue: nuumio's patch (which seems to be in line to be released on ayufan) fixes my issue. You just need to set a command line parameter to enable the delay (I haven't worked out the minimum required delay yet).

I was also having power issues which were solved by a 1x mining adapter. Using a 5A power supply is likely to address that problem, though nobody has tested the whole thing under heavy load yet.

I'll be doing actual testing of the drives/etc but for now I can get the HBA to show up in lspci. ayufan also enabled LSI HBAs in his kernels.
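I don't know the exact parameter name in ayufan's tree, but a delay like this is typically exposed either as the device-tree property mentioned above or as a module parameter, roughly like this (names hypothetical, not confirmed against that kernel):

/* Hypothetical module parameter; the real name in ayufan's kernel may differ. */
static uint bus_scan_delay_ms;                /* 0 = disabled (default) */
module_param(bus_scan_delay_ms, uint, 0444);
MODULE_PARM_DESC(bus_scan_delay_ms,
                 "Delay (ms) between PCIe link training and bus scan, 0 = off");

which would then be set on the kernel command line as pcie_rockchip_host.bus_scan_delay_ms=1000 (again, hypothetical name).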

RchGrav commented 5 years ago

I was also having power issues which were solved by a 1x mining adapter.

I bet the riser is shorting PCIe pin A1 to B17 to provide "presence" as a 1x card. See https://imgur.com/a/AJB71Ih

Shorting pin A1 (PRSNT1) to the second presence pin B31 (PRSNT2) would make a PCIe card detect as a 4x. (The presence pins are a little bit shorter.)

https://electronics.stackexchange.com/questions/201437/pcie-prsnt-signal-connection

Explanation: PCIe Cards short these presence pins to indicate the number of PCIe lanes / BUS width the connection will be using.

(Note: I don't have a RockPro64 to test this on yet, but the same thing would be happening on an Intel chipset without these pins jumped. Here is an example of shorting the pin on a riser, taken from my Plex server: https://imgur.com/a/4rl7T5I)

rich0 commented 5 years ago

The card works fine on a powered 16x riser cable as well, like this one: https://www.amazon.com/gp/product/B01NAE4O7I/

The only downside to this powered riser is that it seems to drive power back into the rockpro64 such that it remains powered on even after disconnected from the power supply. I don't generally run it this way as I am not certain that drawing current in this way isn't harmful.

So, aside from the likely power issue, the current ayufan kernels address my issues.

foundObjects commented 5 years ago

I have similar back-powering issues just using a USB UART; I've found I have to disconnect everything from the board when powering it down.

I rarely shut mine off so it hasn't been much of a problem but if I were shutting off regularly I'd put everything powered on one power strip with a switch and just use that.


StuartIanNaylor commented 5 years ago

I have the same issue with a rockpi4b:

[    1.489159] of_get_named_gpiod_flags: parsed 'gpio' property of node '/vcc3v3-pcie-regulator[0]' - status (0)
[    1.489201] reg-fixed-voltage vcc3v3-pcie-regulator: Looking up vin-supply from device tree
[    1.489236] vcc3v3_pcie: supplied by vcc3v3_sys
[    1.489697] vcc3v3_pcie: at 3300 mV 
[    1.489857] reg-fixed-voltage vcc3v3-pcie-regulator: vcc3v3_pcie supplying 0uV
[    1.623989] phy phy-pcie-phy.9: Looking up phy-supply from device tree
[    1.623999] phy phy-pcie-phy.9: Looking up phy-supply property in node /pcie-phy failed
[    1.625451] rockchip-pcie f8000000.pcie: GPIO lookup for consumer ep
[    1.625461] rockchip-pcie f8000000.pcie: using device tree for GPIO lookup
[    1.625490] of_get_named_gpiod_flags: parsed 'ep-gpios' property of node '/pcie@f8000000[0]' - status (0)
[    1.625725] rockchip-pcie f8000000.pcie: Looking up vpcie3v3-supply from device tree
[    1.625736] rockchip-pcie f8000000.pcie: Looking up vpcie3v3-supply property in node /pcie@f8000000 failed
[    1.625748] rockchip-pcie f8000000.pcie: no vpcie3v3 regulator found
[    1.626340] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply from device tree
[    1.626350] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply property in node /pcie@f8000000 failed
[    1.626360] rockchip-pcie f8000000.pcie: no vpcie1v8 regulator found
[    1.626951] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply from device tree
[    1.626960] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply property in node /pcie@f8000000 failed
[    1.626970] rockchip-pcie f8000000.pcie: no vpcie0v9 regulator found
[    2.172391] rockchip-pcie f8000000.pcie: PCIe link training gen1 timeout!
[    2.173158] rockchip-pcie: probe of f8000000.pcie failed with error -110

Sometimes lspci returns nothing; other times I will boot and:

rock@linux:/boot$ lspci
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100
01:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
rock@linux:~$ dmesg | grep pci
[    1.489111] of_get_named_gpiod_flags: parsed 'gpio' property of node '/vcc3v3-pcie-regulator[0]' - status (0)
[    1.489153] reg-fixed-voltage vcc3v3-pcie-regulator: Looking up vin-supply from device tree
[    1.489188] vcc3v3_pcie: supplied by vcc3v3_sys
[    1.489649] vcc3v3_pcie: at 3300 mV
[    1.489808] reg-fixed-voltage vcc3v3-pcie-regulator: vcc3v3_pcie supplying 0uV
[    1.623688] phy phy-pcie-phy.9: Looking up phy-supply from device tree
[    1.623698] phy phy-pcie-phy.9: Looking up phy-supply property in node /pcie-phy failed
[    1.625172] rockchip-pcie f8000000.pcie: GPIO lookup for consumer ep
[    1.625182] rockchip-pcie f8000000.pcie: using device tree for GPIO lookup
[    1.625211] of_get_named_gpiod_flags: parsed 'ep-gpios' property of node '/pcie@f8000000[0]' - status (0)
[    1.625452] rockchip-pcie f8000000.pcie: Looking up vpcie3v3-supply from device tree
[    1.625462] rockchip-pcie f8000000.pcie: Looking up vpcie3v3-supply property in node /pcie@f8000000 failed
[    1.625475] rockchip-pcie f8000000.pcie: no vpcie3v3 regulator found
[    1.626067] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply from device tree
[    1.626077] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply property in node /pcie@f8000000 failed
[    1.626087] rockchip-pcie f8000000.pcie: no vpcie1v8 regulator found
[    1.626675] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply from device tree
[    1.626684] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply property in node /pcie@f8000000 failed
[    1.626694] rockchip-pcie f8000000.pcie: no vpcie0v9 regulator found
[    1.810499] PCI host bridge /pcie@f8000000 ranges:
[    1.812367] rockchip-pcie f8000000.pcie: PCI host bridge to bus 0000:00
[    1.813004] pci_bus 0000:00: root bus resource [bus 00-1f]
[    1.813531] pci_bus 0000:00: root bus resource [mem 0xfa000000-0xfbdfffff]
[    1.814190] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0xfbe00000-0xfbefffff])
[    1.815130] pci 0000:00:00.0: [1d87:0100] type 01 class 0x060400
[    1.815242] pci 0000:00:00.0: supports D1
[    1.815253] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    1.815619] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    1.816531] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-1f] (conflicts with (null) [bus 00-1f])
[    1.816574] pci 0000:01:00.0: [1b21:0612] type 00 class 0x010601
[    1.816628] pci 0000:01:00.0: reg 0x10: initial BAR value 0x00000000 invalid
[    1.817300] pci 0000:01:00.0: reg 0x10: [io  size 0x0008]
[    1.817321] pci 0000:01:00.0: reg 0x14: initial BAR value 0x00000000 invalid
[    1.817993] pci 0000:01:00.0: reg 0x14: [io  size 0x0004]
[    1.818013] pci 0000:01:00.0: reg 0x18: initial BAR value 0x00000000 invalid
[    1.818685] pci 0000:01:00.0: reg 0x18: [io  size 0x0008]
[    1.818705] pci 0000:01:00.0: reg 0x1c: initial BAR value 0x00000000 invalid
[    1.819377] pci 0000:01:00.0: reg 0x1c: [io  size 0x0004]
[    1.819397] pci 0000:01:00.0: reg 0x20: initial BAR value 0x00000000 invalid
[    1.820069] pci 0000:01:00.0: reg 0x20: [io  size 0x0020]
[    1.820090] pci 0000:01:00.0: reg 0x24: [mem 0x00000000-0x000001ff]
[    1.820111] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
[    1.828295] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    1.828341] pci 0000:00:00.0: BAR 8: assigned [mem 0xfa000000-0xfa0fffff]
[    1.828996] pci 0000:01:00.0: BAR 6: assigned [mem 0xfa000000-0xfa00ffff pref]
[    1.829687] pci 0000:01:00.0: BAR 5: assigned [mem 0xfa010000-0xfa0101ff]
[    1.830340] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0020]
[    1.830939] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0020]
[    1.831570] pci 0000:01:00.0: BAR 0: no space for [io  size 0x0008]
[    1.832169] pci 0000:01:00.0: BAR 0: failed to assign [io  size 0x0008]
[    1.832817] pci 0000:01:00.0: BAR 2: no space for [io  size 0x0008]
[    1.833416] pci 0000:01:00.0: BAR 2: failed to assign [io  size 0x0008]
[    1.834048] pci 0000:01:00.0: BAR 1: no space for [io  size 0x0004]
[    1.834646] pci 0000:01:00.0: BAR 1: failed to assign [io  size 0x0004]
[    1.835278] pci 0000:01:00.0: BAR 3: no space for [io  size 0x0004]
[    1.835877] pci 0000:01:00.0: BAR 3: failed to assign [io  size 0x0004]
[    1.836525] pci 0000:00:00.0: PCI bridge to [bus 01]
[    1.837006] pci 0000:00:00.0:   bridge window [mem 0xfa000000-0xfa0fffff]
[    1.837725] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[    1.838611] pcieport 0000:00:00.0: Signaling PME through PCIe PME interrupt
[    1.839275] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt
[    1.839901] pcie_pme 0000:00:00.0:pcie01: service driver pcie_pme loaded
[    1.840036] aer 0000:00:00.0:pcie02: service driver aer loaded
[    2.001348] ehci-pci: EHCI PCI platform driver

It seems completely spurious: sometimes I hit runs of it working, sometimes runs of it not. It's like when two services clash and, by chance, change order at times.

[Edit] I think the bridge has died on me, as now, with or without the card, I cannot get any listing over multiple tries.

prusnak commented 4 years ago

I see the same issue with the following setup:

[    2.172391] rockchip-pcie f8000000.pcie: PCIe link training gen1 timeout!
[    2.173158] rockchip-pcie: probe of f8000000.pcie failed with error -110
PhoenixMage commented 4 years ago

I see similar issues to this on a Rock Pi 4 running kernel 5.6.7: PCIe fails to probe (error -110), and hence the NVMe M.2 drive is not detected.

The issue is intermittent as the NVMe drive is detected in linux about 5% of the time.

If I use the u-boot provided by radxa (rather than mainline with rockchip patches), detection becomes 100% reliable, so I wonder if some form of PCIe initialisation happening in that u-boot resolves the issue. I would still like to see my NVMe working with a mainline kernel and mainline u-boot.

jkoppen-headsfirst commented 4 years ago

Here too, a boot freeze when a PCIe adapter is present (PCIe x1 to Mini PCIe adapter with a Coral Edge TPU). This is my output without the adapter:

dmesg | grep pci
[    1.473348] of_get_named_gpiod_flags: parsed 'gpio' property of node '/vcc3v3-pcie-regulator[0]' - status (0)
[    1.473399] reg-fixed-voltage vcc3v3-pcie-regulator: Looking up vin-supply from device tree
[    1.473442] vcc3v3_pcie: supplied by dc_12v
[    1.473509] vcc3v3_pcie: 3300 mV
[    1.473666] reg-fixed-voltage vcc3v3-pcie-regulator: vcc3v3_pcie supplying 3300000uV
[    1.892811] phy phy-pcie-phy.5: Looking up phy-supply from device tree
[    1.892821] phy phy-pcie-phy.5: Looking up phy-supply property in node /pcie-phy failed
[    1.894568] rockchip-pcie f8000000.pcie: GPIO lookup for consumer ep
[    1.894578] rockchip-pcie f8000000.pcie: using device tree for GPIO lookup
[    1.894607] of_get_named_gpiod_flags: parsed 'ep-gpios' property of node '/pcie@f8000000[0]' - status (0)
[    1.894856] rockchip-pcie f8000000.pcie: Looking up vpcie3v3-supply from device tree
[    1.894949] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply from device tree
[    1.894960] rockchip-pcie f8000000.pcie: Looking up vpcie1v8-supply property in node /pcie@f8000000 failed
[    1.894974] rockchip-pcie f8000000.pcie: no vpcie1v8 regulator found
[    1.895002] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply from device tree
[    1.895013] rockchip-pcie f8000000.pcie: Looking up vpcie0v9-supply property in node /pcie@f8000000 failed
[    1.895025] rockchip-pcie f8000000.pcie: no vpcie0v9 regulator found
[    1.895049] rockchip-pcie f8000000.pcie: bus-scan-delay-ms in device tree is 1000 ms
[    1.895084] rockchip-pcie f8000000.pcie: missing "memory-region" property
[    1.895121] PCI host bridge /pcie@f8000000 ranges:
[    1.942370] rockchip-pcie f8000000.pcie: invalid power supply
[    2.442415] rockchip-pcie f8000000.pcie: PCIe link training gen1 timeout!
[    2.442463] rockchip-pcie f8000000.pcie: deferred probe failed
[    2.442725] rockchip-pcie: probe of f8000000.pcie failed with error -110
[    2.737720] ehci-pci: EHCI PCI platform driver
[    4.379833] vcc3v3_pcie: disabling

clarkis117 commented 4 years ago

Here's my output from a rockpro64 with a Compex WLE1216VX attached to the PCIe slot:

root@FarmBox:~# dmesg
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.4.52 (builder@buildhost) (gcc version 8.4.0 (OpenWrt GCC 8.4.0 r14101-5d8fded26a)) #0 SMP PREEMPT Sun Aug 9 12:01:52 2020

root@FarmBox:~# dmesg | grep pci
[    0.286897] vcc3v3_pcie: supplied by vcc12v_dcin
[    0.314101] rockchip-pcie f8000000.pcie: no vpcie1v8 regulator found
[    0.314139] rockchip-pcie f8000000.pcie: no vpcie0v9 regulator found
[    0.868307] rockchip-pcie f8000000.pcie: PCIe link training gen1 timeout!
[    0.868506] rockchip-pcie: probe of f8000000.pcie failed with error -110
[    0.990147] ehci-pci: EHCI PCI platform driver

StuartIanNaylor commented 4 years ago

@clarkis117 You probably want to check the difference between M.2 & mini PCIe. The RockPro64 is M.2, is it not?

clarkis117 commented 4 years ago

@StuartIanNaylor it is a mini PCIe card, which I have in a mini PCIe to PCIe adapter. The rockpro64 has a 4x PCIe card slot on its board. It may be a power design issue, as I was able to use an Intel wifi adapter in the same setup, and the compex card works in an x86 PC with the same adapter. The compex card has a TDP greater than 10 watts.

nullr0ute commented 3 years ago

So one thing I found while testing is that if you enable CONFIG_DEBUG_SHIRQ it shows up some issues in the driver. Some details here: https://patchwork.kernel.org/project/linux-rockchip/patch/1502353273-123788-1-git-send-email-shawn.lin@rock-chips.com/
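To try that yourself, enable the option in the kernel config and rebuild; CONFIG_DEBUG_SHIRQ is a real upstream option under "Kernel hacking" (lib/Kconfig.debug):

# .config fragment: fire a spurious interrupt at shared-IRQ handlers on
# registration/teardown to flush out handlers that misbehave.
CONFIG_DEBUG_SHIRQ=y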

vukitoso commented 2 years ago

@jkoppen-headsfirst

Here too a boot freeze when a PCIe adapter is present (PCIe x1 to Mini PCIe adapter with Coral Edge TPU).

Hello. Have you solved the problem with the "Coral Edge TPU"?

nullr0ute commented 2 years ago

@jkoppen-headsfirst

Here too a boot freeze when a PCIe adapter is present (PCIe x1 to Mini PCIe adapter with Coral Edge TPU).

Hello. Have you solved the problem with the "Coral Edge TPU"?

I've had reports the Coral Edge TPU does work on Fedora. Note the Edge TPU PCIe driver which was in staging upstream has now been dropped from the upstream kernel so the testing was by someone that built their own kernel to bring those drivers back.

vukitoso commented 2 years ago

@nullr0ute I don't have a Coral Edge yet; I'm still picking out a board. Have you tried installing the drivers according to the instructions at https://coral.ai/docs/m2/get-started? The drivers are not in the kernel; they are installed separately.

daiaji commented 2 years ago

https://gitlab.manjaro.org/manjaro-arm/packages/core/linux/-/issues/34 There are some compatibility issues: a part that works with an SSD on a PC will cause a kernel freeze on the RK3399. https://gist.github.com/daiaji/eafa111f4d6dd0079561f16107e555d0 The u-boot also seems to have some faults.