vincentmli / BPFire

BPFire development tree
37 stars 3 forks

BPFire With XDP #2

Open · Eykalzz opened 8 months ago

Eykalzz commented 8 months ago

Does anyone have an IPFire ISO with XDP?

vincentmli commented 7 months ago

This works the same way as for TCP ports: edit /var/ipfire/ddos/udp_ports and change or add the ports you want, then make sure the file's ownership and permissions look like this:

# chown nobody:nobody /var/ipfire/ddos/udp_ports 

# ls -l /var/ipfire/ddos/udp_ports 
-rw-r--r-- 1 nobody nobody 184 Apr 23 14:54 udp_ports

Show the current ports:

[root@bpfire ddos]# cat /var/ipfire/ddos/udp_ports 
domain      53/udp      # Domain Name Server
game1       10408/udp   # Game server
sip         5060/udp    # Voice over Internet
siptls      5061/udp    # Voice over Internet TLS
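
For example, to protect another hypothetical game service on UDP port 27015 (the name, port, and comment below are illustrative; only the format matches the entries above), you could append a line and re-apply the ownership:

# echo 'game2       27015/udp   # Another game server' >> /var/ipfire/ddos/udp_ports
# chown nobody:nobody /var/ipfire/ddos/udp_ports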
Eykalzz commented 7 months ago

Nice, I'll try it first. Thanks for the help.

vincentmli commented 6 months ago

@Eykalzz just checking in to see if you have run into any issues :)

Eykalzz commented 6 months ago

Hi bro, everything is okay for now. I haven't tried UDP yet; I just tried TCP first.

vincentmli commented 6 months ago

Have you put it into production use for TCP already?

Eykalzz commented 6 months ago

Yes, I did. My game is now running behind XDP on IPFire.

vincentmli commented 6 months ago

This is great news: you are the first BPFire/IPFire user with XDP in production use.

Are you running it on Windows Hyper-V? If so, can you share command output such as xdp-loader status? The bottom of the XDP UI page also shows the result. I am curious whether XDP is in generic mode or native mode; native mode means the Hyper-V NIC is supported by XDP natively, with better performance.
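
For reference, a sketch of what xdp-loader status output can look like (the interface name, program names, IDs, and tags below are made up; only the Mode column matters for this question):

# xdp-loader status
CURRENT XDP PROGRAM STATUS:

Interface        Prio  Program name      Mode     ID   Tag               Chain actions
--------------------------------------------------------------------------------------
red0                   xdp_dispatcher    skb      89   94d5f00c20184d17
 =>              50     xdp_prog                  91   57cd311f2e27366b  XDP_PASS

Mode skb means generic XDP; native means the NIC driver runs the XDP program itself.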

Eykalzz commented 6 months ago

Sure, here are the screenshots:

image

vincentmli commented 6 months ago

OK, that is XDP generic mode; it looks like the Hyper-V virtual NIC is not natively supported by XDP. But that is fine, no problem.
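
If you want to double-check which driver backs a NIC, ethtool reports it directly (the interface name red0 here is just an example):

# ethtool -i red0
driver: virtio_net

A driver like virtio_net supports native XDP; a driver without native support falls back to generic mode.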

Eykalzz commented 6 months ago

This screenshot is from Proxmox. If you want the Hyper-V one, I'll send it later; I'm away from home right now.

vincentmli commented 6 months ago

If I understand Proxmox correctly, it should use the virtio virtual NIC driver, which is natively supported by XDP. When you get time, run lspci -vvv | grep -i eth and share the result; I can tell from it whether Proxmox is using the virtio NIC driver for BPFire/IPFire or not.

Eykalzz commented 6 months ago

image

Like this? This is Proxmox.

Eykalzz commented 6 months ago

lspci -vvv | grep -i eth

image

This one is Hyper-V. Is there a problem on Hyper-V?

vincentmli commented 6 months ago

@Eykalzz sorry, I missed your note. The Intel 82540EM may be old enough that it is only supported by generic XDP, not native XDP. It may depend on how you provisioned the Proxmox guest: for better performance, you can choose the VirtIO network type in Proxmox. I see someone asked a similar question in the Proxmox forum (https://forum.proxmox.com/threads/e1000-vs-virtio.80553/). I think you can try the VirtIO network type; it is supported natively by XDP for better performance.
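
If it helps, a sketch of switching the NIC model from the Proxmox host (the VM ID 100 and bridge vmbr0 are assumptions; the same change can be made in the VM's Hardware tab, with the VM shut down first):

# qm set 100 --net0 virtio,bridge=vmbr0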

vincentmli commented 6 months ago

It seems lspci -vvv | grep -i eth did not capture the virtual network type info. You ran the command on the BPFire OS, right? What about the full lspci -vvv output?
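
One likely reason: grep -i eth only keeps the lines that contain "eth", which drops the "Kernel driver in use:" lines that actually name the driver. Something like the following (a hedged alternative, not a BPFire-specific command) keeps a few lines of context after each Ethernet device:

# lspci -k | grep -A 3 -i ethernet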

Eykalzz commented 6 months ago

How do I do this?

Eykalzz commented 6 months ago

lspci -vvv

[root@Eykalzz ddos]# lspci -vvv
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) Subsystem: Red Hat, Inc. Qemu virtual machine Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] Subsystem: Red Hat, Inc. Qemu virtual machine Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- SERR- <PERR- INTx-

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [ISA Compatibility mode-only controller, supports bus mastering]) Subsystem: Red Hat, Inc. Qemu virtual machine Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Region 0: I/O ports at 01f0 [size=8] Region 1: I/O ports at 03f4 Region 2: I/O ports at 0170 [size=8] Region 3: I/O ports at 0374 Region 4: I/O ports at f0a0 [size=16] Kernel driver in use: ata_piix Kernel modules: pata_acpi, ata_generic

00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) (prog-if 00 [UHCI]) Subsystem: Red Hat, Inc. QEMU Virtual Machine Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin D routed to IRQ 11 Region 4: I/O ports at f040 [size=32] Kernel driver in use: uhci_hcd

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) Subsystem: Red Hat, Inc. Qemu virtual machine Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- SERR- <PERR- INTx- Interrupt: pin A routed to IRQ 9 Kernel driver in use: piix4_smbus Kernel modules: i2c_piix4

00:02.0 VGA compatible controller: Device 1234:1111 (rev 02) (prog-if 00 [VGA controller]) Subsystem: Red Hat, Inc. Device 1100 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Region 0: Memory at fc000000 (32-bit, prefetchable) [size=16M] Region 2: Memory at fea90000 (32-bit, non-prefetchable) [size=4K] Expansion ROM at 000c0000 [disabled] [size=128K] Kernel modules: bochs

00:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon Subsystem: Red Hat, Inc. Virtio memory balloon Physical Slot: 3 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 10 Region 0: I/O ports at f000 [size=64] Region 4: Memory at fd600000 (64-bit, prefetchable) [size=16K] Capabilities: [84] Vendor Specific Information: VirtIO: BAR=0 offset=00000000 size=00000000 Capabilities: [70] Vendor Specific Information: VirtIO: Notify BAR=4 offset=00003000 size=00001000 multiplier=00000004 Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg BAR=4 offset=00002000 size=00001000 Capabilities: [50] Vendor Specific Information: VirtIO: ISR BAR=4 offset=00001000 size=00001000 Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg BAR=4 offset=00000000 size=00001000 Kernel driver in use: virtio-pci

00:05.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 11 Region 0: Memory at fea91000 (64-bit, non-prefetchable) [size=256] Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: e000-efff [size=4K] [16-bit] Memory behind bridge: fe800000-fe9fffff [size=2M] [32-bit] Prefetchable memory behind bridge: fd400000-fd5fffff [size=2M] [32-bit] Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [48] Slot ID: 0 slots, First+, chassis 03 Capabilities: [40] Hot-plug capable

00:12.0 Ethernet controller: Red Hat, Inc. Virtio network device Subsystem: Red Hat, Inc. Virtio network device Physical Slot: 18 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 10 Region 0: I/O ports at f060 [size=32] Region 1: Memory at fea92000 (32-bit, non-prefetchable) [size=4K] Region 4: Memory at fd604000 (64-bit, prefetchable) [size=16K] Expansion ROM at fea00000 [disabled] [size=256K] Capabilities: [98] MSI-X: Enable+ Count=4 Masked- Vector table: BAR=1 offset=00000000 PBA: BAR=1 offset=00000800 Capabilities: [84] Vendor Specific Information: VirtIO: BAR=0 offset=00000000 size=00000000 Capabilities: [70] Vendor Specific Information: VirtIO: Notify BAR=4 offset=00003000 size=00001000 multiplier=00000004 Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg BAR=4 offset=00002000 size=00001000 Capabilities: [50] Vendor Specific Information: VirtIO: ISR BAR=4 offset=00001000 size=00001000 Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg BAR=4 offset=00000000 size=00001000 Kernel driver in use: virtio-pci

00:13.0 Ethernet controller: Red Hat, Inc. Virtio network device Subsystem: Red Hat, Inc. Virtio network device Physical Slot: 19 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 10 Region 0: I/O ports at f080 [size=32] Region 1: Memory at fea93000 (32-bit, non-prefetchable) [size=4K] Region 4: Memory at fd608000 (64-bit, prefetchable) [size=16K] Expansion ROM at fea40000 [disabled] [size=256K] Capabilities: [98] MSI-X: Enable+ Count=4 Masked- Vector table: BAR=1 offset=00000000 PBA: BAR=1 offset=00000800 Capabilities: [84] Vendor Specific Information: VirtIO: BAR=0 offset=00000000 size=00000000 Capabilities: [70] Vendor Specific Information: VirtIO: Notify BAR=4 offset=00003000 size=00001000 multiplier=00000004 Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg BAR=4 offset=00002000 size=00001000 Capabilities: [50] Vendor Specific Information: VirtIO: ISR BAR=4 offset=00001000 size=00001000 Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg BAR=4 offset=00000000 size=00001000 Kernel driver in use: virtio-pci

00:1e.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Interrupt: pin A routed to IRQ 10 Region 0: Memory at fea94000 (64-bit, non-prefetchable) [size=256] Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: d000-dfff [size=4K] [16-bit] Memory behind bridge: fe600000-fe7fffff [size=2M] [32-bit] Prefetchable memory behind bridge: fd200000-fd3fffff [size=2M] [32-bit] Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [48] Slot ID: 0 slots, First+, chassis 01 Capabilities: [40] Hot-plug capable

00:1f.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Interrupt: pin A routed to IRQ 11 Region 0: Memory at fea95000 (64-bit, non-prefetchable) [size=256] Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: c000-cfff [size=4K] [16-bit] Memory behind bridge: fe400000-fe5fffff [size=2M] [32-bit] Prefetchable memory behind bridge: fd000000-fd1fffff [size=2M] [32-bit] Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [48] Slot ID: 0 slots, First+, chassis 02 Capabilities: [40] Hot-plug capable

01:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI Subsystem: Red Hat, Inc. Virtio SCSI Physical Slot: 1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 10 Region 0: I/O ports at e000 [size=64] Region 1: Memory at fe800000 (32-bit, non-prefetchable) [size=4K] Region 4: Memory at fd400000 (64-bit, prefetchable) [size=16K] Capabilities: [98] MSI-X: Enable+ Count=11 Masked- Vector table: BAR=1 offset=00000000 PBA: BAR=1 offset=00000800 Capabilities: [84] Vendor Specific Information: VirtIO: BAR=0 offset=00000000 size=00000000 Capabilities: [70] Vendor Specific Information: VirtIO: Notify BAR=4 offset=00003000 size=00001000 multiplier=00000004 Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg BAR=4 offset=00002000 size=00001000 Capabilities: [50] Vendor Specific Information: VirtIO: ISR BAR=4 offset=00001000 size=00001000 Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg BAR=4 offset=00000000 size=00001000 Kernel driver in use: virtio-pci

[root@Eykalzz ddos]#

vincentmli commented 6 months ago

@Eykalzz is the lspci -vvv output above from the Hyper-V guest? If so, the Virtio network device is provisioned from Hyper-V, which is good: it is natively supported by XDP.

00:13.0 Ethernet controller: Red Hat, Inc. Virtio network device
Subsystem: Red Hat, Inc. Virtio network device
Physical Slot: 19

I am not familiar with Proxmox, but I think there should be an option when you provision the guest to set the network type to a VirtIO (paravirtualized) network interface.

Also, by the way, I have added a load balancer feature to BPFire, so you can run multiple game servers offering the same game service and set up a load balancer on BPFire to spread the traffic across them. If one of your game servers goes down or needs maintenance, the others keep serving. See a quick demo here: https://youtu.be/80jumLkhDWo?si=ZisD7p7SSPUPrb_E

Eykalzz commented 6 months ago

What command do I use to add a load balancer? Can you help me?

vincentmli commented 6 months ago

@Eykalzz here is the loxicmd documentation for creating a load balancer; I could add a WebUI feature for it in the future: https://loxilb-io.github.io/loxilbdocs/cmd/#how-to-run-and-configure-loxilb
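
From the linked docs, the general shape of the command is like this (the VIP 20.20.20.1, port 10408, and the two endpoint IPs are placeholders for your own game servers):

# loxicmd create lb 20.20.20.1 --tcp=10408:10408 --endpoints=31.31.31.1:1,32.32.32.1:1

This makes 20.20.20.1:10408 the load-balanced front end and spreads connections across the two endpoints, each with weight 1.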

vincentmli commented 2 months ago

@Eykalzz just following up on your XDP SYNProxy deployment: is it still running OK?

Eykalzz commented 2 months ago

Does BPFire have a new update now? Where can I download the latest version?

vincentmli commented 1 month ago

@Eykalzz a lot of features have been added. Here is the download link: https://drive.google.com/drive/folders/1HPJTWP6wi5gPd5gyiiKvIhWipqguptzZ?usp=drive_link. Feel free to reach out to me if you have any problems.

vincentmli commented 1 month ago

@Eykalzz I also relocated the download server to Singapore. When you get a chance, could you try downloading the ISO from https://bpfire.net/download/?

vincentmli commented 1 week ago

@Eykalzz were you able to try the new BPFire? Another user tried it, and it is working great for them.

Eykalzz commented 1 week ago

Can it protect UDP?

Eykalzz commented 11 hours ago

Bro, check your Discord. Thank you!