Great video. For those coming here after Dec 27, 2023: the latest version of PVE (8.1.3) seems to have this fixed; mine loaded the r8152 driver without the workaround.
noice, thanks for sharing! I've got extra ports now on my 24 port switch and was hoping to separate the mgmt and storage at least on my older nodes as well.
Don't forget to add '-M do' to any MTU ping test. '-s 9000' will work on most adapters as it will just fragment the ICMP request. The '-M do' will prevent the fragmentation and result in an error or no response if the src/dst/link truly does not support the ICMP packet size. 1472 is the max payload for a 1500 MTU due to IPv4 and ICMP header overhead; 1452 for IPv6.
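A quick sketch of the payload math and the ping invocations described above (192.0.2.1 is a placeholder address; the ping lines are commented out since they need a live host):

```shell
# Max ICMP payload = MTU - IP header (20 for v4, 40 for v6) - ICMP header (8)
MTU=1500
V4_PAYLOAD=$((MTU - 20 - 8))   # 1472 for IPv4
V6_PAYLOAD=$((MTU - 40 - 8))   # 1452 for IPv6
echo "$V4_PAYLOAD $V6_PAYLOAD"

# With -M do, fragmentation is refused, so an oversized probe fails
# instead of silently fragmenting:
# ping -M do -s "$V4_PAYLOAD" 192.0.2.1        # should succeed at MTU 1500
# ping -M do -s $((V4_PAYLOAD + 1)) 192.0.2.1  # should fail with "message too long"
```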
Thanks for this! I am a Fedora user and this resolved my issue after the 6.35 kernel was released. Adding the udev rule allowed the USB NIC to be detected, and once readdressed I was able to get it working again.
Yeah, I ran into this same issue when I built out my Proxmox cluster this past year. Google sent me to a great article where someone pointed out this solution. I would note that the HP EliteDesk 805s (1-liter office desktops...) that I'm using have USB 3.1 (10Gbps SuperSpeed) ports. With that configuration, those Sabrent adapters have no trouble hitting 2.4Gbps, even without jumbo frames. Thanks for all the great content you've been creating recently!
Your video was absolutely perfect timing for me! I was trying to get these to work in a LACP bond and was having major issues, so I abandoned that plan. Gonna try again now.
Hopefully it helps you! I realized after editing this that I forgot to mention it requires kernel ~5.13 or so for the driver changes to be merged. Ubuntu is using 5.15 for 22.04 LTS and Proxmox has 5.15-pve, 5.19-pve, and 6.1-pve available, but stable Debian Bullseye is still on 5.10 so this won't work for Bullseye without the PVE kernel. I tested on all of the PVE and Ubuntu kernels (PVE kernels are actually Ubuntu based), but not 'bare' Debian stable.
This explains a lot... I ended up switching to a Plugable USB 3 adapter and that worked, but I'll have to pull the Sabrent adapters out of the drawer and try them again. Thanks!
Great stuff, thanks. It works perfectly. I now get around 2.35 Gbits/sec in both directions. One little thing: you don't need to restart the server for the changes. Just run "udevadm control --reload-rules && udevadm trigger" and unplug/replug the adapters, and it does the same.
This video was really helpful! I was having trouble fixing my USB3 2.5Gbe network adapters on Linux / Proxmox, and your instructions were easy to follow. I was able to get everything working in no time. Thanks for sharing your knowledge!
This is my second video of yours since I came across your channel. Amazing. I am going to look through your other videos as well. I have two dual port Mellanox ConnectX-3 cards with 40/56GbE firmware. One of them is on a Dell R720 and the other is on my PC. Both run Proxmox in a cluster and also host TrueNAS VMs. How should I set up my cluster network to take advantage of the speed for storage and VM migration? The adapters are connected via a QSFP+ DAC cable.
funny - I returned my three 2G5 USB NICs last month telling my source "those don't have the RTL chips I ordered" - my bad XD Please continue to provide the solutions to my problems - it's nice to know I'm not alone 😘
Definitely a frustrating issue, and since 2.5G/5G were never really used in enterprise there isn't a decade of driver development like you get for old used 10G cards
Great video! I see that you are also a Network expert. I wonder if you could give a WireShark tutorial on how to see if IPcams are making connections outside our LAN, and how to avoid that by setting properly the firewall on our router.
I usually test things like that directly in the firewall, by adding a log rule (or block + log) with the camera's MAC address and seeing what it tries to do. You don't get the packet contents, just the source/destination IPs, but that's usually all you need.
I'm a bit of a newbie and I've been having a run of straight failures in projects in my homelab lately, and it's starting to play on my morale! I had just purchased the same adapter for a second NIC to properly implement pfSense, and thought I'd better just check around about adding a USB NIC to Proxmox; luckily I ran into this. So man, thank you - for solving my problem BEFORE I knew I had one!
@@apalrdsadventures An update - I seem to have a related issue. I'm using the USB NIC as the LAN-facing interface for pfSense in a VM. I took a snapshot of the VM soon after I got it working. Now if the VM restarts, the LAN stops working, although an ipconfig check on the clients shows they are getting DHCP from pfSense. If I roll back to the original (or a later working) snapshot, all is well. I speculate that it might have something to do with the OS enumerating the USB ports differently to what the pfSense install expects(?) However, it is odd that the DHCP server still works.
Are you doing USB passthrough or using the NIC in Proxmox and bridging it to pfSense with a vmbr? Linux will name the USB NIC with the MAC address so it is consistent across reboots (as long as you don't change hardware)
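For the vmbr route, a minimal /etc/network/interfaces sketch for bridging the USB NIC to a VM (the bridge name, address, and the MAC-based interface name enx0123456789ab are all placeholders - substitute your own from `ip link`):

```
auto vmbr1
iface vmbr1 inet static
    address 10.0.50.2/24
    bridge-ports enx0123456789ab
    bridge-stp off
    bridge-fd 0
```

With this in place, the VM's virtual NIC is attached to vmbr1 in its hardware settings, and the USB adapter never has to be passed through directly.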
Update - This whole issue has gone away and everything is cool now. After testing the setup for a while I decided to reduce the CPU cores for pfSense/squid from 2 to 1. Due to the above issue I had to remove the VM and start again to do that. On the 2nd try I left the networking at the out-of-the-box defaults (I had just wanted to set the IP address to 192.168.1.5 - but again the same problem had occurred), so putting up with the router occupying 192.168.1.1, all is well and I don't have to worry about it not working after rebooting, etc. This is very strange and inexplicable, and possibly nothing to do with that USB NIC at all...
I've got 2 Asustor AS-U2.5G2 dongles connected to my 2 nodes. Each is connected to a USB 3.2 gen 1 port. I checked dmesg and they are using the "cdc_ncm" driver and iperf3 bidirectional between the 2 nodes (over a Unifi 2.5GbE switch) gets 2.3Gbps one way and 2.1Gbps the other. After adding the UDEV rule and rebooting, it did start using the r8152 driver, and iperf did get a little better at 2.3Gbps one way and 2.2Gbps the other.
You might want to check which exact version of the RTL8156 these adapters have - which you can do by checking the bcdDevice field (30.00 is 8156, 31.00 is 8156B, 31.04 is 8156BG, 31.05 as spotted in the ASUS version might also be BG or BSG). The original version has some hardware-related instability issues, including random disconnections and hangs, that had to be fixed in hardware...
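A small helper sketch for that check - on real hardware you'd read the revision with `lsusb -v -d 0bda:8156 | grep bcdDevice`; the mapping here is just the one described above (the 31.05 entry is uncertain):

```shell
# Map a USB bcdDevice revision string to the likely RTL8156 hardware variant.
rtl8156_variant() {
  case "$1" in
    30.00) echo "RTL8156" ;;
    31.00) echo "RTL8156B" ;;
    31.04) echo "RTL8156BG" ;;
    31.05) echo "RTL8156BG or BSG (uncertain)" ;;
    *)     echo "unknown" ;;
  esac
}

rtl8156_variant 31.04
```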
Thanks for the info! Saved me from tearing my hair out when I finally get to set up Proxmox to use a usb port as lan port. I liked the vid just to move on from 666 - looked really terrible!!!
7:30 It could be that the limiting factor is the (small) amount of NIC queues that the Realtek driver can generate, or since iperf3 is single threaded, the CPUs in the proxmox nodes could be the bottleneck.
I was able to get ~2.1G bidir out of my M1 macbook, so it's definitely some limitation of the proxmox nodes. It could also be USB3 5G vs 10G related, or the host controller on the nodes.
Awesome video, so well researched 👏 After seeing this video I bought 2 USB-C adapters of different brands. They work well with the newest Windows driver: a stable 283 Mb/s up and down against a Samba share. Following your advice, with Linux I got similar results to those shown in your video. So the Windows driver is better?! Shame on Realtek 🤔 BTW, they are picky about USB ports.
The cause of the issue is likely the bus type/class the device is enumerated from. I installed a Realtek Wi-Fi-slot 2.5GbE to RJ-45 adapter in a TFF (M900) pfSense box and I did have to install a kernel module package to get the adapter to work. Now I wonder if I can get one of these USB adapters to work in FreeBSD as well. Hmm.
Thanks for this - I run a three-node Proxmox ceph cluster utilizing these devices and I hadn't even spotted they were running in half duplex mode. Have you reported the issue to Proxmox?
It's probably more of an upstream Debian issue, but the root of the problem is that Realtek's driver won't load without the udev rules and got merged into the kernel anyway (there's no feedback to distros that this specific udev rule needs to be bundled even though the kernel should support this hardware)
Proxmox also pulls kernel updates ahead of Debian itself, so it's possible Debian has already fixed this for Bookworm. But since Bullseye is based on 5.10 and Proxmox takes its kernels from Ubuntu releases (5.15, 5.19, 6.1), they might not have pulled this fix ahead as well. AFAIK support in the kernel was merged in 5.13, so stock Debian stable won't support it.
Followed along, and now ethtool shows the USB 2.5 GbE as supporting 10/100/1000/2500 @ full, but it still only shows "Speed: 1000Mb/s". dmesg shows the same driver as yours now.
Update: got my USB 2.5 GbE to finally advertise that speed with the command ethtool -s eth0 autoneg on advertise 0x80000000002f (change eth0 to the appropriate device). But just as you experienced, I get roughly 1 Gb down and around 40 Mb up. Just keeping this updated in case others encounter this like you and I did.
The ASUS USB-C2500 version uses a different device ID (0b05:1976), so you might need to force it by adding it to the list of valid IDs via extra rules in the same 50-usb-realtek-net.rules file:
ATTR{idVendor}=="0b05", ATTR{idProduct}=="1976", ATTR{bConfigurationValue}!="$env{REALTEK_MODE1}", ATTR{bConfigurationValue}="$env{REALTEK_MODE1}"
ACTION=="add", ATTRS{idVendor}=="0b05", ATTRS{idProduct}=="1976", RUN+="/sbin/modprobe r8152", RUN+="/bin/sh -c 'echo 0b05 1976 > /sys/bus/usb/drivers/r8152/new_id'"
systemctl status apalrd - I appreciate you making these videos. Are you still planning on making a video about how to add these 2.5G NICs to your HA cluster for replication? I kind of followed your project, but I used ZFS and not CephFS, and I have 2 x 1TB NVMe/SSD disks. How do I force replication through one interface but other traffic through another interface? Or is it automatic and I'm overthinking all this?
I'm working on a video on that topic, been busy recently. It's on my todo list though and I'm nearly done with the script. But yes, you can force replication via a specific subnet+mask, not interface though. Replication uses whatever network you've configured for migration - see pve.proxmox.com/pve-docs/pve-admin-guide.html#_guest_migration
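To make the migration-network setting above concrete, a sketch of the relevant line in /etc/pve/datacenter.cfg (the 10.0.50.0/24 subnet is a placeholder for whatever network the 2.5G NICs live on):

```
migration: secure,network=10.0.50.0/24
```

Replication and migration traffic then use whichever local interface has an address inside that subnet, which is why it's selected by subnet+mask rather than by interface name.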
You really should take advantage of this and run 2 of these on each host to get closer to 3GbE over bond0 - shaken, not stirred - you probably have the switch ports. Please make an OPNsense "forbidden router" to blow away all others, but still with great price/perf - I would suggest an HP 8300 with 16GB RAM and 6-8 2.5GbE ports - total cost about 250. Lastly, it would be super helpful to show something like 10GB of ISOs moving around the LAN so people can see actual speeds in real-world usage - going from 50MB/s to 120MB/s can be a semi game-changer - with bonded 2.5 you may be talking more like 200MB/s and better overall cluster performance #lower cluster overhead #wmt
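If anyone wants to try the bond0 idea, a minimal LACP sketch for /etc/network/interfaces (the MAC-based interface names and the bridge wiring are placeholders, and the switch ports must be configured for LACP too):

```
auto bond0
iface bond0 inet manual
    bond-slaves enx0123456789ab enxba9876543210
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```

One caveat: LACP hashes per flow, so a single TCP stream still tops out at one link's speed; the gain shows up with multiple parallel streams (e.g. several VMs or Ceph OSDs talking at once).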
I'm planning on running 1 per host on a separate network, so I have gigabit + 2.5Gig on separate networks. This will give Ceph and cluster migration their own network, separate from the VM traffic.
After setting the MTU to 12000 you say that the bandwidth does not increase because of USB 3.0. Are you testing with a USB port directly connected to the CPU, and not a motherboard (chipset) USB port? Thanks for the video :)
There are only two USB3 ports on the nodes I'm testing with and they are just the 5Gbps generation. It's a dell wyse 5060 thin client. It seems to perform better on 10Gbps ports on other computers I've tested on.
USB3 is 5Gb full duplex, so theoretically it has more than enough bandwidth to handle it. But in practice, the master-slave nature of the USB protocol itself is likely the culprit, unless this Dell PC is just not fast enough for these speeds. One way to find out is to try a 2.5Gb PCIe NIC card.
Realtek not only makes bad drivers, but also bad hardware (especially bad when you run Open-/Free-/NetBSD). They work, but if reliability and performance are important, Intel, Mellanox (Nvidia) and Chelsio NICs are a much better choice.
AFAIK Aquantia is the only other one that has (had?) USB3 2.5G chipsets. The other option is to go Thunderbolt to a pcie chip, and that's similar to the external 10G adapters on the market and is really only useful for laptops with Thunderbolt, not mini-PCs.
While I agree that Intel (early 2.5GBASE-T NICs were problematic) and Mellanox etc. generally make better NICs and that they often are a better choice, it's a bit weird to claim that Realtek makes bad hardware if it mostly affects BSD. :P It sounds to me that a lot of the problem is that BSD has much worse driver support in general. Regardless, there aren't many options for 2.5GBASE-T USB NICs. Realtek is basically the only game in town in that market at this point.
If you have some disconnections, try adding this to the kernel command line in /etc/default/grub: usbcore.autosuspend=-1 usbcore.quirks=0bda:8156:k, where 0bda:8156 is the vendor:product ID of the adapter. Then run update-grub and reboot.
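For anyone unsure where those parameters go, a sketch of the edit (the existing GRUB_CMDLINE_LINUX_DEFAULT line is modified in place; "quiet" is just whatever was already there, and 0bda:8156 should match your adapter's ID from lsusb):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet usbcore.autosuspend=-1 usbcore.quirks=0bda:8156:k"
```

After saving, run `update-grub` and reboot so the new kernel command line takes effect. The quirks flag `k` disables USB autosuspend for that specific device, which is the usual suspect for random USB NIC disconnections.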