A smart PCIe network interface card that adds full-fledged router capabilities to your servers. Proxmox. Druvis. Everything you need for unlimited knowledge in another episode of #MikroTips!
Another good thing to try if you want to maximize throughput to a single VM is to directly assign either individual interfaces or the whole PCIe card to a VM. This lets you skip the Linux kernel bridge as a possible bottleneck.
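For anyone who wants to try that, a minimal sketch of whole-card passthrough on Proxmox might look like the following (the PCI address 01:00.0 and VM ID 100 are placeholders for your own setup):

    # find the card's PCI address (vendor/device will differ per system)
    lspci | grep -i ethernet

    # pass the whole device at 01:00.0 through to VM 100
    qm set 100 -hostpci0 0000:01:00.0,pcie=1

    # IOMMU must be enabled on the host (intel_iommu=on or amd_iommu=on on the
    # kernel command line), and pcie=1 needs the q35 machine type in the VM.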
and QEMU/KVM. There's at least a 3% perf loss on CPU/RAM from those, but potentially more on I/O, maybe also IOMMU-group-related stuff. Likely the issue here is the load on the CPU I/O controller handling the NVMe disk in the same group. Another theory is PCIe bottleneck/overhead: this card appears to be x8 electrically, likely PCIe 3.0. That's almost exactly 8 GB/s _bidirectional_, which is what we seem to be getting. Even though the traffic generated shows beyond 8 GB/s on the external router, only really…
@@cldpt PCIe 3.0 x8 is indeed 8 gigabytes/s, but networking is measured in gigabits/s; 8 GB/s translates to 64 Gbps. PCIe 3.0 x8 is able to carry both SFP28 ports to the host no problem, as it would be illogical to have chosen this interface otherwise.
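Rough back-of-the-envelope, assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding: 8 lanes × ~0.985 GB/s ≈ 7.9 GB/s in each direction ≈ 63 Gbps per direction, while 2 × 25 GbE is only 50 Gbps, so the link itself shouldn't be the hard limit (protocol overhead and small packets eat into that in practice).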
Ideally you have SR-IOV support and poke that into the VM directly rather than use a virtual Ethernet card in KVM. I think otherwise you won't get the full capability of PCIe, because the KVM guest has to jump through the kernel in both directions. One thing I have been curious about with these cards - is it possible to make the card work basically separately, and then communicate back to the host system via one of the PCIe interfaces? Think of a security appliance where normal packets just come in one interface and go out the other without touching the host system's CPU, but packets you want to inspect make a trip through the host system.
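For cards that do expose SR-IOV, the usual Linux-side sketch is something like this (the interface name and VF count are placeholders, and whether the CCR2004-PCIe exposes VFs at all is a separate question):

    # create 4 virtual functions on the physical function (if the driver supports it)
    echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

    # each VF then shows up as its own PCI device that can be passed to a VM
    lspci | grep -i "virtual function"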
This card is very useful in data center or network exchange environments where you pay per U. Rather than installing a separate router and server that would need 2U of space, with this card you only need 1U.
@@hexatested Brilliant! Do you know of any server models that can accommodate this card? And do you have any idea what that could mean in terms of savings? Like, if all you needed was 1 server and 1 router, would you cut costs in half? I imagine flexible rates - paying for bandwidth and electricity - are common practice.
The product seems really interesting, but it's a bit hard to imagine a solid use case for it, as it lacks some features that other DPUs have, albeit this one is more affordable. Make a version with more RAM and storage, comparable to NVIDIA BlueField, and add NVMe-over-TCP support. Or show how it can be used to offload traffic encryption or firewalling. Make a video of more use cases for the CCR2004-PCIe.
@@RB01-lite That's nice, but it's only a part of it. DPUs can present themselves to the host machine as a regular NVMe device. Is this the missing link, or can it already do that?
@@andiszile If I understood you correctly, a DPU could just have the host load an NVMe drive on bootup that is physically elsewhere. In the current ROSE implementation you can only access the NVMe-over-TCP drive after the boot process.
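For context, attaching an NVMe/TCP export from the Linux side after boot is roughly this (the address and NQN are placeholders; this is the generic nvme-cli flow, not something specific to ROSE):

    # load the transport module, then discover and connect to the target
    modprobe nvme-tcp
    nvme discover -t tcp -a 192.168.88.1 -s 4420
    nvme connect -t tcp -a 192.168.88.1 -s 4420 -n nqn.2023-01.example:rose-disk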
@@RB01-lite Ok, looking into it. Maybe even other DPUs can't really be used as boot devices (unless UEFI can wait for the drive to boot up :D ), but they can be used as storage devices. The gain is that the host CPU doesn't need to process the NVMe/TCP protocol by itself.
@@RB01-lite But probably I am too focused on this one particular use case. That's why I would like to see a showcase of more use cases that utilize this as more than just a NIC, to broaden my view.
The product definitely looked interesting. However, the fact that it simply stops working whenever it's rebooted kinda kills all use cases. Also, I experienced some kernel panics while running it. I suppose if they can fix the PCIe initialisation issues (e.g. allow it to re-initialise after the host system has booted), it becomes a much more interesting product. I currently have two of these cards but am not deploying them, as they simply weren't stable.
Thanks for the video. Since I saw this NIC announcement, I thought the idea was to run CHR directly on the NIC, and not so much to use it as a passthrough to other VMs. Is that possible?
This is in the style of other “SmartNIC” or “DPU” cards - having your network card do some amount of helper offload for you, although in MikroTik’s case it’s just RouterOS and not a system designed to do arbitrary data manipulation on the fly or similar. I wish you could run containers on the card (for use in other systems, not with a hypervisor), but it only has 128MB of internal storage and no USB.
2:00 Arriving a little bit late to this video, but you can fix the NIC naming by forcing it with udev rules, which can pin the device names using properties like the PCI address or MAC address of the NIC.
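For example, a udev rule along these lines pins the name by MAC address (the file name, MAC addresses, and interface names are placeholders):

    # /etc/udev/rules.d/70-ccr2004-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="ccr0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="ccr1"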
If @mikrotik made a follow-up video in 24-25 with SR-IOV-backed pass-through to the Proxmox VM, the CPU probably could not cook eggs anymore. ᕕ(⌐■_■)ᕗ ♪♬
An update video would be greatly appreciated; this is a good card for MikroTik as well as for other open-source router OSes. For example, would something like this work with my Lenovo Tiny P330? Could I use this and a switch to have the ultimate router + Proxmox + whatever else?
So I was just wondering if you have tried tweaking the MTU size to fit the 25Gb speed? I know for 10Gb the MTU can be raised to 9000, but in my experience leaving it at the default in a production environment makes troubleshooting easier.
When it comes to the PCIe card itself, jumbo frames are required to attain the maximum possible throughput, but it should be possible to improve the throughput without resorting to that.
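For those who do want to try jumbo frames, a quick sketch on the host side (interface and bridge names are placeholders; every hop, including the bridge and the VM NIC, has to agree on the MTU for it to help):

    # raise MTU on the physical port and the Proxmox bridge it belongs to
    ip link set dev enp1s0f0 mtu 9000
    ip link set dev vmbr1 mtu 9000

    # verify
    ip -d link show enp1s0f0 | grep mtu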
When we talk about performance, some words pop into my head: SR-IOV, multi-queue, OVS, DPDK, as Proxmox is a .... I mean, compared to Proxmox, VMware vSphere is a (more) enterprise-ready platform and should give the best results out of the box (I don't know if a DirectPath I/O NIC helps, but we've seen vmxnet3 in VMware give better performance compared to X520 SR-IOV, because the X520 SR-IOV driver only supports one queue). How does the CCR2004 PCIe card run on that?
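On the Linux/Proxmox side you can at least check how many queues a driver exposes with ethtool; a sketch, with the interface name as a placeholder (whether the CCR2004 PCIe driver offers more than one queue is exactly what this would show):

    # show current and maximum combined queues for the interface
    ethtool -l enp1s0f0

    # try raising the combined queue count (only works if the driver allows it)
    ethtool -L enp1s0f0 combined 4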
If I am not mistaken, Linux is the only platform this is supported on. This card requires extra drivers that are not available on VMware. If I am wrong, then someone please let me know, but I recall reading this on the MikroTik website.
@@masterTigress96 Yes, one year later this card is still not supported by enterprise virtualization platforms. It's a software-emulated card without any hardware offloads supported. If they can improve the driver it will be very promising; we have seen the benefits brought by BlueField and Amazon Nitro.
Ordered it 06/2022 - still waiting. Not available, like many other products. I am certified for your stuff and need them for customer projects, but cannot buy them anywhere. I am really pissed.
Can the switching acceleration hardware on the board be used to make a high-performance firewall? Can multiple boards be used in a single system, with the acceleration hardware on the boards forming a larger fabric across them?
A synthetic load is not VM-friendly by any means; maybe try passing the whole PCIe device through to the VM, or at least, with IOMMU, try to individually pass through one of the cages.
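To see what else shares an IOMMU group with the card (which decides how finely you can pass things through), a common one-liner sketch is:

    # list every device per IOMMU group; the card must sit in a group
    # that contains nothing you still need on the host
    for g in /sys/kernel/iommu_groups/*; do
        echo "group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done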
Despite my best efforts I cannot find one. I have checked several distributors and they are all telling me they have not had one for close to a year.
What @jblow530 said, so no, not for XCP-NG itself if you want to use it to e.g. migrate VMs to another host in a speedy fashion. Linux is, as far as I know, the only OS this is supported on. XCP-NG and VMware also run a modified version of Linux, but you need something like Proxmox (which is a more standard, full-fat Linux distro) to get the drivers. Maybe a custom kernel for XCP-NG or VMware could get it to work, but I haven't tried it.
Through many different evolutions of traffic generators, we finally found that TRex was the most cost-effective way to test devices at our ISP. A Dell R610 can generate about 10Gbps in ASTF mode using Intel optical cards. TRex has been tested up into multiple tens of gigs, and there are even anecdotes of it being used at 100Gbps, but I cannot verify this.
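For anyone curious, a minimal TRex stateful run looks roughly like this (the profile path, rate multiplier, and duration are placeholders, and bundled profile names vary between TRex releases):

    # from the TRex install directory, run an ASTF (stateful) profile
    # for 60 seconds at a chosen multiplier
    ./t-rex-64 --astf -f astf/http_simple.py -m 1000 -d 60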
The issue is that none of these cheap cards have any offloading, so you can never hit anywhere near wire speed anyway, because your bottleneck will always end up being the CPU shuffling bits around for no reason.
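As a quick sanity check, you can see what (if anything) a NIC offloads with ethtool; a sketch, with the interface name as a placeholder:

    # list offload features and whether they are on/off/fixed
    ethtool -k enp1s0f0 | grep -E "tcp-segmentation|generic-receive|scatter-gather|checksum"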
Am I right in thinking that this thing does not support SR-IOV? And more importantly - since the main selling point is that it's a router - what kind of speed can one expect when this thing is being used as a router?
Don't know much about SR-IOV, but routing performance depends largely on the setup. However, it is safe to say that routing with the 25G interfaces will not deliver anything near the wire speed that is possible in pass-through mode.