This is by far the simplest working set of instructions for GPU passthrough with QEMU/KVM. I tried three other step-by-step tutorials before this and they all failed. This one worked perfectly!
Likewise, I spent a few days trying to make this work, including searching via ChatGPT, to no avail. I'm thankful for these instructions: simple, and they helped me find where my mistakes were. Hint: it was in the GPU isolation.
Brilliant tutorial! The best one I found, and combined with some info on systemd-boot, ACS patching, linux-jcore, etc., I finally succeeded in setting up GPU passthrough on my Arch Hyprland system!! Now for Looking Glass! Thanks for your help!
If anyone has issues booting after the first step: boot into recovery mode and run "nano /etc/default/grub". Revert the line you modified earlier and save the file. After that, run "update-grub" and you should be able to reboot. 👍
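For reference, the line in question is the kernel command line in /etc/default/grub. A sketch of what reverting it might look like ("quiet splash" is just a common Ubuntu default; restore whatever your line contained before you added the IOMMU/vfio parameters):

```shell
# /etc/default/grub
# Remove the passthrough parameters (intel_iommu=on, vfio-pci.ids=..., etc.)
# that were appended here, restoring the original kernel command line.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
```

Then `sudo update-grub` and reboot, as the comment above says.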
I also found that taking the GPU out of the PC lets it boot again, though I'm not exactly sure why it didn't work. Anyone get this working with a 3090 on Debian 12?
@@512Bytes For me it was something with the proprietary NVIDIA driver being weird. It caused a roughly five-minute boot time for me and I had to switch to nouveau.
When doing this, you change something in GRUB. Will this still work if I'm also dual-booting? I ask because I followed an outdated but similar tutorial, and after doing the first step and rebooting, my Linux wouldn't boot.
Hello, I applied all the steps, but when verifying the GPU at 5:08 it shows: Subsystem: Micro-Star International Co., Ltd. [MSI] GP106M [GeForce GTX 1060 Mobile] / Kernel driver in use: nouveau / Kernel modules: nvidiafb, nouveau. No vfio-pci. How do I fix this?
Simple fix: put the VFIO entries in /etc/initramfs-tools/modules. Run "sudo nano /etc/initramfs-tools/modules" and add vfio, vfio_iommu_type1, vfio_pci, and vfio_virqfd. After that, update the initramfs with the command below and reboot your PC: sudo update-initramfs -c -k $(uname -r)
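To spell that out, the file would end up containing something like this (assumes a Debian/Ubuntu-style distro with initramfs-tools; note that on newer kernels vfio_virqfd has been folded into the core vfio module, so that line may be unnecessary there):

```shell
# /etc/initramfs-tools/modules
# Load the VFIO stack early, before a GPU driver can claim the card.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

Then rebuild the initramfs with `sudo update-initramfs -c -k $(uname -r)` and reboot.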
I have a Ryzen 7 7700 and an NVIDIA 2060. I wanted to use the NVIDIA for Linux and the integrated AMD for Windows, but it doesn't seem to work. The AMD does seem to pass through, but Windows doesn't recognize it as an AMD card; it just shows the device as PCI\x
Hi, really nice tutorial. I have some questions: how do I get the full refresh rate supported by my monitor? And how can I switch mouse and keyboard input between the host and the guest? My mouse and keyboard just lock into the guest when I run it.
Nahh, you probably gotta buy another pair. Or allocate a few USB ports to the VM, then keep switching the plug between the free host ports and the ones you've allocated to the VM.
Hello, thanks for the tutorial! I'm going to use this setup for a clean Arch install on my laptop with an AMD iGPU and an NVIDIA dGPU, and I have a question: do I need to install the NVIDIA proprietary drivers if I'm only going to use the NVIDIA GPU in the Windows VM?
Yeah.. bro didn't really provide a VFIO/SR-IOV kind of solution. Bro essentially just provided whole-PCIe-device passthrough, which ain't very interesting, nor as useful as SR-IOV proper. Btw, SR-IOV is supposed to let you use the device on the host (bare metal) and in the VM simultaneously.
I tried this tutorial but I wasn't able to bind vfio-pci to the NVIDIA hardware, mostly because the modprobe config described in the video expects the proprietary NVIDIA drivers to be installed. If you are not using them, you have to change it from `nvidia` to `drm` instead. My setup: AMD with iGPU as host, NVIDIA card for the guest OS.
Can you do the same using Slackware64 15.0? Every time, at some step, I see something fail on a PC at my job. Also, can this be replicated using the integrated GPU for the Windows virtual machine?
As far as I can understand, with both GPUs from AMD (7600 XT + the 7800X3D's iGPU) I can choose which one to pass through, but once I disable the Linux drivers for it in the kernel, the Linux host will be without GPU acceleration. Am I wrong?
Hey man… unfortunately this one didn't work. I have a beefy laptop with an NVIDIA Quadro card inside, but unfortunately it says: host doesn't support passthrough of PCI devices.
At 4:15 you prevent the NVIDIA proprietary drivers from loading. I have an AMD GPU; is there a special syntax for that? One more question: let's say I isolated my second AMD GPU and passed it to the VM, but I still want to play some games on my Linux OS. What are the steps to un-isolate the hardware and bring it back to Linux?
To stop the AMD card from loading, use "softdep radeon pre: vfio-pci" and, under that, "softdep amdgpu pre: vfio-pci". That stopped my card from loading, but my NVIDIA is my second card and it loaded at first: I could see my KDE Plasma spinner, then zippo, no video at all on either card. The battle continues.
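For anyone else with an all-AMD setup, those softdep lines go in a modprobe.d file, roughly like this (a sketch: the file name is arbitrary, and the ids= values below are hypothetical placeholders that must be replaced with your own passthrough card's vendor:device IDs from `lspci -nn`):

```shell
# /etc/modprobe.d/vfio.conf
# Make vfio-pci claim the passthrough card before the AMD drivers can.
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
# Hypothetical IDs for the GPU and its HDMI audio function; substitute your own.
options vfio-pci ids=1002:744c,1002:ab30
```

Remember to regenerate the initramfs afterwards so the config takes effect at boot.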
I'm experiencing an issue. I followed this tutorial on Ubuntu 22.04.3 and everything worked as expected, but when I try to start the previously working VM there is just a black screen. I'm trying to pass through a 3090, using the iGPU of my Ryzen 7600X for the host. Anyone know what I did wrong?
I have a question. Let's say I'm done with the Windows VM and shut it down, and then I want to open a second VM (Windows or Linux). Is the GPU stuck on the first VM, or can I use it in the second VM?
Great tutorial. So if I assign my dedicated card to vfio in GRUB, I will not be able to use it on the host machine (unless I update GRUB to release it from vfio)?
Bluetooth is passed through automatically; if not, check how it is connected and pass it through. Also, make sure the BlueZ stack is installed on Linux.
I tried this and had zero video on reboot. Do you have any instructions for doing this with an AMD GPU instead of NVIDIA? Luckily I could boot a generic kernel, remove the lines from GRUB, and reboot to get my main video back.
I have kind of a dumb question. Since you are isolating your dGPU away from the host, doesn't that mean the host system won't have access to it? And do you have to change the GRUB config back to undo that, or does this not cause such problems?
My host boots in legacy BIOS mode, and when I tried to create a Windows 10 UEFI VM it didn't boot. When I tried the passthrough in a Windows 10 BIOS VM it didn't work either; it showed me an error saying "Your host device doesn't support PCI passthrough". What should I do?
Umm, everything went fine and the GPU gets detected in Device Manager in Windows, but when I install the NVIDIA drivers the screen just goes completely black. Then I have to remove the NVIDIA PCIe device, start Windows, and uninstall the NVIDIA drivers to make the screen visible again...
I had the same problem. You can use a kernel patch (the ACS override, or something like that); it separates the devices into different IOMMU groups. Instead of manually patching the kernel, install the latest `xanmod` kernel (which includes the patch). With that I managed to launch my VM. Edit: you will also need to edit your GRUB configuration and add `pcie_acs_override=downstream,multifunction`.
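Concretely, that means appending the option to the kernel command line in /etc/default/grub and re-running update-grub. A sketch, assuming GRUB and an Intel CPU (use `amd_iommu=on` on AMD, and keep in mind `pcie_acs_override` only does anything on a kernel that actually carries the ACS patch, such as xanmod):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
```

Then `sudo update-grub` and reboot into the patched kernel.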
I'm on Ubuntu 24 and have two AMD GPUs: 1) a PCIe card (7900), and 2) an onboard AMD GPU. I tried substituting AMD for NVIDIA throughout the tutorial, but the isolation doesn't appear to have worked; i.e., sudo update-initramfs -c -k $(uname -r) returns _update-initramfs: Generating /boot/initrd.img-6.8.0-35-generic_ and nothing else. Any ideas?
I got it: not amd but amdgpu. Unfortunately, when I get into Windows and install the AMD driver I get the Code 43 error, which should supposedly be bypassed by installing amdgpbugreset, but I had no success.
I finished the tutorial and the drivers for my GPU got installed, but for some reason in Device Manager "NVIDIA Platform Controllers and Framework" is not running, and it is causing the GPU to not activate in Windows 11. Debian 12 (Bookworm).
@@kskroyaltech Sorry for the late response. I never figured out this issue. I have done this on both Windows 10 and 11. That being said, I can still get full performance by plugging a monitor directly into the GPU's HDMI output (laptop). It has had no issue running games in a VM even with the error code.
No, there doesn't need to be any cable plugged into your guest gpu. It's simply being used for its resources. Think about it - the guest PC isn't really plugged into your actual monitor; it's plugged into a virtual monitor / videocard created by the virtual machine host.
In the /etc/modprobe.d/vfio.conf file, on the second line, replace "nvidia" with "nouveau". Was just running into the same issue myself. I never switched to the nvidia drivers because I installed the Nvidia card specifically for this project.
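So a vfio.conf for a nouveau-only host would look roughly like this (a sketch: the PCI IDs below are hypothetical placeholders; substitute the ones for your own card and its audio function from `lspci -nn`):

```shell
# /etc/modprobe.d/vfio.conf
# Bind the passthrough card to vfio-pci before nouveau loads
# (proprietary NVIDIA driver was never installed on this host).
softdep nouveau pre: vfio-pci
# Hypothetical GPU + HDMI audio IDs; replace with your own.
options vfio-pci ids=10de:2204,10de:1aef
```

After editing, regenerate the initramfs and reboot for the binding to take effect.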
@geonofone9816 Thank you. I had this problem on Ubuntu, where I was running the nouveau driver, and Ubuntu wouldn't start until I followed your suggestion of replacing nvidia with nouveau in vfio.conf. Good call. In my setup, I had to bypass both the nvidia and nouveau drivers.
After running the command "sudo update-initramfs -c -k $(uname -r)" it doesn't give me anything except "update-initramfs: Generating /boot/initrd.img-5.15.0-88-generic" and nothing else. What should I do?
I've been trying to follow every type of GitHub project and every tutorial trying to pass through a GPU to a virtual machine, but I had no luck. Every time I think it will work, I always get some type of error that I can't figure out, or I never properly pass through the GPU even though I thought I did. I just want to be successful for once.
This command only works on Debian-based distros. Initially I had the same issue as you and gave up. I read some forums, found that command, and it worked for me. The initramfs is the crucial part; it has to be updated. Also, mind you, on some Debian-based distros you may not see any output after running that command. Anyway, can you share the output you get after running the command?
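For what it's worth, the quick way to check whether the isolation actually took after rebooting is to look at which kernel driver claims the card, for example:

```shell
# After a successful isolation, the passthrough card should report
# "Kernel driver in use: vfio-pci" instead of nouveau/nvidia/amdgpu.
lspci -nnk | grep -iA3 nvidia
```

If it still shows nouveau or nvidia, the initramfs didn't pick up the vfio config.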
I was wondering if you have Discord, so I could help you through there if possible. I'm new to Linux, and I want to learn what I'm doing wrong and correct it. I'm not sure how to show the output because all it gave me was what I said before.
@@nightstar9. Dual-booting is convenient in my case as well: I have a mini-ITX motherboard with only one x16 PCIe slot, and I don't want to bother with multiple GPUs. If someone makes software that gives the GPU to the guest and then back to the host, I'm game.
This might sound weird, but theoretically: I have two 4090s and no iGPU. Assuming I'm deactivating the NVIDIA driver, wouldn't I also lose output on the other 4090, the one I'm not passing through?
Windows works great with GPU passthrough, but the issue I can't solve is that I am not able to shut down the Windows VM and return to the host. I am not even able to reboot the host via SSH; it is just dead, and the only option is a forced shutdown.
@@kskroyaltech I can try it, thanks. So SSH in remotely and try to shut down the VM with a virsh command. What I found is that when I try to reattach the PCI device with "virsh nodedev-reattach pci_0000_0a_00_0", I get a black screen and can't do anything but hard shutdown, even though the VM is not even running.
@@kskroyaltech Hi, I tried it, but it's the same problem. Once I shut down the VM with virsh and try "virsh list", the command hangs and I can't even reboot the host anymore. I'm pretty sure my problem is around detaching the VGA device ("virsh nodedev-reattach pci_0000_0a_00_0"), which probably happens when you shut down the guest. I've searched the internet, but no luck so far.
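In case it helps others debugging the same hang, a manual detach/reattach cycle with virsh would look like this (a sketch: the PCI address is the one from my setup above, yours will differ; note that reattaching a GPU to a running host is a known rough edge, since the host driver has to re-initialize the card, and many GPUs don't reset cleanly):

```shell
# Before starting the VM: unbind the GPU from its host driver.
virsh nodedev-detach pci_0000_0a_00_0

# ... start and use the VM here ...

# After the VM is fully shut down: give the device back to the host.
virsh nodedev-reattach pci_0000_0a_00_0
```

If the reattach hangs, the card likely failed its FLR/reset rather than virsh itself being broken.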
If the proprietary NVIDIA drivers are installed, they must be disabled through vfio.conf. That way the host OS relies on the iGPU, leaving the NVIDIA card isolated. The NVIDIA GPU can then be passed through to any VM.
@@kskroyaltech Exactly. How is that done? I want to do some stuff using two monitors where both screens come from one VM, so that it's always isolated from my main OS/machine. I imagine there should be two Virt-Manager windows for a single VM.
@@kskroyaltech I have an AMD RX 580 (8 GB, flashed for Mac boot) and openSUSE Tumbleweed... so can I run this tutorial with a single GPU? I'm worried about causing damage :)
@@SRECIBI I did; I had to jump into my Arch install and chroot into Ubuntu to fix this. I'd back up anything you need, like Brave settings or password managers, before doing all this. Just some friendly advice. By the way, editing the kernel parameters in GRUB during boot, or trying previous kernels, did not help in my case.
This really only works well with newer systems, such as 9th-gen Intel with compatible motherboards and 3rd-gen Ryzen, due to the amount of IOMMU resource management available on these newer platforms. You also have to be very careful setting up the GPU, because you have to use a secondary dedicated GPU for the virtualized passthrough; you cannot use a single-GPU system. For this you should probably set your PCIe lanes to x8/x8/x1 and use a GPU with UEFI-enabled firmware. I was able to get this done on my Ryzen 7 3700X with an older Quadro FX card serving as the passthrough GPU alongside my Radeon RX 5700 XT, which serves as the primary graphics card.
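A quick way to check whether your board actually exposes sane IOMMU groups (the resource-management point above) is the usual sysfs loop; it prints nothing if the IOMMU is disabled or unsupported:

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$d" ] || continue            # glob unmatched: IOMMU disabled or unsupported
  g=${d%/devices/*}                  # .../iommu_groups/<N>
  printf 'Group %s: %s\n' "${g##*/}" "$(lspci -nns "${d##*/}")"
done
```

If your GPU shares a group with other devices you can't pass through, that's when the ACS override patch mentioned elsewhere in this thread comes into play.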
@@ReaperX7 Yes, I know that! I have a Ryzen 9 7900X with all the bells and whistles, running an NVIDIA RTX 4070, and I have everything set up as you listed. I got this working with a much better, more detailed walkthrough using Arch Linux. This tutorial is extremely bare-bones and I would not recommend it; instead, go over to the r/VFIO subreddit and seek out some real knowledge on the subject.