
Dell PowerEdge R710 build PART 9/9 | my mistake, LXC GPU pass-through, PLEX transcoding 

Art of Server
33K subscribers
15K views

In this video, I'm going to set up an LXC container running CentOS 7 to run PLEX. I will configure GPU passthrough to give the container access to the Quadro P2000 so that PLEX can use it for hardware transcoding. Along the way, I'll show you how to install PLEX on CentOS 7, as well as how to install the necessary NVIDIA CUDA driver files to enable hardware transcoding in PLEX. Finally, I will demonstrate that PLEX GPU transcoding works in this setup.
This will be the last video in this R710 build series. I'll give you my final thoughts in closing.
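For reference, the in-container installation boils down to a few commands. This is a rough sketch, not the exact steps from the video; the file names below are placeholders, and the driver version must match what the Proxmox host has loaded:

# inside the CentOS 7 container
# install PLEX from the RPM on plex.tv's download page
yum localinstall plexmediaserver-(version).x86_64.rpm
systemctl enable --now plexmediaserver
# install the NVIDIA user-space driver files only; the kernel module
# is the host's job (flag name may vary by driver version)
sh NVIDIA-Linux-x86_64-(version).run --no-kernel-module
nvidia-smi   # should show the Quadro P2000 once the config below is in place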
Here are the lines to add to the LXC container configuration:
lxc.cgroup.devices.allow = c 195:0 rw
lxc.cgroup.devices.allow = c 195:255 rw
lxc.cgroup.devices.allow = c 195:254 rw
lxc.cgroup.devices.allow = c 237:0 rw
lxc.cgroup.devices.allow = c 237:1 rw
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
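Note that the major/minor numbers in the allow lines (195 and 237 here) must match your own host; the nvidia-uvm major number in particular varies between systems. Verify them before copying the lines above:

# on the Proxmox host (output numbers are illustrative)
ls -l /dev/nvidia*
# crw-rw-rw- 1 root root 195,   0 ... /dev/nvidia0
# crw-rw-rw- 1 root root 195, 255 ... /dev/nvidiactl
# crw-rw-rw- 1 root root 237,   0 ... /dev/nvidia-uvm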
If you'd like to buy a pre-flashed ready-to-go LSI IT mode SAS HBA card from me, check out my eBay store: ebay.to/3l4xlch
eBay searches for components shown in this video (not at my eBay store):
- Dell R710 6-bay server: ebay.to/38unmb9
- Dell R710 modified riser for GPU: ebay.to/3cnSM4a
- Nvidia Quadro P2000: ebay.to/2OMx1mg
eBay Partner Affiliate disclosure:
The eBay links in this video description are eBay partner affiliate links. By using these links to shop on eBay, you support my channel at no additional cost to you. Even if you do not buy from the ART OF SERVER eBay store, any purchases you make on eBay via these links will help support my channel. Please consider using them for your eBay shopping. Thank you for all your support! :-)

Science

Published: Dec 7, 2019

Comments: 73
@up2l8r1 1 year ago
I'm late to the party, but this video was extremely valuable for figuring out Plex in an LXC. Some notes from my experience: in PVE 7, use cgroup2 instead of cgroup. Also, I found it easier to just install the drivers again inside the container using the --no-kernel-modules option. That method avoided driver mismatch issues and everything worked right out of the gate with my GTX 1660.
@ArtofServer 1 year ago
Glad this was helpful! What's the reason for cgroup2?
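(Proxmox VE 7 and later default to a pure cgroup v2 hierarchy, so the v1 lxc.cgroup.* keys are ignored and the allow lines take the lxc.cgroup2 form instead. A sketch, assuming the same device numbers as in the description; the mount entries stay the same:

lxc.cgroup2.devices.allow = c 195:* rwm
lxc.cgroup2.devices.allow = c 237:* rwm)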
@flaviop17 1 year ago
I was able to configure a Plex LXC container with GPU passthrough on an HP MicroServer G8 with a Quadro 620. I learned so much from this series of videos, thanks so much!
@ArtofServer 1 year ago
That's awesome! Glad this series helped! Thanks for watching!
@jdsim9173 2 years ago
Thanks to you and your videos, I now have an R710 with a Quadro P2000, a USB 3.0 card, and an SSD boot drive.
@ArtofServer 2 years ago
Nice!
@merlingt1 4 years ago
This series has been great. I really like that you go into detail.
@ArtofServer 4 years ago
Ha ha ha.. thanks man! Glad there are folks who enjoy the details...
@kim0sabe 4 years ago
Binge-watched the whole lot, really learnt heaps. Thanks for all the Proxmox stuff too :) Subbed and excited for more content.
@ArtofServer 4 years ago
Thank you for watching!
@geoffhalsey2184 3 years ago
Thank you for this 9 part series. Really detailed. It's helped a lot with outlining the differences between purpose built servers and using a PC as a server. Learned a lot.
@ArtofServer 3 years ago
Glad you found these videos helpful! :-)
@davidlugner1045 3 years ago
I just binge-watched this as I have an R710, and holy, you are incredible in your videos as far as depth, clarity, and explanations go. Thank you.
@ArtofServer 3 years ago
Thank you for watching! I'm glad these videos were informative. :-) Can't believe you binge-watched this... LOL
@locusm 4 years ago
Great series mate, really enjoyed it.
@ArtofServer 4 years ago
Glad you enjoyed it. Thanks for watching.
@Kevin-wj1do 1 year ago
This was a great series. I was able to get a T710 and a couple of R710s for super cheap. Once I get everything set up I will definitely be following these guides. You have made my life so much easier with this content as this is EXACTLY what I was planning on doing with it.
@ArtofServer 1 year ago
I'm happy to hear my videos have been helpful. :-) Good luck with your project!
@MahmoudHafez 4 years ago
Very informative series, straightforward and easy to understand even for a home lab newcomer like me. Thanks a lot for your efforts and dedication in sharing your knowledge and experience with the whole world. Hope to see more on some useful uses of this beast (i.e. self-hosting services, creating Proxmox clusters, setting up VLANs, pfSense or Untangle firewalls, Pi-hole, AdGuard Home, and so on). Again, thank you very much for sharing such valuable experience.
@ArtofServer 4 years ago
Thank you for watching! :-)
@fmj_556 3 years ago
Thanks again! Love this series!
@ArtofServer 3 years ago
Glad you enjoy it!
@280557445 3 years ago
Learned a ton; really hope to see some more videos about Dell server and Proxmox setups.
@ArtofServer 3 years ago
Thanks for watching! Check out the R410 series too. There's more to come...
@snibbo71 2 years ago
You sir are an absolute bloody legend. I had no idea how I would get my graphics card working on the Proxmox server on my R710 and you just made it possible. (You were pretty much the person who convinced me to get one in the first place and I gotta say, apart from the electricity bill, I really love it!) Thank you!
@ArtofServer 2 years ago
Thanks for the kind words! Glad my vids have been helpful 🐱
@snibbo71 2 years ago
@@ArtofServer Very much so - I was going to use my video card (on a 1x PCIe slot with an extender) on the main host, which I was never really comfortable with, as I didn't want to taint the Proxmox VE core. Doing it via an LXC is perfect. I have another video card in that machine which I use for Windows, which obviously has to be a VM, and that is using GPU passthrough on a Gen1 R710 - so it can be done, but you have to enable unsafe interrupts. Allegedly that's OK if you trust the VMs on the host. Some people have said it can be unreliable, but it's not failed on me yet, so YMMV on that one. Anyway, thanks again, your vids are great and very helpful! (Except for the RedHat bits, Debian FTW ;))
@fmj_556 3 years ago
Great series! Thanks!
@ArtofServer 3 years ago
Glad you enjoyed it! Thanks for watching! :-D
@timherrmann4628 4 years ago
Yes, very informative tutorial. I think the best term for this procedure would be a "lightweight vGPU concept" with Proxmox. IOMMU passthrough is only useful for home server stuff, but vGPU concepts are for professional use cases. Only vGPU can run with HA, and doing this with LXC and Proxmox opens up a whole new range of hardware options with older NVIDIA Quadro and Tesla cards.
@ArtofServer 4 years ago
Thank you for watching! :-)
@zparihar 4 years ago
Hey 'Art of Server', great video series! I watched all of them too! I've set up a farm in my garage for a software development product that I'm building. It consists of R610s, R710s, R905s, 1950s, and 2950s - all of them running with 2.5 inch drive bays. I've been a little creative with my setup: I've purchased Patriot SSD drives for the OS, and then set up NVMe SSD drives running on PCIe.

I'm a huge Proxmox, FreeNAS and pfSense fan, so I appreciate what you demo'd with GPU passthrough with LXC (also a huge LXC fan - during my career, I've architected massive migrations - hundreds of servers and petabytes of data - from Solaris 10 with Zones to CentOS 7 with LXC back in 2016, with an overall 10% performance gain and massive license cost savings, also running with ZIL and L2ARC). I'm running Plex on a DIY NAS box I built a few years ago, and I'm now considering moving it into my server rack and converting one of my R710s over to FreeNAS - FreeNAS has a Plex plugin that runs in a FreeBSD jail, and jails are very similar to LXC containers. I need to run FreeNAS in my rack anyway because of the other features it's going to give me, and thought that I'd like to move my Plex Media Server to it as well, but didn't want Plex hogging the CPU resource. I'd really appreciate it if you had time to demo FreeNAS with the Plex plugin and with GPU passthrough with that same NVIDIA card (I have an NVIDIA Quadro K2000D - somewhat similar). Other features of FreeNAS may be good to show for the other viewers as well.

I'm running all X5675 CPUs I've purchased over eBay. However, number of cores vs power of cores is more important to me, as this is all just for dev work. It would be interesting to know how much power I save per server by changing each X5675 to an L5640. Let's also chat, as I may have some future propositions for you. Cheers!
@ArtofServer 4 years ago
Hi Zubin: Unfortunately, my knowledge of FreeBSD is limited, and I'm not currently aware of how something similar to what I showed in this video can be accomplished with FreeBSD jails (the way the PLEX plugin is run), nor do I know whether the Plex Pass plugin can do that or not. If I happen to come across this information and try it out, I'll surely make a video about it. But for now, I just don't know if it is even possible. Thanks for watching!
@woxit6107 2 years ago
Thanks for the time and effort. (Better than HBO.)
@ArtofServer 2 years ago
Lol thanks ☺️
@legal3450 4 years ago
Thank You too!!!!!
@TheRangeControl 3 years ago
Great to know.
@ArtofServer 3 years ago
Thanks for watching!
@welbo9766 4 years ago
Yeah, I binged the whole series, because I just went through all this on my new-to-me R710. Your videos helped me flash the H200 I had included so I could run the 6x 2TB SAS drives ($25 ea. off eBay with very good SMART data). I also ended up with an Intel SSD (256GB) for boot and SLOG/cache. I added a PCIe-adapted NVMe (256GB for $25, used but 0 wear reported on SMART) and tried to move boot over to it. That failed, just before I left a comment on part 7 asking whether there was another option for a boot drive. I can't find a way to boot off this NVMe, but it'll make a great cache and SLOG for ZFS. If you are able to pick up locally in the Dallas area, Garland Computer on eBay is a great source at a great price and configures to your needs.
@ArtofServer 4 years ago
I have on my list of future videos, "How to run OS on NVMe when your system BIOS/UEFI can't boot NVMe for Linux" ... one of these days.
@welbo9766 4 years ago
@@ArtofServer I moved my root over to the NVMe because my SSD started generating a bunch of errors in the logs and the networking quit working on the server. SMART data showed no errors on the drive. I'm wondering if that SATA port may not be very stable for continuous usage (but OK for an optical drive). I also wasn't able to get passthrough to work for my CentOS Plex container using bind mounts. Instead I used a method from the Plex forums that calls a script, mount_hook.sh, from the LXC conf file. It uses mknod to create the devices inside the container. The bind mounts just never would mount.
@ArtofServer 4 years ago
@@welbo9766 That's strange that the bind mounts didn't work for you. Were you following this video or was that some previous attempt? As you saw in this video, using bind mounts worked for me. I do know about the mount_hook.sh script, but I figured since LXC has a native way to do bind mounts, I opted for the "built-in" approach instead of calling an external script.
@welbo9766 4 years ago
Art of Server I was 'kind of' following the video. I do not have a separate GPU, so I was attempting to pass through the Intel GPU (/dev/dri/card0 and /dev/fb0). I like your approach better than making nodes inside the container - definitely cleaner. The mounts just wouldn't show up in the container. I haven't completely troubleshot it yet, though. Thanks for the replies and the great videos. The R710 is an awesome machine. I wouldn't have gotten one if I hadn't prepared via your videos.
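(For anyone who ends up down the mount_hook.sh road mentioned above: the idea is to create the device nodes inside the container with mknod instead of bind-mounting them. A minimal sketch of that idea only - the actual Plex forums script differs, and the major/minor numbers here are assumptions to be checked against /dev/nvidia* on the host:

#!/bin/sh
# create NVIDIA device nodes inside the container at startup
mknod -m 666 /dev/nvidia0 c 195 0
mknod -m 666 /dev/nvidiactl c 195 255
mknod -m 666 /dev/nvidia-uvm c 237 0)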
@DennisPlagge 1 year ago
Hello, your video series is amazing; everything you tell has a lot of substance. Thank you for this. By chance I bought two dirt-cheap R710s in my city, one LFF and one SFF, both Gen2 - the first one for backup purposes, the second one only because it was so cheap. Then yesterday I was outrageously lucky to get the x16 riser card here in Europe for a mere 10 bucks, as an aftermarket dealer was selling it under the wrong serial number. So now I'm thinking of turning the second R710 into a streaming server, because I thought your Quadro P2000 Plex build was so cool. However, I don't know much about graphics cards and have to do a lot of reading. Now I've stumbled across Tesla P4s and wonder if they wouldn't be better than a Quadro P2000, which would really make sense; for us, it would also be cheaper than the P2000. I would be interested in your opinion. Maybe you could give me some advice; I'm very unsure.
@ArtofServer 1 year ago
If you're using the GPU only for video transcoding, keep in mind that the NVENC/DEC stuff is roughly the same on all GPUs of the same generation. There are differences between generations, like improved image quality at the same bitrate in later generations, when it comes to the nvenc/nvdec stuff. But for example, my 1080Ti, P2000, and P400 all perform roughly the same when it comes to video transcoding because they are all of the same generation. The lower end cards do have an artificial restriction on the number of nvenc sessions, but that's about it. The artificial restriction can be removed if you search enough online.
@DennisPlagge 1 year ago
@@ArtofServer Thanks for your answer! Since, as I said, I have no idea about graphics cards (although I'm a developer ^^), I consulted the Nvidia documentation. Yes, the Tesla does not have the inherent restrictions, but it also comes with a little more support for the HEVC codec, so the data traffic is relieved a bit with the Tesla. I have therefore decided to deviate from your variant at this point and try it with the Tesla, also because in this context I read about the vGPU capability of such cards, and this is the first time I've been this interested in the graphics card topic. In fact, it's like unexpectedly having another child that you're really happy about, because actually I just wanted some bare metal for the backup - running servers privately at European electricity prices doesn't make much sense in itself, and I have a hyperconverged cluster of thin clients instead. But suddenly, with the second R710 with the Tesla, it fits into a coherent overall picture, where the server can serve performance-hungry graphics applications to the thin clients. What I'm still afraid of, however, is the uncertainty as to whether the R710 can provide the 75 watts for the Tesla via the PCIe slot. The lack of out-of-the-box power supply for powerful graphics cards is one of the few things I'm a bit unhappy about. But well, that's how it turned out. I just hope this is enough and works. I'll report back. Thanks again for your helpful videos; I will also watch your videos beyond the R710 - exactly my channel. ,)
@DennisPlagge 11 months ago
@@ArtofServer As promised, here's an interim report, even though I've only just got around to installing the whole thing and haven't tested it extensively yet. To cut a long story short: do you know that feeling when parts harmonise perfectly right from the start? That's exactly what I had when I slid the Tesla into riser 2 with the x16 slot - as if the Tesla was built for the R710 alone. It should be said that the Tesla P4 is incredibly small and slim, while at the same time being as heavy as if you were hanging a gold bar in the server. As a result, the Tesla is much more compact in riser 2 than the big Quadro P2000 with its fan. It all feels like a tailor-made suit. The Tesla was recognised immediately, and I haven't noticed any power problems either (although, as a precaution, I also installed 2x 870 watt PSUs instead of the smaller ones). Of course, the detailed tests remain to be done, but so far it looks like an ideal combination. ,)
@Zitz406 4 years ago
Great series! A while back I started an attempt to make an ESXi cluster with 3 nodes (3x R710), all the same configuration: dual X5660 CPUs, 24 GB RAM, H200 HBA, 250 GB SSD, 2x 3 TB HDD, dual 10 Gb network card. I went for a Samsung 970 NVMe with a PCIe adaptor - no enterprise grade for this one $$$... RAM and disks will be added later. I wanted to cross-flash the H200 HBAs but ended up with cards that wouldn't boot in the integrated slot. So I moved them to another PCIe x8 slot, but then the cable problem... I was searching for weeks to find cables, until I had enough of it and stopped for a while. Later, after some searching, I found your video about the PCI config, so I tried it and could move them back to the integrated slot - no need for special cables. After this I went through the whole series and learned much. Finally a good way to upgrade these things. Also, I'm going for Proxmox now instead of ESXi. Thank you for all the knowledge you shared!! Hope you will make more of these videos. One small comment: don't make the "waiting" music so loud; it is a lot louder than your voice.
@ArtofServer 4 years ago
Glad these videos helped you out! And thanks for the feedback about the music. Thanks for watching! :-)
@johnsutton608 3 years ago
Thank you so much for this series! What are the best options for an L2ARC Drive in an R710 to be used with RAID-Z2?
@ArtofServer 3 years ago
There's not a lot of space for more storage in the R710, so other than what has been shown in this series, I think something using the PCIe slots. NVMe PCIe perhaps?
@fmj_556 3 years ago
I have a question: how do you mount a TrueNAS share to the Videos folder?
@levelnine123 4 years ago
I like this series very much, but can you make a ZFS pool with an NFS share to the network too, with cache and log on an SSD?
@ArtofServer 4 years ago
Well, this R710 is going to a new home, but I do have plans to make a series of ZFS tutorials, which will cover L2ARC and ZIL/SLOG.
@levelnine123 4 years ago
@@ArtofServer Thank you very much, that would be very nice. There are not many who explain it in such detail and with so much background knowledge.
@dimitristsoutsouras2712 3 years ago
6:01 you left the checkbox at unprivileged mode. Doesn't that mean the CentOS container won't have access to your (probably shared) folders containing your media files? In general, what are the use cases for checking/unchecking this option? Thank you.
@baudneo 3 years ago
You can enable PCIe passthrough for a GPU in Proxmox on a Gen1 motherboard for the R710. You just need to enable unsafe interrupt mapping. See this Reddit post --> www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/? All I did was follow the directions up until creating the VM (the post uses Windows; I passed my GPU through to an Ubuntu VM).
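(For reference, enabling unsafe interrupt mapping is typically a one-line modprobe option on the Proxmox host, followed by an initramfs rebuild and a reboot; a sketch:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
update-initramfs -u)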
@locusm 4 years ago
Got any good resources for mapping out which generation of Dell servers maps to which CPU generation, etc.?
@ArtofServer 4 years ago
Maybe something like this: en.wikipedia.org/wiki/List_of_Dell_PowerEdge_Servers
@cunaz68 3 years ago
Is it worth noting that one should determine their own major/minor numbers from /dev/nvidia* and use those numbers rather than copy/paste the container configuration verbatim as noted above? It wasn't clear to me, and I don't want others to make a mistake.
@ArtofServer 3 years ago
That's a good point, but I thought I mentioned that in the video?
@ryanj2743 4 years ago
I think I saw 17 watts usage on nvidia-smi while you were HW transcoding. Would you happen to know the estimated CPU wattage when it was doing non-HW transcoding and your usage went up to 80+%? If accurate, I was just wondering how much it might be worth to invest in a P2000, 1660, etc. vs CPU transcoding regarding overall power usage efficiency.
@ArtofServer 4 years ago
That's a good question, but I did not measure the system power consumption while PLEX was transcoding. In PART 5 of this series, I showed that the system at idle (with drives) used about 163W and at full load used about 306W. I don't know if PLEX CPU transcoding loaded the server up to 306W or not, but as you saw, the CPU spiked and most definitely increased power by significantly more than 2W. By the way, it was not 17W during HW transcoding - the GPU read 17W during software/CPU transcoding, and 19W when GPU transcoding (seen @26:48). That's another advantage of GPU transcoding: it is much more energy efficient.
@muhammadamohsin 4 years ago
Is there a way to pass multiple GPUs through to VMs or containers? And what OS is the container or VM? I want to make a multi-client Minecraft server.
@ArtofServer 4 years ago
If using PCIe passthrough, I imagine that should be possible, but your server needs to be able to take the GPUs. If doing LXC containers like I did, I think you just need to pass the device nodes, so that is probably also possible, but I haven't tried something like that. That said, I don't understand your meaning of a multi-client Minecraft server... aren't all Minecraft servers multi-client? And why would you need a GPU? Is there some version of Java/Minecraft that can make use of a GPU on the server side?
@tbhinteractieve 4 years ago
Unfortunately my CPU kicks in instead of my GPU (GTX 750 Ti) :(
@RichardOpokuEngineer 1 year ago
This is great, but I have a challenge: I am unable to mount CIFS shares. I have my movies on OMV, which I share through NFS and CIFS, but attempting to mount them, I get "operation not permitted". I have a privileged container with Ubuntu; both containers are able to share the GPU, but Plex on the privileged Ubuntu container that has access to the SMB mounts still cannot make good use of the GPU without throwing network errors at me (s1001, etc.). It is rather killing my CPU, and my GPU is a Quadro K2200.
@at3tb 4 years ago
Too bad that it uses CentOS and not Debian as the container. Would have been nice to see the CUDA installation on Debian. (Translated with Google Translate.) Super Proxmox videos, thank you.
@timherrmann4628 4 years ago
This tutorial works for Debian too, but in this form only for Quadro/GTX cards. Tesla drivers are different and more complex for this LXC setup.
@welbo9766 4 years ago
The driver installation on the host was on Proxmox, which is based on Debian.
@timherrmann4628 4 years ago
@@welbo9766 The key to the CUDA LXC workflow with Tesla is CUDA version equality, because the Tesla driver includes the CUDA driver and the Quadro driver doesn't; for Quadro, the CUDA driver is optional to install.
@mikkelgeorgsen 4 years ago
@@welbo9766 It is not the same procedure inside the LXC - too bad he didn't include how he makes it work in Debian/Ubuntu containers.
@topstarnec 1 year ago
Complicated. I would just use Red Hat Linux and install Docker and KVM.