
Manage your Media Collection with Jellyfin! Install on Proxmox with Hardware Transcode 

apalrd's adventures
63K subscribers
104K views

Published: 26 Sep 2024

Comments: 184
@kyleolsen3305 · 1 year ago
Your videos have helped and inspired me so much. This time last year i barely knew how to use a terminal and now i'm daily driving linux and amassing a homelab proxmox setup with multiple nodes. my home network and i thank you.
@apalrdsadventures · 1 year ago
Glad you like it!
@donsurtube · 1 year ago
New to Proxmox, and I have to express how much I appreciate your videos, which have helped me enormously. There are lots of videos on the subject but you beat them all. Thanks again and keep it up!
@TheUkeloser · 7 months ago
I know this is an older video, but I just set up my jellyfin server and got QSV hardware transcoding using this guide. Thanks!
@apalrdsadventures · 7 months ago
Glad it was helpful!
@BeansEnjoyer911 · 1 year ago
Crazy how much can change in just 6 months. The Jellyfin docs are now different, and Proxmox 8 has some weird oddities. Either way, all transcoding was failing, and I was able to track it down in the Jellyfin Admin Dashboard logs. After checking the permissions of the GPU with "ls -l /dev/dri/", I noticed renderD128 was owned by sgx. Idk what sgx is, but switching it to the render group fixed it: > chgrp render /dev/dri/renderD128 The new Jellyfin docs will recommend adding to the render group. However, if you didn't do that, you can add the jellyfin user to it: > usermod -a -G render jellyfin Edit: for search-ability: mp4 mpeg4 transcode fail
@apalrdsadventures · 1 year ago
I'll try to run through this and pin a comment with the updated instructions for Debian 12 / Proxmox 8.
@hieroclesthestoic · 11 months ago
Another change here in the Jellyfin docs is that they're excluding the line passing card0 to the container. Adding that in and then changing the group to render fixed transcoding for me.
@AdamBramley · 8 months ago
I used this to get things working in Proxmox 7, then updated to 8 about 5 minutes after and immediately had to make further changes. The changes in these comments worked great for a 5th gen i3. One more thing worth noting is that you can get stats on the GPU by installing intel-gpu-tools in the LXC. Thanks for the writeup and the 7>8 additions!
@daviddunkelheit9952 · 4 months ago
I think sgx could be Software Guard Extensions…
@NickSale-q6y · 4 months ago
I'm working this out right now, instead of chgrp render /dev/dri/renderD128 shouldn't we change the /etc/pve/lxc/xxx.conf file to map the users correctly? (Or am I totally off?)
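Pulling the thread above together, here is a sketch of both approaches (the group names, GIDs, and the idmap numbers are assumptions — check `ls -l /dev/dri/` and /etc/group on your own host and container before copying anything):

```sh
# Quick fix on the Proxmox host: give the render node to the render group
chgrp render /dev/dri/renderD128
# Then, inside the container, put the jellyfin user in the render group:
usermod -a -G render jellyfin

# Alternative (the idmap route NickSale asks about): map the GIDs in
# /etc/pve/lxc/xxx.conf so the unprivileged container's video/render groups
# line up with the host's. Illustrative only — 44=video, 104=render are
# common Debian GIDs but not universal:
#   lxc.idmap: u 0 100000 65536
#   lxc.idmap: g 0 100000 44
#   lxc.idmap: g 44 44 1
#   lxc.idmap: g 45 100045 59
#   lxc.idmap: g 104 104 1
#   lxc.idmap: g 105 100105 65431
# (also requires matching root:44:1 and root:104:1 entries in /etc/subgid)
```

The idmap entries must cover the whole 0–65535 range with no gaps, which is why the single-GID passthrough lines are sandwiched between larger ranges.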
@mindshelfpro · 1 year ago
I run jellyfin on docker on a Core 2 Duo 4GB DDR2 laptop and a spinning 500GB SATA drive. Actually there are about 10 docker containers on this 2008 laptop, including homeassistant, cloudflared, sonarr, prowlarr, jellseer, and qbittorrent. In the future I will build a proper jellyfin docker stack, but in the meantime the containers are configured to work together manually (JF, PL, SN, and QB)... and that laptop also runs X. I access Jellyfin from all over the country. It's amazing what little hardware is required to run some very useful software.
@chyldstudios · 10 months ago
Dude, super cool video. I just started getting into Jellyfin and Proxmox and ran across your video.
@bokami3445 · 4 months ago
Just wanted to say Thanks for this video. Using the information you present, I managed to get JellyFin HW decoding working on my Proxmox cluster.
@SandboChang · 1 year ago
You can also do transcoding with an unprivileged LXC for better security; the only additional step is to map the corresponding group IDs between the host and the LXC for the video and render groups.
@EnlightenedBitFox · 1 year ago
And how?
@limebulls · 10 months ago
@EnlightenedBitFox yeah, how?
@UsernameWasLost · 2 months ago
thank you for explaining this so clearly. I was struggling to integrate my different containers, this helped a ton
@MinishMan · 1 year ago
So so sooooo helpful! Doing it in July 2023 - just 4 months after you - and the website instructions are a bit different. This homelab thing is a minefield! But I guess that's the point. Jellyfin didn't ask me to add the dev/dri/card0 mount.entry, so I didn't, and my group for the dev/dri/renderD128 mount.entry was "sgx". No idea why this is different but it all worked first time on a 7th gen Intel CPU. Thank you again!
@apalrdsadventures · 1 year ago
the card0 node makes sense, since that node can do rendering and also kernel modesetting (for video output) and the render node can just do rendering.
@Immortus27 · 7 months ago
Thanks for the guide, it really helped a lot. Now I'm finally able to set up Proxmox on the hardware where Jellyfin was installed on Ubuntu Server, just because I didn't know how to pass the iGPU to a container and make it work properly. I've tried to solve it using other guides, but they always covered only part of the solution. Your guide is really an all-in-one solution, and I'm glad that I've finally found it.
@apalrdsadventures · 7 months ago
Glad it helped!
@JimothyDandy · 1 year ago
You rock, my man. Love your adventurous ways. Skimming by my "normal" subs to see what my man, Apalrd, may have for us today.
@gravisan · 8 months ago
In the deep Canadian cold, I'm choosing to stay with CPU decoding, as this provides a side benefit of heating my room :)
@imzsoul · 1 year ago
legend, i finally have transcoding working!
@W1ldTangent · 1 year ago
Still think we may be brothers separated at birth 😂 Been on the Jellyfin train for a while now after getting disillusioned with Emby. I also have Sonarr, Radarr, Lidarr and Bazarr (all with Jackett search) mixed into the stack. I've been running this setup for a few years now, it's absolutely been the best in a long line of "torrentbox" iterations I've created over the years for my... uhhh.. linux ISO downloads, ya... Basically zero-maintenance, I've even had watchtower auto-updating my containers and besides the occasional breakage requiring pinning a version tag for a while until things get sorted upstream, you just keep feeding it more storage and consume... Linux.
@protacticus630 · 8 months ago
This is my wish to set up too. Would you mind sharing your hardware and software approach? I have the same TerraMaster device with 2x8TB with Proxmox on it, and I would like to have Samba shares where all downloads from the *arr suite will be saved. Can you please advise?
@sebastianleangres1799 · 7 months ago
@protacticus630 I'm doing the same thing this video did; I basically followed this channel's playlists for the samba/fileshare and jellyfin/hardware passthrough setup. Then all the arr programs + Jellyfin go in one container sharing a mount point with a dedicated torrent+VPN container. The mount point command is pct set [vm id] --mpX /yourpool/,mp=/mnt/yourpool. Took some doing to get everything working right, but it wasn't terrible.
@chetramsteak · 1 year ago
I can't thank you enough for this! I tried following a couple of other guides and spent hours trying to get Jellyfin set up and working properly on my Proxmox server, but I was running into issue after issue. This one worked perfectly and was really thorough! I'm still trying to work out how to get QSV working with my Core i9-11900K, but in the meantime, I at least have Jellyfin up and running. Thanks again!
@sbsaylors8 · 1 year ago
Thanks!
@apalrdsadventures · 1 year ago
Glad you like it!
@ChromeBreakerD · 1 year ago
Thank you very much. I got it working thanks to you. The only thing I had to do that you did not mention is rebooting the Proxmox host once.
@MrRedWA · 9 months ago
Awesome! Thanks for the instructions! Worked like a charm.
@fa.miebelzwett400 · 1 year ago
Hello apalrd, I appreciate your videos very much and like your content. When I followed along your adventure, I got an overflow on my LXC's boot disk since transcoding uses disk space.* I just want to mention for others following along that there is an option to throttle transcoding and that there also is a task within jellyfin to delete old transcoding files. (You can use this as a blueprint for a cronjob.) *I enlarged my boot disk because I thought, apt had filled it, but later I found the transcoding issue. Now I cannot revert to the small size since recovering from backup with --rootfs parameter doesn't work with ZFS subvolumes. 😅
@apalrdsadventures · 1 year ago
With ZFS it's still thin provisioned, so as long as you don't use the space it won't take more space if the disk is larger. Proxmox does not let you shrink volumes easily since they are supporting a bunch of backends, many of which don't support shrinking FS easily. Sorry about the transcode space issue, someone else in the comments suggested making a dataset just for transcode cache.
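The cleanup task mentioned in this thread could also be approximated with a cron job on the container; a sketch, assuming the Debian-package transcode path (verify yours under Dashboard → Playback → Transcoding, since the hypothetical path below may not match your install):

```sh
# /etc/cron.d/jellyfin-transcode-cleanup — illustrative only; the transcode
# directory path is an assumption. Every night at 03:00, delete transcode
# segments that haven't been touched in a day:
0 3 * * * root find /var/cache/jellyfin/transcodes -type f -mmin +1440 -delete
```

Jellyfin's own built-in "clean transcode directory" scheduled task is the safer first choice; a cron job like this is only a fallback if the built-in task isn't keeping up.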
@KeithTingle · 1 year ago
you have the most interesting topics on YT (in my geeky opinion)
@apalrdsadventures · 1 year ago
thanks!
@seimeianri · 1 year ago
Thanks for the video. I was about to start researching how to install Jellyfin on a 2011 Mac mini when the video went live.
@apalrdsadventures · 1 year ago
Glad I could help
@markkoops2611 · 1 year ago
You may want to change the transcode cache folder, by default it will use a folder on the OS mount point
@vidmonkey · 1 year ago
Where do you recommend the cache folder be located? I suspect that if it's on an SSD, it may shorten its life. Would a standard mechanical HDD be better?
@NetBandit70 · 1 year ago
How much caching do you need if you can do it on the fly? I'd think 64MB RAM would handle a few minutes of cache, to keep your disk untouched.
@ertibimbashi9135 · 1 year ago
Awesome work - Wish I stumbled into your profile sooner.
@housy2 · 1 year ago
Thanks a lot for this tutorial! Working well on an odroid H3
@IamJeffrey · 1 year ago
Subscribed! Thank you for the video. I was planning to buy a TerraMaster NAS too, but I was hesitant because of the transcoding performance until I watched this video.
@apalrdsadventures · 1 year ago
It's a really modern CPU, just a low end one for the 2-core unit
@IamJeffrey · 1 year ago
@apalrdsadventures What made you decide to install Proxmox on that NAS instead of TrueNAS Scale?
@apalrdsadventures · 1 year ago
I'm focused more on VMs/applications instead of file sharing, and TrueNAS isn't as good at virtualization and containers as Proxmox is. Both are using ZFS under the hood anyway. TrueNAS is great at being a storage server, but that's not the primary use case for an all-in-one server.
@mikerollin4073 · 8 months ago
Great stuff, this is perfect for my setup
@ndupontnet · 9 months ago
Excellent, thank you very much for that!
@apalrdsadventures · 9 months ago
Glad it's working well for you!
@Maisonier · 7 months ago
Amazing video! Thank you
@proteinman1981 · 1 year ago
Thanks a lot for this guide, you're a champion.
@DarrylGibbs · 1 year ago
Dude, you are a legend!
@FTLN · 1 year ago
Been waiting for this one :)
@diegofelipe2119 · 11 months ago
Awesome video, thanks!
@GutsyGibbon · 1 year ago
Excellent; I haven't tried it yet, but the detailed explanation is perfect. I have an Optiplex 3040 as a Proxmox server (at least in the testing phase), and it looks like it has a Skylake Intel GPU, so what you did here should also work for me. Thanks!
@markbifferos2765 · 1 month ago
Would love to know how to avoid the 20 second delay on every login on the container. I'm using DHCP instead of the static addresses used in the example here.
@ikorbln · 1 year ago
Thx for this video, it works easy as that.
@i7andy · 7 months ago
thanks dude
@Trains-With-Shane · 2 months ago
Is there a way to retroactively add an already-in-use mount point to the new container? For instance, I'm using Cockpit exactly how you showed us, and it's been working brilliantly. But I've already got shares, etc. set up and running. Can I share that existing mount point with the new Jellyfin container? It's not a huge deal; I could probably work around it by creating a mount point in fstab or something within the Jellyfin container and just use a CIFS mount. A bit sloppy, but at least I think I can. lol
@apalrdsadventures · 2 months ago
Yes, but not through the GUI. Create the container first without the mount point, then add it later. Find the mount point from the existing container using `zfs list` (it will probably be something like rpool/data-subvol-503-disk-1), and note the Mountpoint column (usually it's the same as the name, but not always). Next, add it to the container config using `pct set`. Choose a free mount point ID (0-31) which isn't in use for the Jellyfin container. The command is: `pct set 500 -mpX /rpool/data/subvol-xxx-disk-y,mp=/mnt/video`, where 500 is the ID, mpX is mp0 through mp31 (not in use by the container already), the first path is the path to the existing disk, and the second path is the path within the container.
@Trains-With-Shane · 2 months ago
@apalrdsadventures Thanks! I'll give this a try if I get time later this week.
@InsaiyanTech · 7 months ago
Should you do an arr stack in LXC, or should I do a Docker LXC and put them in there, or separate everything?
@elcapitanomontoya · 7 months ago
Solid guide, but I have a question about LVM/ZFS for the mountpoint. I have a server on which I run Proxmox with a ServeRAID_M1215 RAID controller and two storage drives set up on RAID-1 configuration. All of the documentation surrounding volume/filesystem setup on Proxmox says not to use ZFS on top of a hardware RAID controller. What are my options for creating a pool from which to share mountpoints? If the answer is lvm-thin, what would be the correct commands using lvm-thin for a shared media mount?
@HerrFreese · 7 months ago
Could you please explain, why you use mount points in Proxmox and then bindmounts to the LXCs? Is this because you are using ZFS? As far as I tried you can also (bind?)mount single LVM-Volumes and/or volumes from storage defined in proxmox directly to different containers (via the .conf files). This should also be possible with other "devices"? Am I missing something?
@apalrdsadventures · 7 months ago
In general I use bind mount points in lxc to share data between containers without going over the network. It does of course work for devices as well as directories.
@YannMetalhead · 9 months ago
Good video.
@jaroslavchytil5732 · 1 year ago
Nice ... one silly question: if I unmount the drive and mount it to a different Proxmox client, will the data be visible?
@MikeDeVincentis · 1 year ago
Should be. As long as you mount to the same share and that share is accessible to the other client.
@neail5466 · 1 year ago
I don't understand why people are so obsessed with transcoding; almost all modern devices can play back 10-bit 4K, even over a 2.4 GHz network. The hassle of transcoding is not only unnecessary but also futile. If someone is concerned about formats, VLC plays most of them. The only use case is accessing the videos over the internet, where bandwidth is a limitation. I don't personally believe you should open your NAS to the internet even if you could. That is another added risk.
@apalrdsadventures · 1 year ago
In general it's for smart TVs, since they can only playback what they can decode in hardware. Depending on the age of your collection, you might have a big collection of xvid or other codecs that don't have hardware implementations even if they aren't that hard to software decode.
@Superman12321 · 11 months ago
I was following your tutorial and setting up the container, but in the network section you lost me. You didn't explain where and how you got the static IPv4 and IPv6 addresses; I don't know where to get those from. I am new to this.
@PierricDescamps · 1 year ago
I'm struggling to get the same done on a VM rather than container , using pci passthrough for a thin client (Fujitsu s740) with an older celeron. Passthrough is set up but proxmox keeps saying the resource is already in use and refuses to boot the VM. If anyone has pointers...
@GeoffSeeley · 1 year ago
The host is binding a driver to the video card hence the reason it is in use. You can use the driverctl package to bind the card to vfio-pci driver early in the kernel boot and this allows the device to be used for pass through and stops the host from binding to the card.
@apalrdsadventures · 1 year ago
If you use a container instead of a VM, you don't have to do any PCI passthrough.
@liqm88 · 1 year ago
The way you explained this has helped me a lot. How can I know which formats my iGPU will support?
@apalrdsadventures · 1 year ago
Running `vainfo | grep VAEntrypointEncSlice` will give you a list of encoders the HW supports with VA-API. But you can also comb through the specs for that generation of CPU+GPU.
@aslanbarsk · 1 year ago
Ok, these videos are great, but hard to understand, even with the nice work you have done. I'm trying to get my NUC 13 going with Home Assistant, Zigbee2MQTT separated, Jellyfin, Synology NFS storage, etc. But I cannot for the life of me do this. Please make such a video!!! :D
@raddude1743 · 8 months ago
Are you still waiting for these answers?
@trevsweb · 3 months ago
Hi, thanks for the tutorial, but I'm majorly stuck now. I got my Samba mount working with Plex using the same method, but I'm not able to connect Sonarr and Radarr. Any tips?
@apalrdsadventures · 3 months ago
Is the username the same as the share name? It creates a 'shadow' share with the same name as the user, so if the user and share name match it will have problems. Other potential issues are related to using Sonarr / Radarr in unprivileged LXC containers without mount permissions. But if you are using the same system for LXC containers, you can share the mount point directly.
@henriquehff · 7 months ago
Your videos are so inspiring. A while ago I didn't even think about creating my own server; nowadays I'm configuring new services on a daily basis. Right now I'm trying to configure Tdarr, and I think the problem is the Intel drivers. What's the output of the command "vainfo" for you? I was trying to transcode to HEVC and couldn't figure out why it wasn't working. Then I saw that in the "vainfo" output, "VAEntrypointEncSliceLP" means I can encode and "VAEntrypointVLD" means I can decode, but here it appears I can only decode HEVC, not encode it; only h264 is available for decode/encode. I have a Gemini Lake CPU that was supposed to be able to encode HEVC, right?
@henriquehff · 7 months ago
Finally, after hours trying to get this working, all I needed to do was add the Debian unstable source repository and then install "intel-media-va-driver-non-free". Now I'm able to decode/encode in every format supported by the Intel GPU; even in Jellyfin, transcoding to HEVC is working.
@nights2walk · 1 year ago
Please make a video on how to mount a Google Drive folder as a media folder for Jellyfin in Proxmox.
@quintinignatiusfourie2308 · 6 months ago
How do I do what you did with the transcoding config, but with an Intel i5-6200U CPU?
@CaseyHancocki3luefire · 1 year ago
What about Arc GPUs?
@evanmarshall9498 · 1 year ago
I am having trouble following along with the hardware acceleration section of this video. Were you ever able to do one for Nvidia?
@MarkConstable · 1 year ago
How close would the Jellyfin video setup on this F2-423 go towards supporting a desktop system with GPU pass-through, particularly for Manjaro/KDE?
@chunkyfen · 1 year ago
Hi! Thank you for this tutorial. Do you know how I could mount SMB shares from my main Windows PC to the Jellyfin container on my Proxmox server? Thank you!
@kristianwind9216 · 8 months ago
I am getting an audio delay on Apple TV even when no transcoding is needed. This is not the case from e.g. a browser or on an iPad. Have you experienced this?
@neail5466 · 1 year ago
Great information. What is the "pct" in the passthrough? I use 'qm' for the QEMU agent.
@apalrdsadventures · 1 year ago
qm is to manage VMs, pct is to manage containers
@ivelinbanchev4337 · 10 months ago
Great one. Thank you so much! Sadly, most of the links are not available at the moment, or they have completely changed. PS: Would love to see a way to point Jellyfin to a NAS such as Synology/Xpenology or TrueNAS. Right now I am struggling to connect it to my Xpenology VM.
@apalrdsadventures · 10 months ago
Yeah, I've noticed that over the past year the Jellyfin docs have changed and they got rid of their Proxmox guide. I'll have to put up the commands on my website I guess.
@ivelinbanchev4337 · 10 months ago
@apalrdsadventures Yep. TBH, I've spent 40+ hours trying to set up my Jellyfin on LXC and nothing seems to be working for transcoding. I guess I can't pass through my video driver.
@E_Proxy · 2 months ago
I'm a total noob at computers and cannot make a freshly installed Jellyfin LXC on a freshly installed Proxmox server read or see my "movies" folder on a Synology NAS on the same network.
@markbifferos2765 · 1 month ago
I just set exactly that up tonight. The problem is the Synology talks in terms of shares but has several ways of sharing: CIFS/SMB, NFS, Apple, etc. Assuming a normal Windows SMB share, I think you need cifs-utils, which you must install with apt. Jellyfin doesn't concern itself with network browsing and has no capability to do that; you have to mount the share first, using Unix commands. I mounted the NAS at /mnt/nas with: mount -t cifs //nas/videos /mnt/nas I found that I needed to give my NAS a static address and make sure my router has the NAS in DNS, because the bare Jellyfin container has no way of browsing and looking up the address of my NAS simply using WINS (or whatever the heck the Microsofty thing is called that deals with IP lookup). So I had this situation where the Nemo browser on Linux Mint could find my NAS, but the Jellyfin server could not. I still don't see all my movies; gotta figure out why next.
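The mount described above can also be made persistent across reboots; a sketch, where the hostname, share name, and credentials are placeholders (cifs-utils must be installed, as the comment notes):

```sh
# One-off mount, as described above:
mount -t cifs //nas/videos /mnt/nas -o username=myuser

# Persistent version: an /etc/fstab line using a credentials file, so the
# password stays out of fstab. All names here are placeholders.
#   //nas/videos  /mnt/nas  cifs  credentials=/root/.smbcreds,iocharset=utf8,nofail  0  0
# where /root/.smbcreds (chmod 600) contains:
#   username=myuser
#   password=mypass
```

The `nofail` option keeps the host booting even when the NAS is unreachable, which matters on a Proxmox box that has to come up before the network does.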
@ear6 · 1 year ago
Thank you for this great walkthrough. Would you please write the commands for an Nvidia card transcoding setup? Are the same Mesa drivers fine for an Nvidia card? Thank you again!
@apalrdsadventures · 1 year ago
I don't have any nvidia cards and the proprietary drivers aren't installed by default like the Intel / AMD ones, so you'd have to follow the Jellyfin docs on that one
@Felix-ve9hs · 1 year ago
Man, I wish Jellyfin had existed 5 years ago when I first set up my media. I really would like to switch from Plex to Jellyfin, but I've already put countless hours into my library...
@NetScalerTrainer · 1 year ago
It's easy to mount a folder to your existing Plex file system and just use Jellyfin.
@giantxBash · 1 year ago
The Jellyfin docs don't have all the commands to paste for the PVE configuration now.
@NeverEnoughRally · 1 year ago
Can you maybe comment more on why you are using "options i915 enable_guc=3" vs. the "options i915 enable_guc=2" that the Jellyfin website says to use? In all my searching I was only able to find references to the 2, but not the 3. Is that specific to Proxmox? I'm currently setting mine up on Unraid. Is there some performance benefit to raising the number? Is there a limit to what you can put in there?
@apalrdsadventures · 1 year ago
It's a bitmask, there are two features which can be enabled (bit 0 and bit 1), so setting 3 enables both features. wiki.archlinux.org/title/Intel_graphics#Enable_GuC_/_HuC_firmware_loading has information on what the individual bits do, if you need to enable them with your specific hardware, and if they are the default on your hardware.
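Putting the bitmask explanation together, the module option lives in a modprobe config file; a sketch (bit meanings per the Arch wiki page linked above — confirm which bits your generation of hardware actually supports):

```sh
# /etc/modprobe.d/i915.conf
#   bit 0 (value 1) = enable GuC submission
#   bit 1 (value 2) = enable HuC firmware loading (used for low-power encode)
#   3 = both bits set
options i915 enable_guc=3

# Rebuild the initramfs and reboot for the change to take effect:
#   update-initramfs -u && reboot
```

So enable_guc=2 (the value the Jellyfin docs mention) loads only the HuC firmware, while 3 additionally turns on GuC submission; there is no meaning to values above 3 for this parameter.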
@NetBandit70 · 1 year ago
Do you have any feelings about doing bare metal linux with docker, vs proxmox and containers? Seems like a lot of stuff lately is being packaged for docker: jellyfin, nextcloud, unifi controller, etc.
@apalrdsadventures · 1 year ago
I'm not a huge Docker fan for a variety of reasons:
- It's designed to be opaque to the operator and immutable, which is good if you have a good build system to package your app for containers, but means there's no opportunity for customization of the underlying system by me. If I'm using a container built by someone else, I have to trust / like the way they've set up the system and assume they've made decent choices.
- Networking is particularly complex and I don't like it, vs LXC having a full network namespace in the container with a normal IP address, not port-mapped to the host via an intermediate docker network.
If I was deploying apps as a software company, I'd want to package my own things in Docker/Kubernetes containers as part of the software build process and manage them that way. Since I am not trying to deploy apps at scale and package them myself, having more control of the configuration of the Linux system is my preference.
@kchinnasamy-0 · 1 year ago
I'm using an AMD iGPU (5700G) and was not able to get it working. It would be great if you could do a video on that.
@apalrdsadventures · 1 year ago
The GPUs in that generation of APU should be supported by the same driver as all the other GCN-based Radeon cards and there should be absolutely no difference in configuration. I'm guessing your issue is that the chip is newer than the kernel version. You can see if the amdgpu driver is loaded (lsmod | grep amdgpu), and lspci -v to see if the graphics card has the amdgpu kernel driver loaded (look for 'VGA Compatible Controller'). If it's just not loading the right driver, update to the Proxmox experimental kernel 6.1 - apt update && apt install pve-kernel-6.1
@davidariza2320 · 8 months ago
Why does this look way easier than using TrueNAS?!
@loopback2 · 1 year ago
Hey, have you done any NAS benchmarks?
@apalrdsadventures · 1 year ago
Not really, I've mostly focused on keeping things general enough to apply to more hardware
@mjmeans7983 · 1 year ago
Have you seen any USB 3 based dedicated transcoders?
@apalrdsadventures · 1 year ago
I haven't seen anything like that, but an iGPU in a somewhat modern system should work fine
@dj-aj6882 · 1 year ago
Hey apalrd, I love your little project! Do you think it would be possible to cluster it up at different family homes for Nextcloud?
@apalrdsadventures · 1 year ago
Does Nextcloud natively support clustering? I haven't used Nextcloud much. Proxmox can't reliably cluster over WAN networks due to the latency involved, but you can replicate across them as a backup (i.e. you backup to other houses, they backup to you).
@dj-aj6882 · 1 year ago
@apalrdsadventures Yes it does, but it might surpass the fair use policy. My thought for now is: a VPS running a Nextcloud node for public sharing, Nginx or another proxy manager, and Headscale for a frontend LAN and backend LAN. The home server running a local Nextcloud node and Pi-hole as local DNS to filter the network and point at the local node. More apps like Jellyfin would be cool as well; I just wonder if it is possible to integrate Jellyfin into NC so that family can use it in one interface. Otherwise SSL could be an option. The main purpose would be to enforce data safety with multiple locations and to exploit faster up- and download speeds than you can get in my town. The home IP would stay hidden as well, and only a datacenter is public.
@strandvaskeren · 1 year ago
What's the advantage of doing the ZFS-to-container stuff? Why not just add the media files to the fileserver VM and have the Jellyfin VM get its content from there? I personally leave all the handling of storage to Proxmox and just add virtual disks to the VMs. What benefits am I missing?
@apalrdsadventures · 1 year ago
It's more tricky to mount shares in containers since mounting has to be done by the kernel. Not a lot more, but enough to make this approach easier (at least with ZFS). For VMs there's no benefit to going outside of Proxmox. For containers there isn't usually either, but here we can share a dataset between two containers without the overhead of going through the virtual network. It's also possible to do this by creating a mount point on one container in the GUI and mounting that as a mount point on another container, but then you will have issues if you delete the first container in the future.
@strandvaskeren · 1 year ago
@apalrdsadventures Thank you for the reply. I tend to use VMs over containers and find the virtual network transfer speed between local VMs runs at PCI speed, way higher than the storage on the Proxmox host, so there's no real disadvantage to going through the virtual network.
@apalrdsadventures · 1 year ago
In this case, using a container over a VM is important since we wouldn't otherwise be able to use the iGPU for transcoding.
@MrCriistiano · 10 months ago
@apalrdsadventures Does ZFS handle the file locking in case 2 CTs try to write to the same file at the same time?
@zachboatwright · 1 year ago
I have a 16TB hard drive, but apparently it has hardware-level RAID, so Proxmox said I can't add NFS storage to it. Is there a way to have shared storage space between my fileserver and Jellyfin with LVM storage?
@apalrdsadventures · 1 year ago
A single drive shouldn't have hardware RAID, are you sure it's not just a generic warning and not based on your actual hardware?
@zachboatwright · 1 year ago
@apalrdsadventures Yes, that was the issue. Just a generic warning.
@jonathand5762 · 1 year ago
If I'm only using the media files (movies/TV shows) for Jellyfin, do I still need to mount the media drive to the fileserver, or am I fine to just mount it to the Jellyfin LXC? Also, I thought it was recommended to first mount the media drive to the fileserver LXC and then have Jellyfin access the media from the fileserver using Samba. From what I gathered, this was to limit the chances of data corruption by two separate processes potentially trying to read/write at the same time, but I guess this wouldn't happen since the two LXC containers live on the same kernel? Any advice on the matter would be greatly appreciated!
@apalrdsadventures · 1 year ago
You'd only need to setup the fileserver mounts to copy media to Jellyfin, if you have another way to copy files you can use that instead. Since the two containers are running in the same kernel there's no danger of file contention.
@jonathand5762 · 1 year ago
@apalrdsadventures Thank you for the quick reply! What would be the reason for copying data to Jellyfin? Doesn't Jellyfin/Plex simply need to know its location by supplying Jellyfin the mount points for the media (in my case an external HDD connected to the host)?
@apalrdsadventures
@apalrdsadventures 1 year ago
Ah I see, you already have a full drive and just want to mount it. In that case, the same mount process works, you just need to mount the external drive on the host first (probably in /etc/fstab)
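An /etc/fstab entry on the host for that external drive could look like this sketch (UUID, mount point, and filesystem type are placeholders for your drive):

```
# /etc/fstab on the Proxmox host (all values are placeholders)
UUID=0000aaaa-bbbb-cccc-dddd-eeeeffff0000  /mnt/media  ext4  defaults,nofail  0  2
```

The nofail option keeps the host booting even if the external drive is unplugged.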
@rapolo01
@rapolo01 1 year ago
Can this transcoding guide be done with an Nvidia graphics card?
@apalrdsadventures
@apalrdsadventures 1 year ago
It's a bigger pain since Nvidia's drivers aren't in the kernel and they don't use the standard Linux tools like vainfo for testing. You need to install the proprietary driver on both the host and in the container so the versions of the tools line up with the kernel module version.
@rapolo01
@rapolo01 1 year ago
@@apalrdsadventures Question: does the AMD part at 18:51 apply to a processor with an integrated GPU?
@apalrdsadventures
@apalrdsadventures 1 year ago
Yes, it will work with any GPU using the radeon or amdgpu drivers (which is everything from AMD in the last ~15 years)
@rapolo01
@rapolo01 1 year ago
@@apalrdsadventures I exchanged my CPU at Micro Center (luckily I still had warranty on it), got a Ryzen 5 5600G, and it works flawlessly. Thx dude.
@scentilatingone2148
@scentilatingone2148 4 months ago
Where's everyone ripping from these days
@EnlightenedBitFox
@EnlightenedBitFox 1 year ago
Is it possible to configure VAAPI and NVENC drivers/encoders at the same time and then choose between them in Jellyfin?
@apalrdsadventures
@apalrdsadventures 1 year ago
Mixing Intel and AMD (with open-source drivers) generally works fine. I'm not sure how much Nvidia's proprietary drivers mess up the open-source library binaries, so I can't say whether they will break VAAPI. Nvidia replaces things like libGL with their own, and that can cause issues mixing Nvidia with other GPUs in other contexts.
@EnlightenedBitFox
@EnlightenedBitFox 1 year ago
@@apalrdsadventures At this point the Nvidia encoder works but VAAPI doesn't, and I don't get why not. I did all of your steps, and before adding the Nvidia card it worked
@apalrdsadventures
@apalrdsadventures 1 year ago
It sounds like the Nvidia driver overwrote the libva binaries, but I don’t have an Nvidia gpu to test with. That’s how they handle other things like OpenGL (instead of using Mesa like everyone else on Linux).
@EnlightenedBitFox
@EnlightenedBitFox 1 year ago
@@apalrdsadventures And how can I change it? I really would like to use VAAPI with my Intel 6500
@apalrdsadventures
@apalrdsadventures 1 year ago
I have no idea, the nvidia proprietary driver is to blame for this problem
@NetScalerTrainer
@NetScalerTrainer 1 year ago
Any recommendations for a Proxmox system where the CPU supports PCI pass-through but the BIOS does not? Will this work?
@apalrdsadventures
@apalrdsadventures 1 year ago
In general you always need BIOS support to pass through PCI devices to VMs. However, for a container-based solution like this, there is no need for PCI pass-through, since the driver runs on the host.
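The container-side device wiring the video sets up (following the Jellyfin LXC docs) is roughly these lines in the container config. Minor device numbers can differ per system, so treat this as a sketch:

```
# /etc/pve/lxc/<CTID>.conf: GPU node passthrough (major 226 is the DRM class)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

No IOMMU or BIOS support is involved; the host kernel keeps the driver, and the container just sees the device nodes.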
@NetScalerTrainer
@NetScalerTrainer 1 year ago
@@apalrdsadventures that is good to know!! I will proceed!
@GodminerXz
@GodminerXz 1 year ago
Having trouble with hardware acceleration on an Intel i5 6400. The output of ls -l /dev/dri is: crw-rw---- 1 root ssl-cert 226, 128 Apr 27 15:55 renderD128. This is different from the video, where the device is owned by either input or render. Any idea how I can fix this?
@apalrdsadventures
@apalrdsadventures 1 year ago
In the container, find out what group video is (getent group video) and use that GID instead in the LXC config
@GodminerXz
@GodminerXz 1 year ago
@@apalrdsadventures I get video:x:44:jellyfin from getent, so I changed the 'c 226:0' to 'c 44:0', but no difference. Sorry if I'm doing something stupid; I'm very new to the Linux command line
@apalrdsadventures
@apalrdsadventures 1 year ago
Is the output on the container side or the host side? I might have sent you down the wrong path a bit. On the container side, leave everything as it was on the host (226:0 in the config), and chown it to video or render (chown :video /dev/dri/card0) (chown :render /dev/dri/renderD128) within the container.
@GodminerXz
@GodminerXz 1 year ago
@@apalrdsadventures That managed to fix it! Thanks a lot for your help and patience with a linux noob. Any ideas why this happened in the first place? I didn't see it mentioned on the jellyfin documentation for lxc and proxmox either...
@iPhonesuechtler
@iPhonesuechtler 1 year ago
Please help: what do I use instead of "zfs list" to find the mount points if my storage isn't ZFS? Otherwise a very, very great video, thank you so much! edit: OK, I redid everything and have my storage as ZFS now ^^ Also, did you create a ZFS filesystem inside a ZFS filesystem there? Is "dpool" already a ZFS storage, and you created "dpool/media" as a filesystem inside a filesystem? Am I in a f***in k-hole right now? What is going on? Can somebody please help me out here? °__°
@apalrdsadventures
@apalrdsadventures 1 year ago
ZFS is a lot of things (it does RAID, volume management, and filesystem), so within the zfs pool 'dpool' there's a hierarchy of zfs datasets which can each be mounted in different places and have different options.
@iPhonesuechtler
@iPhonesuechtler 1 year ago
@@apalrdsadventures Thanks! Great stuff, and very cool that I got a quick answer too. Do you think you can make a video where you explain the networking part in more detail? Maybe on a broader scale than Proxmox/Jellyfin (though Proxmox makes a good example, I think): local/public IPs, IPv4 and IPv6, VLANs, and WHYY it is 192.168... or 172.... or 10.0... or 255.255... How does it work in relation to VMs and containers? Safe practices? Firewall configuration? And what is CIDR? How do you use it right? It seems to be required for container setup. The local network defaults are something I would like to know more about. Or is there good info somewhere, and if so, could you point me in the right direction? That's a lot to ask, so I won't be expecting an answer here, but I still wanted to ask. Thanks again for everything so far, the videos are great value, keep it up :)
@TintiKili69
@TintiKili69 1 month ago
i know some trans coders
@rico7772007
@rico7772007 1 year ago
Do you have to install any AMD drivers on Proxmox or the container? How do you check which graphics card is which when you have two graphics cards? Is there a command for it? I'm using an AMD FirePro card.
@apalrdsadventures
@apalrdsadventures 1 year ago
Intel and AMD should have the drivers in-kernel, so as long as they aren't too new they should just work. You can try 'lspci -v', find the card, and see if the kernel driver in use is 'amdgpu'. If it is, then the driver is happy. If it's 'radeon', then it's an older card (amdgpu is early GCN through modern RDNA)
@rico7772007
@rico7772007 1 year ago
@@apalrdsadventures To make the question precise: you mentioned in your video that you may have to change from card0 to card1 when you have two cards, but I can only identify my card in Proxmox as 08:00.0 VGA compatible controller. When I type the command 'lspci -v', the outcome is: Kernel driver in use: radeon; Kernel modules: radeon, amdgpu. So the kernel driver is radeon. Any suggestion what I could do now? I'm using a FirePro W4100. How do I change to the amdgpu kernel driver?
@apalrdsadventures
@apalrdsadventures 1 year ago
If you loaded the radeon module then it just means you have an older GPU. It should still show /dev/dri/cardX and /dev/dri/renderDX nodes. ls -l /dev/dri/by-path/ will indicate the path of the device, which points to the cardX and renderDX nodes. This should show the path on the PCI bus, and you should be able to find the 08:00.0 encoded into that path. In my case, I had two GPUs; my Radeon card was card1, but I passed it through as card0 for consistency (card0 and renderD128 are assumed if not given by the software). If you only have one GPU it will be card0 and renderD128. Numbering is sequential, but not all cards have render nodes.
@PoopaChallupa
@PoopaChallupa 1 year ago
Man needs better thumbnails if he's going to increase viewership. They all look like homework.
@GreySkullification
@GreySkullification 1 year ago
Not using dark mode? Unacceptable. 10/10 for content. 0/10 for style. It's so bright I could not follow along.
@Hyped007
@Hyped007 1 year ago
Bro, change the thumbnail. I know it's clean, but you don't get any clicks.
@kidsythe
@kidsythe 1 year ago
half height Intel GPU 😁 transcode beast 🦾
@sudokillme
@sudokillme 6 months ago
Is chmod -R 777 a good idea? Sounds like a security issue
@markbifferos2765
@markbifferos2765 1 month ago
I don't think that's required anymore; I redid my setup from scratch and didn't need it.
@sanchOlabs
@sanchOlabs 8 months ago
Thanks for the tutorials, watched two until now, both super packed with information ;)
@poppipo1222
@poppipo1222 1 year ago
Doesn't work on Coffee Lake. edit: works on Coffee Lake lol
@apalrdsadventures
@apalrdsadventures 1 year ago
Did you try it without the low power encoding mode? It's fairly specific which chips need and which don't need that enabled.
@poppipo1222
@poppipo1222 1 year ago
@@apalrdsadventures Hello, turns out it was my fault! It turned out I had somehow deleted or disallowed my Proxmox root user from accessing /dev/dri; I don't know how, or if that's even the case. I reinstalled Proxmox and followed your guide again (minus the LXC device mount; Jellyfin has changed some lines there recently) and it works perfectly!!! You're the best!!!
@theshemullet
@theshemullet 1 year ago
It would be interesting to see you do the same type of things, but with Unraid. Proxmox is great, but I kind of think of it only when I want a few servers in a cluster. I use Unraid for single host setups.
@NetBandit70
@NetBandit70 1 year ago
Why not just use bare metal linux then?
@theshemullet
@theshemullet 1 year ago
@@NetBandit70 Is bare metal Linux a distro?
@NetBandit70
@NetBandit70 1 year ago
@@theshemullet bare metal means running linux directly on physical hardware, not inside a VM or container
@theshemullet
@theshemullet 1 year ago
@@NetBandit70 I know that. I thought you were saying there was a distro called that. The reason I use Unraid is that I like the interface. Unraid is running on bare metal, but I want to be able to run VMs and containers, as well as have file-sharing abilities.
@NetBandit70
@NetBandit70 1 year ago
@@theshemullet bare metal linux with cockpit might be of interest
@Simon-xi8tb
@Simon-xi8tb 5 months ago
I did everything in this video but I get no card0 in /dev/dri/, only renderD128... hmm
@apalrdsadventures
@apalrdsadventures 5 months ago
on the host or in the container? You actually only need renderD128 to render.