
What's the fastest VM storage on Proxmox?

ElectronicsWizardry
23K subscribers
51K views

ZFS, BTRFS, LVM, Directory. There are many options for storing VM images on a disk in Proxmox and other KVM-based hypervisors. In this video, I take a look at the features and performance of all of these different storage methods.
For my test system, I used a Xeon E5-2643 v4 system running Proxmox VE 7.2-7 with 128GB RAM and a PM1725 as the test SSD.

Published: 2 Oct 2024

Comments: 119
@paulwratt · 2 years ago
For those interested, Wendell just did a "what we learned" review of Linus' (LTT) petabyte ZFS drive failure - "A Chat about Linus' DATA Recovery w/ Allan Jude" - ZFS got another development boost (with more coming) as a result.
@xxxbadandyxx · 1 year ago
I feel like I have learned more from this 8:55 video than I have from scouring forums for hours and piecemealing things together. Thank you for the straightforward video.
@gg-gn3re · 7 months ago
Yeah, when you look at LVM vs LVM-thin, for example, you get trash information all over the forums and other sites. This guy has been the best source of information for various projects for many years.
@joshuaharlow4241 · 3 months ago
Agreed, I'm not sure how much time I saved, but it's a lot.
@tulpenboom6738 · 4 months ago
One advantage of LVM over ZFS, though, is that you can share it across hosts. If you have a cluster using shared iSCSI, FC or SAS storage (where every host sees the same disk), you can put LVM on that disk (on the first host; use vgscan on the rest), add it as shared LVM in the GUI, and all other hosts see the same volume group. Allocate VMs out of that group, and it's easy and quick to do live migrations. ZFS cannot do this.
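A minimal sketch of the shared-LVM setup described above, assuming a shared LUN visible as /dev/sdb on every node and a hypothetical storage ID "shared-lvm":

```bash
# On the first node: put LVM on the shared disk
pvcreate /dev/sdb
vgcreate shared-vg /dev/sdb

# On every other node: rescan so the new volume group shows up
vgscan

# Register it in Proxmox as shared LVM storage (also possible via the GUI)
pvesm add lvm shared-lvm --vgname shared-vg --shared 1 --content images,rootdir
```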
@theundertaker5963 · 1 year ago
Thank you for an amazing, straight-to-the-point, and concise video. I have actually been spending a lot of time trying to put together all the bits and pieces of what you managed to put into this fantastic video, for a project of mine I shall be undertaking soon. Thank you for the time you put into collecting and presenting all the benchmarks. You have a new subscriber.
@crossfirebass · 11 months ago
Not gonna lie...I need a whole new vocabulary lol. Thanks for the explanations. I kind of dove face first into the world of Virtualization and wow do I need an adult. I bought some pc guts off a coworker for $500. 64 x AMD Ryzen Threadripper 2990WX 32-Core Processor, 64 Gigs RAM (forgot the speed/version), and an ASROCK MB. I threw in 24TB of spinning rust and now learning how to VM/setup an enterprise. End goal...stay employed lol. Thanks again for the help.
@forrestgump5959 · 3 months ago
And how is it going so far?
@cryptkeyper · 1 year ago
Finally a video that is straight to the point on what I wanted to know. Thank you
@RomanShein1978 · 1 year ago
Great video. It is worth mentioning that it is possible to use the same ZFS pool to store all kinds of data (virtual disks, backups, ISOs, etc.). The user may create two datasets and assign the first dataset as ZFS storage and the second one as a directory.
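A hedged sketch of that layout, assuming a pool named "tank" (pool, dataset and storage names are only examples):

```bash
# Two datasets on the same pool
zfs create tank/vmdata
zfs create tank/media

# Dataset 1 as ZFS storage for VM disks and container volumes
pvesm add zfspool tank-vmdata --pool tank/vmdata --content images,rootdir

# Dataset 2 as directory storage for ISOs, backups and templates
pvesm add dir tank-media --path /tank/media --content iso,backup,vztmpl
```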
@magnerugnes · 10 months ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-oSD-VoloQag.html
@knsio · 12 days ago
Great video! Thanks
@nalle475 · 1 year ago
Thanks for a great video. I found ZFS to be the best way to go.
@LiterallyImmortal · 1 year ago
I've been trying to learn Proxmox the past couple of days and this was SUPER helpful. Thanks a bunch, man. Straight to the point, and you explain your opinions on the facts presented.
@MHM4V3R1CK · 1 year ago
Thank you for these videos. Very clear and answers the questions that come up as I'm listening. Satisfying!
@advanced3dprinting · 9 months ago
Really love your content. I hate that channels with way less info but flashy edits get the attention, when the guys that know their shxt don't get the same views.
@RetiredRhetoricalWarhorse · 7 months ago
I am coming to the realization of how far Proxmox is from being ready to compete with VMware. The way administration works, the absolutely bad documentation, and all the resources online are just so janky... Too bad. I'm even considering aborting the switch of my homelab over. I see no benefit versus just running the current ESXi without patches indefinitely.
@DLLDevStudio · 8 months ago
Btrfs has changed since this video was made; it should be way faster today. I wish for an updated video...
@ElectronicsWizardry · 8 months ago
I have been interested in BTRFS for a while now, and plan on taking a look at it in the future. It seems to still be in tech preview status, so I'm waiting for it to be stable before I look at it much more.
@DLLDevStudio · 8 months ago
@ElectronicsWizardry Hello, brother. It appears that the system is stable when using the stable kernel. I wish it had some effective self-healing capabilities, which would allow it to replace ZFS in some of my applications. Although ZFS is excellent, Btrfs seems to be faster already. Meanwhile, XFS is still the fastest but lacks any kind of protection.
@mmgregoire1 · 1 year ago
Ceph RADOS is definitely the way to go. I hope that the performance of BTRFS is improved in the future; I don't really care for RAID 5 or 6 and generally prefer 10, 1, or none anyway. BTRFS send and receive is a killer feature. I prefer that BTRFS's license lets it live in the kernel; this makes booting and recovery scenarios based on BTRFS potentially better with some work on the Proxmox side. Fingers crossed for BTRFS.
@lawrencerubanka7087 · 3 months ago
I'm with you! Ceph works a treat in conjunction with Proxmox HA. Ceph lets any node see the disk image, so there's no downtime when migrating a VM. We get replication across disks or hosts as well as the RAID-like erasure coding. I have great fun shutting down nodes running VMs and watching the VM hop across the network to another node, never missing a beat. The options offered by Proxmox are awesome!
@adimmx8928 · 1 year ago
I have a SQL query taking 15 seconds on a VM in Proxmox stored on an NVMe SSD. I created 3 other VMs, all running the same OS but on different filesystems (ext4, Btrfs and ZFS), and installed only the MariaDB server serving the same database, but via TCP, and I could not match the performance of the initial VM. Any ideas why? I only get close to that performance with an LXC container.
@ElectronicsWizardry · 1 year ago
I'm not sure what the issue you're having is; here are some things I think might be the cause. Check that VirtIO drivers are used for the VM to allow the best virtual disk performance. I'd guess the massive performance difference could be due to caching; I'm not sure how caching is set up for containers, but if RAM is being used as a cache, that large of a performance delta would be expected. Also, if your system supports it, I'd try doing a PCIe passthrough of the SSD to the VM, as it should allow the best performance by removing the overhead of virtual disks.
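A sketch of checking and setting the VirtIO disk options mentioned in the reply, assuming VM ID 100 and a volume on a storage called "local-zfs" (both hypothetical):

```bash
# Inspect the current disk controller and disk settings
qm config 100 | grep -E 'scsihw|scsi0|virtio0'

# Use the VirtIO SCSI controller with an IO thread for the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1,discard=on
```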
@VladyslavKudlai · 1 year ago
Hello dear EW, could you please review Proxmox 8 again, ZFS vs BTRFS performance?
@ElectronicsWizardry · 1 year ago
I think Btrfs is still in technical preview status currently. I'm waiting for it to reach full release, and then I'll take a closer look.
@AdrianuX1985 · 2 years ago
5:00 - After many years, the BTRFS project is still considered unstable. Despite this, Synology uses BTRFS in its commercial products.
@paulwratt · 2 years ago
yay for network storage devices that use proprietary hardware configurations. ("right to repair" be damned)
@carloayars2175 · 1 year ago
Synology Hybrid RAID (SHR) uses a combination of BTRFS and LVM. It avoids the problem parts of BTRFS this way while still delivering a reliable file system with many of the main benefits of BTRFS/ZFS.
@ElectronicsWizardry · 1 year ago
I think SHR also uses mdadm: mdadm is used for RAID, Btrfs is used for the filesystem and checksumming. If a checksum error is found, md delivers a different copy and the corrupt data is replaced. LVM is used to support mixed drive sizes.
@mmgregoire1 · 1 year ago
BTRFS is also used by Android, Google, Facebook, SUSE and many more...
@gg-gn3re · 7 months ago
Lots of things have used BTRFS commercially for years, as others have mentioned. BTRFS will be considered unstable for another 10 or more years, so don't let that stop you if you want to use it for some reason. We home users don't have the issue of what license certain stuff has since we don't resell, so we can use many things that these vendors can't/won't.
@gregoryfricker9971 · 1 year ago
This was an excellent video. May the algorithm bless you.
@Alex-sm6dx · 4 months ago
Great video, thank you!
@philsogood2455 · 1 year ago
Informative. Thank you!
@daniellauck9565 · 6 months ago
Nice content. Thanks for sharing. Is there any comparison or deep study of centralized storage with iSCSI or Fibre Channel?
@perfectdarkmode · 1 month ago
If you use ZFS, does that mean you would not want hardware RAID on the physical server?
@ElectronicsWizardry · 1 month ago
Yup, ZFS typically likes to do RAID itself and have direct access to the drives. You can run ZFS on top of hardware RAID and still get access to ZFS snapshots and send/receive.
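For illustration, a hedged sketch of the two approaches (pool name and device paths are examples):

```bash
# ZFS handling redundancy itself, with direct access to the drives
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# ZFS on top of a hardware RAID volume: one "disk" from ZFS's point of view,
# so snapshots and send/receive still work, but ZFS can't self-heal from
# redundant copies it manages itself
zpool create -o ashift=12 tank /dev/sdc
```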
@dgaborus · 1 year ago
At 7:07, "slight performance advantages"? Performance is 3x faster with PCIe passthrough than with ZFS or LVM. Although I prefer ZFS as well, for the flexibility.
@lecompterc83 · 3 months ago
No idea what was just said, but I’ll piece it together eventually 😂
@2Blucas · 5 months ago
Thank you once again for the excellent video and for sharing your knowledge with the community.
@gsedej_MB · 1 year ago
Hi. Is it possible to "pass" a ZFS directory or child dataset to a guest? The main idea is that ZFS is a filesystem, and the guest needs to have its own filesystem (e.g. ext4), which is overhead. So only the host should be doing filesystem operations, while the guest would see it as a folder. I guess ZFS would have to support some kind of server/client infrastructure, but without networking overhead...
@PeterBatah · 9 months ago
Thank you for sharing your time and expertise with us. Insightful and informative. Clear and precise.
@ChetanAcharya · 2 years ago
Great video, thank you!
@BenRook · 1 year ago
Nice presentation of what's available and pros/cons... good vid! Will stay tuned for future content... thx.
@smalltimer4370 · 11 months ago
I'm in the process of building an NVMe Proxmox server using a combination of an onboard NVMe drive with 4 x 2TB NVMe drives in ZFS RAID 10. That said, and based on your experience, would this be the optimal way to go for VMs? P.S. Having read multiple posts and comments on SSD wear, I remain a bit worried about my setup choice, as I'd like to get the most out of my storage system without sacrificing the life of the devices - i.e. 3 years would seem reasonable for a refresh, IMO.
@ElectronicsWizardry · 11 months ago
Yeah, a RAID 10 makes a lot of sense for VMs due to the high random performance. I wouldn't worry about SSD wear much for home server use, as most SSDs have more endurance than you would ever need, and they will go well over the rated limit. I'd guess the drives will be fine in 3 years. There are high-endurance drives you can get if you're worried about endurance.
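If you do want to keep an eye on wear, a quick check with smartctl looks roughly like this (device names are examples; attribute names vary by vendor):

```bash
# NVMe drives report "Percentage Used" and "Data Units Written"
smartctl -a /dev/nvme0 | grep -iE 'percentage used|data units written'

# SATA SSDs expose vendor-specific attributes instead
smartctl -A /dev/sda | grep -iE 'wear_leveling|total_lbas_written'
```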
@angelgil577 · 1 year ago
You are a smart cookie. Thank you, this info is very helpful.
@zebraspud · 7 months ago
Thanks!
@SteveHartmanVideos · 11 months ago
This is a fantastic primer on file storage for Proxmox.
@attilavidacs24 · 4 months ago
I can't get decent speeds on Proxmox with my NVMe or HDDs. I'm getting a max of 250Mb/s on a 4-HDD RAID 5 array virtualized, even with PCI passthrough, but unvirtualized it's 750Mb/s. Even my NVMe drive virtualized starts off at 800Mb/s, then drops down to 75-200Mb/s and fluctuates. I'm running the VirtIO SCSI controller. Why are my speeds slow?
@ElectronicsWizardry · 4 months ago
That's a strange issue I've never seen. What hardware are you using? Do you get full speeds on the Proxmox host using tools like FIO? Is the CPU usage high when doing disk IO?
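A rough fio invocation for testing on the host, as suggested in the reply (file path, size and job parameters are only examples; --direct may be ignored or unsupported on some filesystems such as ZFS):

```bash
# Sequential 1M reads
fio --name=seqread --filename=/root/fio-test.bin --size=4G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=60 --time_based

# 70/30 4K random read/write mix, closer to a typical VM workload
fio --name=randrw --filename=/root/fio-test.bin --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
    --numjobs=4 --direct=1 --runtime=60 --time_based --group_reporting
```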
@attilavidacs24 · 4 months ago
@ElectronicsWizardry I'm running a Ryzen 7900 CPU, an LSI 9300 HBA connected to 7 HDDs in 2 vdevs, and 1 cache SSD. One NVMe PVE boot drive, and I also have a Samsung EVO NVMe for VMs and a Mellanox 10G NIC. I will try some FIO benchmarks and report back. I have 64GB total RAM, and the CPU usage stays quite low throughout all the VMs. My HBA is using PCI passthrough to a TrueNAS VM.
@paulwratt · 2 years ago
That statement you made about the layers you need to adjust individually is not reflected in any graphs anywhere, and that's a shame, because it clearly demonstrates _another_ main benefit of using ZFS over LVM ( yay, look BTRFS is way out in front, oh wait .. ). Not sure how to take the "ZFS fakes Proxmox cache setting" point: for testing non-cached it _is_ relevant, but that is _not_ a real-world scenario, to the extent that you could attach a drive/device which has _no physical cache_ and ZFS will still happily cache that device, a more authentic real-world scenario ( _if_ you could indeed find such a device). The _best_ part about ZFS, as Wendell showed and admitted, is that when your (especially RAID) drive pool goes belly up, to the point software tools can not even help, you can still reconstruct the original data by hand if need be, as _everything_ needed to achieve that is there. BTRFS _might_ "get there in the end", as ZFS has had an extra 10 years of use, testing and development up its sleeve, but those BTRFS "features" that have not been "re-aligned" for years mean it's never going to be a practical solution, except in isolated cases; it's better off being used for SD-card filesystems, where it can extend the limited lifespan of the device (if set up correctly), and speed is already a physical issue (as long as you don't want to use said SD card on a Windows system .. ). Thanks for taking the time to do the review ..
@AdrianuX1985 · 2 years ago
For several years, the dedicated FS for SD cards has been F2FS (Flash-Friendly File System).
@Shpongle64 · 4 months ago
Bro I've been researching this topic for a couple hours on and off each day. Thank you for just combining this information into one video.
@Shpongle64 · 4 months ago
Generally, what I gathered was: set the physical storage to a ZFS pool (not a directory) and then have the VM disks set to raw.
@perfectdarkmode · 1 month ago
How does ZFS compare to Ceph?
@ElectronicsWizardry · 1 month ago
They're kinda different. ZFS in Proxmox is typically single-system only, and Ceph is generally for multiple systems.
@haywagonbmwe46touring54 · 1 year ago
Ahh thanks! I was looking for just this kinda video.
@poopbot5340 · 5 months ago
Great video, straight to the point! Confirmed my answer by 0:30, but stuck around to see how they all performed.
@jasonmako343 · 1 year ago
nice job
@andymok7945 · 1 year ago
Thanks, very useful info in this video.
@robthomas7523 · 1 year ago
What filesystem do you recommend for a server whose storage needs keep growing at a high rate? LVM?
@lawrencerubanka7087 · 3 months ago
Ceph. You can throw another OSD (drive) into a pool at any time. You have similar options for replication (mirroring) and erasure coding (like RAIDZ) as with ZFS or RAID, plus the ability to spread the storage across multiple nodes in a cluster. No need for periodic replication of your LVM-based images; Ceph does this in real time, continuously. All nodes see the same data at the same time.
@iShootFast · 1 year ago
Awesome overview and cleanly laid out.
@JJSloan · 1 year ago
Ceph has entered the chat
@danwilhelm7214 · 2 years ago
Well done! My data always resides on ZFS (FreeBSD, SmartOS, Linux).
@dominick253 · 1 year ago
I feel like there's a code in your blinking. Maybe Morse code?
@JohnSmith-iu8cj · 1 year ago
SOS
@VascTheStampede · 1 year ago
And what about Ceph?
@ElectronicsWizardry · 1 year ago
I was only looking at local storage in this video, so I didn't include iSCSI, Ceph, NFS and similar. I don't think there would be an easy way to compare Ceph to on-host storage, as it's made for a different use case, and I don't have the correct equipment for testing currently.
@scottstorck4676 · 1 year ago
Ceph lets you store data over many nodes, to ensure availability. If you need the availability Ceph provides, the kind of benchmarking done for this video is not something you would normally look at. I run a small six-node Proxmox cluster with Ceph, and the performance it provides is not really comparable with filesystems on single nodes, as the resources are used on the cluster as a whole. There are so many factors when dealing with performance on Ceph, including the GHz of a CPU core, the network speed, and the number of HDDs / SSDs / NVMe drives used, as well as their configuration. It is not something where you can compare benchmark results between systems, unless the hardware, software and configuration are 100% identical.
@lawrencerubanka7087 · 3 months ago
@scottstorck4676 ... and the network speed, and the network speed. :) I'm still floored by how fast Ceph runs in real-world use.
@jossushardware1158 · 3 months ago
What about Ceph?
@ElectronicsWizardry · 3 months ago
I didn't cover Ceph as it's not a traditional filesystem/single-drive solution like the other options covered. I plan on doing more videos on Ceph in the future. The quick summary is that Ceph is great if you want redundant storage across multiple nodes that's easy to grow. It's typically slower than a single drive in a small environment due to the additional overhead of multiple nodes, and having to confirm writes across multiple nodes.
@jossushardware1158 · 3 months ago
@ElectronicsWizardry Thank you for your answer. I have understood that enterprise SSDs with PLP are the only way to make Ceph faster. Of course, node links have to be at least 10Gb or more. Do you know whether a MySQL Galera cluster also confirms writes across multiple nodes? So would it also benefit from PLP SSDs?
@mikemorris5944 · 1 year ago
Can you still use ZFS as a storage option if you didn't install Proxmox using ZFS?
@ElectronicsWizardry · 1 year ago
Yeah, ZFS can be added to a Proxmox system no matter what the boot volume is set to. The boot volume only affects data that is stored on the boot drive, and storage of any type can be added later on.
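Roughly, adding a pool after the fact looks like this (pool name and disk path are examples; the same can be done from Datacenter > Storage in the GUI):

```bash
# Create a pool on a spare disk
zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE-DISK

# Make it available to Proxmox for VM and container disks
pvesm add zfspool tank --pool tank --content images,rootdir
```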
@mikemorris5944 · 1 year ago
@ElectronicsWizardry thanks again, EWizard
@Goldcrowdnetwork · 1 year ago
@ElectronicsWizardry So if adding a USB storage device like a 2-terabyte WD Passport drive (I know this is not ideal, but it's what I have lying around), would ZFS be a better choice than LVM or LVM-thin, in your opinion, for storing LXC templates and snapshots with Docker apps inside them?
@ElectronicsWizardry · 1 year ago
@Goldcrowdnetwork For practical purposes, there will be almost no difference. The containers will run the same on both. I'd personally use ZFS, as I like the additional features like checksumming and like using the ZFS tools. LVM would be a tiny bit faster, but it will likely be very limited by the HDD with both of these.
@SEOng-gs7lj · 1 year ago
I don't quite understand the remark that ZFS can connect "to the physical disks and goes all the way up to the virtual disks" at 4:13. I mean, doesn't LVM/ext4 in Proxmox provide the same? I'm trying to create an Ubuntu VM with a virtual disk formatted as ext4; is this correct? If not, is there a demo showing the "better" way? Thank you
@ElectronicsWizardry · 1 year ago
I think I said that wrong in the video. Other filesystems can be used as one layer between the disks and the VM. The point I was trying to get across was that ZFS has additional features that would require additional software if you wanted similar functionality with filesystems like ext4. ZFS, for example, supports RAID and snapshots, and in order to have similar features on ext4, mdadm would have to be used for RAID and LVM/QCOW2 for snapshots. I like using ZFS as there is one piece of software to handle the filesystem, RAID, snapshots, volume management and other drive-related operations. The filesystem your VM is using isn't affected by the storage configuration on the host, and using ext4 on an Ubuntu VM will work well.
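To illustrate the point about layers, a hedged sketch of the two stacks (device names, sizes and volume names are examples):

```bash
# ZFS: RAID, volume management and snapshots in one tool
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
zfs create tank/vmdata
zfs snapshot tank/vmdata@before-upgrade

# Roughly equivalent without ZFS: mdadm for RAID, LVM for volumes and snapshots
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate -L 32G -n vm-100-disk-0 vmdata
lvcreate -s -L 4G -n vm-100-disk-0-snap /dev/vmdata/vm-100-disk-0
```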
@SEOng-gs7lj · 1 year ago
@ElectronicsWizardry cool, thank you!
@SEOng-gs7lj · 1 year ago
I have a Proxmox (ZFS) host and an Ubuntu (ext4) guest. After installing MySQL in my Ubuntu VM, it takes 3 minutes to ingest an uncompressed .sql file; something is definitely wrong. Any idea what I can check/fix? Thanks!
@ElectronicsWizardry · 1 year ago
I'd take a look at system usage during the import in the VM first. What is the CPU or disk usage? Then, if it's disk-limited, check if other guests are using too much disk on the host.
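Some commonly used commands for that kind of check (a sketch; packages like sysstat and iotop may need installing first):

```bash
# Inside the guest: per-device utilization and iowait during the import
iostat -xz 2

# On the Proxmox host: which processes are generating disk I/O
iotop -o

# ZFS view of pool throughput on the host
zpool iostat -v 2
```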
@SEOng-gs7lj · 1 year ago
@ElectronicsWizardry I'm hitting 100% disk utilization... but there is hardly any activity apart from MySQL... seems to be a configuration issue, but I don't know where.
@Josef-K · 11 months ago
What about dRAID?
@ElectronicsWizardry · 11 months ago
I haven't looked at dRAID, and will take a look at it soon and make a video.
@Josef-K · 11 months ago
@ElectronicsWizardry Well, I was tinkering around today with a 4TB and a 3TB drive that I wanted to mirror. I ended up splitting them into 1TB partitions, so it let me create dRAID2 with only two drives (7 x 1TB partitions), one of which is a spare. This got me thinking - can dRAID be used for my Proxmox root (bare metal) in order to make Proxmox even more HA? And now I'm also wondering - is there any kind of performance and/or reliability gain (maybe even across multiple nodes) if I have even more partitions per disk for dRAID? The idea being you can stripe each partition's data across every partition in my cluster.
@shephusted2714 · 2 years ago
The big takeaway here is you want a NAS with lots of ECC memory and ZFS - a Z440 with 256GB RAM is about $1k, making it a great deal.
@WallaceReen · 9 months ago
You need to increase the wait time in your blink-function.
@DomingosVarela · 1 year ago
Hello, I'm installing the new version for the first time on an HP server with 4 x 300GB disks. I want to know the recommended option for using the disks: keep Proxmox installed on a single disk and use the rest in a ZFS pool for the VMs? What option do you recommend? Thanks, best regards
@ElectronicsWizardry · 1 year ago
Does the server have a RAID card? If so, I'd set up hardware RAID using the included RAID card. Then I'd probably go ZFS for its features, or ext4 if you want a tiny bit more speed. I will warn you that running VMs on HDDs will be a bit slow for most uses. If it doesn't have a RAID card, I'd probably use ZFS for RAID 10.
@DomingosVarela · 1 year ago
@ElectronicsWizardry Thanks for your response! My server has a RAID card and I disabled it, because ZFS doesn't work very well on top of hardware-configured RAID. If I use RAID 10 with the 4 disks I will only have the capacity of one of them; on this same disk will I install Proxmox and the VMs?
@ElectronicsWizardry · 1 year ago
Yeah, if you can disable the RAID card and use ZFS, that's what I'd do, as I'm a fan of ZFS. Using hardware RAID and ext4 would be a bit faster, especially if the hardware RAID card has a battery-backed cache it can use.
@DomingosVarela · 1 year ago
@ElectronicsWizardry I'm using an HP Gen10; it has a very good RAID card, but I would really like to use ZFS for its advantages with Proxmox. So I need some help understanding the recommended way to use the disks: separate the Proxmox installation from the VMs, or use RAID 10 across all disks and keep Proxmox and the VMs in the same pool?
@davidkamaunu7887 · 1 year ago
ext4 isn't faster than ext2, because it is a journaling filesystem like NTFS. Journaling filesystems have overhead from the journaling. Similarly, it wasn't good to have LUKS on ext3 or ext4.
@davidkamaunu7887 · 1 year ago
Another thing most people won't catch on to: never RAID flash storage (SSDs or NVMe), as you create a race condition that will stress the CPU and the quartz clock. Why? Because they have identical access times that are as fast as disk cache or buffer.
@gg-gn3re · 7 months ago
NTFS on Windows doesn't journal. It was designed to, but it was never implemented. Just like it is also a case-sensitive filesystem, but Windows disables that entirely. Their new filesystem has these features, mostly because NTFS breaks so much with their Linux subsystem. All in all, NTFS is more comparable to ext2 than to ext4. ext4 is also faster than ext2 when reading from HDDs (because of journaling)... on SSDs it depends on the type of data, but no journaling can be faster sometimes.
@daviddunkelheit9952 · 7 months ago
@gg-gn3re That's an observation from your experience. You should always qualify your statements. Otw 😬
@daviddunkelheit9952 · 7 months ago
@davidkamaunu7887 Intel has a couple of functions in 8th and 9th generation processors that allow for PCIe port bifurcation. This allows the use of the H10 Optane, which has NAND and Optane on the same M.2 socket. There is also Virtual RAID on CPU (VROC), which is found on Xeon Scalable and used for specific storage models. It requires an optional upgrade key in the D50TNP modules. RAID 0/1/5/10. These are VMD NVMe.
@gg-gn3re · 7 months ago
@daviddunkelheit9952 No, that's a fact, posted on Microsoft's website. The only automated journaling is metadata, and that is recent.
@teagancollyer · 2 years ago
I normally watch your videos in the background, but I actually focused on this vid today and noticed how much you blink, which, no offense intended, I found a bit distracting.
@paulwratt · 2 years ago
You probably could have _not_ said that, _no offense_ intended .. I think he is fully aware of it ..
@teagancollyer · 2 years ago
@paulwratt Yeah, I thought about not including it; I just felt it rude without it, and I meant it sincerely.
@AdrianuX1985 · 2 years ago
I didn't pay attention, only your comment suggested it. I don't understand people who pay attention to such nonsense.
@MarkConstable · 2 years ago
@AdrianuX1985 Because it is quite distracting. The quality of the content is excellent, but I had to look away most of the time.
@paulwratt · 2 years ago
@AdrianuX1985 It's fine, you didn't need to reply (unless no one else did).
@typingcat · 1 year ago
Why blink so much?
@abb0tt · 5 months ago
Why not educate yourself?
@lawrencerubanka7087 · 3 months ago
Don't be an ass.