I didn't realize the picture looked odd on video. The part of the picture visible in the video is a reflection; the right-side-up part is hidden.
how the hell do you manage to make everything so crystal clear for noobs like me haha. As always, quality content and quality explanations! Thanks a lot for sharing your knowledge with us! Keep it up! 👍
Fantastic video. I really appreciate the RAIDZ3 9-disk + spare rebuild times, and the mirror rebuild times. Right now I have data striped across mirrors (two mirrors, 8TB disks) that is starting to fill up, and I've been trying to figure out the next progression. Maybe a 15-bay server: 10 bays for a new Z3 + 1 spare array, which leaves enough space to migrate my current data over to it.
Why not use a good-quality RAID card with… 16 ports? Use 8 hard drives in a RAID #, with 8 SSDs wrapped around the spinning drives? That would solve the latency problem and really speed things up.
@2:15 can you explain how to set that up, or give a search term to look it up? When I installed Proxmox, I split my 256GB NVMe drive into the following sizes in GB: 120/40/40/16/16/1/0.5 (main, cache, unused, metadata, unused, EFI, BIOS). I knew about this, but I'm just now at the stage where I need to use the metadata and small-files feature.
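For anyone else wondering how metadata and small blocks get routed to an NVMe partition, here's a minimal sketch using ZFS's special allocation class. The pool name `tank` and the partition path `/dev/nvme0n1p4` are placeholders, not from the video; and note that a special vdev with no redundancy is risky, since losing it loses the whole pool:

```shell
# Add an NVMe partition as a 'special' allocation class vdev.
# WARNING: without a mirror, losing this device loses the entire pool.
# (ZFS will warn about the redundancy mismatch with redundant pools.)
zpool add tank special /dev/nvme0n1p4

# Besides metadata, also route data blocks up to 64K to the special vdev.
# This must stay below the dataset's recordsize (default 128K),
# otherwise *all* data lands on the NVMe.
zfs set special_small_blocks=64K tank

# Verify how space is being allocated per vdev.
zpool list -v tank
```

Search terms that should surface the details: "ZFS special vdev" and "special_small_blocks".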
I did a video on a 3-node cluster a while ago and used Ceph for it. I want to do more Ceph videos in the future when I have hardware to show Ceph and other distributed filesystems in a proper environment.
@@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox + Ceph cluster that aren't homelabbers running either nested VMs or very underpowered hardware as a proof of concept — and once it's set up, the video finishes!! In the past I built a reasonably specced 3-node Proxmox cluster with 10Gb NICs and a mix of SSDs and spinners to run VMs at work. It was really cool, but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and an emulated production environment with a decent handful of VMs running would be amazing to see!
This was a very interesting video, thank you for the explanation! :) Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊
Why are the data sizes so weird? 7, 5, 9? None of them is divisible by 2. Why not 8d20c2s? Because of the fixed stripe width, I thought it would be better to stick to the 2^n rule. Or am I missing something? And how does compression work here?
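The 2^n intuition is reasonable: dRAID allocates in fixed rows of d data sectors, so each record is padded up to a whole number of rows, and a d that evenly divides the record's sector count wastes nothing. A quick sketch of the padding math, assuming 4K sectors and the default 128K recordsize (32 data sectors per record):

```shell
# Padding overhead for d=7 vs d=8 with a 128K record on 4K sectors.
sectors=32                              # 128K record / 4K sector
for d in 7 8; do
  rows=$(( (sectors + d - 1) / d ))     # rows needed, rounded up
  pad=$(( rows * d - sectors ))         # padding sectors wasted
  echo "d=$d rows=$rows pad=$pad"
done
# → d=7 rows=5 pad=3
# → d=8 rows=4 pad=0
```

So at d=7 every full record carries 3 padding sectors (~9% overhead), while d=8 packs exactly. The same rounding is why compression helps less on dRAID than on RAIDZ: a compressed record still gets padded to a multiple of d sectors, so savings smaller than a full row can be eaten by padding.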