
How RAID Works 

Dave's Garage
811K subscribers
80K views

What is RAID and how does it work? Why would you use RAID and what are the benefits in terms of performance and reliability? Dave Explains mirroring, striping, JBOD, RAID levels, and ZFS.
For my book "Secrets of the Autistic Millionaire": amzn.to/3diQILq
Upcoming LIVESTREAM: Tentative for Sunday the 9th at 10AM PST, 1 PM EST.
ERRATA: RaidZ2 can accommodate 2 failures, RaidZ3 can accommodate 3. Whoops!
Discord Chat w/ Myself and Subscribers: / discord
Primary Equipment (Amazon Affiliate Links):
* Black and Decker Stud Finder - amzn.to/3fvEMuu
* Camera: Sony FX-3 - amzn.to/3w31C0Z
* Camera Lens: 50mm F1.4 Art DG HSM - amzn.to/3kEnYk4
* Microphone: Electro-Voice RE 320 - amzn.to/37gL65g
* Teleprompter: Glide Gear TMP 100 - amzn.to/3MN2nlA
* SD Cards: Sony TOUGH - amzn.to/38QZGR9
Experiment with Houston Command Center or Learn More: www.45drives.co...

Published: 1 Oct 2024

Comments: 323
@amonynous9041 2 years ago
calling his storage unit lil nas is such a power move
@DavesGarage 2 years ago
Not as much as "Let's ask the big organ" :-)
@eadweard. 2 years ago
Mine's called "illmatic".
@user-yr1uq1qe6y 2 years ago
My teen thinks I’m cool because I told her I was watching a little nas video.
@mikkelbreiler8916 2 years ago
@@user-yr1uq1qe6y Dad humour. Right there.
@shez666 2 years ago
I hope he mounts it as the X: drive in Windows
@kevinshumaker3753 2 years ago
The one thing I've learned from experience: DO NOT USE ONLY DRIVES FROM A SINGLE BATCH / DATE CODE & MANUFACTURER. Mix your drive manufacturers. Mix your date codes. We had a 500TB Server go south within days due to the drives being all from the same manufacturer and date code, and a lot of drives failed and cascade failed. They were under warranty, and we did have a good tape backup, but it took a LONG time to do the swap, mixing, and restoring, the whole time we were down (about 10 days, tape is SLOW).
@eadweard. 2 years ago
Was it a bad batch?
@talibong9518 2 years ago
And when using random drives, you'll naturally end up with the best drives for the job anyway as the shit ones get replaced over time and you know what to avoid.
@Doesntcompute2k 2 years ago
^^^ THIS. But I've got to add, too: FLASH YOUR FIRMWARE ON THE DRIVES TO THE LATEST, before adding to a RAID volumeset. You'll thank yourself. Put a sticker on the drive AND RECORD the version, date of FW, and date of flashing. It's saved me a lot whether it's four drives in a WS or 400 drives in an EMC. 😁👍
@DrakiniteOfficial 2 years ago
Good to know, thanks!
@kevinshumaker3753 2 years ago
@@eadweard. Yep, but the manufacturer wouldn't admit it and do a recall until I found a bunch of users reporting the same problem. When confronted, they issued the recall notice. I had used the same drives in regular PCs and other servers, and they all got replaced.
@katbryce 2 years ago
I don't know how this storinator thing works, but in FreeBSD (and Solaris), which have had zfs for a lot longer than Linux, RAID-Z1 is sort-of equivalent to RAID 5, RAID-Z2 is sort of equivalent to RAID-6, and RAID-Z3 can cope with three drive failures. One important thing about zfs is that it should always have access to the raw drives. You should not create a zfs file system out of "drives" that are RAIDed using some other method. One big advantage that zfs has over standard RAID is that it can deal with non-catastrophic disc failures - where the disk is apparently still working, but giving back wrong data. In standard RAID, you get a parity error and you know something is wrong, but because you don't know which drive has failed, you don't know what the right answer is. In zfs, you will have a much better chance of knowing which drive is reporting the incorrect data.
@alessandrozigliani2615 2 years ago
Confirmed. I don't think Storinator works differently than FreeBSD, Linux, or Solaris for that matter if it is ZFS. RAID-Z1 is similar to RAID-5, RAID-Z2 is similar to RAID-6, and RAID-Z3 is a level further. The main difference from traditional (hardware) RAID is that if you have only partial failures of some sectors over multiple drives, there is a chance even in RAID-Z1 that you can still recover the data by mounting the pool read-only. In traditional RAID, the disk is taken offline and is lost even with minimal problems.
@LiraeNoir 2 years ago
Storinator is just the hardware: the case, backplanes, etc. Otherwise, it's a regular "basic" server PC. And yes, in the age of ZFS, RAID is a bit... past century :) RAID doesn't protect against bit rot, doesn't really know which bits are the bad ones in case of conflict (and worse, doesn't tell you that explicitly), and the list goes on for quite a while. Which is why you'll see a LOT of people put something like TrueNAS on their Storinator. Plenty of storage for cheap, and robust software to handle the ZFS pool of disks and turn the whole thing into a storage appliance if one so wishes.
@butstough 2 years ago
came down here to find this comment. was worried my 15 disk wide raidz2 had only single disk fault tolerance for a second lmao.
@GuttersMN 1 year ago
20 years as a storage/backup admin - I still have nightmares from a RAID 5 double-disk failure over a decade ago. Manager wouldn't listen and insisted on a 15-disk RAID 5 set. Guess who had to recover the data when it failed AFTER he left the company! RAID 5 is evil, RAID 6 is better, erasure coding is better yet! And never forget the 3-2-1 rule: 3 copies, 2 different media, one off site.
@jeremyroberts2782 2 years ago
RAID is great, but there are always caveats. If you buy 10 new disks, don't be surprised if they all fail within a few weeks of each other; such is modern manufacturing quality. If you buy cheaper SATA drives they have zero internal monitoring and can suffer from bit rot; you may not realise you have bit rot on old data until you try to replace a failed drive. Most storage systems that use SATA drives recommend RAID 6. SAS drives are more expensive but have better controllers that check and refresh data to stop bit rot. SATA drives may not be designed to run 24/7 in a NAS box, so buy proper NAS drives or you could be replacing them sooner than you think. A NAS drive is at maximum usage and strain when you replace a drive and the data is being copied to the new drive; don't be surprised if this is when one of those old drives fails. Don't underestimate how long it will take to rebuild an array after replacing a drive: you are talking days or even weeks with those 14TB or larger drives and some smaller consumer NAS boxes. Three copies of data is always worth having.
@kevintrumbull5620 2 years ago
The ZFS equivalents to raid are RAIDZ1 for RAID5 (Single parity), RAIDZ2 for RAID6 (Double parity), and RAIDZ3 for triple parity RAID. The number after "RAIDZ" is the number of drives (parity) that you can lose before you have data loss. It's worth noting that ZFS contains a large number of advancements beyond what traditional RAID offers, such as data integrity guarantees, transactional operation (which keeps the write hole from corrupting your array), and more.
@peterworsley 9 months ago
I noticed that as well
@meggrobi 2 years ago
I do like Dave's "conversational" presentation; it's interesting yet not over the top. I also like the trip down memory lane as I relate to many of his anecdotes. The loss of two drives in a large array is a real possibility when you take into account mean time between failures.
@stapedium 2 years ago
A question I’ve always wondered about mirrored systems. How does the computer know which of the two is broken?
@1971merlin 2 years ago
Relies on the drive itself returning an error code saying the data could not be read (or written).
@giornikitop5373 2 years ago
Yes, check Hamming code and you will find that it's both easy to understand and kind of genius for its time. Although it was primarily made for communications, the method is used in all sorts of different areas. Basically, with a bunch of XOR operations, which even the most pathetic chip can do fast, you can find if and where an error is present.
@1971merlin 2 years ago
@@giornikitop5373 there is no hamming data in a mirror.
@giornikitop5373 2 years ago
@@1971merlin how are mirrors checked for errors?
@wesley00042 2 years ago
@@giornikitop5373 The drive has its own ECC and reports sectors as bad if it can't get a good read.
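To make the XOR idea from this thread concrete, here is a minimal Python sketch (purely illustrative, not how any real controller is implemented) of single-parity reconstruction. Note that it only works because the failed drive identifies itself by returning an error, exactly as described above; parity alone cannot tell you which drive is lying, which is where ZFS-style checksums come in.

# Minimal sketch of XOR parity reconstruction (illustrative only)
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three "data drives"
parity = xor_blocks(data)            # one "parity drive"

# Drive 1 fails and reports an error, so we KNOW which block is missing;
# XORing the survivors with the parity block rebuilds it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]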
@muchosa1 2 years ago
When I worked for VzB, I learned that unless you test a backup, you should consider it bad. We had clients that did weekly restores of their data to confirm it was good.
@Doesntcompute2k 2 years ago
Always one of the BEST ideas ever invented. If you cannot restore when you want to, you cannot restore when you need to.
@stapedium 2 years ago
Problems come up when you spend 4 days of the week restoring and only 1 day working. Problems like the business making enough money to pay someone to do all those restores.
@SomeMorganSomewhere 2 years ago
Yup, it's not a viable backup until you prove you can restore it... In a previous job the morons who set up the infrastructure I was building systems on had "set up backups", but after one of them somehow managed to trash an ENTIRE SAN we discovered that the backup directory which was SUPPOSED to store the backup of the MSSQL server database only had a file called "placeholder.txt" in it...
@rickgreer7203 1 year ago
Reminds me of a place I found where (in ~1999) for years an exec would put a new tape in the server and take the ejected tape home each night. Daily backups stored offsite in their home safe. Except they never noticed the tape was ejected shortly after inserting it. The tapes had been used for so long a few were worn completely clear, and all were bad. (And the prior tech hadn't bothered to check it, since things seemed to be working... and of course, back then, central monitoring and such was less ideal than now. When I did get it fixed, the tape/drive size was far too small for what it was trying to back up as well...) Within a month after I left a few years later, the new folks did an Exchange upgrade, lost the CEO's mail... and they hadn't verified their new backup scheme either. I got paid to come in and rebuild the old server from an old backup... getting back everything less the most recent few months was better than nothing, and they'd given me the old hardware.
@rickgreer7203 1 year ago
@@stapedium Automate the restore and verification each day of a prior daily, and weekly of a prior weekly/monthly/yearly. Set a quarterly or so human audit process for sanity... same for walking the cages and looking for bad blinky lights that didn't get reported. Push failures to a well-defined alert/ticket pipeline that isn't lost among the noise. Overall you save time and have better reliability. And if the tools don't exist, write them, and publish them open source with the company name attached, for some free content marketing too.
@wskinnyodden 2 years ago
Yeah... I've lost data before, stupidly... Still can't get 12TB of backup online at a decent price, and even if I did, syncing that at about 20Mbps upload speeds would be like watching paint dry.
@wskinnyodden 2 years ago
Yeah, rebuild times are a pain... Disk capacities have grown much faster than their speed. A 40MB disk could do a full-disk scan/write faster than a 4TB drive does today, and on a much slower interface. This is a stupid gap HDDs have had for a while. SSDs mostly fix it, but I don't like the SSD longevity odds when powered down for long periods of time.
@daishi5571 2 years ago
Back in 1999-2000 I worked for a bank in London (City of) and was put in charge of the servers in the disaster recovery building across the river (south London). I don't remember how many servers, and it fluctuated all the time, but it was always full and took up 3.5 acres. I would constantly be coordinating with the main building for data recovery testing, because although I had 3.5 acres of servers it was only a mirror of what was in the main building, and it was also mirrored to Germany. One day the Bank of England wanted to know what the data delay was between the main building and my site, and due to this being unacceptable (if memory serves it was up to 45 mins by the end of the day) I was tasked to decide how this could be remedied. My options were take up almost all the copper cables across the river (they did not want it bounced off other sites/exchanges) or fiber optics, for which I was told there was plenty that was still dark, so I chose fiber. It turns out a lot of the fiber was damaged, so they still ended up digging up the roads to put in more fiber (apparently the Bank of England is a really good stick to hit telecoms companies and local governments with, because it was started almost immediately). So anyone who was stuck in traffic near either London or Southwark Bridges in the latter half of 2000, I'm sorry. TL;DR: RAID across buildings/rivers is cool :-)
@homemedia4325 2 years ago
@12:18 ... I was waiting to hear you say this... haha... "RAID is not a backup" - I learned the hard way many years ago :D Edit: I use Ceph via Proxmox... it is like RAID spanned across 2 or more servers (3 is the recommended minimum) - however, I have backups to a RAID mirror, and important things go to the cloud! - great vid :)
@mikkelbreiler8916 2 years ago
RAID is peeing in your ski suit at the top of the piste. By the time you get to the ski hut you better have a plan B or you'll be shitting yourself next.
@Random_user-sz7nk 1 year ago
If I ever make it in the rap game... lilRaidfs is going to be it
@geohanson 2 years ago
Great video as always, but I think your ZFS descriptions are slightly off at 15:59: ZFS1/RAIDZ1 = RAID 5 (1 parity drive), ZFS2/RAIDZ2 = RAID 6 (2 parity drives), ZFS3/RAIDZ3 = RAID ‘7’ (3 parity drives). RAID 7 doesn't really exist as far as I know, though, other than in ZFS.
@Kennephone 1 year ago
HOLY SHIT, 420TB for a consumer, how much pirated shit do you have?
@onametaquest 2 years ago
This was perfect timing for this episode on RAID. I am currently working on updating my old antiquated RAID-0 striped array, which has been serving me well for the past decade, but it's time to update to something a bit more reliable. Despite that, I have been using the same disks in it for over a decade and have not once had any data loss or disk failures (despite living on the edge LOL). Always kept the data mirrored to another disk using SyncToy. BTW, who made that wonderful app and why did it never gain traction? It's a great utility and has always worked great for copying large amounts of data from one place to another, with different modes and a way to verify the operations. Keep up the great work Dave! This is now one of my new favorite channels. Cheers!
@silentjohn80 2 years ago
I love ZFS, it has a lot of great features. And the zpool and zfs tools are very well documented. My config is 1 zpool with 2 raidz2-volumes of 6 disks each.
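For readers curious what a layout like that looks like in practice, here is a hedged sketch of the commands; the pool name "tank" and the sdX device names are placeholders, and real builds should use stable /dev/disk/by-id paths instead.

# One pool, two raidz2 vdevs of 6 disks each
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
zpool status tank    # shows both raidz2 vdevs and their member disks
zfs list tank        # the pool is created and mounted in one step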
@Doesntcompute2k 2 years ago
Good config. With tuning, ZFS is really a monster with large datasets. If Oracle just would release the FULL ZFS build vs. the "open sourced" build, it would really be great.
@SyBernot 2 years ago
It's always a struggle to come to a good config, because with every choice you gain something and lose something else. More vdevs means faster drives but less space. Bigger vdevs means less redundancy but more overall space. You really have to know what you need your storage to do before you ever start using it. I think Dave will have regrets about going with RAIDZ3 and 3 spares; modern drives don't fail as much as they used to, but if his goal was to have bulletproof storage I think he's achieved it.
@georgH 2 years ago
@@Doesntcompute2k Actually, shortly after Oracle closed their ZFS, most of the ZFS developers (including historical leads) quit and went on collaborating on OpenZFS (either on their own or with other companies), which, to this date, gets the newest features and improvements (and Oracle can't get those back :)
@Doesntcompute2k 2 years ago
@@georgH Thank you very much! I completely forgot about Oracle closing down their division. The way they dumped on Solaris, ZFS, etc., is as bad as what HP did with their purchases. I didn't see the names of the devs in the OpenZFS maintainers, but I miss so many things these days. Glad they are contributing. I do know several of the maintainers worked closely with the TrueNAS teams.
@belperite 2 years ago
@@georgH Indeed, it's just a shame it can't get included in the Linux kernel because of the CDDL/GPL licensing issues* (and certain Linux kernel devs are ambivalent about ZFS at best). It's not even clear if Oracle could change the license agreement to GPL even if it wanted to (possible implications for the Sun / Netapp settlement and other issues). For most linux distros it has to be compiled by the user as a kernel module (although Ubuntu distribute it precompiled and haven't got sued yet). Personally though I've got a manually-built NAS based on Debian that uses ZFS and I'd never use anything else! *Discussed to death elsewhere so I'm not gonna respond to it here ;)
@LogicalNiko 1 year ago
Back in the day I had to write a recovery mechanism for 720MB SCSI-2 drives from a large RAID set whose hardware controller failed and was basically unobtainium. IIRC we built a Linux box with a big JBOD, then basically selected offsets from dd images of each drive and reconstructed the partitions in the right order. I remember we ended up using box fans and steel desk filing folder organizers to hold each of the drives and give them a decent heat sink. IIRC it took something like 7-8 days in those days to process such "massive" drives 🤣
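The imaging step described there is still how drive-level recovery usually starts today; a rough sketch, with placeholder device and file names:

# Take a raw image of each member disk before attempting reconstruction
dd if=/dev/sdb of=/mnt/scratch/drive1.img bs=1M conv=noerror,sync status=progress
# Attach the image as a read-only loop device to inspect offsets safely
losetup --find --show --read-only /mnt/scratch/drive1.img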
@EdwinvandenAkker 2 years ago
17:39 _"…configuring my big RAID"_ I'm Dutch… and in Dutch the word RAID sounds like _reet,_ which means _ass._ Especially when I hear you say _"Big RAID",_ it makes me laugh silently. 😁 I planned to make a t-shirt with the print _"Het interesseert me geen RAID"._ Straight translated to English: _"It doesn't interest my RAID (reet = ass)"._ Some of us dutchies say this when we don't give a damn (about one's opinion).
@Richard25000 2 years ago
Umm, slight error with the naming: RAIDZ1 = RAID 5, RAIDZ2 = RAID 6, and RAIDZ3 is triple parity.
@KirkDickinson 1 year ago
That Houston Command Center looks like it does about what TrueNAS does. I have two TrueNAS servers set up in my office: one with 6-8TB drives, and a backup with 6-8TB drives, both running RAIDZ2. The production server takes snapshots every hour and copies to the backup server every evening. The servers are set up identically, so if the production server goes down, I can remap my SMB shares and be up and running on the backup in short order. All workstations back up documents, photos, emails, and a boot drive image to the production server on a regular basis. I am only using about 35% of my storage capacity on the production server, so there is a lot of room for snapshots. Critical DB data is also backed up in the cloud. Every document, photo, email archive, etc. is synced back to extra drives in individual workstations in my office. My plan within a couple months is to put together another TrueNAS box and install it in a building offsite that will connect with a Ubiquiti wireless link, so I can have a totally offsite backup. I wish that there were affordable tape backup solutions nowadays for another layer of backup.
@LanceMcCarthy 2 years ago
I just bought a 12-disk, 4U JBOD. It uses USB 3.2, but its architecture allows for parallel access to each internal SATA interface. It arrives tomorrow, can't wait to load it up.
@ferna2294 2 years ago
Watched the whole thing. I love the thorough way you explain everything. Even though I work as a technician, I have 0 experience with RAID machines, so this is always useful. Thank you ♥
@DavesGarage 2 years ago
Glad it was helpful!
@tullyal 2 years ago
Another super video - I haven't heard RAID explained as well in a long time.
@DavesGarage 2 years ago
Glad it was helpful!
@jwc4520 2 years ago
All this data loss made me think of Clinton and her towel... sorry. Thanks, I learn a bit with each video.
@TractorWrangler01 2 years ago
Your channel is great. Lots of behind-the-scenes information that is never seen by the end user. Love your content.
@DavesGarage 2 years ago
Welcome aboard!
@marksterling8286 2 years ago
Loved this video; reminded me of my first production NetWare server, an EISA 486 with 2 SCSI cards and 1GB SCSI disks mirrored. The disks were full-height 5 1/4 inch monsters. 2 years later I installed Compaq ProLiant 1500s with the hot-plug 2GB drives and the Smart Array RAID 5 with 4 active disks and a hot standby. At that office we ended up with 6 servers in that configuration. And with the 30 drives we would get at least 1 amber alarm on a disk every month or so.
@meggrobi 2 years ago
Ahh, NetWare and a 486 server, fond memories. If they had implemented a true TCP/IP solution, they may have put up a fight against Windows NT. As a file and print server and directory services, it was definitely a superior system to NT.
@marksterling8286 2 years ago
@@meggrobi I remember with NetWare 3.12 you could apply a TCP/IP stack but could only really use it to tunnel IPX/SPX to a client. I had a huge amount of fun in those days with kit that seemed massive back then but tiny by today's standards. An example: those ProLiant servers came with 80MB of RAM, which seemed massive back in the day.
@meggrobi 2 years ago
@@marksterling8286 Yes, IPX/SPX was not routable, so you needed a bridged network or router to carry IPX/SPX. I still have a set of NetWare 4 discs. It was a major flaw not to use TCP/IP as the internet was just taking off outside universities etc. in the 80s.
@marksterling8286 2 years ago
@@meggrobi You could route IPX/SPX, but you needed to set up the router with IPX subnets. You could also use the NetWare server as an IPX router if you wanted to save some money; back in the day I had both Ethernet and Token Ring at home and a NetWare 3.12 box did the IPX routing. At work we used the 3Com NetBuilder II routers. At the time I worked for a large steel company and we had a very large IPX network with about 200 servers visible and about 300 IPX subnets.
@marksterling8286 2 years ago
Although we also had to bridge some network subnets because of NetBIOS and NetBEUI for Lotus Notes.
@LA-MJ 2 years ago
raidz1 is raid5, raidz2 is raid6...
@jimmiejaz 2 years ago
If you haven't tested the backups, they don't exist.
@smeuse 2 years ago
ST-251? I would have sworn it was an ST-225 :) I loved the sound of that era of disks. You can almost hear each byte being written....
@Doesntcompute2k 2 years ago
So many of those old Seagates. And Miniscribes--before they shipped bricks, I mean. The 225 was indeed a classic.
@wesley00042 2 years ago
I remember that little ST-251 seek test on power up.
@benz-share9058 2 years ago
I had (still have somewhere?) an ST-251. They were popular classics around about 1988, when a MB was a bunch of data! Great to see and hear one once again.
@bluehatguy4279 2 years ago
I'm a bit curious how ZFS compares with BTRFS. Between the two, I've really only tried BTRFS. When it works, BTRFS is like the best thing ever, and when it breaks, BTRFS is kinda the worst thing ever.
@travis1240 2 years ago
And it does break.... I went back to EXT4 because I wanted to keep my data.
@marcfruchtman9473 2 years ago
The "Count" was absolutely glorious!
@DiegoPereyra 2 years ago
Master Class, Master...!
@janmonson 2 years ago
My NAS is Cold Fusion. My pools are Fat Man and Little Boy.
@ryanwalker4660 2 years ago
Thanks for the tips Dave. I've only ever done RAID 0 for the read and write speeds, and I'm sure, like many stories you've heard, that led to a failed drive or corrupt data down the line. I've started off on my path for my AAS with an emphasis in cyber security, and this is one of the first topics we hit on. I'm okay with tech; I might not get every aspect when in classes, as I'm more of a hands-on person, and reading and learning from books is not my strong suit. Your videos are great and you have a good way of explaining the material. Thanks again.
@rawsaucerobert 2 years ago
I got two used (although hardly) hard drives from FB marketplace, made sure there were no errors, they were legit, etc etc and now I just use 2 externals to back up all my stuff. I back up my pc to one, clone it or just run the backup again on the other drive, store one off site and switch them out every so often. I don't think the wear and tear or energy consumption of a NAS suited me when I backup maybe once per month or less but it's still essential for me to store all my photos and such once in a while. Cheers and good luck.
@c128stuff 2 years ago
@@rawsaucerobert You could still use a RAID-based solution for that (RAID 1, mirroring). It will just take care of the duplication part you do manually now, saving you some time and reducing risk from both human errors and technical failures. My 'big array', which I use for backups, is turned off most of the time and only gets turned on for making or restoring backups. Using wake-on-LAN you could even automate the turning-on part of that. There are companies (IcyBox being one) which sell external enclosures for this specific use case, with RAID 1 support built into the enclosure.
@thumbwarriordx 2 years ago
Throwing RAID into a desktop PC, I always felt the RAID 10 striped-and-mirrored approach with 4-6 drives made way more sense than any of the RAID parity options. Keeps the CPU usage down at the cost of an extra drive, maybe. The big gains for RAID 5/6 are in scaling up. Whew boy.
@dennisfahey2379 2 years ago
So each NAS is created to back up the last NAS because the size is so big it will not fit on anything else. So much for the 1,2,3 backup paradigm. I miss the early days of PC backup: audio cassette, VHS, 9-track - even paper tape. I recall a paper tape storage system company running ads indicating that paper was the only medium that would survive in an EMP attack. Backup has always been a fear tactic business.
@DavesGarage 2 years ago
You miscalculate. Multiple older small NAS units always get backed up to the new, largest, so there are always 2-3 copies. If I were filling the big NAS with nowhere to back it up to, that'd be silly.
@powerpower-rg7bk 2 years ago
@15:54 RAIDZ2 is akin to RAID6, as that permits two drive failures while maintaining data in the array. RAIDZ (or RAIDZ1) is similar to traditional RAID5 in that a single drive can fail and the array is still functioning. RAIDZ3 can have three drives fail while data is still accessible. One thing also worth noting about RAIDZ2 is that the two parity calculations are different, and when scrubbing data, single-bit errors can be detected and corrected. And for those curious, RAID3 did get some usage historically. It was where the parity information was put on a dedicated disk. The only modern implementation of RAID3 I've seen 'recently' was in some Texas Memory Systems DRAM-based arrays about a decade ago. The reasoning for this is that since the drives were based on DRAM, rotating parity between memory channels impacted latency too much. The DRAM channels themselves had ECC and would only invoke the parity channel for recovery. The combination of ECC and parity did permit modules in these DRAM arrays to be swapped out while the system was running. Texas Memory Systems was purchased by IBM years ago and their technology has been implemented in various mainframes. Uniquely, IBM mainframes in effect support RAID5 across multiple memory channels, which their marketing calls RAIM. Pretty good timing for Texas Memory Systems to exit the market, as in a few years' time PCIe-based SSDs would have displaced their entire product line, offering fast, low-latency storage at higher capacities and lower prices.
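The scrub-and-repair behaviour mentioned there comes down to two everyday commands; a minimal sketch, with "tank" as a placeholder pool name:

zpool scrub tank       # walk every block, verify checksums, repair from parity where possible
zpool status -v tank   # shows scrub progress and lists any files with unrecoverable errors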
@blazewardog 1 year ago
Unraid's storage array works like RAID3. The 1-2 parity drives contain all the metadata from the array, with each logical sector matching up directly across all the data drives in the array. Also, both parity drives contain the exact same parity data, so really Unraid is a RAID3 with a RAID 1 drive set for parity.
@davidkamaunu7887 1 year ago
I’m surprised that you didn’t have issues with striping SSDs. Typically due to the very low latency of flash storage; a RAID 0 striped set causes a “race condition” in the CPU as it has to play timekeeper between the two drives.
@pontiacg445 10 months ago
I've been running two striped 256GB Samsung SSDs since the 4770K released, like a decade almost. M.2 was just becoming a thing when I built this PC, and way back then I was pulling speeds much faster than those early M.2 drives...
@georgH 2 years ago
I just love ZFS; I use it with redundancy on both my main and backup pools. The zpool/zfs command line is very clear and simple to use. Thanks to the checksumming and periodic scrubbing, I discovered a fault on one drive in the backup pool; it was writing rubbish once in a while! I'm using znapzend to automate snapshots, snapshot thinning, and sending them to the backup server. All the configuration it needs is stored using ZFS properties of each filesystem; it's really neat having the configuration along with the pool itself. I created this pool using OpenSolaris back in 2008, then migrated to FreeBSD and, finally, Linux, which I feel more comfortable using :)
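For anyone wanting to see what znapzend is automating, this is roughly the underlying workflow; the dataset names, snapshot dates, host name, and the com.example property below are placeholders, not real configuration.

zfs snapshot tank/data@2024-06-01
# Incremental send of changes since the previous snapshot to a backup host
zfs send -i tank/data@2024-05-01 tank/data@2024-06-01 | ssh backuphost zfs receive backup/data
# User properties let you hang configuration off the dataset itself
zfs set com.example:backup-plan=daily tank/data
zfs get com.example:backup-plan tank/data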
@ytuser13082011 1 year ago
I much more prefer software RAID like ZFS or BTRFS.
@DavesGarage 1 year ago
The video is mostly about ZFS :-)
@ianjharris 1 year ago
Dave, you bring these subjects to practical reality from your experience. An obvious pro.
@notarabbit1752 2 years ago
a phrase I heard about backups: "One is none and two is one"
@DavesGarage 2 years ago
I love that saying!
@danman32 2 years ago
One thing not focused on for fault-tolerant RAID: parity calculation speed. The parity can be generated in hardware in the controller or in software. You'll pay $$$ for full hardware fault-tolerant RAID. Those inexpensive MBs that had built-in RAID 5 (or 6) did most of the work in the driver using the main CPU. That makes writes SLOW! Otherwise the HW is simply a multi-drive JBOD controller.
@Doesntcompute2k 2 years ago
16-port LSI-based MegaRAID SAS RAID controller for a PC: $125 USD. LSI 9260-16i. Handles 16 SAS/SATA volumes - disk or SSD. Buy on eBay from a reputable reseller (with a huge rating, lots of sales, and a true business in the business of resell/refurb). I have nine of the LSI 9200 and 9300 installed here at home. TrueNAS loves them. 😀
@DavesGarage 2 years ago
True enough... my Synology is an Atom and it can't keep up with 10GbE when running RAID6. I ran it at RAID5 for that reason!
@Doesntcompute2k 2 years ago
@@DavesGarage Same in my Synology 1819+. Why they didn't use an i5 amazes me.
@michaelpezzulo4413 2 years ago
Far better explanation of RAID than the LinkedIn Learning video course. Everybody's gotta love a man who knows what he is talking about.
@stevendavies-morris6651 2 years ago
I love how Dave doesn't use 10 words when 3 will do. And (being very techie myself who taught loads of computer classes) I appreciate how he uses a KISS approach to his presentation so not-very-techie people can learn from and apply what he explains.
@geoffstrickler 2 years ago
RAID 10 (1+0) is/was my preferred method for high performance on HDDs (actual spinning disks). With SSDs, that becomes impractical and unnecessary; RAID 5/6 is more appropriate for high-performance systems using SSDs. Of course, if you're mostly streaming data (e.g. a few users, no large database, streaming or editing video, backups, gaming, etc.), then RAID 5/6 is often a smarter choice even for HDD-based arrays.
@EricBRowell 2 years ago
I don't think changing to SSD would change my mind from RAID 10 versus the others, but it is common for cost savings. At home I'm normally still willing to take the risk, as there's a backup.
@SauvikRoy 2 years ago
Great topic of discussion. Thanks Dave!
@butstough 2 years ago
Also, you should disable access time, or set relatime, and change the record size to 1MB, especially since you're storing video. Having atime set sends your drive activity to the moon, as ZFS is constantly writing updated access times to the disk as you read files. There's no downside to setting record size to 1MB, as ZFS will use the smallest record size possible for small writes, like single text files or metadata changes.
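Those two tweaks are one-liners; a sketch assuming a dataset named tank/video (a placeholder), noting that recordsize only applies to files written after the change:

zfs set atime=off tank/video          # or: zfs set relatime=on tank/video
zfs set recordsize=1M tank/video
zfs get atime,recordsize tank/video   # verify the settings took effect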
@tropicalspeedbmw 2 years ago
Haha the Count segment was great
@DavesGarage 2 years ago
Thanks!
@nexisle7508 1 year ago
Disappointed that this video isn’t sponsored by !!RAID SHADOW LEGENDS!!
@neorandy 2 years ago
Gladly done with RAID and supporting Windows users, but I enjoy your videos. Have you done one on Windows 3.11/Windows for Workgroups?
@neorandy 2 years ago
I installed my first SCSI drive in a Tandy 4000. It was a Hard Card SCSI from Tandy.
@RaymondDay 2 years ago
I got some LVM drives on my Ubuntu server. Would Houston make it easy to use them? If so, how do I install it? Thought it would be as easy as this:
apt install houston
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package houston
Thank you for your great videos. -Raymond Day
@pedro_8240 2 years ago
I think you got things mixed up a little. RAID-Z(1) can lose 1 drive before losing data, RAID-Z2 can lose 2, and RAID-Z3 can lose 3; that means RAID-Z is equivalent to RAID-5 and RAID-Z2 is equivalent to RAID-6. Edit: welp, only read the video description after the video ended (I generally never bother to read it; it's usually just a bunch of links and useless stuff), and sure enough, what I just said is there. Just a suggestion: perhaps it's better to put pinned comments in situations like this; that way it's more likely people will actually see the info.
@yellowcrescent 1 year ago
Nice overview! Have seen RAID6 fail many times at work... although usually from hardware RAID controller and disk issues (Samsung EVO SSDs + Dell PERC). When you start a fsck and it segfaults, or everything is getting moved to lost+found, you've got a long night ahead. Switching to Intel DC SSDs solved the issue, so it makes sense to use proper datacenter drives with hardware RAID controllers :P Also, the RAID5 scenario where you lose an additional disk while rebuilding is not uncommon, since the rebuild process really stresses the remaining disks. At home, I use a single-node Ceph setup (OSD level), since it makes expanding much easier than ZFS, and I can use CephFS + SMB for sharing.
@hiddenyid4223 1 month ago
Correction for 16:00: RAIDZ1 can lose 1 (like RAID5). RAIDZ2 can lose 2 (like RAID6). RAIDZ3 can lose 3(!!!), no RAID equivalent.
@wskinnyodden 2 years ago
RAID in my home country is a fly-killing spray (bug bomb). That said, I have a ZFS dual-redundant pool (custom built) using recovered SAS disks. Not the best idea, but I got lucky I suppose; better shut up, as I need to replace one already. I do have spares, but it's a pain to track which one.
@c128stuff 2 years ago
'As long as the second drive doesn't fail before the first is recovered.' If you get to deal with arrays often enough, you will run into this. For that matter... I have encountered this situation at least twice on the RAID storage I use at home. Two solutions: use smaller drives (rebuild times for 1-2TB drives are still quite reasonable, reducing the risk significantly), or use RAID6.
@thearossnz 2 years ago
Pretty sure RAIDZ1 = RAID5 (one parity drive), RAIDZ2 = RAID6 (two parity drives), and there was no direct equivalent in basic RAID for RAIDZ3. Below one parity drive is a stripe or just JBOD.
@moorera364 1 year ago
Another outstanding, informative and entertaining video. Keep them coming Dave, really do enjoy your content! Love to see something on Networking if you do it!
@dalrob9969 1 year ago
Ha ha, you are funny. But man, mirrors, now that's what I'm talking about...! You got this Dave. 😅
@shephusted2714 2 years ago
Set up writing to a couple of NAS boxes, each with its own RAID arrays - flash is changing the game. Value-tier SMB needs to go to 100G to complement faster speeds and dirt-cheap storage - you need 10GB/s to saturate 100G but can expect 6-8GB/s - you can upgrade to 200G with bonding/bridging. Flash/SSD/NVMe NAS should help with some reliability - no moving parts.
@vcbb1090 1 year ago
Needs a correction: RAID-Z1 = RAID5 (1 parity drive), while RAID-Z2 = RAID6 (2 parity drives).
@UncleMinecraft 1 year ago
I'm really careful about backups. Not got a 24 disk array though...😢
@DrakiniteOfficial 2 years ago
I wish you made a "sponsored by RAID Shadow Legends" joke
@snowdog993 2 years ago
I was calling the "I" in RAID a different term: Redundant Array of INTEGRATED Discs. Oh well, I have been wrong all this time? Such is life.
@NovaLand 2 years ago
I've read that in theory, the MTBF is smaller than the time it takes to rebuild a modern-sized drive (even with hot swap), making RAID actually not live up to... anything. Sure, I'm using a RAID 6 configuration myself, but it's good to be aware that when comparing MTBF numbers against the time to rebuild a drive, it could be a problem. Another thing that is handy to know is that modern production is so streamlined that if a drive fails and you have a bunch of drives from the exact same manufacturing batch, it is very possible that more drives could fail VERY soon. So a good way of solving that problem is to buy drives from different batches to make sure they don't fail at the same time.
@larzblast 1 year ago
Interesting that you chose Houston over TrueNAS Scale. Did you examine TrueNAS Scale and if so, what made you choose Houston as a result?
@staceyward777 2 years ago
Jeezus... I still have some ST-225 drives with data on them that I need to pull off. But even today on my desktop (still running Win2K btw) I don't mess with RAID. I simply have a second drive, as drive D:, that my C: drive is imaged to every night. If the C: drive fails, I pop in a new PATA drive for C:, restore the image, and I don't have to spend thousands of dollars. That's how those of us that aren't millionaires do it. And before you say "omg you're ancient", yes, I still have an XT machine with 6.22 on it, and I have 4 servers of various flavors on my network with over a dozen workstations in my house. But some of us can't afford thousands of dollars for a RAID server to back up files. My only critical files are my financial information (like tax returns back to the 80s), portraits of my kids, and legal documents like my divorce paperwork. That requires less than a single terabyte. And that is also backed up to a Linux machine (RPi) buried somewhere in my backyard in a waterproof Faraday cage that will survive a tornado (common in these parts), a house fire, an EMP, or even an NSA warrant.
@JesseHughson 1 year ago
Do you have any issues with WiFi conflicting with your LED data signal using FastLED on ESP32?
@AndrewDeFaria 1 year ago
Not sure why NFS is always nixed. I guess because it doesn't come by default with Windows (though perhaps under WSL?). In any event, I believe NFS is way faster and easier to administer than SMB. That'd be a good video - comparing SMB to NFS...
@RaymondDay 2 years ago
Also, wow, how much power are all those drives taking? Do you run off green power?
@АдамСмит-ы7р 1 year ago
Simply removing the wrong directory would instantly lead to data loss? Haha, what about modern filesystems supporting snapshots (like the ZFS you mentioned)?
@fattomandeibu 2 years ago
I only have a single NAS. Nothing fancy, just a Raspberry Pi and 8tb USB3 HD. It has image files for every system drive for every system in the house, and is also used as a media server for the smart TV and various game consoles. The most important files(family photos etc. that I consider truly irreplaceable) get backed up to everything I can get my hands on, using the "if I back it up enough times, surely they won't all fail at once" method. Off the top of my head, I use USB flash drives, USB hard disks and optical discs. I'd probably still use floppies if they had them in useful capacities. For the USB drives, I have a sorta "3 rolling auto-save" where I always have 3 USB sticks and hard drives with my 3 newest backups, but I have redundant optical disc backups going back over 20 years. For hard disk and flash storage that rarely get used, good practice of refreshing disks to help guard against bit rot might be a good topic for another video. It always seems like something a lot of people aren't aware of, at least for flash devices.
@HelloKittyFanMan.... 2 years ago
"...So I will 'SPARE' you today." Haha, I saw what you did there; nice work!
@BobZed 1 year ago
Aha. You're a "blue badge", AKA one of the real people. I was a green badge "V-dash", which designated me as a vendor who works with Microsoft, my Microsoft email address preceded by the “V-” prefix. I put in a hell of a lot of unpaid overtime for Microsoft. The Softies (Microsoft employees) were also putting in massive amounts of OT, but they expected that to be reflected in the stock price, which was something I didn't get a chance to benefit from. My employer designated me as an exempt employee, so I was exempt from getting OT pay. And yes, I know you're not personally responsible for any of this, but you did flash the badge.
@rayoflight62 1 year ago
My experience with RAID systems - both software- and hardware-controlled - has been negative. I had five disk failures in five different RAID5 systems, and on all five occasions I lost everything - the controller card was unable to rebuild the data from the failed drive. What worries me is that I also lost data from a 2-disk mirror with one failed drive. The systems were either SCSI2 or SCSI3 (68-pin). After these experiences, I never again relied on a single disk controller in redundant arrays; I ended up using a dual controller for each disk. Now, with the hard disk gone (except those stupid shingled disks), the problems with flash disks are on a different level...
@cuteraptor42 2 years ago
16:14 I'm not sure that's right. RAIDZ2 can lose 2 disks, not only 1 disk, and still recover. Whoops, just read the errata.
@dfitzy 2 years ago
RAIDZ1 is the RAID5 equivalent, with only one drive of fault tolerance. RAIDZ2 is comparable to RAID6, with two drives of fault tolerance. RAIDZ3 has no _standard RAID_ equivalent, with three drives of fault tolerance. I can't find anything online referencing a name change to the RAIDZ levels; 45Drives' own website states what I've commented here.
@brucethen 1 year ago
I moved my profile to a new drive, and using Move was my mistake; somehow, partway through the move, I deleted the new data and lost everything that wasn't backed up. Luckily I keep regular backups, but for some reason I wasn't backing up my Word or Excel documents. I found a really old backup from 2012, but that meant I lost about 9 years of documents; luckily nothing important.
@robsku1 1 year ago
Why no mention of RAID 6? I've been meaning to implement my own home-knitted RAID (using Linux software RAID support) with 3 drives - it has the most interesting method for creating redundancy: it only uses 1/3rd of each drive on a 3-drive setup and can lose one of them without loss of data :)
@davidhiggen3029 1 year ago
I'm a fan of simple, well-carpentered solutions. I hate RAID. It falls into a class of computer concepts that I call "cutesy clever": in other words, using a lot of complexity to squeeze out just a bit more performance or capacity, at the expense of simplicity. RAID is a typical example: it requires a lot of computation on the data just to store or retrieve it (OK, some can be done in hardware). But the more complexity, the more chance for errors. And in fact, disk capacity has advanced at a rate that makes it rather pointless. If you're really concerned about data safety, use mirroring. There are other "cute" ideas that are equally dubious. Recursion: oh, how clever! But you can't set bounds on the stack. ;-) And tree structures (I know I'm going to be unpopular about this one): having to keep copying data around to keep the tree balanced is a waste of CPU cycles and a source of possible errors. Of COURSE your B-tree code doesn't have any bugs in it, right?
@JonathanSwiftUK 2 years ago
Seriously Dave? Modern RAID, you mean like I've been using for 25 years? Of course, that's proper RAID, on hardware RAID controllers - before SSDs were invented - not an OS-based mock-up of proper RAID. I hope you're not going to suggest 'spanned' drives; those are now deprecated. We use Storage Spaces, but it's limited (column issues). I'm a fan of the OS doing compute, not spending too much time doing parity calculations. At least in Azure we can use 'simple' mode because the underlying disk is already protected. Performance can be an issue with this software RAID. I've used RAID 5/6 a lot; RAID 6 was fine on MSA2050s - they were new to me at the time, so I got nervous and used RAID 6, with drives in groups of 8. With spinning drives there is an ideal number of drives; too few or too many and throughput starts to drop.
@XxTWMLxX 2 years ago
My Dell R230 uses one SSD for the OS, mirrored, and one SSD for data on the server, also mirrored. I have another Dell PERC card in the server as well, used to connect to an 8-bay JBOD enclosure doing RAID 1+0 (10) with 8 drives as a NAS and other data storage... All of this is backed up to an actual brand-name NAS configured with RAID 1+0, using Windows Server Backup on the server. So I can restore any file any time from a network path, or do a full restore on reinstall of the server OS... Been going strong for over 10 years now. The server uses SSDs; the JBOD uses HDDs - specifically datacenter-grade 16TB HDDs, so a 64TB NAS.
@tuttocrafting 2 years ago
Meanwhile I just lost all my hoarded data... Important stuff was in backups so I could restore it. Now I have to find why the data got corrupt. It can be a lot of things... but I have yet to find the issue, so I don't trust my system anymore... And I have to find a new place for 10TB of stuff. (And I'm poor)
@RealCheesyBread 2 years ago
I know you're not doing it for the money, but this would have been the PERFECT video to be sponsored by RAID Shadow Legends
@cyclic2696 2 years ago
Please can you put a link in about the LED devices glowing and moving in the background - they're such a tease and always out of focus! Thanks. 👍😃
@chbruno79 9 months ago
Hi Dave. I have a data recovery business, and I'm wondering if I can use this video on my website (of course, giving you the credits), because this is the best explanation I ever found on YouTube about how RAID works. Please let me know if you are OK with that.
@Muzer0 1 year ago
I was always taught that "RAID nought is naughty" because it doesn't do redundancy, as a convenient way of remembering it
@andressepter 1 year ago
How RAID works: you spray it on bugs, bugs die. Debugging 101.
@ShadowManceri 2 years ago
An often-missed detail when talking RAID is that no matter what configuration you have, you might lose all of it even on a single device failure. That's because the disk isn't the only thing that can fail. If, for example, your controller does something bad, it's all gone (or at least a very difficult recovery). Or your memory could get corrupted, there could be a power outage, etc. And a second, even more critical point is that RAID recovery is not easy. It can be scary, as the recovery itself might destroy more. And it can (=will) be very expensive too. RAID actually requires constant maintenance on a weekly basis, or maybe less if it's not in heavy use. The best-case scenario is that there is just a disk failure and you can do a simple recovery, but that's often not the case. People talking about RAID fault tolerance often only focus on the best-case scenario. Reality is more bleak. And then we need to talk about filesystems. Too many times I've seen a business have a problem after they just threw everything into a big RAID and never looked after it. Then they are in a panic when everything is lost. They never got the above fine print.
@colinmaharaj 1 year ago
I've been doing mirrors in my dev drives for over a decade, no regrets, plus backups on an external drive, and a backup computer for my dev work, and I'm adding a 3rd backup PC to try new stuff.
@kbsanders 2 years ago
Step 1: Make the FBI think you have classified documents in your house.
@XxTWMLxX 2 years ago
All the RAID in the world can't protect you from ransomware... I've been hit with it once by my own stupid misconfig. This is why I also have an offline backup that can't be accessed until connected manually.
@silentjohn80 2 years ago
My understanding is that raidz2 allows 2 disks to fail without losing data, while raidz3 allows 3. Raidz (or raidz1) allows one drive to fail. I think you said otherwise, that's why I'm writing this.
@DavesGarage 2 years ago
I believe you are CORRECT and I have updated the video description with errata. Here I was worried about getting 1+0 backwards from 0+1, and I boned something else!
@Doesntcompute2k 2 years ago
@@DavesGarage Striped mirrorsets SHOULD be RAID01, but for some reason... LOL
@FFVison 2 years ago
Just gonna say... that is a LOT of space for your porn, erm, I mean, backups...
@johnstonfamily6653 1 year ago
RAID has been a disappointment to me on numerous occasions. In the real world: everything from unexpected rebuilds, to dead controller cards being incompatible with newer models, to incomprehensible problems on boot, and the far too common instance of your fault-tolerant array causing your system not to finish booting after a power outage because the volume is marked as dirty. Beware. Use RAID only if you have good backups; it is not safer, or even as safe, as a single hard drive without a backup.
@mobilephone4045 2 years ago
I had so many Seagate 2TB and 3TB drives (prematurely and abruptly) fail a few years ago that I just can't give them another dollar. I'm so salty about it; I had them deployed on remote media servers in locations where most cost me a return flight outside of the service schedule to replace. I replaced them with WD, and for now I stick with them, until I have a similar catastrophic situation. Edit: to be clear, these were temp disks used to cache playing media. Not economical for RAID, even considering the flights, because the customer looks at the investment price and compares to competitors. The data is not important, but uptime and service cost are.
@Neeboopsh 1 year ago
The question in the thumbnail reminds me of a question asked of one of the science consultants on Star Trek, and his answer. "How do Heisenberg uncertainty compensators work?" "Quite well, thanks." ;)
@mrfoodarama 2 years ago
If there is an error in this system, I wonder if the logs say: "Houston, we have a problem"! Lol, sorry, couldn't resist!