Pro tip for unRAID n00bs: find a USB header cable and connect it directly to the motherboard. It's fine to have the flash drive just lying in there. source: this is the method I've used on my four unRAID servers.
@@includenull You should still be careful; like JT already wrote earlier: "static build-up is static build-up". Just because it didn't happen in LTT's video doesn't necessarily mean it can't happen. There are a lot of factors involved, especially with computer hardware. Prices for a variety of components are going up these days, and I personally would be extra careful with the hardware I bought. So you could say that besides the technical issue there is also a financial one.
A few things that you may want to get in your next purchase / setup change.
Drive array that you can upgrade in place - Unraid might already do this, but the ability to replace a drive with a larger one and have the new drive automatically rebuild is priceless.
10GbE network ports - You'll be glad that you got them, as they really speed up file editing and transfers.
Front accessible drive slots - You don't want to have to open the server case up to access drives. The ability to slide them in/out of the front of the server is a big stress reliever.
Larger drives - This is the eternal problem. Trying to buy enough storage to last 2-3 years when you are just setting up is difficult, since your needs can change over time. I've learned to multiply my estimate by 150%, and that has worked so far.
Noise reduction - Depending on where the server is, you might want to get quieter fans and drives. I check the noise level of drives before I buy them. If the server is out in a garage or something, noise may not be a problem.
Spare drives - I admit that I haven't purchased spare drives - ever. With a fault-tolerant drive array, 1 failed drive might not be a big deal. But what about 2? Drives purchased at the same time have a higher chance of failing at the same time, especially if they are from the same batch!
Your choices aren't bad per se, but these are things that can make your purchase easier to maintain and last longer before you decide to replace it.
For that many drives in one enclosure, I would HIGHLY recommend enterprise-level drives. They're more expensive, but you're far less likely to have an issue due to vibrations or timeouts, plus reliability is far better. Older Ultrastar drives are really good for this purpose, usually aren't too expensive, and are highly reliable for long-term use in my experience. I got 6 4TB Ultrastar drives back in 2015 for $100 each, brand new. They were a 2-year-old model and OEM, but still new in the box. I used to work in a server storage software development test lab, and of the hundreds of thousands of drives I dealt with, the Hitachi/HGST Ultrastars had the lowest defect rate by FAR, as in we'd get 1 Ultrastar failure a month compared to 15-20 Seagate drives of the same age range (test storage would be used for 5-8 years in a test lab setting, so I was mostly dealing with 1TB drives even in 2015), and Toshiba and Fujitsu were worse than Seagate. I can't recommend Ultrastar drives enough for long-term reliability.
Yes, definitely enterprise drives. I took a risk once and bought WD Red NAS disks (x12) and had 4 fail within the first 6 months. Replaced them with WD Red Pro, running 3 years with no issues.
@@edwindr7514 No, they're really not, mostly because they like to drop out of RAIDs even when they're not failing. Enterprise drives have this little thing called "Time-Limited Error Recovery" that consumer drives don't have. If a consumer drive has an issue reading a sector, it will keep trying for up to two minutes, and in a RAID scenario that leads to timeouts and dropping the drive from the set. An enterprise drive will time out, mark the sector as unreadable, replace it with a spare sector, and declare the data lost to the RAID controller, prompting a rebuild of the sector from parity or mirror data. I know from experience that when a consumer drive falls out of a RAID set, and then another drops out during the rebuild, it can cause major data loss. I lost my pics from my once-in-a-lifetime DC trip because of it. In addition, consumer drives don't do well with vibration, and if too many drives are put into one enclosure this can cause errors that cause dropouts, as well as collisions between the platter and the head. Enterprise drives are secured against wobbling much better than consumer drives, with tighter tolerances and more points securing the heads and spindles. I know these things because I'm a sysadmin, most notably a sysadmin in a server storage software test lab for a major storage company from 2010 to 2016. (I do NOT claim to speak for them, as I was just an employee and have no say in corporate opinions.) I've been a sysadmin for 12 years overall, out of a 25-year career in corporate IT. I have EXTENSIVE experience with storage. My profile is available on LinkedIn; I'm the one in Colorado, if you care to look. Oh, and those WD Red drives? They're horrible for RAID, almost as bad as consumer drives. They're designed around 4-drive enclosures and software RAID, where vibration protections and TLER aren't as necessary.
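The timeout interplay described in that comment can be sketched as a toy model. The 2-minute consumer retry comes from the comment itself; the ~7 s TLER cap and ~8 s controller timeout are illustrative assumptions, not vendor specs:

```python
# Toy model of why consumer drives drop out of hardware RAID sets.
# All numeric values here are illustrative assumptions.
CONTROLLER_TIMEOUT_S = 8    # assumed patience of a hardware RAID controller
CONSUMER_RETRY_S     = 120  # consumer drive may retry a bad sector this long
TLER_LIMIT_S         = 7    # TLER-capped drive gives up and reports the error

def drive_outcome(recovery_time_s: float, error_cap_s: float) -> str:
    """Return what the controller sees for one hard-to-read sector."""
    # The drive responds either when recovery finishes or when its
    # error-recovery cap kicks in, whichever comes first.
    responded_after = min(recovery_time_s, error_cap_s)
    if responded_after > CONTROLLER_TIMEOUT_S:
        return "drive dropped from array"  # controller gave up waiting
    return "sector error reported, rebuilt from parity"

# A sector that would need 30 s of heroic recovery attempts:
print(drive_outcome(30, CONSUMER_RETRY_S))  # drive dropped from array
print(drive_outcome(30, TLER_LIMIT_S))      # sector error reported, rebuilt from parity
```

The point of the sketch: with TLER, the drive answers before the controller's deadline, so the array only loses one sector (recoverable from parity) instead of a whole drive.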
My favorite drives for mechanical storage are HGST Ultrastars, as they were massively more reliable, even when old, out of the hundreds of thousands of drives of dozens of models and 6 different vendors I dealt with in the test lab. They're not that expensive in most cases, if you keep it to 2-4 steps down from maximum capacity, and even equal to consumer drives in price at the bottom of the capacity list. (4TB Ultrastars are super cheap right now, and fast and awesome drives.)
I also got the same server after Craft's video. Honestly, if the videos ever stop putting food on the table, he should just buy up all of one unit and make a video about it. He did a great job selling this item.
I thought the same; anything 10TB or over from Seagate would have been non-SMR drives. I recently pulled all of the 8TB SMR drives out of my Unraid; even though it's not as critical an issue as it is for RAID and ZFS, you certainly are affected by the slow write speeds.
@@PrimeRedux I realized I got SMRs from Seagate a while ago, before the whole debacle. unRAID. Definitely slower writes, but with a cache SSD, meh. 4TB drives for like 50 bucks was still worth it in the end. It just holds computer backups and ripped movies, so nothing mission-critical anyway.
@@Nordlicht05 Well, a small Raspberry Pi NAS server is not a big deal. I didn't even notice the difference in power consumption. A full-blown data server, though...
It's nice to see little projects like these that involve nods to other tech channels. This is a nice little 1U set up and a SOLID build for an early jump into home server hardware.
The motherboard has internal USB sockets to allow you to plug in your flash drives inside the case so it isn't hanging out the rear of the case while the server is running. That does assume your flash drive is short enough to stand up in the case with the lid closed (~30mm but shorter is better for clearance of the case lid).
I wanted to point out that the factory retaining mechanism for the drives seems to rely on longer-than-normal screws in 2 of the holes. You might be able to reuse those screws from the Seagate external enclosures, if that kind of thing matters to you. A regular 6-32 SHCS (allen bolt) would probably work too, to engage the retaining mechanism.
Typically I would agree from all the servers I’ve worked with in the past, but I have a Dell R210 1RU server at home that runs at room temperature and is almost silent unless I push the CPU workload up high. It’s pretty impressive. Most of the newer dell servers are also fresh air rated. It’s a stark difference from the earlier IBM servers I used to work with that sounded like aircraft taking off :-)
Yeah, I would only do it if I had a dedicated server area in a basement or something. Maybe some low-power Atom or ARM processors would run cooler and be fine.
I agree. Very nice build, but very few people have a basement or sound-isolated room where you can hide a 19" rack server away. Even at its small 1U height these babies take quite a bit of space, and the noise is the even bigger issue: if people live in an apartment, it's close to impossible to find a suitable place to hide them away without the noise being a problem. The problem with building a "compact" system (if you can call a 10+ disk NAS "compact") is finding a suitable mobo/SATA controller with enough connectors to support a "pro" level NAS configuration. Even if you can find mobos with 8 SATA connectors, they are often spread over 2 different controllers, which can be a problem depending on how well it's implemented on that particular mobo model. That's why, if you want to make a DIY NAS, a 19" server is often the only way of getting the flexibility you need, and like Matt found out, buying off-the-shelf external drives and hoarding the drives can be risky if you don't know exactly what kind of drives are inside. It depends of course on how "pro" you want your system to be vs price. There is a reason why e.g. Seagate's server-grade 24/7/365, 10-year drives cost over 3 times as much as their equivalent consumer-grade drives, and why they have different drive series today depending on what usage you plan for them.
3 years after buying 4x 4TB HDDs I realized they were SMR. I'm using TrueNAS, which uses the ZFS filesystem, and you get horrible performance with SMR (10 MB/s). I only saw that when I did a little bit more than just watching 1 video at a time stored on the NAS. Otherwise I had 0 issues, thus the 3 years to figure that out. The HDDs weren't labeled for NAS (I got them because they were cheaper), so I had to buy replacement HDDs. tl;dr 100% agree with the video, avoid SMR drives, you want CMR
And don't forget that Seagate actually ripped off customers by implementing it in their drives at one point, and most of the drives would fail because of it.
@@N1lav I don't think anyone was sued. People were mad at WD for changing some of their CMR product lines to SMR. Seagate's Barracuda drives have been SMR for years now. Their IronWolf drives are CMR. They don't actually recommend you even use Barracuda drives for a RAIDed NAS device at all, regardless.
How is it possible to go down to 10 MB/s, when normal HDDs can write at ~100MB/s? And with GBit-LAN, plenty of RAM and CACHE, how does a system manage to waste so much performance, while just copying data?
@@realedna It's not the system that's the problem. It's SMR drives during a ZFS rebuild, due to design "flaws" in SMR drives. These drives are NOT able to do 100MB/s during a ZFS rebuild.
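A back-of-the-envelope model shows how that collapse can happen. When a drive-managed SMR disk has to read-modify-write a whole shingled zone just to update a small record, the effective throughput drops far below the sequential speed. The zone and record sizes below are illustrative assumptions, not specs for any particular drive:

```python
# Rough worst-case model of SMR read-modify-write amplification.
# Zone and record sizes are illustrative assumptions.
SEQ_SPEED_MBPS = 100.0   # what the drive can stream sequentially
ZONE_MB        = 256.0   # shingled zone that must be rewritten as a unit
RECORD_MB      = 1.0     # the small block the filesystem wanted to update

def effective_write_speed(record_mb: float, zone_mb: float,
                          seq_mbps: float) -> float:
    """MB/s of useful data written when each record forces a zone rewrite.

    Time per record ~= read the zone + write it back = 2 * zone / seq_speed.
    """
    time_s = 2 * zone_mb / seq_mbps
    return record_mb / time_s

print(round(effective_write_speed(RECORD_MB, ZONE_MB, SEQ_SPEED_MBPS), 3))
# ~0.195 MB/s in this worst case, despite a 100 MB/s sequential drive
```

Real drives mask this with on-disk CMR caches until the cache fills, which is why the slowdown only appears under sustained random writes such as a ZFS resilver.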
Be aware if you replicate this - some external HDDs with a USB interface come with the USB interface directly on the controller PCB. There is NO internal USB-SATA converter; it can ONLY be used as a USB drive!
12:40 Drive manufacturers aren't overstating size; Windows under-reports it. Windows uses tebibytes, based in binary, while terabytes are decimal, resulting in a ~10% difference in size. For some reason they refuse to either switch to terabytes, or to use the appropriate symbol for the tebibyte, TiB.
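The discrepancy is easy to check directly; this minimal sketch converts a marketed decimal size to the binary units Windows displays (the 8 TB figure is just an example):

```python
# A marketed "terabyte" is 10**12 bytes; Windows divides by 2**40 (a
# tebibyte) but still labels the result "TB", so the number looks smaller.
def marketed_tb_as_reported_tib(tb: float) -> float:
    return tb * 10**12 / 2**40

print(round(marketed_tb_as_reported_tib(8.0), 2))  # an "8 TB" drive shows ~7.28
```

So no capacity is missing; the same number of bytes is simply being divided by a larger unit.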
This is actually partially due to historical reasons. Originally it was measured only in powers of two -- with the SI unit prefixes used for convenience. Occasionally companies would use powers of 10, but it wasn't until 1995 that the IUPAC recommended the creation of new units to distinguish powers of two from decimal units. The IEC adopted the recommendation in December 1998 -- which is relatively recent. Even OS X/macOS used the powers-of-two units until Snow Leopard, and iOS until iOS 10 (2016). Likely Microsoft is staying with what it has because of a mixture of inertia and it ultimately not being that important to them. Fun fact: Donald Knuth proposed to call a kibibyte a 'large kilobyte' (KKB).
It's weird how RAM manufacturers and M$ can agree on which unit should be used, but storage manufacturers decided to use something else because they wanted to put a bigger number on the box. The International Electrotechnical Commission (IEC) created the term tebibyte and the other binary prefixes -- kibi, mebi, gibi, pebi, exbi, zebi and yobi -- in 1998. Before then a kilobyte was 1024 bytes; there was no decimalisation of a base-2 number system. M$ is just displaying the units the way they always have.
SMR is inferior, but sadly not outdated. It's a fairly new tech, and the drive manufacturers used it mostly for cost saving. I don't think SMR is going away any time soon; its primary purpose is write-once-read-many applications, so it's ideal for data archiving, and many of the much higher capacity drives initially used SMR.
@@AJ_Animations They're also more likely than IronWolf drives to all die at once and cause permanent data loss in the first couple of months. Barracudas aren't made for server settings; you aren't being scammed by enterprise/NAS drives costing more than the cheapos, servers are simply an expensive hobby. The ones in this server are much slower than enterprise drives, aren't rated for the vibration of so many drives in a single chassis or the stresses they will be under, and will lack the drive-failure statistics/warnings you need. Also, oh god, an old PSU with a data server... If you get a really cheap server chassis and are going to do this, it's worth getting a second-hand 80+ Platinum server PSU. This video is made for getting views, showing you a cheap way around things where you need to take the expensive path. If you care at all about the data to be stored on something like this, don't use this. That doesn't mean you have to buy the nicest stuff in the world, but this is basically a waste of money.
Nice to see you do this. I also built something similar but still changed to a Synology NAS after a few years. As I didn't have space for a server rack, the noise of the fans was quite annoying to the family.
Part of me appreciates the cost savings of the external drives, but the other part of me cringes at the waste of all those plastic enclosures. And, as it's clear from the video, using external drives means that you don't really know what you're getting (other than capacity) until you crack open the enclosure. I would have spent the extra money and bought the bare drives to make sure I had exactly what I wanted. But I can appreciate what he got for the money.
Great video man. Just a word of advice: make sure you make regular backups of your Unraid boot drive. I had a scare the other day where the drive I was using failed. The flash drive stores your drive assignments, plugins, and configuration. If you have to replace the drive from scratch, you not only need to start from scratch but also need to know which drive serial numbers are your parity drives. I got lucky and was able to recover my config folder under Linux, so I'm fine now, but man, that was a pain learning how to do that last minute. I would also recommend taking a screenshot of all your drive assignments and saving that somewhere.
Good advice... on my first NAS build I used a tiny [HP-branded] thumb drive mounted inside, using the USB port on the motherboard. I shut it down to replace a failed drive, pulled the thumb drive out, and it fell apart in my hand (I assume due to the heat inside the case). My OS drive literally fell apart in my hand!
In the US, shucking does not void your warranty. Just don't damage the enclosure when shucking, and keep your enclosure and note each serial number on the enclosure in the event of a failure.
Subbed... because he's decent enough to give props to his sources... unlike other YouTubers, who like to present as if they just magically stumble onto knowledge.
That's a super cool build even with the SMR Drives. Tyan has been around for years and has always put out great hardware for servers. If you want to see one of my favorite boards check out the Tyan Tiger MP S2460.
You might consider getting a PCIe riser card and putting an NVMe adapter card on it; then you could have up to two NVMe drives (the x16 slot is x8, not x16) as cache. That would free up the 3.5" bay.
Good work. Been wanting to do this exact same thing for the longest while. Yes, those drives have limitations but compared to what I paid for an HP 3PAR 120TB storage array 6 years ago, this is a no brainer.
"But I will probably mount it with some double-sided tape in the future." No way. Never use double-sided tape. Use Velcro. So much better for hard drives in every way.
having your own personal server is not a bad idea in general, you can learn so much from it. i would guess getting an old used server from ebay would be pretty cheap and could do most of what you want a server to do.
Yeah, I'd have started with fewer drives. I mostly use 4TB drives and buy in sets of 4 or so. And it takes a good while to fill them. So 80TB, or (to be more exact) the 60TB from the video, would take quite a while to fill.
Hi friend. I did something similar to you. I bought a 2012 i5 HP server, added 6 SATA disks (2TB each) and an SSD to boot and store the OS. €75 for the server, the disks I previously had, €20 for the SSD, €20 for the RAM, a few euros for the small stuff (to hold the disks in place), and TrueNAS. Ultra-cheap NAS. Great option, the server cage from eBay; I didn't know about that!
I also just bought an old HP server (DL360 G6) for about $140. Two 6 core Xeons, 32GB RAM. I am about to set it up with Proxmox for a virtualization and Nextcloud server.
Never heard this one run, but 1U servers are obnoxiously loud and have to use more power for their fans. I think for home use it would absolutely be worth buying a 4U case. It's also worth setting things up so that the drives will spin down when left idle, because in a home setting they will only see sparse demand. I have one of these exact model 8TB SMR Seagates, it's held up fine for me so far but I've only had it since about when this video was posted. Whenever I've thought about shucking, the risks haven't seemed worth it for the difference in cost I was seeing. But sometimes there might be a bigger difference and the more drives you buy at once the more tempting it gets. My last purchase was an 8TB Toshiba X300 for $160. The Seagate SMR was $140, but all the "NAS rated" drives were over $200. That includes the N300, which I'm convinced is the same drive as the X300 with a different sticker, longer warranty and probably TLER firmware. TLER won't matter unless you use hardware RAID (which I don't believe in). I've had good experiences with five of the 4-5TB Toshiba X300s, ranging in age from 3-5 years, but those are a different drive design from the 6-10TB models.
Superb build! Inspired by this, I built one with a B250 motherboard, a 500W power supply, a 1TB Samsung SSD for Win 10, and 7x 8TB WD drives in a small desktop case.
The only issue you will have is vibration from non-NAS drives, which may cause problems later. I built a TrueNAS system off an AM2 Phenom quad core with 2x 4TB IronWolf and 2x 1TB WD drives in it for under $500. Server chassis seem like a good idea till you hear the noise; even in a closet you will find the whine of the fans annoying.
same. i would probably be fine with a very low storage amount for myself, but to have it all stored and accessed through the network instead of having to plug in an external every time, as well as having redundant backups is pretty sweet.
Great job, my friend. I've been out of messing with IT stuff for the last ten years, but have thought of building a home storage server. This was a great intro to it.
Would love to see a follow-up video showing how you have the 1U server all hooked up. I'm thinking of doing something like this in my "server closet" (aka unused coat closet).
Yeah, I'm thinking about learning how to add air duct venting to my extra bedroom closet, so I can move computers in there. Not started yet, but thinking about it.
"Server noob", right... excellent work, Wendell and Jeff would be proud. Too bad you got SMR drives, but with shucking it's a roll of the dice, and unRAID is an excellent workaround for that.
Great, love this type of stuff :) To be honest I would just go with a couple of cheap NAS devices (like 3-4 drives), even buy them second hand to cut on price.
I may replicate something like this someday...but I'd probably also follow this approach and start with a few purpose-built server-grade drives and add to it over time. Costs more but you get higher reliability.
I've found a couple of them at my Microcenter for less than that over the past two weeks. Got one for $145 and the second for $130. But yeah, overall the supply is minimal, and the prices are upsetting.
I like buying used enterprise servers. Often come with CPU, RAM, and have externally accessible drive bays for fairly cheap. I've bought servers worth several thousand dollars for under $500. They might have been several years old when I bought them, but 5+ years later they are still humming away.
You didn't do enough research. 10TB expansions were yielding 10TB Barracuda Pro PMR drives; I was buying these and own 4 in my Synology NAS. The 16TBs had a good chance of being Exos; my last 2 were. They are no longer in stock on Amazon. One other thing you probably should have done was a write/read check on them before shucking. If there were issues, at least you could still return them or claim warranty.
ZFS does compression, so if you use ZFS, you'll use the CPU more than you think. You could get an internal USB drive and connect directly to the MB. FreeNAS isn't bad. :) SMR vs CMR was a good tip. Thanks!
@@dbzssj4678 This. Almost every Seagate drive I have deployed in a server has failed within the first 1-3 years. I don't know why they have such garbage QA.
The hardware setup is almost the same as the one I have been using for a few years now. My MB is a bit older but still uses Ivy Bridge E3 CPU, I have 16GB of ECC RAM and am running 11 HDDs and 1 SSD but mine is in a standard case with the drives in hotswap bays in the front. Only big difference is mine did not come with 12 built in SATA ports so I bought an SAS card and bought cables to split it into 8 SATA ports. I have been using it as my media and various server setup, with Plex being the main thing run on it but also some other servers in docker containers. It has been running like a champ for at least 6 years now without a problem.
Average of $125 each on the HDDs? Great deal considering they are no longer available on Amazon and NewEgg showing $220 each now. I'm not going to say the same BS other comments did about them not being NAS drives, obviously NAS drives that are CMR would be better but, if you consider the idea that you might only be using this for more of an archiving type purpose and not necessarily lots of continuous IO then mainstream drives are not a terrible idea at all. For the price paid for the server, I think this is a great option. Interested to know how well it is managing the heat since 1U servers can run pretty hot.
Excellent Video! Just the type of stuff I have been looking for. I need a new server and I have not done server work in over 10 years. A HUGE amount has changed in both hardware and software. This video has given my confidence back that I can still build my own. And you wanna know what was most inspiring? You made mistakes but still hit the target.
Good video. I also use Unraid and am very happy with it (4x10 NAS drives). The motherboard is an ASUS server which has the advantage that the boot USB stick is internally plugged in to the MB. I have it doing a parity check every 2 months and it takes 17 hours to complete.
If I may ask, what's the model of your server and how much did it cost? I tried buying the Chenbro server but it looks like it's out of stock indefinitely. Trying to get something in the same price range as the Chenbro.
SMR is not “older” and “inferior”. SMR is a technology that permits drastically (+50%) increased density, allowing drives to be made larger or with fewer platters. It is either a cost-saving measure or a tool to drastically increase the size of drive you can make. It does, however, significantly impact performance. Regardless, your point is still valid: in a NAS these drives are suboptimal.
Device-managed SMR (like in those 8TB Barracudas shown) is inferior in all aspects except price per gigabyte - for the manufacturer, that is, not the customer, given the whole brouhaha last couple of years when all three majors started replacing their previous CMR offerings with SMR at exactly the same price. Host-managed SMR is a different story but you will find neither host-managed nor even host-aware in the budget line unless you luck out on shucking high-capacity drives (14TB+!).
I find it funny that this counts as 1U :D Coming from having (much shorter) audio equipment in my 19" racks, this thing seems so massive. Seriously, where do you keep it safe? In a rack? Anyway, nice and inspiring video :)
Not to knock your build, but I purchased a couple of Mediasonic ProBox 8-bay hard drive enclosures. They were certified refurbished, so they only cost me $230 each. However, what I didn't know is that they were future-proofed. At some point, the Amazon page got updated to indicate that the drive bays could hold bigger hard drives; before, they were rated at 8 TB per slot, and now they're rated at 16 TB per slot. When I emailed Mediasonic, they informed me that if the enclosures had a REV.03 marking on them, then they contained the same hardware as the 2020 model line. Not only did those 8-bay enclosures have that marking, so did the 4-bay enclosure I purchased a year before that. That means the two 8-bay enclosures and the one 4-bay enclosure, at 16 TB per slot, give me a total of 320 TB of hard drive space.
Yeah. I thought about going this route myself, but without a warranty and with the risk of getting SMR drives I just couldn't take the chance. This is still very interesting to see, though, and gives me ideas for future NAS builds of my own.
Keep in mind that SMR drives don't exist in sizes larger than 8TB, so anything larger you buy will be CMR. If you want to shuck drives and are afraid of getting SMR, go for the larger sizes.
Matt nice video...just curious how loud the fans are in this chassis? As you stated it only has that copper heat sink for the CPU cooler but curious how loud the system runs and are you able to throttle those fan speeds? Thanks.
You should consider a blade that has hard drives in hot swappable enclosures. If you have a drive that is failing, by being hot swappable, you can replace the drive (before a second one starts to fail) and maintain 100% reliability. A lot better than having to shut down your nas and completely dismantle it to replace the drive inside...
Very cool! BTRFS as your data/parity and XFS as your cache (BTRFS has a ton of unneeded writes to the cache drive otherwise). I saw the same video by Craft and was VERY tempted to get one of those... I have unRAID on my server but I run Plex on it, and I used shucked drives as well. Great video!!!!
My fileserver is an Athlon II with 6GB ram and 1.8TB total data capacity. It also works as a host of a virtual machine running pfSense, Home Assistant and dlna. Debian here. Works perfectly.
I don't know how to feel about this. So basically: -10 points for a 1U server, +5 points for used hardware, +5 points for rack mount, -15 points for not using a NetApp drive shelf, +10 points for shucking drives, -5 for using Seagate, -5 for no ECC RAM, -5 for not using an LSI controller, -5 for not using ZFS, +30 for using Linux. Honestly, good job, and +100 for making a DIY NAS.
I am using TrueNAS now and found it easy to set up and fiddle with, but Unraid looks even easier to use. My NAS/storage server was a budget build: HUANANZHI X99 motherboard, 16GB of unbuffered DDR3 RAM (could have gone DDR4, but it cost more at the time), an Intel Xeon E5-2678 v3 2.5GHz CPU, and 8 HDDs of 1TB each. I might set up a cache M.2 drive as well (got 2 slots). It works like a charm. I even hooked it up to a UPS and set the server to shut down after 10 minutes on battery power. For my use it's perfect: backup points for Windows and backups of all my pictures and documents!
unless you need a high-density compute / storage cluster, why would you ever use a 1U chassis? At a minimum, go with 2U, but 3U/4U is even better for rackmount deployment...
@@StephenBuergler 1U is super condensed; heat can be an issue, and those 12 drives in a 1U case could have been 40 or more in a 4U case. And of course, can a honey badger or three fit in there ;)
A few problems: First, you have no redundancy for the cache drive. If that fails, you will lose data. Second, those motherboards are sort of time bombs. I've had four of them, all of which eventually died (due to bad capacitors, I think...repairable, but troubleshooting wouldn't be fun). Maybe you'll get lucky, but you should probably get a spare just in case. The onboard LSI RAID controller supports both SAS and SATA drives, too. I probably wouldn't trust those Barracuda drives, either, but maybe Unraid takes care of that to your satisfaction.
That's awesome, I would totes build that with half the drives for cheap! I personally would run Fedora or Arch server, but I'm a control freak. Also I'd try to water cool the thing, cause those 40mm fans get loud af.
Excellent price on the server. Buying a full-tower PC case, power supply, motherboard, processor, 12-port SAS controller card (SAS supports SATA drives), and additional cables, all used, might cost more, and you'd still have to build it. The only disadvantage is no hardware RAID. I manage servers in my job and have had too many bad experiences with software RAID solutions, while hardware RAID tends to be rock solid in reliability. My NAS has 11 SMR drives in hardware RAID 5 using a cheap used SAS controller card from eBay, and it works very well. Write speeds typically max out 1 Gb Ethernet. I only just stumbled on this channel. The way he talks reminds me of NileRed, but instead of chemistry, it's computers.
@@tyaty And "all" these external Seagate drives will 200% use SMR, and are now living in some sort of RAID hell; he even knows these use SMR. I learned it the hard way myself, with a single drive being starved to death all by itself: updating 1 game with a 10MB patch took up to 10 minutes....
With a little modification the SSDs will fit under the HDDs in that chassis, giving you room for more drives/parity. You can also get PCIe ribbon adapters for M.2 or U.2, along with a 90-degree PCIe x16 adapter to make use of the one slot.
12:36 It's not actually storage companies overstating the capacity of their drives; it's Windows, which uses the gibibyte counting system and calls it gigabytes.