SomeTechGuy
I make various tech content focused on home enthusiast compute users, hardware, automation, storage and gaming tech. I have worked in the tech industry for a couple of decades, but it's also a personal passion, and this channel is focused on the tech I enjoy using and find interesting. Hope you enjoy the content, and please subscribe if you do and want to catch more.
Comments
@LubomirGeorgiev 1 day ago
Easy subscribe!
@briankleinschmidt3664 1 day ago
When a man knows his end is near, he gets his affairs in order. If a hard drive could do that, all our worries would be over.
@briankleinschmidt3664 1 day ago
I wonder how they compare to SSD failure rates. If you are buying a single drive as a backup, you want the best quality, but if you are operating a nas, you can withstand one failure and not lose your shit.
@Igorath 1 day ago
I had a BRAND NEW external Seagate drive fail after 3 hours of use. Backed up my computer and wiped my internal drives. When it came time to reimage, the Seagate just started clicking, then died. A massive back and forth with Seagate ended up with me getting blocked on their SM accounts for telling people this. Never had that with a Western Digital drive yet, and I bought 2, external and internal, recently. tl;dr: Seagate drives are sh*t, don't buy.
@alexk7467 1 day ago
Years ago I used Seagate when I first started buying hard drives to back up files. I had a 500GB and a 1TB which both eventually failed, but I still had a backup so didn't lose much. These days I prefer WD drives and have never had any fail on me. For this reason I'm a bit sceptical about buying Seagate ever again.
@donaldjohnson-ow3kq 1 day ago
Beware the WD blues. They won't just outright fail, which would be better, because at least then one could diagnose the source of the problems. They will work for a while, then suddenly stop delivering data. If you are running Windows, that means the system just suddenly freezes and you'll be blaming the last software you installed or the browser, thinking that must be the cause. Then everything will be fine again for a while after reboot. They are moody or something.
@sometechguy 1 day ago
Moody blues :-(
@marcse7en 2 days ago
I rarely use 3.5" drives any more. 2.5" are bus-powered, and more convenient. I have two SMR drives. 2TB and 5TB. TBH, in real world use, there's not much difference to CMR. When I bought the 2TB, I didn't know it was SMR, but when I bought the 5TB, I did. I'm expanding into SSD.
@FlyboyHelosim 2 days ago
The only issue I've ever had with WD was a fake SSD I recently purchased. Never had an issue with their HDDs though.
@sometechguy 2 days ago
Seems to be a bit of a problem with fake SSDs. Was this a mainstream local platform? Or was it from AliExpress?
@bastiangugu4083 2 days ago
Thank you, that was illuminating and useful.
@sometechguy 2 days ago
Appreciate the comment. 👍
@mytech6779 2 days ago
I would like to know the details of the mechanism they use to do BTRFS data repair with mdadm redundancy. The reason I like BTRFS raid(1) is that I can mount the drives in any other linux machine and use all of the features, so my data integrity has no vendor dependency. md is also portable of course, but it's that secret sauce gluing the two together which makes me skeptical. (LVM2 can also do a lot of the raid tasks.)
@MrDabadabadu 3 days ago
Shingled drives are GARBAGE, period!
@connorharris1900 3 days ago
good job but you should really start the graph at 1 month and extend to 10 years
@fgjah 4 days ago
I was looking for a condensed version on this topic and this video was perfect! Thank you, this was very useful and basically confirmed which type I was already going to buy (CMR).
@stamasd8500 4 days ago
I am staying with CMR drives in my raidZ arrays. Worst case scenario for me would be resilvering after a drive failure, and the expected performance degradation (and the duration thereof) would be far worse with SMR. It has never happened so far, but that doesn't mean that it never will. I'd rather not have to go through that especially since as you mentioned there is essentially no cost benefit for SMR.
@leexgx 3 days ago
The issue is once you write more than ~100GB constantly in one session (the CMR cache zone of an SMR drive is about 100GB), it changes to direct SMR writing mode (or forces emptying of the CMR zone as new writes are coming in), which can result in 0 to 20MB/s once the drive has received a drive's worth of data. It's why benchmarking them is difficult, and performance problems usually start after 6-12 months. It really suffers when it doesn't clear empty shingles (SMR drives usually support TRIM, so the drive knows what blocks are actually empty and can reorganise the shingles when the drive is idle). I am halfway into the video and it hasn't hit the pitfalls yet of SMR when all the shingles are full.
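The cache-fill behaviour described in the comment above can be sketched as a toy model: writes land at full speed in the CMR persistent cache until it fills, then fall to direct-SMR speed. All numbers here (cache size, speeds) are illustrative assumptions, not measurements from any particular drive.

```python
# Toy model of SMR write throughput: fast until the CMR persistent cache
# (~100 GB on many drive-managed models) is full, then the assumed
# direct-SMR floor. Purely illustrative numbers.

CACHE_GB = 100        # assumed CMR cache zone size
FAST_MBPS = 180       # assumed cached (CMR-zone) write speed
SLOW_MBPS = 20        # assumed direct-SMR write speed

def write_time_hours(total_gb: float) -> float:
    """Rough time to write total_gb in one uninterrupted session."""
    cached = min(total_gb, CACHE_GB)          # portion absorbed by the cache
    direct = max(total_gb - CACHE_GB, 0.0)    # portion written at SMR speed
    seconds = (cached * 1024) / FAST_MBPS + (direct * 1024) / SLOW_MBPS
    return seconds / 3600

if __name__ == "__main__":
    for gb in (50, 100, 500):
        print(f"{gb} GB -> {write_time_hours(gb):.2f} h")
```

Under these assumed numbers, a 500GB session takes vastly longer per gigabyte than a 50GB one, which matches the "benchmarks look fine until the cache fills" pattern the comment describes.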
@Ray_of_Light62 4 days ago
I avoid shingled drives at all costs. It is a bad idea, only leading to bad results, to save a few pennies in the end...
@mcdazz2011 4 days ago
In my experience (over 26 years working in IT), manufacturers vary in terms of quality (and failure) of drives. I've seen batches/models from ALL manufacturers fail within certain time periods, which to me, doesn't indicate bad manufacturers, but bad batches of drives. These days for home use, I almost always use Western Digital drives for mechanical drives, but a mix of brands for SSDs. I've used some 2.5 inch Toshiba mechanical drives at times, but they generally fail the most within short periods of time.
@tomasbengtsson5157 5 days ago
Good video! Very clear and concise. I will give a couple of comments regarding RAID or not.

First of all, you need to think through what your application is and what you are trying to achieve. The solution will be different for different use cases. (Obviously 😊) High speed for small random writes (high IOPS)? Maximum availability (100% uptime)? Maximum storage per unit of cost? What volume of data? What budget do you have?

The important things to understand are:
1. There is no magical solution that does everything.
2. Both performance and reliability are achieved with layers of speed and protection.
3. Your budget will be the limiting factor.

Your first priority should be to get a very good backup scheme for the data you can't afford to lose.
1. This means identify your critical data and separate it from the non-critical.
2. Run a backup scheme to an off-site storage.
3. Make sure it is staged so you have immediate backups. Hourly backups, daily backups, weekly backups etc. to the level you need. The important thing is to be able to roll back to a known good backup.
4. Isolate the backup from the normal network. (E.g. don't have your backup drive mapped as a network drive.)
5. Have a system that alerts you if something goes wrong.

This may sound complex but is easy even on a home system. Use a cloud based service and a good backup software. Make sure your cloud backup is not accessible to any ransomware.

Once this is in place, step two is to decide what's important next. It's usually a good idea to separate OS and data on two physical disks. Run the OS on a fast, good SSD; go for quality, size is less important. For your data, go for a good data storage SSD if you don't have a lot of IO operations or large amounts of data. If you have large amounts of data, like long video streams or large raw picture files, go for a spinning disk of high quality. If you need more storage, buy another disk and divide your data if possible. Or buy a NAS which already has everything.

I don't recommend RAID for home or small business, since the complexity and cost, if you are going to do it right, increase a lot. A cheap RAID (cheap software, FreeNAS etc.) increases the risk of something going wrong, and it can be very hard to recover if you're not an expert.

If we are talking enterprise level, that is a different topic. It starts the same way as a good small business system, with a robust backup scheme. Then, depending on the application, you can spend tens of thousands of dollars. What's important is to analyse your needs and targets first, then build layers of protection. You will probably end up with ECC memory in your server, multiple hardware RAID controllers with SAS interfaces and battery-backed cache, hot standby drives, a RAID 10 configuration, multiple disk cabinets with redundancy, multiple UPS redundancy etc., and multiple servers with data sharing, mutual database mirroring and load balancing. You can make it as complicated as you want, basically, as long as you use tested and known working configurations. Or you can cut corners and go for a software RAID, FreeNAS or whatever, as long as you are good enough to know what risk you are taking and mitigate it properly. This requires knowledge, a lot of it. I have recovered databases where 3 days of lost data cost more than the whole IT system, because somebody thought he knew better and went for a cheap option.

My opinions after working with this in enterprise server production environments since the early 1990s, anyway. My deep tech knowledge is not up to date since I work in management nowadays; the fundamental principles are the same, but there may be new systems out there I don't know about.
@sometechguy 4 days ago
This is a great and detailed write-up and includes lots of excellent points to think about. Thank you for taking the time. 😁
@tomasbengtsson5157 4 days ago
@sometechguy Thank you for a very good video 😁 Both your video and your comments elsewhere in the comment section are spot on i.m.o. 👍🏼
@nbrown5907 5 days ago
I want you to get your hands on the new Seagate 240TB drive if you can lol. Curious to see how they manage that assuming it comes to fruition. The video I saw made it sound like it was happening for sure.
@sometechguy 5 days ago
If it's what I think it is, which is the multi-layer HAMR drive, I think that is a way off yet. But certainly an interesting concept. If it has a SATA, or even a SAS interface, it may take a while to fill it. 😂
@FINNIUSORION 5 days ago
I don't buy wd anymore. I've used every single color and type and every single one failed right after the warranty period. It's almost like they have it all calculated out lol.
@enkaskal 5 days ago
great work. just found your channel and nice to see you use the exact same methods of analysis i've always done on drives 😀👍 for US retailers, i've actually been buying direct from WD the last couple of times as i got lucky they had really good sales going on when i needed to purchase. might be worth looking at in future analysis along w/ a quick spot check of amazon. love the BackBlaze video too, am now subscribed, and will be double checking all future purchase analysis against your videos so please keep up the great work!
@stevec00ps 5 days ago
Great video, thanks! Liked and subbed :)
@sometechguy 5 days ago
Thank you 🙏
@Pegaroo_ 5 days ago
The main problem with SMR is resilvering with ZFS, which WD's engineers knew was a problem, but the suits branded them for NAS use anyway. If they had marketed them as media library drives at a fair discount they'd be a tempting option, but as it is I do my utmost to make sure I avoid them.
@sometechguy 5 days ago
Yes, agreed on this. Any persistent rewrite activity, such as a RAID rebuild, is likely to get hit with the performance found in this testing, so I would personally avoid using them in any kind of array. And given the CMR was actually no more expensive than SMR disks, it's hard to see any good use case. Though archiving wouldn't be so bad, as long as you are not doing this repeatedly on the same disk and overwriting files. So all in all, I would just avoid them as a rule, and I wish it was clearer that SMR was in use on these and other disks. This is my attempt to shine a little more light on that.
@me1not1a1number 6 days ago
What about life span? Do we have any data?
@sometechguy 6 days ago
I have not seen any public data sources on this. I did some videos analyzing and comparing reliability of disks based on a large pool of data from BackBlaze, but there are no SMR disks in that dataset, which isn't a big surprise. Due to write amplification, the SMR disks certainly work harder, so the head actuators are going to be a lot more active. I don't honestly know though if this is the component that is the most likely to fail, so I can't say if this will cause a reduction in life span. The best way of course would be to find a data source that reveals this, so if anyone knows one, I would love to hear about it.
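To make the write-amplification point above concrete: a small host write that lands inside a shingled zone can force the drive to rewrite far more data than was asked for. The zone size below is an assumed, typical-sounding figure for illustration only; real drives vary and do not publish this.

```python
# Back-of-envelope worst-case write amplification for an SMR
# read-modify-write: the whole zone is rewritten to service one small
# update. Zone size is an assumption, not from any datasheet.

ZONE_MIB = 256  # assumed shingled zone size

def worst_case_amplification(update_kib: float) -> float:
    """Bytes physically written per byte the host asked to write,
    assuming the entire zone must be rewritten."""
    return (ZONE_MIB * 1024) / update_kib

print(worst_case_amplification(4))  # single 4 KiB update -> 65536.0
```

Even if real firmware only rewrites part of a zone, the ratio stays orders of magnitude above the 1.0 a CMR drive would see, which is why the heads work so much harder.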
@richardhole8429 6 days ago
Your rapid speech is like a machine gun assaulting my ears. You are not paying YouTube by the second, give us a break!
@Turco949 6 days ago
Informative, and very much correlates with what I always thought of the brands in general. As stated, for this study to be much more accurate, a longer testing duration is needed (5+ years min.).
@IMFERMO. 6 days ago
I'm so desperate, I tried to remove the screws but my freaking card had them sunk into the plastic, the hex screws flattened and idk how to remove them anymore.
@flagger2020 6 days ago
Well laid out analysis. I still shiver anytime anyone says "star" in a disk name, as I once lost two Deathstar disks in a raid before I could do a rebuild. I think they were IBM GXP, made in Hungary. Ancient history but not forgotten. My Hitachi drives based on magic pixie dust or whatever are all still in service, surprisingly...
@ekiM2K 6 days ago
Neat video, I'll keep this in mind when buying my next M.2 NVME SSD 😆
@arthursouza420 6 days ago
16:29 idk if I'm one of those 4810 in Rio de Janeiro, my unit came from the Paraguayan market....
@arthursouza420 6 days ago
btw, this chart is nuts.
@arthursouza420 6 days ago
crying in 20EZAZ
@ferkator 7 days ago
Thank you for this video, liked and subscribed. I'm currently debating whether to splash some cash on a big CMR drive (I'm looking at some enterprise refurbished models because they're cheaper, and also riskier, of course) or keep my current WD Black P10. I didn't know about CMR or SMR when I bought it; now I have it half full (or half empty?) and I'm seeing some speed degradation compared to when it was less populated (writes are about 70-80 MB/s now). It's a Plex storage drive with looooots of idle time (24/7 on) so I guess I should be 'fine' based on this video, right? I mean, a CMR would be better but it's not a critical change.
@sometechguy 7 days ago
I believe the P10 is a 2.5” in a caddy and very likely SMR. An external 2.5” isn't likely to be the fastest choice, but if you are filling a drive sequentially and not replacing files too often it's likely fine, because it's the overwriting or modification of files in the SMR zones that has the impact. And if you are not writing constantly, depending on the cache available on the drive, it's likely manageable if there is idle time. Also, reading media files often isn't too terrible from a bandwidth perspective, but if it's 4K and not a compressed source, you could be up over 100MBps and it could be a bottleneck. And thanks for the sub, good to have you here. 😁
@ferkator 7 days ago
@sometechguy yes, it's an SMR drive. I do replace some files sometimes (delete movies/series, get new ones, etc...) but I do this like, I don't know, once per week? The rest of the time it sits idle until there's some media playing. I have not done extensive testing but I do remember it writing at around 100 MB/s when it was empty and now it does 70-80. I'm wondering if I'm going to have a paperweight when it's 75-80% full (that's why I'm looking for CMR drives).
@Igorath 7 days ago
NEW Seagate portable one died after 3 hours and cost me all my data. So safe to say Seagate = bad.
@Hex-Mas 7 days ago
MSI = Pure BS
@MR-vj8dn 8 days ago
Great analysis, good production. Thanks for sharing 👍🏻
@sometechguy 8 days ago
Thank you 🙏
@Infinity_Ghost 8 days ago
Useful video with good proof. N1
@sometechguy 7 days ago
Thank you, glad it was useful and appreciate the comment and feedback.
@PatrickDKing 8 days ago
I used to buy all HGST Ultrastars, then for whatever reason distribution sucked and it seems only odds-and-ends places or shady resellers have them available. So I decided to just get WD Gold drives exclusively now, since all the major distributors have them. If you are patient you can get them cheaper than any of the other drives, and at a lower cost per TB. My sweet spot is about $270 for an 18TB WD Gold drive here in the USA. I almost got some Ultrastars but after digging around found out they were "recertified/refurbished", whatever the hell that means, and by whom I don't know. Not going to risk it to save a buck on my mission critical stuff. I won't pay normal Gold prices though, because those are horrendous.
@sometechguy 8 days ago
You should be able to find WD Ultrastars OK, and those are an evolution of the HGST units, and are also basically the same as WD Gold under the hood. I did a comparison of the larger capacity enterprise disks based on BackBlaze reliability data and they seem really solid.
@tsclly2377 8 days ago
tape LTO-5 write speeds....
@sometechguy 8 days ago
Worse than :😀
@tsclly2377 8 days ago
@sometechguy you have a good sense of humor. I was just saying, as I need a server backup [as if one opens up an LLM there are backup requirements]. I wish that the IBM optical tape had evolved, but I have a feeling the government has it and it is now a 'national security thing'... anyway I was just trying to make a somewhat valid comparison. Anyway, new to your channel and so far I like what you are doing. By the way, I'm a big L406S fan and believe the Russians when they say one cannot harden CPUs, etc. below the 28nm process (double masked?)... like from a super Carrington Event. Check out our solar (planetary) alignments in 2040, 2046 October and a couple more in the 2050s...
@lian-illuminati 9 days ago
I was about to buy a Seagate Barracuda 4TB ST4000DM004, but after I watched this video I searched for its recording tech and it turns out to be an SMR drive. Taking into consideration the testing done in the video and the experiences shared by other people in the comments, I decided to go for a 4TB IronWolf ST4000VN006, which is CMR. Thank you for this video. (As a quick recommendation, slow down your talking speed a bit, it will be appreciated.)
@sometechguy 9 days ago
Glad this was a help. All the Barracuda drives do seem to be SMR, and again, it is not always obvious. I bought a 2TB Barracuda and will also be doing some testing to see how this compares to the WD models. And yes, on this video I think the pace was too high. Some have said it's fine, but for others it's a bit intense. You can slow it down in YouTube and set it to 0.8 or 0.9, but I know this isn't really a good answer and I will certainly take this on board for future vids. Appreciate the feedback.
@leexgx 3 days ago
Barracuda Pro are CMR (all IronWolf are CMR).
@CamelCasee 9 days ago
This probably sent a lot of late 2010's laptops to the landfill for being slow.
@markwith140 9 days ago
Good video, very good analysis. I did not mind the speed, it worked well for me. I look at the specs when buying HDDs to make sure it's not an SMR. I bought a bunch of disks for a NAS recently so checked the specs carefully. I was supplied an SMR drive within a rebuilt PC, and compared to the CMR drive that I installed in the system myself, the SMR is dog slow and unpredictable in its operation.
@retrozmachine1189 9 days ago
Do the SMRs have the ability to do an on-the-fly read-after-write verify concurrent with the actual write operation (well, momentarily after in time, but not with a separate spin of the disc)? If not, that means every single time there is an RMW cycle, there appears the possibility of losing previously successfully stored adjacent data by writing to an area of the disc that has become bad; or else we take a pretty hard performance hit to pass the media under the RW head again. Write a 'sector' and suddenly data from another unrelated sector becomes unreadable. I don't like that idea at all.
@sometechguy 9 days ago
I can't imagine the write and verify order of operation is different from any other disk. I think the best way of visualising this is that inside an SMR zone (which if I understand correctly, is where I think your scenario exists) there are only ever sequential writes and never random writes. So an entire zone is written from start to finish as part of the RMW process, and writing to a given track can partially overwrite the track ahead, and not the one behind, which will be corrected as the drive moves onto the next track. So I would expect that the write and verify process is performed the same as any sequential writes of a CMR with a similar generation.
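The zone behaviour described in the reply above can be sketched in a few lines: inside an SMR zone there are only sequential writes, so modifying a track means rewriting from that track onward, while tracks behind the write point are never touched. This is a conceptual model of drive-managed SMR, not real firmware; the function name and track-as-bytes representation are illustrative.

```python
# Conceptual read-modify-write of an SMR zone, modelled as a list of
# tracks. Writing a shingled track partially overwrites the track ahead
# of it, so everything from the modified track to the end of the zone is
# re-laid sequentially; earlier tracks stay intact.

def rmw_zone(zone: list, start_track: int, new_data: list) -> list:
    """Return the zone after rewriting it from start_track onward."""
    assert start_track + len(new_data) <= len(zone)
    kept = zone[:start_track]                  # tracks behind: untouched
    tail = zone[start_track + len(new_data):]  # tracks ahead: must be read
    # In the real mechanism the tail is read into cache first, then the
    # new tracks and the tail are written back in one sequential pass.
    return kept + new_data + tail
```

Note the data content of the tail is unchanged; the cost is that it has to be physically rewritten, which is where the performance hit comes from.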
@MohammedAsif-xc1gn 9 days ago
Where do i get these fans and what's the dimensions?
@sometechguy 9 days ago
I can’t give you specific advice on where, other than check sites like eBay where you have buyer protection in case they are not correct spec/fit. Dimensions on my fans are in the comments, I would double check your own just in case they differ.
@Goodmanperson55 9 days ago
If they were much cheaper, I would happily get an SMR drive and the money saved would ideally be spent towards a small NVMe SSD and a PrimoCache license. IMO, it's probably the best way to leverage the capacity potential of SMR technology. It provides a nice bit of guarantee once the internal cache is filled.
@thisnthat3530 9 days ago
I use a combination of 6x 3TB Hitachi Ultrastar desktop HDDs, 1x 3TB Toshiba desktop HDD, 8x 4TB WD Red HDDs, and 6x 12TB "White" (shucked Elements USB) HDDs. They all have over 35000 hours on them. Some have been running almost continuously for 15 years. My QNAP NAS is 15 years old, my D-Link NASs are nearly that old, and everything has worked flawlessly so far. I stopped buying Seagate disks for many years after the disaster that was their 8GB IDE offering, as well as numerous 1.5TB and 2TB failures. Recently bought a few 2.5" though so time will tell if they've improved.
@Mike80528 9 days ago
When companies pull this crap (looking at you WD), I just avoid them. Problem solved.
@FlyboyHelosim 2 days ago
Except WD is hard to avoid, being one of, if not the, best in the game.
@sheldonkupa9120 9 days ago
Very interesting video, good job 👍👏 For home users, SMR doesn't make sense imho. CMR drives are (at least until now) only slightly more expensive, so why risk it. SMR seems to make sense for certain professional use cases, where it may be cheaper. I noticed that a lot of recent Seagate SMR drives are for sale "refurbed". Maybe a lot of customers sent their HDDs back?!
@satoshimanabe2493 10 days ago
SSDs are completely different...until you realize they are not. They initially had SLC, then transitioned to MLC, TLC, QLC. So multiple bits are "stacked" into a single cell. Effectively the same as shingling, as each cell must go through read-modify-write. The more things change, the more they stay the same. Lol.
@incandescentwithrage 9 days ago
Not quite the same. SLC, MLC, TLC, QLC: yes, each has an increasing number of values/bits that can be stored per cell. Cells can only accurately store (and read out) the increased number of charge states a certain number of times, reducing write endurance. However, flash has always had read-modify-write behaviour, as there's a minimum number of cells that can be modified at a time, defined by the erasure block size. No SSD has ever had erasure blocks as small as the 512B sector size presented to the OS.
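To put a rough number on the erase-block point above: even a modest flash erase block dwarfs the 512-byte sector the OS sees, so any sub-block update is inherently read-modify-write. The block size used here is a typical-sounding assumption for illustration, not from any specific SSD's datasheet.

```python
# How many OS-visible 512 B sectors fit in one assumed flash erase block.
# Erase-block size is an illustrative assumption; real parts vary widely.

ERASE_BLOCK_KIB = 4096   # assumed 4 MiB erase block
SECTOR_B = 512

sectors_per_block = ERASE_BLOCK_KIB * 1024 // SECTOR_B
print(sectors_per_block)  # -> 8192
```

So a single-sector overwrite can, in the worst case, touch thousands of sectors' worth of flash, which is the same shape of problem as an SMR zone rewrite.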
@sheldonkupa9120 9 days ago
Yeah, I also like to think they transferred the SSD mechanism over to the HDD world, and for no good...
@sometechguy 9 days ago
This is exactly right, and the similarities to SMR go further. SLC > MLC > TLC > QLC get slower and slower, so those larger capacity drives have an SLC cache for speed and endurance that behaves just like the persistent cache in SMR drives. And if you fill that cache then writes need to be done at QLC speed, and performance drops badly, just like when the SMR persistent cache gets filled. They are alike in their behaviour in many ways. 🙄
@bikemanI7 10 days ago
Won a Seagate 8TB SMR Drive in 2019 via Twitter CES Contest, used it as primary backup drive for a year, then obtained a Western Digital My Book 8TB as Primary Backup drive, and that one works so much faster than the Older Seagate 8TB SMR Drive for Backing up files. (I use the Seagate one still very little for some extra backups at times)
@herauthon 10 days ago
My question is: SMR, will it decay over time? An interesting test would be system-to-system over LAN, to see how the max speed is kept up, the time to sag, and the interruption and restore curve.
@sometechguy 10 days ago
If you mean, will performance decay: it looks like it will depend on a number of things, such as how often files are modified or rewritten and how much idle time the disk gets. It could certainly be a problem in some scenarios, and as the drive fills and gets fragmented, it could get worse. I think it's fair to say that for many users it probably won't matter noticeably, but for various reasons the disk could just be generally a bit slower. And the real question is why would you do that when the CMR drives are around the same price, maybe cheaper, and don't have any of these concerns. I just prefer people didn't end up with these drives without realising what they are getting.
@micnor14 10 days ago
A few years ago I purchased a Seagate drive and had my first experience with SMR. It was terrible, as my workloads were 100% worst-case scenarios for SMR. I had only ever used CMR. I RMA'd the drive, only to have the same issue again. I ate the cost and trashed it. I bought my usual WD Red drive only to have it happen again for a 3rd time! That's when I discovered the SMR/CMR disaster with WD. I swapped mine out for a regular CMR drive and have been happy since. Just bought a WD Blue 8TB and am glad I read that model breakdown PDF before buying. SMR is a joke.

You should repeat the worst-case scenarios but instead of stopping the timer when the file transfer says it is complete, you should end it once the SMR drive activity goes back to zero. A CMR drive takes 10 minutes to transfer, then it's done; I can unplug the drive. But an SMR drive takes 10 minutes to transfer, then 10 more minutes to finish rewriting? That's 20 minutes to transfer, effectively half the speed of a CMR drive. Unless you unplug it once it says "Files transferred", then you end up with a bunch of corrupted files. Yeah. SMR is straight-up BS. You can't shut down/reboot Windows until the drive is done processing, long after it's "done transferring files".
@sometechguy 10 days ago
It's less deterministic to find the time to cache-cleaning completion, just because there isn't a clear trail; you just have to listen to the drive. But I did record some times, and on the 7200 SMR it started around 12 mins in the earlier tests and climbed to around 18 mins for the later ones. So file transfer to persistent cache was ~3 mins; time to get all data written to permanent storage, 18 mins. Pretty surprising, but you can see those numbers more clearly in the non-idle rewrite test, because there the cache is saturated and you are seeing the real time to write to the SMR portion of the disk.
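One way to put a number on "when is the SMR drive actually done", rather than listening to it, is to poll the sectors-written counter in /proc/diskstats on Linux and wait for it to stop moving. The field position follows the kernel's documented diskstats layout; the device name and polling approach are just an example sketch.

```python
# Detect when an SMR drive's background cache flush has finished by
# watching the "sectors written" counter (10th field) in /proc/diskstats.
# Poll this every few seconds; two equal samples mean the drive is idle.

def sectors_written(diskstats_text: str, dev: str) -> int:
    """Extract the sectors-written counter for device `dev` from the
    text of /proc/diskstats."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) > 9 and fields[2] == dev:
            return int(fields[9])
    raise ValueError(f"device {dev!r} not found")

def is_idle(prev: int, curr: int) -> bool:
    """No sectors written between two samples -> flush likely done."""
    return curr == prev
```

In practice you would read /proc/diskstats, sleep a few seconds, read it again, and stop the clock once `is_idle` holds for a couple of consecutive samples; that gives the ~18-minute figure mentioned above a reproducible definition.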