
Ultimate SSD Speed: Combining FOUR SSDs into a Supersonic Storage Drive! 

Constant Geekery
42K subscribers
36K views

In this video I experiment with RAID 0 Striping to combine four Samsung 990 Pro SSDs into one very fast storage volume! I'm using the ASUS Hyper M.2 X16 Gen 4 card to mount the drives.
I also create a two-drive system stripe with a pair of Crucial P5 Plus SSDs... and I compare the speed of my motherboard RAID versus using Windows RAID with Disk Manager.
From my Short on this, where I posed the 'guess how fast?' question... a shout out to: @MeinDeutschkurs for correctly predicting stripe performance; @deavo74 for guessing the theoretical top speed correctly; @jimtaylor431 for guessing very close; and @andrewchandler2347 whose guess was closest to what we actually got.
Join this channel to get access to perks:
ru-vid.com...
AMAZON LINKS - As an Amazon Associate I earn from qualifying purchases
Samsung 990 Pro
USA Store: amzn.to/43VpyCz
UK Store: amzn.to/3DH4PHH
ASUS Hyper M.2 X16 Gen 4
USA Store: amzn.to/3OspTaa
UK Store: amzn.to/3Yk1DLE
Crucial P5 Plus
USA Store: amzn.to/3OspVyO
UK Store: amzn.to/47sWWDE
0:00 Intro - Samsung 990 Pro
0:48 ASUS Hyper M.2 X16 Gen 4 card
2:13 An unexpected upgrade - Crucial P5 Plus
3:05 ASUS Hyper card without hardware RAID
3:25 PCIe Bifurcation
4:05 RAID 0 (striping) overview
4:31 Performance predictions
4:48 4x the speed?
5:36 Benchmark result: 4x Samsung 990 Pro
6:03 Shout outs
6:33 Benchmark result: 2x Crucial P5 Plus
7:00 Windows striped volume setup
7:39 Benchmark result: 4x Samsung 990 Pro Windows striped volume
8:03 Potential thermal benefit?
8:45 Benefits of fast SSD speeds
9:08 Conclusion

Science

Published: 1 Jul 2024

Comments: 145
@ConstantGeekery 10 months ago
SOME IMPORTANT NOTES: Some of this is in the video, but since not everyone watches the whole video... 😁
1. DO NOT use a RAID 0 stripe for storing critical data… unless you have a backup.
2. Make a backup… then make another one for good measure.
3. RAID 0 does not provide redundancy - if any one disk fails, the whole volume is going down and you lose your data. See point 2.
4. Some RAID arrays do provide redundancy, or a combination of striping AND redundancy… but NONE of them replaces a good backup routine. See point 2.
5. A RAID 0 stripe massively improves sequential read/write performance, but does not make much (if any) difference to random drive operations - as shown in the video. Unless you are regularly managing large files like me, you may not see much benefit.
6. SSD RAID volumes often don’t support TRIM - it’s important to do your research. If you do choose to do this, consider buying at least double the capacity you actually need and be prepared to periodically format the volume/individual SSDs when it inevitably slows down.
7. Most people would not stripe their system drive as I do in this video - if you don’t understand the pitfalls, or the benefits, do something else.
8. This system is not my only computer. All my data is backed up multiple times on RAID arrays which do have redundancy, and in multiple physical locations.
9. If my striped volume dies it will cause me precisely ZERO problems, because I have arranged my workflow so that I can use multiple computers… and, see point 2.
10. NOTHING in this video should be construed as me making any recommendations on how you should organise your storage. You proceed at your own risk.
11. Did I mention that you might want to make a backup?
Have fun everyone 😊
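Point 5 above, and the video's "4x the speed?" question, can be sketched with a back-of-envelope calculation. The per-lane PCIe figures and the drive's rated sequential speed below are nominal assumptions (vendor headline numbers and approximate post-encoding link rates), not measurements from the video:

```python
# Back-of-envelope RAID 0 sequential throughput estimate.
# Per-lane figures are approximate usable GB/s (128b/130b encoding included);
# the drive rating is the vendor's headline number, not a measured speed.

PCIE_GBS_PER_LANE = {3: 0.985, 4: 1.969}  # approx. usable GB/s per lane

def stripe_estimate(n_drives, drive_rating_gbs, pcie_gen, lanes_per_drive=4):
    """Theoretical stripe speed = n * min(drive rating, link bandwidth)."""
    link = PCIE_GBS_PER_LANE[pcie_gen] * lanes_per_drive
    return n_drives * min(drive_rating_gbs, link)

# Four Samsung 990 Pros (rated ~7.45 GB/s) with each drive's link stuck at
# PCIe 3.0 x4, as happened in the video:
print(round(stripe_estimate(4, 7.45, 3), 1))  # → 15.8 (GB/s, link-limited)
# The same four drives if the card ran at full PCIe 4.0 x4 per drive:
print(round(stripe_estimate(4, 7.45, 4), 1))  # → 29.8 (GB/s, drive-limited)
```

Real benchmarks land below these ceilings because of controller, filesystem, and stripe overhead, which is why sequential results scale well while random 4K results barely move.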
@deavo74 11 months ago
6:08 Best part of the video! 😂
@theramblinggamer165 9 months ago
Thanks for the review man! Saved me from making a mistake getting the Hyper card for my rig - as it is, my motherboard does not support bifurcation, although I just really wanted to have this option for expanded, fast storage. Looks like I will wait on it a while longer now haha. Thanks again!
@jpdj2715 11 months ago
Very interesting and well researched. I would add a few things. A motherboard may have a so-called South Bridge chip (SB - "chipset") that acts like a switch between I/O channels. Consequently, all I/O devices with their CPU connection routed via this SB compete for CPU time. Your mouse may compete with your system (boot) drive when both dangle from the SB - the default of most motherboards. The logical design diagram of the motherboard will shed light on this. A motherboard may have an NVMe slot directly on the PCIe lanes of the CPU, but using it may render a PCIe slot inoperable, and you may not be able to boot from the NVMe slot. The RAID array on the PCIe card in the video bypasses the SB and has direct links to the CPU - this should lower the latency associated with each I/O (as in IOPS) and hence improve throughput for random or transaction-based I/O.

As my motherboard has an Intel Rapid Storage (IRST) RAID controller, I suspect in this case the RAID is partly in a driver and partly in hardware, and the performance difference between such a HW RAID and a Windows RAID is not very relevant. As with spinning-platter disk drives, SSDs have an I/O controller that runs firmware and uses cache, so top speed in a RAID 0 array depends on the cache size of the member drives and the number of members - concurrent writes to member drives aside, as are happening here with RAID members having their own CPU-I/O channels. The cache and member-count factors become relevant if the members share I/O line(s).

Personally, I have a RAID 0 array like this one that I use for write-bottlenecked I/O. This includes cache files for Photoshop and Lightroom, their catalogues, etc. It can also host Windows' page file, which does not actually need to be on the C: drive (except you may no longer get a .DMP (dump) file when Windows crashes). And I have my OS and apps on a RAID 1 array. Those volumes see only few writes, especially if the page file is elsewhere.
You could make a partition in the RAID 0 array that you mount on C:\Program Data and in that way hopefully speed up some apps a bit that write to drive instead of using global variables (meh), or (fine) store config data there. The OS on RAID 1 facilitates concurrent reads of different data blocks, which reduces application load times and also makes paging of binary (app, OS) data faster. All memory is completely written into a page file, except for binary data that only has its pointers/addresses in memory when associated memory pages are paged out (better: overwritten in this case).

And, as SSD life expectancy depends on the quality of its memory cells, as that quality is expressed in TBW (terabytes written), and as essentially TBW/DriveCapacity gives you how many program/erase (P/E) cycles render your SSD's memory cells inoperable, you may have bought an SSD with a 600 P/E-cycle life expectancy. Completely rewrite such drives every day and they'll die before the second year is over. So I stage data that is on my workstation only temporarily on an array of rotating-platter drives.

In conclusion, as an aside: ask a team of electrical engineers and quantum physicists how long they think data in powered-off solid state will remain reasonably consistent. The answer may be "between 7 and 7,000 hours". So, if consistency of your data over time is important, make sure to use something like RAID 5 with a HW or SW controller that does data scrubbing on a, say, daily basis. RAID 0 and RAID 1 have zero data consistency guarantee.
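The TBW arithmetic in the comment above can be sketched quickly. The drive figures below are hypothetical, chosen only to match the commenter's ~600 P/E-cycle example, not taken from any specific datasheet:

```python
# Rough SSD endurance estimate from a TBW rating, as described above.
# All numbers are hypothetical/illustrative.

def pe_cycles(tbw_tb, capacity_tb):
    """Approximate full-drive program/erase cycles implied by the TBW rating."""
    return tbw_tb / capacity_tb

def years_of_life(tbw_tb, writes_tb_per_day):
    """Years until the rated TBW is exhausted at a given daily write load."""
    return tbw_tb / writes_tb_per_day / 365

# A hypothetical 2 TB drive rated for 1200 TBW:
print(pe_cycles(1200, 2))                 # → 600.0 full-drive rewrites
# Completely rewriting that drive once a day (2 TB/day of writes):
print(round(years_of_life(1200, 2), 2))   # → 1.64 years, i.e. dead before year two ends
```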
@hauer54 11 months ago
Great info... thanks!
@dafyddthomas7299 1 month ago
Thanks for the informative video - especially using two mid-capacity (say 2 TB) M.2s as a RAID stripe drive to obtain improved read/write speeds.
@semuhphor 9 months ago
This past weekend (before seeing this video) I bought one of these cards. My plan was to make a 2TB RAID 10 drive with 4 Gen 3 1TB M.2 drives. I have a Gigabyte X570 Aorus Master MB. When I plugged it into the second slot, I only saw one drive (like you mentioned). When I bifurcated the slot, I saw two, not four. Now I don't know if it's my board or all X570s, but the second slot is only an x8 slot, so it can only see two of the drives. I ended up returning it and getting a single 2TB drive for the system. Oh well... it was a learning experience. Thanks for the video.
@olegyamleq7796 11 months ago
great overview!!!!!!! thanks!! man, they don't make computers easy. there were so many caveats to keep in mind!!!
@bryans8656 11 months ago
I don't have a desktop PC that can take advantage of your setup but I still enjoyed learning about it. Thanks.
@ConstantGeekery 11 months ago
Glad you enjoyed it!
@dunknow9486 10 months ago
Finally someone does a video on SSD RAID. Well done 🎉🎉
@dafyddthomas7299 1 month ago
agree
@Olsnapper 6 months ago
Great video! I was wondering, is bifurcation an issue with an ITX board? I want to do an APU build and leave the PCIe slot open for this NVMe riser card - wouldn't that be OK to do?
@andrewchandler2347 11 months ago
Thanks for the mention! As I recall the aim was to guess the actual not the theoretical! 😉
@ConstantGeekery 11 months ago
It was... though if my motherboard chipset was supported, I'm confident it would run at double the speed.
@learnalanguagewithleslie 1 month ago
Q: You said you already have 2 SSDs installed on the motherboard, which prevents you from RAIDing the ASUS card. Why is that? Because it then tries to RAID your existing SSDs? I thought you would have been able to specify which drives weren't part of the RAID array in the BIOS settings and exclude those, or create a separate RAID for those two system drives. 😕 I am thinking of getting the Gen 5 version of this card for additional fast storage. It's handy to know that Windows RAID is slower than expected.
@ConstantGeekery 1 month ago
Unfortunately, the Lenovo BIOS doesn’t seem to allow you to switch on RAID separately for the on-board NVMe sockets and the add-in card. That probably makes sense because it’s all running on the same PCIe interface. We have a couple more of these Lenovo P620s in our web studio and we bought Sabrent 4-drive cards for those. They do appear to run at full PCIe 4.0 speed and are a similar price.
@Skobeloff... 5 months ago
3:24 Wouldn't the first caveat to using one of these add-in cards be that you need a spare x16 PCIe slot that is not sharing lanes with another slot?
@REKTRc 3 months ago
Just the other day I set up 2x Samsung 980 Pro 1TB as RAID 0 on my gaming rig and hit 14 GB/s read speed. Thank you sir!
@sheldonkupa9120 10 months ago
Yeah, interesting, wonderful playground. I love to play around with such ideas also. Gonna test this setup under linux with diverse filesystems (zfs, btrfs) and raid levels🤣
@benjaminaustnesnarum3900 3 months ago
I'm thinking of getting the Gen 5 version in my Asus ProArt Z790 Creator Wifi mboard. I'm not sure if it's compatible in the second PCIe slot, though, as it says that 1(x8)/x0 is bifurcation. The first slot is 1(x8)/1(x16) so will it take x8 from there? 🤔
@ConstantGeekery 3 months ago
Looks like it doesn’t support 4-way bifurcation, so you’ll be limited to two drives in slot 1, and it will only see a single drive if you use slot 2. Bit of a shame for such a nice motherboard, but I guess with four M.2 slots on the board already it wasn’t seen as a priority.
@SighManP 11 months ago
I remember doing something a little similar with an older Mac mini Pro - which supported 2 drives. I put SSDs in both and then, back when macOS supported it, set them as RAID 0. Damn, that was fast given most disks were still platters...
@RalfWeyer 9 months ago
What I've seen with the latest M.2 SSDs is that they can get very hot when in heavy use. So if you use a very small enclosure without proper cooling, this can cause the SSD to become completely unresponsive, even after a very short time.
@user-lt7xk7qw8e 5 months ago
I have laptop running gen wb 850 5700 read write speeds, there was gen 3 3200 read write about from factory, double drive sizes also 2tt
@legendhasit2568 8 months ago
How are you mounting the system RAID drives? Is this on another AIC, or are you spread across two motherboard NVMe slots?
@ConstantGeekery 8 months ago
Two motherboard slots.
@adilqatinefilms 4 months ago
Is it good to have 4 M.2 drives in RAID 0 as boot, or just 1 boot and 3 storage, in the ASUS PCIe x16 adapter? I need more speed for rendering. Thanks for the info.
@ConstantGeekery 4 months ago
You should be able to configure a 4-drive bootable RAID 0 stripe.
@desi76 11 months ago
You can increase your drive speeds further by caching your available RAM against the volume using PrimoCache (or something similar).
@dunknow9486 10 months ago
May not be useful when his file sizes are large.
@MeinDeutschkurs 11 months ago
Aaah, stripes was the vocabulary I was looking for. 😂😂 🙌
@LtCzr 7 months ago
Are you using an AM4 or AM5 motherboard setup for this? I have an X570 motherboard and an R7 3800X. My GPU is an RTX 4090, so I'm looking to upgrade the rest of my setup to AM5. However, I'm not sure if there are any motherboards available, expensive or cheap, that can bifurcate to support both a 4090 and 4 M.2 NVMe drives on the ASUS card. Any recommendations?
@ConstantGeekery 7 months ago
Interesting conundrum. My board is the sWRX8 socket and WRX80 chipset. I also have a 4090 in my system, but with Threadripper Pro my system has 128 PCIe 4.0 lanes, so there's plenty of capacity for all this stuff. The latest AM5 boards and 7000-series chips support 28 lanes of PCIe 5.0 (double the speed of PCIe 4.0). Since the 4090 is PCIe 4.0, I'm guessing you could in theory run it at PCIe 5.0 x8 and get the same result as PCIe 4.0 x16... and that would leave enough lanes to bifurcate an x16 slot. However, an AM5 board won't have a chipset supported by the ASUS carrier card I use in this video, so if you went with that you'd end up running at PCIe 3.0 speeds for the SSDs just as I did. Perhaps there is another combo of motherboard and SSD carrier card that can do what you need with AM5, but I don't have wide enough current product knowledge to make any specific recommendations. Please report back if you find a solution! 😁
@kevinburns8473 17 days ago
I would also like an update if you find anything. I'm thinking about making a build like that with the 5090 when it drops later this year. With the new release of ddr5, I'm excited to see how snappy i can get it.
@LtCzr 16 days ago
@@kevinburns8473 They haven't released the product yet and I don't really see it coming out any time soon. If they release it I'll try to respond. However, I did find a few motherboards that have 3-4 NVMe slots, so it could be possible with an additional x4 or x8 PCIe slot to expand it to 5-6. But my goal is to have multiple NVMe drives on the motherboard and 2 x16 PCIe slots, so I can have my 4090 and 4 more PCIe slots.
@ericneo2 10 months ago
Why not use software RAID? Spin up a Linux VM, pass the drives through as virtio-SCSI, throw them into a ZFS pool and iSCSI them back to the host. Then have the host overlay NTFS onto the ZFS block storage. You get RAID, and read caching if you increase the VM's available memory.
@MacGyver0 8 months ago
The new Gen 5 ASUS card is already available. Worth a try.
@cLickphotographySEA 10 months ago
Awesome to know knowledgebase
@KeithTingle 11 months ago
There are some cards from HighPoint that might interest you.
@RealLordy 6 months ago
Just a question: why no test with Win 10 Storage Spaces? I thought the RAID setup in Windows was kind of obsolete.
@ConstantGeekery 6 months ago
I probably should have included a Storage Spaces test. I suspect the result might have been slightly better, at the cost of a few CPU cycles, but it would have been interesting to see, so apologies for not including that. I wouldn't say the RAID functionality in Disk Management is obsolete though - there are pros and cons to both approaches.
@bdhaliwal24 5 months ago
It would be interesting to see how it performs with software RAID in Linux.
@be-kind00 10 months ago
Why didn't we get full gen 4 bus speeds on the drives attached to the Asus controller?
@ConstantGeekery 10 months ago
According to ASUS support, the WRX80 motherboard isn’t supported. Apparently for AMD, the card will only run at Gen 4 if using hardware RAID on an X570 or TRX40 board. That said, another commenter has it running full speed on an EPYC system, so beats me 🤷🏼‍♂️
@thomaslechner1622 10 months ago
Let Windows manage your striped SSDs?? How is that working out with the typical ubuntu / Win10 dual boot system?? I guess a hardware RAID controller is required, right??
@ConstantGeekery 10 months ago
Correct. Windows RAID volumes won't be visible to the Linux OS... and a Linux software RAID won't be visible to the Windows OS. Hardware RAID is the way to go if you need a dual boot environment. Alternatively, you could virtualise one of the OS installations within the other.
@masterphoenixpraha 11 months ago
I'm currently looking for a Time Machine backup solution... I'm now using a traditional USB 3 spinning HDD, and it drives me crazy how slow it is. With my data I don't need anything big, so this might be a good tip for me with some enclosure. I'd still probably go with USB 3, as Thunderbolt enclosures still seem pretty expensive to me.
@ConstantGeekery 11 months ago
Samsung T7 drives are brilliant.
@brightboxstudio 11 months ago
For Time Machine, you don’t need anything super fast - nothing close to what's shown in this excellent video. A cheap 10Gbps USB-C enclosure with an affordable SSD in it will be an effective Time Machine drive. A Time Machine backup involves enough of its own overhead that it will probably never come close to saturating the bandwidth of the fastest SSDs or Thunderbolt. There is definitely zero justification for setting up a RAID of SSDs for Time Machine.
@huggeebear 10 months ago
What about THUNDERBOLT? If I wanted to build an array for a Mac Mini using these Samsungs?
@ConstantGeekery 10 months ago
Good question. Sadly, there wouldn't be much point. Each Thunderbolt port on your Mac Mini has 4 lanes of PCIe... and each SSD drive needs that many lanes. Another way to look at it is Thunderbolt's theoretical bandwidth of 40 Gbit/sec or 5 GB/sec. Some of that is reserved for display bandwidth, and there's some encoding overhead. Realistically, you might be able to run one of these drives at PCIe 3.0 max speed (half potential speed), with a good external Thunderbolt enclosure.
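The reply above can be checked with quick unit arithmetic. The 40 Gbit/s total is Thunderbolt 3/4's headline link rate; the 22 Gbit/s usable-for-PCIe-data figure is a commonly cited approximation and is treated here as an assumption:

```python
# Why a fast NVMe stripe doesn't help much over Thunderbolt:
# the link itself is the bottleneck. Numbers are nominal/approximate.

TB3_TOTAL_GBITS = 40      # Thunderbolt 3/4 total link rate, Gbit/s
TB3_PCIE_GBITS = 22       # approx. portion usable for PCIe data after
                          # display bandwidth and protocol overhead

def gbits_to_gbytes(gbits):
    """Convert Gbit/s to GB/s (8 bits per byte)."""
    return gbits / 8

print(gbits_to_gbytes(TB3_TOTAL_GBITS))  # → 5.0 GB/s theoretical link total
print(gbits_to_gbytes(TB3_PCIE_GBITS))   # → 2.75 GB/s realistic data ceiling
```

Either figure is well below what even a single PCIe 4.0 x4 SSD can sustain, which is why striping drives inside a Thunderbolt enclosure buys little.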
@tadeoluwatobi5380 17 days ago
Which should I get for my M1 Pro?
@ConstantGeekery 17 days ago
Mac M1 Pro? It would need to be in a Thunderbolt PCIe enclosure, and I have no idea whether it’s possible to bifurcate the PCIe slot in that scenario.
@anthixious 11 months ago
To this day, I still haven't seen 4 Samsung 990 Pros, or 980 Pros for that matter, thrown in RAID0 with PCIe 4 speeds, this needs to happen.
@thomaslechner1622 10 months ago
This would be the perfect system disk. Even putting your linux swap partition on that drive would work way better than swapping on single SSD (though still way slower than just having enough RAM to avoid swapping entirely)....
@anthixious 10 months ago
@@thomaslechner1622 My next PC won't happen for a few years, but I plan to have 32GB minimum of RAM and raid 990 Pros, Samsung's pcie gen 5 SSD if they have one by then, or 1 or 2 Optane P5800X drives.
@thomaslechner1622 10 months ago
@@anthixious My plan is the following: R9-7950X3D or equivalent, 64 GB RAM, 2x fastest M.2 SSDs as a striped RAID for the host system and virtual guests. I'm in a wait state for prices to make this affordable to me. May easily take 2 years. My goal is that booting a virtual machine should take as much time as opening a text editor does nowadays!
@henriquefernandes1985 3 months ago
Hi, I'm from Brazil. I bought the Hyper M.2 X16 Gen 4 card, and my motherboard is an ASUS B85M-E/BR for Intel LGA-1150 with VGA/DVI/HDMI (2015). Is it compatible? Is it possible to use it with 2 or more NVMe M.2 SSDs?
@ConstantGeekery 3 months ago
Hello - based on a quick look at the specs for that board, I think you might have problems. The board has a single PCIe 3.0 x16 slot, which in theory could work, but only if the BIOS allows you to bifurcate the slot. Bifurcation basically splits the slot into four x4 segments, which is what allows you to access each of the four SSDs. Without bifurcation support, you will only see the first SSD on the card.
@henriquefernandes1985 2 months ago
@@ConstantGeekery ok, nice tks!
@nightshadowblade 10 months ago
Could you do some loading time tests for current games? Would be interesting to see if a raid can speed those up significantly or not.
@ConstantGeekery 10 months ago
I wouldn't expect much performance improvement unless the game codebase needs to load in large files. The benefit is much reduced when you're working with a lot of smaller files, and there is little to no benefit for random read/write operations. I will do some testing though and if I notice anything meaningful, I'll do another video. 👍🏻
@nightshadowblade 10 months ago
@@ConstantGeekery Yeah, theoretically it should at least speed up the initial first loading of the game, as soon as the files are in the RAM, it shouldn't make a difference, but one never knows, so thank you for testing.
@thisiswaytoocomplicated 11 months ago
You should change the OS. I've tested my eight 2TB 990 Pros with Linux md RAID and simply got 8 times the speed. And fortunately my machine runs them all at PCIe 4.0. But that was only testing for fun. I'm actually running those 8 drives with ZFS, in a pool striped across 4 mirrored pairs, because for me it is more about fault tolerance and all the file system features than about speed. But if you only want speed - get a PCIe 4.0 adapter and run it with Linux md RAID. That will pretty much max it out.
@ConstantGeekery 11 months ago
macOS does it faster than Windows too. Changing OS may be an option for some viewers, but it isn’t an option in this case. I don’t need to anyway as I have hardware RAID.
@thisiswaytoocomplicated 11 months ago
@@ConstantGeekery Just be aware that any normal (hardware or software) raid does not protect you from bit rot. APFS also does not help since it only checksums metadata but not your files. For that you need a more advanced file systems like zfs or btrfs maybe set up as a separate file server if you want to stay with macOS. Or you trust your backups ;-)
@user-vv5qe2xg7m 5 months ago
A system drive on RAID made a significant gain in performance compared to a single NVMe drive? I don't believe it. How was it measured?
@ConstantGeekery 5 months ago
* a gain in sequential read/write performance which is specifically helpful to my workflow, where I use the space for large file transfers. General computing tasks are typically more limited by random read/write performance, which is unlikely to improve with RAID striping.
@droknron 11 months ago
I have that same card the Hyper M.2 and it runs in my server motherboard with all SSD's at PCIe 4.0 speed. I am not using any RAID function of my motherboard or CPU, just all 4 SSD's presented like normal to my operating system. I suspect your issue is your motherboard setting the speed lower for some reason with this card. (I'm using it with a Gen 3 EPYC Supermicro board). EDIT: I see in your video you left the slot speed on "Auto", change it to 4.0 mode and see if it works.
@ConstantGeekery 11 months ago
Yes, tried changing the speed. Also spoke with ASUS who confirmed it does not support the WRX80 chipset RAID.
@droknron 11 months ago
@@ConstantGeekery That's .. so odd. I thought on the Threadripper Pro platform all 128 PCIe lanes come from the CPU directly? like EPYC. It's odd that it would not run at 4.0 speed like it does on my EPYC system (using a Supermicro motherboard). Is the slot you're plugging into actually getting lanes from the chipset instead of the CPU? - very strange.
@ConstantGeekery 11 months ago
@@droknron I believe it’s all CPU, but it’s a Lenovo system with their proprietary board, so I don’t know without investigating further. The BIOS shows it at PCIE 4.0 speeds, but once we get into Windows, it’s 3.0. I’ve also noted the board doesn’t automatically detect and bifurcate the slot, like it should do - it has to be manually selected. Really, it doesn’t matter much to me, as the additional speed beyond what I already have will make zero difference to my workflow, and the Hyper was cheap. I probably will find another card at some point, but I’m in no rush. Lenovo do make one, but it’s also limited to gen 3 🙄
@droknron 11 months ago
@@ConstantGeekery Mhm I see. I actually picked up the Hyper M.2 after trying two other cards that had intermittent faults (one of four 980 Pro's I'm using dropping out randomly etc). Seemed to be power related. The Hyper M.2 has a much beefier VRM than some of the no-name brand ones I tried. Working well for me and I hope for you too! :)
@okbrown 8 months ago
Just to be clear: you cannot necessarily use this ASUS Hyper M.2 adapter to add 4 extra logical drives to your system. It all depends on the number of PCIe lanes and bifurcation support your motherboard provides. You will end up wasting your money and time if you don't do your research. 😥
@charleskrueger5523 11 months ago
How should you set up a card like this when using macOS (whether in a h@ckintosh or a real Mac Pro)?
@ConstantGeekery 11 months ago
You would need to get a card that is fully supported in macOS. There are models available for the 2019 and 2023 Mac Pro. You could also do this in an older cheesegrater Mac Pro, but it will be limited to PCIe 2.0. Another option is an external Thunderbolt PCIe enclosure - or Hackintosh (though I have no experience with those). Hardware RAID cards are available for Mac, but macOS does a decent job of software RAID too, which you can set up via Disk Utility. I've always gotten the impression that it is a bit better than Windows RAID, but I haven't done any specific testing on that. It's fair to say that these days most Mac users would probably go with an external Thunderbolt RAID enclosure.
@charleskrueger5523 11 months ago
If you set up RAID in macOS, Windows wouldn’t be able to use it for a dual-boot system, is that right? Or vice-versa?
@ConstantGeekery 11 months ago
@@charleskrueger5523 that’s correct, unless you have a hardware controller that both OSes support.
@PolluxChung 11 months ago
Imagine using that as Photoshop scratch disk.
@therightman2031 17 days ago
In RAID 0, shouldn't random 4KB (RND4K) be double as well?
@ConstantGeekery 17 days ago
No - RAID doesn’t benefit random performance. In fact, it often makes it slightly worse. This is all about sequential performance, which is what I need for my workflow.
@shephusted2714 10 months ago
you don't need windows or the controller card though
@seynoonrae2474 10 months ago
Your CPU is most likely the reason you're losing performance in Windows software RAID. Yes, there are better options that give better results, but the main issue behind losing roughly 1/3 of the performance is not the number of cores but the performance of each core/thread. Try the same setup with a high-end desktop AMD or Intel chip and you should see better results.
@adilqatinefilms 4 months ago
Message 2: I have 2 internal M.2 slots in the motherboard.
@PJWey 11 months ago
I was slower than an SSD running on a gen 3 setting and only made it the 2nd like! 😀
@thebearjew8463 10 months ago
Be careful. Sometimes using built in raid on motherboard doesn’t allow TRIM
@ConstantGeekery 10 months ago
Indeed. Intel does support it… AMD not so much. Of course, manufacturers build drives with the knowledge that TRIM may not be enabled. Some controllers do background garbage collection, though I don’t know how that applies to a RAID stripe like this. I’ve always mitigated by massively over-specifying capacity so I never get close to filling the drives, and I format regularly to avoid issues. This is just working space for me and I have multiple copies of all data elsewhere, so it’s no problem to do housekeeping from time to time. That may not be the case for every user, so your warning is appropriate. 👍🏻
@pandabytes4991 8 months ago
I'm all for RAID 10 in a professional workflow. I don't have the resources or need for that, so I have no experience... my computer is just for entertainment.
@sheldonkupa9120 10 months ago
By the way, this is not advisable for important data. Back up VERY regularly, even though SSDs are very reliable.
@ConstantGeekery 10 months ago
Absolutely! Backup, backup, and then another backup… that’s my motto! 😁
@BeaglefreilaufKalkar 10 months ago
Why didnt you return the card and get the right one?
@ConstantGeekery 10 months ago
I will buy another at some point, but this is fast enough and I’m interested to test out the lower operating temperature theory.
@michaellundsrensen2292 11 months ago
Does TRIM work on a striped RAID? I think it is Windows that controls TRIM, and Windows cannot see the 4 individual drives.
@ConstantGeekery 10 months ago
It can, but last time I checked it’s an Intel only thing. I tend to over-spec capacity on my drives and reformat regularly (which is no problem as it’s just temporary working space), so I don’t often encounter issues.
@dennisfahey2379 6 months ago
Windows being slower than the mobo drivers is not an artifact of poor software. It's an artifact of where in the software stack the payload is being segmented and reassembled. At the BIOS level it is pointer manipulation; at the Windows driver level there are additional transfers to move blocks down to the hardware level for DMA block writes. If you look at BIOS RAID for Marvell or Intel chipsets, they have custom drivers for Windows to massage this dataflow. It will be interesting with CXL coming down the pipe. All this manipulation stands in the way of raw superblock transfers.
@olegp2420 10 months ago
You compared sequential reads/writes and ignored randoms, which are almost the same, and more important to overall performance.
@ConstantGeekery 10 months ago
For me and my video editing workflow, where I’m often copying large files, sequential is more important. The random performance does factor into video editing timeline work, but it’s already fast enough that, for me, no real-world difference would be perceived if the drives were faster in this area. Of course, every workflow is different which is why I presented the random results data on screen. I didn’t personally ignore it when considering the solution for my workflow, but I probably should have made at least a passing reference to it in the script 👍🏻
@chrisatye 11 months ago
I’m giving this a like, but if I’m honest I didn’t understand half of it 🤣
@DJaquithFL 10 months ago
If your goal is pure speed to offload, then I would probably pair a couple of M.2 PCIe 5.0 drives. But if you actually value that data, have it moved to a RAID 1, or simply use a RAID 10 array for editing. Then once you're done with a project, move it to a local NAS that has M.2 caching or scratch drives, along with high-capacity NAS HDDs for archiving. Your current configuration is extremely volatile, and any of that information can disappear forever in a snap of the fingers. The more drives you add to your RAID 0 configuration, the more volatile it gets. Personally, there's zero chance that I would ever RAID 0 my working data or my OS and apps. Instead, in my rig I have a 4TB RAID 1 M.2 array, plus a single M.2 with my OS and apps and a matching M.2 holding a daily 1:1 bootable clone. Your boot drive, OS and applications all run on 4K random speeds; there's very little speed benefit in putting them in a RAID 0 configuration.
@ConstantGeekery 10 months ago
Getting a lot of comments like this, despite clearly stating in the video that this is not suitable for critical data. All my data is backed up on two separate NAS systems with multiple parity drives, in separate physical locations. Plus I have additional backups whilst working on the files - usually at least 2 if not 3 backups during working, in addition to the two NAS systems. RAID solutions are NEVER substitutes for good backup routines, even when they have redundancy built in. My configuration is much wider than this one machine, and I have no risk of losing my data.
@DJaquithFL 10 months ago
@@ConstantGeekery .. RAID 10 as I mentioned with double the speed plus redundancy. In RAID 0 you're doubling the failure rate for each drive that you add to the array.
@ConstantGeekery 10 months ago
@@DJaquithFL Yes of course, but this is just working space for me and none of the data is critical because it’s backed up elsewhere. I need the capacity that the stripe provides. Of this 8TB, I won’t fill it up to more than 50% before reformatting the stripe, otherwise we get into typical SSD issues and slowdowns. My data use is very temporary - once the project is complete, the data gets archived and removed from the (multiple) working drives. There’s zero risk for me as I can pick up another copy of the files, plug into my Mac laptop and carry on. Each user needs to carefully consider their own needs. You and I have both done that for our specific situations. 👍🏻
@DJaquithFL 10 months ago
@@ConstantGeekery .. Then as long as your time is valueless between the time of initial upload and the time of backup, you're okie dokie. _Just as an FYI, you are talking to some guy who owns a data center for real estate. My wife is a Realtor, and I can't tell you how much money I have spent just to make certain that she doesn't lose 2 hours of time, even in a hurricane in Florida._
@ConstantGeekery 10 months ago
You’ve obviously taken the time to figure out solutions that work for your specific requirements based on your experience. As it happens, I am also capable of figuring out solutions that work for my specific requirements based on my own not inconsiderable experience. By the time I ingest to the workstation, I already have multiple copies and backups. So, you need not lose any sleep at all on my account, because a loss of data on this temporary working stripe will cost me no more time than it takes to make a cup of coffee. And I like coffee. Best of luck with the hurricanes and the data center empire. 👍🏻
@wngimageanddesign9546 10 months ago
Who the hell actually needs such data rates in everyday use? Your actual performance is limited by the slowest/weakest link in the data chain.
@ConstantGeekery 10 months ago
Very few people. It's a bit of fun to see what's possible and test the limits. It has actually made a noticeable difference to my work when using this machine. It's not so much the peak performance, it's the fact I can access multiple video streams in parallel with no slow down. The other reason for doing it in my case is to have more capacity than I actually need. It means I won't go beyond 50%, and that extends the period of time until the SSDs inevitably slow down and I have to reformat. It is a niche use case of course.
@jmd1743 11 months ago
Current-capacity 2.5" drives will be e-waste. I hope somebody comes out with a NAS unit designed to hold a stupid amount of 2.5" drives, loading them top-down like bread into a toaster when you're making breakfast. Icy Dock has these triple 5.25" bay modules that convert optical drive slots into twenty-four 2.5" drive bays per module; it would be cool if somebody took four of those (twelve 5.25" slots total) to make a NAS holding upwards of 96 drives, drives that would otherwise have gone to e-waste dumps as laptop manufacturers all switch to M.2. 96 drives * 8TB is 768TB of storage. That's a lot of storage, and I'd like to think Mother Nature would appreciate you saving 96 drives from rotting in dumps and leaching nasty chemicals. Right now 500GB and 1TB drives are rotting on shelves; who's to say what the market will look like for 8TB drives 5-10 years from now?
@cszulu2000 7 months ago
Try only 3 drives
@timk9847 11 months ago
WOW, you really, really should NOT STRIPE your boot drive. Mirror the drives! You lose 50% capacity, but you gain redundancy without losing any read performance. Since all data exists on both drives, it can read half from one drive and half from the other. Write performance would be half that of a striped array, but with an OS drive, read is far more important.
@ConstantGeekery 11 months ago
I don't need redundancy. There's nothing on my system drive that can't be restored from backup or re-downloaded in short order. Each situation needs to be assessed according to the individual use case. I'm not recommending that anyone copy my setup without doing their homework. As a systems engineer and developer for over 25 years, I've had one or two stripes, and I'm very comfortable with my setup.
@timk9847 11 months ago
@@ConstantGeekery LOL No one needs redundancy until they need redundancy...
@ConstantGeekery 11 months ago
@@timk9847 As a general piece of advice, redundancy is useful. For this specific situation, it's not necessary… because I already do have dual-parity redundancy for ALL my critical data. And multiple backups. And backups of backups, in multiple geographic locations. I simply don't store any critical data on my system disk, so it needs only to run my OS and the few apps I use, and provide temporary working space for files (which have already been backed up multiple times). My setup can be completely restored in under two hours… but I don't even need to wait for that if it happens, since this is not my only machine, and I can switch seamlessly to one of my other machines and continue work within minutes.
@timk9847 10 months ago
@@ConstantGeekery I think you miss the big point. It's not about how risky it is, but how minimal the rewards are for the risks involved. Aside from the capacity delta (2TB vs 4TB), there is not a lot of actual performance gained by using a striped array vs a mirror vs a single drive. But the risk of data loss is roughly double, because if either drive fails or gets corrupted, the entire array is toast. Compared to the old IDE drives limited to 40MB/s, modern M.2 drives are so fast; why assume any unnecessary risks lol. Just my opinion, you are a grown-ass man who can take any risks with your data you want lol
@ConstantGeekery 10 months ago
@@timk9847 I haven’t missed any point - I’m familiar with the risks and rewards. It just doesn’t apply to my specific situation here. There is no risk of data loss if you don’t have any data to lose. I don’t disagree with your points for general practice, but since my wider system (of which this is one small part) has complete redundancy, I’m happy to enjoy the benefits without needing to worry about the risks.
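The risk arithmetic behind this stripe-versus-mirror thread is simple to state: a RAID 0 stripe is lost if any member drive fails, while a mirror loses data only if every member fails. A minimal sketch, with the caveats that the 2% annual per-drive failure rate is an assumed illustrative figure and that the mirror model ignores rebuild windows and correlated failures:

```python
# RAID 0 loses the whole volume if ANY member drive fails; a RAID 1
# mirror loses data only if ALL members fail (simplified model:
# rebuild windows and correlated failures are ignored).

def raid0_fail_prob(p_drive: float, n: int) -> float:
    """Probability the whole stripe is lost: 1 - (1 - p)^n."""
    return 1 - (1 - p_drive) ** n

def raid1_fail_prob(p_drive: float, n: int) -> float:
    """Probability an n-way mirror loses data: p^n."""
    return p_drive ** n

p = 0.02  # assumed annual failure rate per drive (illustrative only)
for n in (1, 2, 4):
    print(f"{n}-drive stripe fails with probability {raid0_fail_prob(p, n):.4%}")
print(f"2-way mirror fails with probability {raid1_fail_prob(p, 2):.4%}")
```

Under these assumptions a 4-drive stripe is roughly four times as likely to fail in a year as a single drive, which is the trade both commenters are weighing against backups elsewhere.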
@Kventin-Buratino 2 months ago
Hi everyone!
@bentheguru4986 11 months ago
I bought one of those ASUS cards and it was a total WOFTAM. It was only supported under ASUS boards (yuk) and refused to work in natively bifurcated PCIe systems. Threw it in the bin after smashing it with a sledgehammer.
@droknron 11 months ago
Works fine in my Supermicro motherboard with bifurcation. Keep in mind, this "card" has no logic on it at all; it literally just connects the PCIe slot directly to the four M.2 drives (you can even see the traces) and then steps the slot's 12V power down through a VRM to 3.3V for the SSDs to use. It has no way to verify anything about what motherboard it's being plugged into. The issues you faced were down to a faulty card or user error.
@ChrisM541 10 months ago
For an OS, you're talking primarily small file sizes, and therein lies a massive problem for anyone buying even the latest Gen 5 NVMe drives. While headline/marketing read speeds have jumped over the years (as a direct by-product of each new PCIe iteration), the OS-crucial 4K read/write speeds have shockingly stagnated at snail speed (in comparison) for a long, long, long time, unfortunately. Even if you were to use the drive as a data drive for large files, we immediately come across the second major issue/gotcha: the 'Achilles write hole problem'. You don't have to write that many GB for your drive to suddenly crawl, with many dropping to speeds waaay below a fast HDD. Sad, but unfortunately true.
@ConstantGeekery 10 months ago
I completely agree that we should be seeing more focus from manufacturers on improving these things. Of course they know that headline numbers sell. I tend to buy much more capacity than I need when it comes to SSDs. The headroom helps to mitigate some of the inherent issues with the technology. There are pros and cons to every solution, and on balance (taking into consideration hundreds of drives of all types from various server farms and machines in my web dev business) SSDs have proven to be more reliable and more performant than traditional spinning disks. So, I’ll live with the downsides.
@dosgos 10 months ago
The 4K speeds are disappointing, so I suppose that is marketing leading SSD development. However, some tasks will really benefit from the blazing sequential speeds. It is not easy for the average person to build an "optimal" SSD storage system, IMHO.
@3k3k3 11 months ago
Samsung, the maker of self-destructing SSDs
@jolness1 10 months ago
Eh, I bet it's fine. Everyone uses their NAND anyway, so with the firmware fix I'm comfortable with it. Don't own one personally, but I would if I needed Gen4 PCIe.
@mikesm1ty196 5 months ago
NVMe 4.0 x4 maxes out at 7,450 MB/s. That is the speed limit, REGARDLESS OF HOW MANY DRIVES YOU HAVE IN RAID 0. The speed tests make it seem like you can run these cards and multiply the speed, say 2 Samsung 990 Pros @ 7,450 x2 = 14,900 MB/s. WRONG, I'm telling you. These companies, ESPECIALLY SAMSUNG, do not want to keep creating new cards for a market that won't buy them, and the fastest speed those cards can handle is 7,450 MB/s. RAID 0 testing that makes you believe you're doubling your speed is COMPLETE FABRICATION. The only way to achieve faster speeds on your boot drive is to go PCIe 5.0 @ 12,500 MB/s. The reality is that speed limits are speed limits; the speed tests suggest you can beat that with 2 of the same drives in RAID 0, but while it's a great idea, YOU'RE STILL MAXING OUT YOUR SPEED AT 7,450 MB/s. It's about Samsung refusing to build these chips yet; they don't think the market needs them. I just spent an hour ranting about PCIe 5.0 and the true cord that allows 600-watt operation. No, not two 6+2 pins; that's not what I'm talking about. The true speed of PCIe 5.0 will require four 8-pin 150-watt (or 6+2 pin 150-watt) connectors, since 150 watts is the max you can get from those pins on your power supply. However, the 7900 XTX runs three 8-pin 150-watt slots, giving you access to 450 watts of power, making it a "PCIe 4.5" (almost 5.0, but not quite), whereas the true 5.0 cards have a new cord altogether, designed for the new power supplies, giving you 600 watts of power: the 600W PCIe 5.0 16-pin (12+4) high-current power cable (12VHPWR PSU cable). Some people are dubious about this cable giving you a true 600 watts, because the pin count doesn't match the four 8-pins it would take to deliver the power a PCIe 5.0 card requires, but in my opinion that is the dedicated PCIe 5.0 cable: 600 watts, not fully needed unless you get a 3090 Ti or above. Even the RX 7900 XTX, the most powerful card AMD has ever put out, has three 8-pin (or 6+2 pin) connectors, which is still 450 watts. I'd rather put three 8-pins into it and run it that way, as opposed to using the adapter they give you to connect the 12VHPWR cable to three 6+2 cables; that seems scammish, since those 8-pin cables are each missing 2 pins. So run three 8-pin cables for a total of 450 watts, or even four 8-pin cables for a total of 600 watts. Either way, PCIe 5.0 is 600 watts of dedicated GPU power. My point being: you need more power to run PCIe 5.0, and doubling up your 990 Pros in RAID 0 WON'T ALLOW YOU TO BEAT THE SPEED LIMIT! EINSTEIN, TELL THESE PUNKS THEY'RE BEING PLAYED! Light speed is light speed; it can't be cheated.
@ConstantGeekery 5 months ago
Not just speed tests - actual file transfers. For sequential operations, and let's be clear that we are ONLY talking about sequential operations (which is what I personally need for my workflow), RAID striping does give you a cumulative speed benefit beyond the top speed of the actual card. This DOESN'T mean that the PCIE speed limit of any individual card is being exceeded (nor did I ever suggest that), simply that the data operation is spread across multiple drives. A bit like transporting 200 people on 4 buses, instead of a single bus four times. Going to PCIe 5.0 is nice... if you have a computer that supports it. For me, that would mean a new motherboard and CPU. I don't need that - I just need faster transfer for large files, for which this solution achieves all goals.
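The arithmetic behind that reply can be sketched in a few lines. This is a toy model with illustrative round numbers (the per-drive and link bandwidths below are assumptions, not measurements from the video): each drive keeps its own x4 link behind the bifurcated slot, so the stripe's aggregate sequential throughput scales with drive count until the shared host slot becomes the ceiling.

```python
# Rough model of RAID 0 sequential throughput: aggregate bandwidth
# scales with drive count until the shared host link becomes the
# bottleneck. No individual drive ever exceeds its own interface speed.
# All figures are illustrative round numbers (GB/s), not measurements.

def stripe_throughput(n_drives: int, per_drive_gbs: float, link_gbs: float) -> float:
    """Sequential throughput of an n-drive RAID 0 stripe behind one slot."""
    return min(n_drives * per_drive_gbs, link_gbs)

# A single Gen4 x4 drive (~7 GB/s) on a slot with ~7.8 GB/s usable:
print(stripe_throughput(1, 7.0, 7.8))    # 7.0

# Four such drives behind a Gen4 x16 slot (~31 GB/s usable):
print(stripe_throughput(4, 7.0, 31.0))   # 28.0 (the stripe scales)

# The same four drives if the adapter trains at Gen3 x16 (~15.7 GB/s):
print(stripe_throughput(4, 7.0, 15.7))   # 15.7 (link-limited)
```

The third case mirrors the situation described in the video, where the Hyper card runs at PCIe 3.0 on this system: the stripe still beats any single drive, but the slot, not the drives, sets the ceiling.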
@Neeboopsh 10 months ago
gross dude's avatar at whatever time.
@cinemaipswich4636 11 months ago
Those NVMe drives are PCIe 4.0, so a computer with less than that will be a choke point. If you don't have a hardware RAID card, you are wasting your money. Are they freebies, perhaps? Is this an ad?
@ConstantGeekery 11 months ago
My computer has 128 lanes of PCIe 4.0. As I explained in the video, the Hyper card does not support the WRX80 chipset for PCIe 4.0 so is running at PCIe 3.0. At some point I’ll change the card, but it’s plenty fast enough so I’m not in a rush to spend more money. I paid for the components with my own money - I always fully disclose when I have received a review sample.
@mdrizzle6665 7 months ago
Re-install Windows to switch between AHCI and RAID mode? I'm sad, bro; you can't be a reviewer and tell lies. Set Windows to boot in safe mode and reboot into safe mode. Once booted in safe mode, shut down and switch to RAID (or, vice versa, switch to AHCI), reboot into safe mode, and Windows will automatically reconfigure for either one. Set boot back to normal mode and reboot. Done: 5 minutes, not a FULL re-install.
@ConstantGeekery 6 months ago
Zero lies told, so no need to be sad. Some on-board RAID controllers do allow this (i.e. Intel RST), although you should edit the registry to force safe boot, and depending on the system you may need to inject a new driver. BUT, this only works because the controller allows the drive to work as single-drive RAID0 "array"... but that's not actually RAID. Not all systems work in the same way, and the proprietary Lenovo motherboard & BIOS in this system doesn't work like that. Switching on RAID appears to actually enable RAID for ALL drives in the system (NVMe, SATA and add-in cards) whenever you switch to that mode. This renders the system non-bootable (safe mode or not) until a RAID array is defined in the RAID controller for the on-board slots. I didn't see any option to create a single-drive RAID0 "array". If you have some specific product knowledge pertinent to the Lenovo P620, by all means please do share the specific steps for converting a single system drive into a conventional multi-drive RAID array without incurring any data loss, having to clone drives, or re-install OS.
@jaygreentree4394 10 months ago
Having 4 SSDs doesn't really give you that much better performance. 1 SSD, yes, but 4 doesn't really add that much. Nice try though.
@ConstantGeekery 10 months ago
Apart from 4x the speed for sequential read and write, and 4x the capacity, both of which my workflow really benefits from.
@weisstdudochnicht1 10 months ago
Sorry, but this is a nonsense test, as max sequential speed is not a typical workload… look at the random I/O numbers -> no change -> not faster in typical home-user workloads.
@ConstantGeekery 10 months ago
Sequential speed is absolutely part of my typical workload and is for many other professional users. This setup is already making a big difference to me. As stated in the video, I built this large stripe as working space for video editing and offloading footage - huge files. I put the other results on screen so individual users can assess whether it may benefit them or not. If a user’s work doesn’t involve managing or working with very large files, then they’re not likely to be considering a solution like this in the first place. And a home user is probably less likely to make such an investment.
@jiamiekori6575 11 months ago
First
@PJWey 11 months ago
🎉
@elalemanpaisa 5 days ago
Technically, RAID 0 is not RAID. Also: never, I repeat, never use a RAID controller with flash!
@jonesgang 10 months ago
That is not how electronics works. Electronics are only as good as their weakest link. That is like getting four Viper engines and mounting one to each wheel, thinking you are going to go really fast. Dumb thinking; funny and entertaining to think about, but dumb nevertheless.
@ConstantGeekery 10 months ago
RAID Striping is commonly used and works just fine, provided one understands the pitfalls and limitations (as with any storage solution). In this case we clearly demonstrated an improvement in sequential read/write performance along with increased capacity, equal to four times a single drive. Your analogy doesn’t match up to what is actually being done here. A better analogy would be to consider a queue of 200 people waiting for a bus. If the queue can be divided between four buses in parallel, the queue will diminish four times faster than with a single bus. That’s what’s happening with the sequential read and writes here. Though, I’m absolutely up for having a go in a car with four Viper engines! 😁
@jonesgang 10 months ago
@@ConstantGeekery Yes, but the files are fragmented across multiple drives, which drastically speeds up data transfers because each drive only gets a small chunk of data at a time. So the drives are not working faster, just not working as hard. But if your RAID is not properly set up or routinely backed up, you will end up with a lot of headaches… though you will have incredible transfer speeds.
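The chunk distribution described in that last reply can be illustrated with a toy round-robin striping model. The 4-byte stripe size and the payload are arbitrary choices for the demo (real arrays use stripe sizes like 64KB or 128KB):

```python
# Toy illustration of RAID 0 striping: a file is cut into fixed-size
# chunks dealt round-robin across the member drives, so a large
# sequential transfer keeps all drives busy in parallel.

STRIPE_SIZE = 4  # bytes per chunk; tiny on purpose for the demo

def stripe(data: bytes, n_drives: int):
    """Return per-drive chunk lists for an n-drive RAID 0 layout."""
    drives = [[] for _ in range(n_drives)]
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        drives[(i // STRIPE_SIZE) % n_drives].append(chunk)
    return drives

def unstripe(drives):
    """Reassemble the original byte stream from the per-drive chunks."""
    out = bytearray()
    for i in range(max(len(d) for d in drives) * len(drives)):
        drive, slot = i % len(drives), i // len(drives)
        if slot < len(drives[drive]):
            out += drives[drive][slot]
    return bytes(out)

payload = b"0123456789ABCDEF"
layout = stripe(payload, 4)
print(layout)                       # each drive holds one 4-byte chunk
assert unstripe(layout) == payload  # needs every drive's chunks back
```

Reassembly needs every member's chunks, which is exactly why losing any single drive destroys the whole RAID 0 volume, and why each drive only does a quarter of the work during a big sequential transfer.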