
How Much Memory Does ZFS Need and Does It Have To Be ECC? 

Lawrence Systems · 337K subscribers
52K views

Published: 6 Sep 2024

Comments: 127
@Jimmy_Jones · 1 year ago
This will be a common video for all newbies to look up.
@marshalleq · 1 year ago
Finally good advice without fearmongering. There is so much fear mongering with ZFS for some reason.
@healthy5659 · 1 year ago
Nicely explained, however I am still not clear- if ECC is not strictly required and data integrity is still there without it, what precisely is the benefit of ECC? Or should I ask, in what situations would a non-ECC system fail where an ECC system would not? Thanks for the video, please keep uploading more great content!
@Prophes0r · 1 year ago
Everything is just layers. ZFS can provide some reliability; ECC provides reliability at a different step in the chain. Example: ZFS loads data into memory to perform a checksum. A bit is flipped in memory. The checksum is calculated and no longer matches, so it tries again. Now the checksum matches. In the end it decides the data was fine and moves on. ECC would have fixed the single bit flip, and ZFS wouldn't have had to do the extra work to make sure. Or, at minimum, ECC would have flagged the problem sooner so the read could be redone before continuing. ZFS assumes the disks are not trustworthy, but in reality nothing is. There are extra checks to hopefully recover from problems, but eliminating them before they can mess with a process is better.
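A toy illustration of the scenario above, using the coreutils `cksum` CRC rather than ZFS's actual fletcher4/SHA-256 checksums (the data and flipped byte are made up): a single flipped bit changes the checksum, which is exactly how a verify pass notices in-memory corruption.

```shell
# Hypothetical illustration: a single bit flip changes a block's checksum.
# 'p' (0x70) and 'q' (0x71) differ by exactly one bit.
original="zfs block payload"
flipped="zfs block qayload"

crc_original=$(printf '%s' "$original" | cksum | awk '{print $1}')
crc_flipped=$(printf '%s' "$flipped" | cksum | awk '{print $1}')

if [ "$crc_original" != "$crc_flipped" ]; then
    echo "checksum mismatch: the corruption would be caught on verify"
fi
```

With ECC RAM the flip would have been corrected before the checksum ever ran; without it, ZFS has to detect the mismatch and retry.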
@charleshughes7007 · 1 year ago
ZFS helps detect and correct errors which are written to the media, but ECC prevents a potential source of errors before they can ever reach the media. It's nice for data integrity but I think ECC's main virtue is that it lets you know very promptly when your memory is failing or otherwise having issues. If those issues are not too severe, it can mitigate them enough to keep your system functional while you resolve the root cause. A system without ECC which has memory corruption will crash randomly, corrupt files, and/or just generally act unpredictably. All of these are awful in a NAS.
@baumstamp5989 · 1 year ago
If data is in RAM and, prior to being written to disk, a bit flip occurs, then it is a problem. So I cannot agree with the statement that you do not need ECC if you want a proper ZFS NAS.
@mikerollin4073 · 6 months ago
@@baumstamp5989 "ZFS without ECC RAM is safer than other filesystems with ECC RAM" - It took WAY too much reading to finally learn all of the fear mongering about ECC is just a myth.
@edwardallenthree · 1 year ago
Thanks for the comment about the Linux 50% rule with ZFS. zfs_arc_max is a critical setting to adjust.
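For reference, a minimal sketch of raising that cap on Linux OpenZFS. The tunable zfs_arc_max takes bytes; 8 GiB here is an arbitrary example value, not a recommendation.

```shell
# Sketch: compute a zfs_arc_max value in bytes (the tunable takes bytes).
# 8 GiB is an example figure only; size it for your own system.
ARC_MAX_GIB=8
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))

# This is the line you would put in /etc/modprobe.d/zfs.conf (then rebuild
# the initramfs and reboot), or write the bare number into
# /sys/module/zfs/parameters/zfs_arc_max for a live change:
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
```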
@bdhaliwal24 · 1 year ago
Easily the most informative video/content I’ve seen yet on Truenas. Thanks for sharing this!
@Okeur75 · 1 year ago
Well, to be honest I'm a bit disappointed by the video. I would have expected some benchmarks showing when TrueNAS becomes unstable/unusable below a certain amount of memory. Or you could take an ECC system and a non-ECC system, overclock the RAM on both until they're unstable, and see what it does to your data. This video does not show a lot, and I'm sure it did not require a lot of work. What happens if you run TrueNAS with 2GB of RAM? Or even 1GB? What happens if you run TrueNAS with 8GB (the bare recommended minimum) but with 100TB+ of storage and some load? How does it affect write and read performance? How is resilvering affected by the lack of memory? All these tests would be useful and interesting to watch, and would also offer a definitive answer to the question we see so many times on the forum: "how much memory do I need for my system?"
@lucky64444 · 1 year ago
There are too many variables to make benchmarks like those worth anything. It completely depends on your workload and your equipment. Everyone's performance will be fairly unique. Not having enough RAM is the difference between saturating your 10GbE network connection and barely reading at 200MB/s.
@Tntdruid · 1 year ago
Thanks for the easy-to-understand ZFS guide 👍
@bertnijhof5413 · 1 year ago
My ZFS memory usage is occasionally measured in MB, not GB. My use case is running VMs on an Ubuntu desktop, and I have only 1 pair of hands to keep the VMs occupied. My hardware is cheap: Ryzen 3 2200G; 16GB; 512GB NVMe SSD; 2TB HDD supported by a 128GB SATA SSD as cache. My 3 datapools are: the NVMe SSD (3400/2300MB/s) for the most used VMs; a 1TB partition at the beginning of the HDD with 100GB L2ARC and 5GB LOG for VMs; and a 1TB partition at the end of the HDD with 20GB L2ARC and 3GB LOG for my data. The L2ARC and LOG partitions together are again 128GB :)

I capped the memory cache (L1ARC) at 3GB. My NVMe SSD datapool runs with primarycache=metadata, so I don't use the L1ARC for caching records; NVMe access does not gain very much from the L1ARC. The boot time of e.g. Xubuntu improves from ~8 seconds to ~6.5 seconds. My metadata L1ARC size is 200MB, saving space to load another VM :) I also have a backup server with FreeBSD 13.1 and OpenZFS; it runs on a 2003 Pentium 4 HT (3.0GHz) with 1.5GB of DDR, of which ~1GB is used. So OpenZFS can run in 1GB :) The VMs on the HDD run from L1ARC and L2ARC: they boot assisted by L2ARC and afterwards run from L1ARC. After a couple of seconds it is like running the VMs from a RAM disk or a very fast NVMe SSD :) :) Here the VMs fully use the 3GB (lz4 compressed), say 5.8GB uncompressed, and my disk IO hit rates for the L1ARC are ~93%; with a 4GB L1ARC I can get to ~98%. For all the measurements I use conky in the VMs and in the host. Conky also displays data from /proc/spl/kstat/zfs/arcstats and from the zfs commands.

PERFORMANCE: the relatively small difference between the NVMe SSD alone and NVMe SSD + L1ARC is probably caused by the 2nd slowest Ryzen CPU available. I expect most boot time goes to CPU overhead and decompression, so reading from NVMe instead of memory does not add much more delay. That would change in favor of the L1ARC with a faster CPU, e.g. a Ryzen 5 5600G.

More memory would make tuning the L1ARC easy: just make it, say, 6GB. It would not make the system much faster, since the L1ARC hit rates for disk IO are already very high in my use case, but I could load more VMs at the same time. The 2TB HDD is new; in the past I used 2 smaller HDDs in RAID-0. They were older, slower HDDs, but the responsiveness felt better: I expect that while one HDD moved its head, the other could read. Those HDDs had 9 and 10 power-on years, so one of them died of old age, and I don't trust the remaining one anymore for serious work. Another advantage was that my private dataset was stored with copies=2, creating a kind of mirror for that data; once it corrected an error in my data automatically :) I am considering buying a second HDD again. My Pentium backup server has one advantage (I reuse two 3.5" IDE HDDs, 320+250GB, and two 2.5" SATA HDDs, 320+320GB) and one disadvantage: throughput is limited to ~22MB/s due to a 95% load on one CPU thread. That good old PC gets overworked during, say, 1 hour/week.
@milhousevh · 1 year ago
Timely video, as I've just upgraded an old FreeNAS 8 server to TrueNAS. The performance I'm seeing definitely aligns with this video. HP Gen 7 Microserver N54L (2x 2.2GHz AMD Turion 64-bit cores), 16GB ECC RAM, LSI 9211-8i SAS controller (PCI-E 16x slot), Intel NIC (PCI-E 1x slot). TrueNAS Core 13.0-U5, booting off a 250GB Crucial MX500 SSD (internal SATA port).
* RAID-Z1 Pool: 4x 8TB IronWolf Pro 7200 RPM HDD (connected to 1st port on LSI controller)
* RAID-Z1 Pool: 4x 1TB Crucial MX500 SSD (connected to 2nd port on LSI controller, via 2.5" 4-bay dock in optical drive bay)
* Mirrored Pool (encrypted dataset): 2x 4TB IronWolf Pro 7200 RPM HDD (connected over eSATA to external 2-bay enclosure)
This is an ancient system, massively underpowered these days, but for home use (i.e. SMB/NFS file sharing, mostly media/movies/TV shows on HDD, plus the occasional git repo or document on SSD) it's still perfect, as it saturates the 1Gbps NIC for pretty much everything (reads AND writes, even from the 4x HDD pool, which has a sequential read rate of 640MB/s). At idle the system pulls about 55W, and it maxes out at about 105W during a 4x HDD scrub. It's nearly silent, but stick it in a closet (as mine is) and you absolutely won't hear it. Even the older and slower N36L can saturate the 1Gbps network with a similar controller/disk setup (I recently swapped out the N36L motherboard for the N54L as a final upgrade!). The only possible improvement now would be to upgrade the network side of things, as that's definitely become the limiting factor, but to be honest for home use there's really no need...
@chromerims · 1 year ago
Great vid 👍 My brain read the title as: *"How much money does ZFS need?"* Kindest regards, friends.
@LAWRENCESYSTEMS · 1 year ago
How much money does ZFS need seems somewhat accurate as well.
@Ecker00 · 1 year ago
Took me days of research to come to these same conclusions a few months ago, thanks for putting the record straight!
@paulhenderson1462 · 6 months ago
A nice calm discussion. Thanks for a well reasoned argument about memory use in ZFS. In my shop, we have a general rule of thumb of 128GB of memory per 100TB of zpools served. IOW, if I have a 200TB zpool, the server managing it will have 256GB of memory. We get very good performance this way, with most of the memory mapped to ZFS, which is what you want.
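That shop rule is easy to script. A tiny sketch of the ratio the commenter describes (128GB of RAM per 100TB of zpool is their in-house rule of thumb, not a ZFS requirement):

```shell
# Sketch of the commenter's rule of thumb: 128GB RAM per 100TB of zpool.
ram_gb_for_pool() {
    pool_tb=$1
    echo $(( pool_tb * 128 / 100 ))
}

echo "200TB pool -> $(ram_gb_for_pool 200)GB RAM"   # matches the 256GB in the comment
```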
@thisiswaytoocomplicated · 1 year ago
I'm running ZFS on my desktop. It has 8 NVMe drives, all mirrored in pairs, which results in about 5TB of storage in total (not evenly sized, but I run it for reliability, not optimal speed, and 14/10 GB/s R/W is just plainly good enough for me). It doesn't really matter, since that desktop is a bit beyond most normal stuff (5975WX, 512GB ECC RAM, etc.) and so is of only anecdotal value. And yes, that is too much RAM even for ZFS: it only uses about 50-150GB out of the box for those 5TB of storage. So I will need to look into how to tune it to do better caching. ;-) My file server, on the other hand, is only an old trusty workhorse (an old i7 from 2015), until recently running Linux md-raid with 16GB of non-ECC RAM. It is just a very normal home file server. Normal (recycled) PC hardware, running about 8 years 24/7/365 without issue. Only the PSU needed replacement once so far. It was always running RAID 6 with 8 drives; the last incarnation was 8x 9TB. Of course, after a few years that again became too small. So a few days ago I replaced the 9TB drives with 18TB drives, and this time I also switched from md-raid to ZFS (RAID-Z2). What can I say? It just works, at least as well as before. Just a bit faster, since the drives are a bit faster than before. The hardware is old but not super-slow. Memory is not much. But with a 10GbE connection it simply is still good enough for me. md-raid certainly stood the test of time in my home, so I can still fully recommend it. With ext4 it simply is very robust. But now running ZFS of course has its added value. And when the hardware finally dies, I will switch this to ECC RAM too. Of course.
@ashuggtube · 6 months ago
Great work Tom. Good onya. Just watching this now because it popped up again in my YT timeline. 😊
@henderstech · 1 year ago
I appreciate your videos so much. Thank you for your hard work. You are my hero.
@LAWRENCESYSTEMS · 1 year ago
Thank you
@be-kind00 · 1 year ago
Another issue for us home lab folks is that if we want to build a low-power small NAS, there are very few mATX or ITX motherboards that have ECC support, and the ones that do are expensive. That's why we want to use a NAS that uses ZFS raid.
@lukevenn8921 · 21 days ago
Incredible video Tom, but how important is RAM speed for the typical home user e.g. 40TB Samba and some dockers/Plex/Tdarr? Many of us have to balance costs of newer vs ageing hardware.
@jms019 · 1 year ago
I've got a slightly nasty 32GB stick that writes incredibly slowly, so it would take hours to fill, but it works well as a cache device, though it has taken weeks to fill. Now that it's full (zfs-stats -L), it has improved things beyond what just some smaller, faster SSD cache partitions did on their own. So if you have "spare" USB memory sticks and ports, there's no risk in stuffing them in as cache devices. As I only run the machine for hours per week, persistent cache is good for me.
@drescherjm · 1 year ago
0:15 I have had ZFS at work and at home for around 8 years. I usually don't come even close to 1GB per TB on any system; it's usually closer to 1/3 of a GB of memory per TB. The main reasons are budget and the number of slots. Some of my servers are 10+ years old and only have 4 DIMM slots, but at the same time have 20 or more hard disks.
@Prophes0r · 1 year ago
The only time it is ACTUALLY needed is for deduplication. You can get away with turning off ARC if you want. But deduplication just uses [X] bytes of memory per [Y] bytes of storage to function.
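A rough back-of-the-envelope for that dedup cost, using the commonly cited figure of roughly 320 bytes of dedup table (DDT) per unique block. These are ballpark assumptions, not exact OpenZFS internals; the real footprint depends on recordsize and how much of the data is actually unique.

```shell
# Rough dedup table (DDT) memory estimate, assuming ~320 bytes per unique
# block. Ballpark figures only.
POOL_BYTES=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of unique data
RECORDSIZE=$((128 * 1024))                  # default 128 KiB records
DDT_ENTRY_BYTES=320

BLOCKS=$((POOL_BYTES / RECORDSIZE))
DDT_BYTES=$((BLOCKS * DDT_ENTRY_BYTES))

echo "~${BLOCKS} blocks -> ~$((DDT_BYTES / 1024 / 1024)) MiB of DDT"
```

With small recordsizes the block count (and therefore the DDT) balloons, which is why dedup is the one feature where RAM genuinely is a hard requirement.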
@nixxblikka · 1 year ago
Thank you so much for bringing light to this, and I also love the new frequency of high-quality content!
@ofacesig · 1 year ago
Could you speak more to how you set up your S3 buckets?
@deathcometh61 · 10 months ago
Short answer is all of it. Can it only hold 32GB? Get 2TB of ECC RAM sticks and force it to your will.
@charleshughes7007 · 1 year ago
I'm running TrueNAS SCALE on a Ryzen 2600 + X570 Taichi + 32GB ECC system with a 6x16TB RAIDZ2 and a 2x4TB mirror and it's been doing great. I'm sure it would work with less memory, but this gives me some space to play around with local VM hosting too.
@zparihar · 1 year ago
Once again, great video! Question for you: you mentioned an S3 target. Are you using MinIO? If so, how is the performance when it's running on top of ZFS?
@artlessknave · 1 year ago
Note that there are, or at least used to be, a few (usually very rare) conditions where ZFS can need loads and loads of RAM to recover a pool, and if it can't get it, it fails to import the pool. This is similar to how a dedup pool can reach a point where it cannot be loaded due to insufficient RAM. One of the reasons TrueNAS puts swap on every disk is so that if RAM becomes urgently insufficient, it can at least swap. It will be slow as hell, but it might have a chance of finishing. Of course, if you have backups, that mitigates much of the risk.
@STS · 1 year ago
Great video topic, and timely for me! I am in the process of deciding how much to expand my TrueNAS Core usage. I currently only use it for iSCSI (ESXi). I would like to move to editing videos off TrueNAS instead of copying all assets to my local machine, so I was curious about the RAM usage; currently running 4x 8GB DDR3 ECC Reg. I could probably stand to search for some 16GB or 32GB DIMMs.
@youtubegaveawaymychannelname
Hey Tom, any chance you can do a video considering cores vs. clocks for ZFS? Specifically, I'd love to see if there is any update to that wisdom when it comes to TrueNAS Scale and TrueNAS Core.
@LAWRENCESYSTEMS · 1 year ago
Maybe. I have a troubleshooting video I'm working on that would help you make a decision on your own based on the parameters of the test.
@jonathanchevallier7046 · 1 year ago
Thank you for these explanations about ZFS.
@dinkidink5912 · 1 year ago
Just checked my home NAS, it's just a basic media/file server with no need for cache, a touch over 6TB of capacity, current used RAM according to htop is 500MB.
@JoePosillico · 1 year ago
Good timing for me on this video. I currently have a TrueNAS server I built using an old Intel i7 system with 32GB of RAM and 5 spinning rust drives. I've been running it for a year, and it runs well for backups. I've been thinking about building one specific to VM storage that is more performant, using four 2.5" SSDs instead of HDDs. Is 128GB of RAM just overkill for 15 VMs? Based on this video, maybe 64GB would be good enough? If there are some go-to guides on this, please let me know; otherwise I may just ask this question on your forums.
@DiStickStoffMono0xid · 1 year ago
Thank you for mentioning video productions using TrueNAS/ZFS, as this helps me make a decision on a future server upgrade for video/VFX production. The machine is probably going to be NVMe-based with 100G on the server side and 10x 10G connections to the clients, but it really helps to know that there already are productions running on TrueNAS or ZFS, because there is not a lot of information to be found on this special use case. BTW, with the above-mentioned setup, would you recommend setting the RAM to be metadata-only cache and having all file transfers go directly to disk?
@LAWRENCESYSTEMS · 1 year ago
RAM is used for ARC cache
@jsclayton · 1 year ago
Have you had any stability issues on Scale tweaking that memory usage switch to allow more than 50%? It seems someone from iXsystems very persuasively advised against going higher on Linux.
@LAWRENCESYSTEMS · 1 year ago
Only if you are using other things that need the memory such as virtualization.
@KarlMeyer · 1 year ago
I wonder how this will apply to Unraid when it gets its ZFS support update soon.
@blyatspinat · 1 year ago
gtfo with unraid :D
@LAWRENCESYSTEMS · 1 year ago
Depends on how they implement it, but it should work.
@udirt · 1 year ago
Two things to keep in mind about commercial appliances & memory: Oracle SFS boxes were at 512GB/node almost a decade ago. Tegile ZFS-based systems started at 48GB/node, then around 220GB (so ~480GB per system plus NVRAM); by 2020 Tegile was at 980GB/system. There have been highly important patches to optimize L2ARC and dedup overhead that those guys missed, but if you want to see low latency on ZFS, you can either just pretend and diss people who ask about shitty performance, or admit how high the requirements actually are...
@eugenevdm · 1 year ago
Hi there, thanks so much for the video! It's an eye-opener, as I thought there would be a "maximum" when running VMs, but clearly not. Unrelated question: which of your videos can I watch to determine if ZFS over iSCSI would be a good way to connect a Proxmox server to a NAS? I'm stuck trying to figure out this architecture. I get building the Proxmox server and I get building the NAS, but I don't know what file system to use, or what kind of switches for maximum performance.
@LAWRENCESYSTEMS · 1 year ago
I prefer NFS over iSCSI as storage for VMs, and I don't use Proxmox; I use XCP-NG. I have a video here ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-xTo1F3LUhbE.html on using storage for VMs.
@MattiaMigliorati · 1 year ago
Thank you for this useful video!
@yourdad9293 · 1 year ago
Very interesting.
@UntouchedWagons · 1 year ago
I've read that the 1GB of RAM for every 1TB of storage is for deduplication, but I have no idea. I have 32GB of RAM in my SCALE box; how do I tell ZFS to use more than half of it?
@LAWRENCESYSTEMS · 1 year ago
Set ZFS Arc Size on TrueNAS Scale www.truenas.com/community/threads/zfs-tune-zfs_arc_min-zfs_arc_max.99361/
@Prophes0r · 1 year ago
@@LAWRENCESYSTEMS Don't forget to tune zfs_arc_sys_free as well. It is often left out, but it is a good safety setting that lets you push WAY closer to the limit with zfs_arc_max without having to worry about emergency evictions from ARC if something else on the system suddenly wants more memory. zfs_arc_sys_free will start calmly evicting from ARC as you approach the limit, instead of waiting until the system is about to OOM.
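A sketch of pairing the two tunables as described above. The sizes are arbitrary example numbers for a hypothetical 32GB box, not recommendations; both parameters take bytes.

```shell
# Sketch: pair an aggressive zfs_arc_max with zfs_arc_sys_free so ARC backs
# off calmly before the system OOMs. Example numbers only.
TOTAL_GIB=32
RESERVE_GIB=4    # keep ~4 GiB free for everything else on the box
ARC_MAX_BYTES=$(((TOTAL_GIB - RESERVE_GIB) * 1024 * 1024 * 1024))
ARC_SYS_FREE_BYTES=$((RESERVE_GIB * 1024 * 1024 * 1024))

# Candidate lines for /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
echo "options zfs zfs_arc_sys_free=${ARC_SYS_FREE_BYTES}"
```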
@loucipher7782 · 1 year ago
Can't you just use a 2TB NVMe for the ZFS cache? They're so much cheaper compared to that bulk of RAM, and I don't mind if it's slightly slower, as long as it's faster than HDDs.
@LAWRENCESYSTEMS · 1 year ago
That is a more complicated answer ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-M4DLChRXJog.html
@Speccy48k · 1 year ago
Thanks for this video. I have plenty of ECC memory: would it be beneficial to use L2ARC, or is it not required if enough RAM is available for ZFS? My understanding is that L2ARC is the equivalent of swap, so it may impact performance. Also, what is the benefit of using a ZIL/SLOG special device like an Optane drive?
@LAWRENCESYSTEMS · 1 year ago
RAM is better than L2ARC, and Optane would be good for a ZIL/SLOG. I have more details on ZIL/SLOG here ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-M4DLChRXJog.html
@blablabla8297 · 1 year ago
Does ZFS benefit from DDR5, or is it better to just buy a larger capacity of DDR4 for the same price?
@LAWRENCESYSTEMS · 1 year ago
Faster memory is better, but that will come down to what your next bottleneck is such as NIC interfaces or workload type.
@blablabla8297 · 1 year ago
@@LAWRENCESYSTEMS Thanks. Yeah, I have a gigabit interface with spinning disks on my home NAS, so I figured I might as well go with more RAM, as the bottlenecks would probably come from other places anyway.
@RocketLR · 1 year ago
I've been running the jankiest setup for 3 years now: one old gaming computer converted to ESXi. I'm talking DDR3 and an i7 4770K... Then I'm running a TrueNAS VM where I've hooked up 3 separate disks as datastores, each holding 1 single VM disk. That TrueNAS VM basically raids those 3 disks together.
@Mr.Leeroy · 1 year ago
A good way to get an idea of how much RAM your pool actually wants is to check during a scrub. It will allocate a lot more in the process and free a lot upon completion. P.S. Looking a lot better with that monitoring dashboard in the background. At least it makes sense.
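One way to watch that, assuming Linux OpenZFS: the current ARC size is the `size` row of /proc/spl/kstat/zfs/arcstats. The sketch below parses a made-up sample of that file so it is self-contained; on a live system you would read the real kstat file instead.

```shell
# Sketch: extract the current ARC size from arcstats. On a live system,
# replace sample_arcstats with: cat /proc/spl/kstat/zfs/arcstats
# The numbers in this sample are invented for illustration.
sample_arcstats() {
cat <<'EOF'
name                            type data
hits                            4    123456
misses                          4    7890
size                            4    8589934592
c_max                           4    17179869184
EOF
}

ARC_SIZE=$(sample_arcstats | awk '$1 == "size" {print $3}')
echo "ARC is currently using $((ARC_SIZE / 1024 / 1024 / 1024)) GiB"
```

Run it before, during, and after a scrub to see the allocation grow and then shrink, as the comment describes.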
@sharedknowledge6640 · 1 year ago
Nice video, and thanks for helping debunk the myths. The level of performance you can get from even a low-end TrueNAS server completely shames even a high-end Unraid server, because of ZFS's intelligent use of RAM. It's just apples and oranges, with TrueNAS being a Ferrari and Unraid being an ox cart, while Synology and QNAP are somewhere in between. Further, even without ECC memory, TrueNAS is way less likely to have data integrity issues. Unraid loves to kick perfectly good drives out of the array, kicking off a series of unwelcome, time-consuming tasks that just further put your data needlessly at risk.
@dfgdfg_ · 1 year ago
you alright hun?
@MisterPhysics511 · 1 year ago
Just making sure I got this right: your purple NAS is only used as a secondary backup server and barely uses 3GB of RAM for 4x 8TB drives? Is it able to saturate a regular gigabit connection on read/write? Thanks
@LAWRENCESYSTEMS · 1 year ago
Yes and Yes
@MikelManitius · 6 months ago
LOL. love the t-shirt.
@5654Martin · 1 year ago
Is there an easy way to back up my TrueNAS storage to a third-party location via SFTP etc., in an encrypted and compressed manner?
@LAWRENCESYSTEMS · 1 year ago
Yes, SFTP can be set up under Cloud Credentials as a backup option.
@gjkrisa · 1 year ago
With ZFS, is there a way to switch the OS? Or if you broke your OS and have to clean install, is there a way to not lose the data?
@LAWRENCESYSTEMS · 1 year ago
ZFS pools can be imported into another system that is running at least the same or a newer version of ZFS.
@seeingblind2 · 1 year ago
How much memory do you need? *YES*
@RocketLR · 1 year ago
Lawrence what? Lawrence of Arabia? You sound like royalty to me! Are you royalty?! - FMJ Drill Sergeant "Earl something something". I just had to get that out of MY system..
@LAWRENCESYSTEMS · 1 year ago
Such a great movie!
@Prophes0r · 1 year ago
This is something that needs to be spread, because I STILL hear it. The only thing ZFS NEEDS RAM for is deduplication. Everything else is just nice to have for ARC. That's it. If you need to, you can even disable ARC and have ZFS use ZERO extra memory. I'm not sure what your use case would be, but it is doable.
@CoryAlbrecht · 1 year ago
Does TrueNAS Scale mean TrueNAS Core on FreeBSD is going to be abandoned?
@LAWRENCESYSTEMS · 1 year ago
Not at this time, they recently released an update to Core.
@stalbaum · 1 year ago
I always thought: as many DIMMs as are lying around, capacity over speed.
@cmoullasnet · 1 year ago
You look good with glasses 😎
@LAWRENCESYSTEMS · 1 year ago
Thank you
@luckyz0r · 1 year ago
Love your videos, they are amazing. But..... where the f*** do you buy your t-shirts? :D I really love them. Continue the good work ;)
@LAWRENCESYSTEMS · 1 year ago
I have links in the video descriptions that take you to the shirt store lawrence.video/swag/
@lordgarth1 · 1 year ago
I have a TB of ECC memory on my TrueNAS server; is that enough?
@LAWRENCESYSTEMS · 1 year ago
Depends on your workload, might want to consider more. 😜
@cristianr9168 · 1 year ago
Is 128GB overkill? I want to turn my 5950X and 128GB into a NAS.
@mistercohaagen · 1 year ago
What is the purpose of the NAS? I ran that proc with 64GB of ECC 3200MHz dual-rank DIMMs as a NAS for a while. I found it to be overkill, even with a bunch of VMs and IOMMU passing through a GPU and capture card for an OBS system. 10G Ethernet is easier to saturate than you think. Chipset matters too; X570 is probably best for server use with a desktop AM4 chip. I now run a Ryzen 3 3100 & 32GB, and it still saturates the 10G all day, even with a quad NVMe card and 8x SATA SSDs.
@Prophes0r · 1 year ago
Users? Type of data being stored? How much storage? It all matters. Memory for ARC is just a bonus for ZFS unless you are doing deduplication. Give it however much you want, but there will be a point where more doesn't actually do anything for you.
@LackofFaithify · 1 year ago
Not overkill, depending on what you want to do. If you want to use ECC, go check out the ASRock Rack motherboards for AM4; they are all servery and such, with ECC support and 10G connections. Just be mindful of the limits and weirdness they can have regarding PCIe lane usage.
@Mr_Meowingtons · 1 year ago
All of it..
@shephusted2714 · 1 year ago
Why stop at 128GB? 512GB does better, and large memory is getting cheaper. It is the best upgrade, but large arrays of SSDs can easily saturate all but the fastest network links. More RAM, NVMe, and fast network links are the best priorities to focus on for infrastructure upgrades and realizing optimal performance; they are all important.
@Prophes0r · 1 year ago
The type of data being stored matters too. If blocks aren't accessed frequently, no amount of RAM for more ARC is going to matter. The point is that there is a persistent myth that ZFS uses a ton of memory, and it is clearly false. Only deduplication NEEDS memory. Everything else is just a luxury to speed up bursty workloads or blocks that are constantly accessed.
@tazerpie · 1 year ago
Why wouldn't you use a caching NVMe SSD?
@LAWRENCESYSTEMS · 1 year ago
Because memory is faster and more effective.
@johnroz · 1 year ago
1GB per TB, right?
@LAWRENCESYSTEMS · 1 year ago
Nope
@shotbyschwank · 8 months ago
LTT logo
@LAWRENCESYSTEMS · 8 months ago
Nope, LTS logo and it pre-dates the LTT logo.
@thegorn · 1 year ago
I have 512GB of ECC RAM; is that enough?
@LedufInfraLeDufiNFrA · 1 year ago
Hey guys, you're testing for a lab. In a production environment with heavy I/O, memory really gets used, so if you have a lot of memory errors (and you can be sure to have many with 128GB in use), you will be pleased to have ECC doing the job against data corruption. This is my experience. An Atom CPU... okay, you killed me. With a Xeon... you use ECC; $500 for the CPU, and it only works with ECC 😅😅😅😅
@WillFuI · 4 months ago
Me, who got a great deal on 192GB of RAM
@TechySpeaking · 1 year ago
First
@Itay1787 · 1 year ago
ZFS needs ECC RAM to avoid pool and file corruption. I know this from experience…
@LAWRENCESYSTEMS · 1 year ago
Nope, it does not NEED it, but it's a nice-to-have.
@NickyNiclas · 1 year ago
ECC is more important for system stability; say you have mission-critical services running, it can help avoid crashes. Still, memory corruption is pretty rare anyway.
@drescherjm · 1 year ago
Although I now have ECC on every ZFS system (8 to 10) I have between home and work, I did run ZFS systems for several years in production at work without any corruption. The key is to make sure your system is stable before using it. For me that meant 0 errors on Memtest86 for 72+ hours of testing, and no overclocking of CPU or RAM; only JEDEC-standard speeds and timings.
@sopota6469 · 1 year ago
@@LAWRENCESYSTEMS I don't think he said that meaning it's a mandatory requirement, but rather something that can avoid corruption, so better to be sure to have it. That said, I don't have any confidence in a system doing very complex tasks like deduplication, full-volume snapshots, caching, iSCSI, etc. on volumes of 40TB+ without ECC memory. There are very good reasons servers use ECC RAM. Saving a few bucks in a multi-thousand-dollar project isn't worth it.
@LackofFaithify · 1 year ago
@@sopota6469 Do you really think the type of person who isn't interested in ECC RAM is also going to be the type who sets up dedupe and all the other bells and whistles on a 40TB system? Or is it just an average home user, and you just have to show off how smart you are?
@davebing11 · 1 year ago
If you DON'T use ECC memory on a storage server, you are a fool.
@LackofFaithify · 1 year ago
If you don't use ECC memory on a storage server, you were probably just an average person who got called a fool on a TrueNAS forum and went and bought a Synology.
@f.d.castel2821 · 1 year ago
Yeah. My rubber duck died last year because I didn't use ECC RAM. You have been warned.
@LAWRENCESYSTEMS · 1 year ago
Or you are someone without a budget for it.
@be-kind00 · 1 year ago
Disagree. There are thousands of people using Synology, QNAP, and many other appliances without ECC. How many incidents have we heard of where the root cause of a ZFS system failure was not having ECC RAM? None in my 40 years of IT, and none in the last year of reading hundreds of posts on forums or NAS-vendor-specific user group collaboration sites.
@be-kind00 · 1 year ago
Fool? A bit strong. Lots of people make educated decisions and are less risk-averse than others.
@raghavmahajan3341 · 1 year ago
Is it just me, or do the color scheme and the thumbnail look like LTT?
@msofronidis · 1 year ago
Is the ZFS cache the memory swap file?
@LAWRENCESYSTEMS · 1 year ago
No, that's something different.