We go over the technical details behind enterprise-grade homelab datacenters. Everything from setting up hardware, configuring Linux, 40Gb networking, servers, racks, storage arrays, SAS and JBOD configurations, sizing electrical, and heating and cooling calculations! Plus, digging deep on the hard stuff with tutorials to help you with Blueprints, Tips & Tricks, and How-tos.
✅ Subscribe to this channel: ru-vid.com ✅ Join as a member of this channel: ru-vid.comjoin
When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network, Newegg, Samsung, Lenovo, Best Buy, LG, Crutchfield, Dell. As an Amazon Associate I earn from qualifying purchases.
📧 For Business Inquiries, please contact us at our Email address linked under Channel Details below.
How do you ensure compatibility with different hardware setups for viewers who may not have access to the specific networking gear you mentioned in this video?
I am also pro reducing eWaste, but the main problem is the cost of electricity. Most homelab techs will want the lowest power consumption possible, which means brand-new or very specific hardware options.
I hope that panel isn't metal, because if you're putting anything that uses wireless communications inside, you have absolutely no clue what you're doing.
Where can I find a JBOD on the cheap? My other option is to upgrade all my drives to a larger capacity.
What setup do you use to reach the server via iDRAC? I've got an old FirefoxPortable that accepts outdated certificates and I also keep an old Java 7 version installed that can run the console window. Is there a better way?
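One less fragile route than an old portable browser: keep a single legacy JRE around just for iDRAC and point it at a security override file that re-enables the old TLS ciphers and MD5-signed jars the console needs. A minimal sketch, assuming a hypothetical pinned JRE path and a viewer.jnlp saved from the iDRAC web UI; the property names are standard java.security keys, but test against your iDRAC generation:

# launch_idrac.py: run an old iDRAC .jnlp with relaxed Java security.
# The JRE path and jnlp filename are assumptions; adjust for your setup.
import subprocess
import tempfile
import textwrap

JAVAWS = "/opt/jre1.7.0/bin/javaws"  # hypothetical pinned legacy JRE
JNLP = "viewer.jnlp"                 # saved from the iDRAC console page

# Override file: blank out the algorithm blocklists that modern
# java.security files ship with, so the old console can connect.
props = textwrap.dedent("""\
    jdk.tls.disabledAlgorithms=
    jdk.certpath.disabledAlgorithms=
    jdk.jar.disabledAlgorithms=
""")

with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as f:
    f.write(props)
    override = f.name

# -J passes the -D flag through javaws to the underlying JVM.
subprocess.run([JAVAWS, f"-J-Djava.security.properties={override}", JNLP])

This deliberately weakens TLS for that one JRE, so keep it off the internet and pointed only at the management VLAN.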
Some motherboards (the MSI TRX40 Creator, for example) allow you to fix the 2nd PCIe slot at x8 and only drop it down if you explicitly set it to bifurcate...
You don't run into problems with Unraid at 300 drives, you run into them at 30 drives. It doesn't even see all the drives in my storage server. Pretty pathetic. Had me baffled for a while doing hardware troubleshooting.
I have both, each running ~150TB of storage. TrueNAS can't be beat for NAS-only duty. Shove in 1TB of RAM, get yourself a fast card and switch, and forget about it. It's fast and stable. What else could be more important?

What I like about Unraid is that if you lose both parity drives and then lose another drive, you only lose that one drive. In the same scenario on TrueNAS you'd lose the array. The container and App Store support is good. Things I don't like about it: speed and stability primarily, which is kind of poor for a NAS.

I've found that what's better than Unraid OR TrueNAS is an Unraid backup server with the primary NAS as TrueNAS inside a VM on Proxmox, used only as a NAS, with VMs and containers in Proxmox, using TrueNAS as an iSCSI target for fast storage outside of the Proxmox node, and as part of a cluster.
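On the iSCSI piece of that layout: once TrueNAS is exporting a target, wiring it into Proxmox is a single pvesm call. A sketch with a made-up storage id, portal address, and IQN; run it on a Proxmox node and substitute your own values:

# add_iscsi.py: register a TrueNAS iSCSI target as Proxmox storage.
import subprocess

subprocess.run([
    "pvesm", "add", "iscsi", "truenas-fast",            # hypothetical storage id
    "--portal", "10.0.0.50",                            # example portal IP
    "--target", "iqn.2005-10.org.freenas.ctl:vmstore",  # example IQN
    "--content", "images",
], check=True)

A common follow-up is layering LVM on top of the LUN so multiple cluster nodes can share it for VM disks.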
What's the point of Unraid? I have 2 servers running 2 x 10GbE each. Even when I was first copying data, when all the disks were being used, I saw a max of just over 1.5Gbps. Pointless. Unraid is SO SLOW.
Digital, why did you say we need some serious networking? With the right tools and equipment, and budget of course, I can build the best network home lab ever.
Thanks for the well-done tutorial! Curious about the long-term stability of ZFS on SSDs behind a hardware RAID 10 controller. I've used Proxmox for many years in data center environments, all with LVM on spinning disks (HP DL360, various generations). Extremely reliable. Now studying the idea of migrating to SSDs and ZFS, but I have read a lot about problems, even with the controller in HBA mode. Seems like it works, for a while. Any thoughts on how to achieve long-term bulletproof reliability similar to LVM and hard drives?
Does it matter if the nodes in the cluster have different RAID setups but still use the same pool name? Does replication still work between nodes? Example:
pool name: VM-storage
node1: raidz1
node2: raidzmirror
node3: raidzstripe
Thank you.
I think as long as you are all-ZFS you are OK. I would be careful and test this first with a live migration, but if that works I would assume it's fine. The HA is based off Proxmox's ZFS implementation under the hood, which I'd imagine conforms closely to the OpenZFS standard.
@@DigitalSpaceport Ok, I'll test it first. I'll let you know if live migration works! Thank you!
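For anyone repeating this test: the reply above implies replication only cares that a ZFS storage with the same pool name exists on every node, not what vdev layout sits underneath. A quick sketch to confirm each node actually has the pool before trusting replication (hostnames are hypothetical, and it assumes root SSH between nodes):

# check_pools.py: verify the same-named pool exists on every cluster node.
import subprocess

POOL = "VM-storage"
NODES = ["node1", "node2", "node3"]  # hypothetical hostnames

for node in NODES:
    # `zpool list -H -o name` prints one pool name per line
    names = subprocess.run(
        ["ssh", node, "zpool", "list", "-H", "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(f"{node}: {'ok' if POOL in names else 'MISSING'}")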
If you get a crash or power failure, will all the data you think you moved be gone on TrueNAS because of the RAM cache? (You might have said it, but I watched the video in a noisy environment.)
No, the data is transmitted in a txg (transaction group) and committed from RAM to disk constantly. The amount "in flight" between RAM and disk is limited by default by a setting called sync, which in its default state acknowledges a successful write to the calling system/application only once the data is on disk. Only if you explicitly set that to disabled is data held in RAM in a non-disk-backed state. You could then have in-flight data damaged, but when you restart the machine it looks for the last txg it had and will continue writing. That is not terrible if you're storing, say, a static file; however, it's potentially catastrophic if you are storing a VM or a database. Hence, don't disable sync unless you have a very good reason and know the risks... which now you do.
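The sync behavior described above is just a per-dataset ZFS property, so it's easy to audit. A minimal sketch; the dataset name is an example:

# check_sync.py: inspect and reset the ZFS sync policy on a dataset.
import subprocess

DATASET = "tank/vmstore"  # example dataset name

# Show the current policy: standard (default), always, or disabled.
subprocess.run(["zfs", "get", "sync", DATASET], check=True)

# Put a dataset back on the safe default after an experiment.
subprocess.run(["zfs", "set", "sync=standard", DATASET], check=True)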
You should use the new TrueNAS; it has a very easy-to-use setup for SMB file shares. The older one most likely has an ACL mismatch if you can't write to the dataset. Hard to troubleshoot that over RU-vid comments, but the new workflow in the latest TrueNAS SCALE makes this much easier.
One would think, but I'm at a base rate of 10c/kWh, so it isn't that bad. I also shed load dynamically via HA and Proxmox and don't run all the machines at once very often, unless needed. Usually about $250 total with the house for the electric bill. Cooling is consistently the largest user in the garage.
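Back-of-envelope on those numbers, using only the figures in the comment: a $250 monthly bill at 10c/kWh implies roughly 2,500 kWh, or about 3.5 kW of average continuous draw across house plus lab:

# bill_math.py: implied continuous draw from the stated bill and rate.
bill_usd = 250.0          # monthly bill from the comment
rate = 0.10               # $/kWh base rate from the comment
kwh = bill_usd / rate     # 2500 kWh per month
avg_kw = kwh / (24 * 30)  # about 3.5 kW average continuous draw
print(f"{kwh:.0f} kWh/month, about {avg_kw:.1f} kW continuous")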
Not so much, but it's a production system in addition to some learning and homelab parts. I keep things kind of segregated in the setup and don't really showcase the work stuff.
Yeah, I mentioned to them that they need a detailed BIOS walkthrough guide that's very in-depth. A lot of the settings they surface to end users are non-standard.
I built 2 large NAS boxes and moved off Unraid to TrueNAS after 6 months. Unraid is fine if you have a few drives, but it's not performant enough, the stability is not ready for prime time, and the support sucks. Unraid as a company has major bugs, even ones related to the core data management functionality, that they don't bother fixing. I'd not trust my data with Unraid again.
@@DigitalSpaceport There are too many issues to list. Poor performance is expected given its design, but the reliability and stability are the biggest issues for me. When the servers get busy the UI freezes, safe shutdowns from the shell are unreliable, if you have too many browsers open connected to the Unraid UI it crashes, and on and on. One time, I had data that was being migrated, was taking up space, but wasn't there in any other way. Like a black hole. When the forum moderators sent me to log a bug ticket, Unraid didn't even want me to reproduce it. They said they'd just "monitor the thread", but the thread was concluded as a bug. They're not a serious company. It's cool how you can easily expand, and even if you lost a drive after having lost your 2 parity drives you'd still retain the rest of your data, as opposed to losing everything like in TrueNAS, but it's a bit Mickey Mouse. The container support is better than TrueNAS's, but I'd rather have Proxmox over both of them.
I want to build a 2.5GbE pfSense router, but the key component, the 4-port Intel i226-V adapter, is all made in China and very expensive. This is like a replay of the 2020 global pandemic and the mask shortage.
I did pfSense in the past on ESXi but would not do it again. A dedicated (low-energy) pfSense/OPNsense box was the way I planned to go, until I got some Ubiquiti stuff. But since the inter-VLAN routing (at 10Gbps) is not what I wanted, I might go back to a dedicated router box.
For the price, this is epic. And I don't care about 10-gig networking, because if you want that speed you must go for the pricier ones. The bad part is that in my country I can't import any item worth more than €150 or it will get stuck at customs. 🥹
All my desktop VMs on Proxmox work fine except Windows 11. Even if I go into the BIOS and change the resolution from the incredibly low default, it won't take; RDP won't work, so I need to use a 3rd-party remote desktop; and I had to run it on a separate server because it won't support Xeon. On top of all that, Windows 10 Pro doesn't work if you're signed in with a Microsoft account, only with a local admin account. It's just terrible. Interestingly, Wubuntu has been amazing. Not so impressed with Mint.
"Chia is what it is today because of your commitment." What is Chia today? Seriously, I've been on this project since pre-mainnet, and it has literally contributed nothing to blockchain tech... there really isn't much going on with the project at all.
Punch-down patch panels don't make sense to me, never have... RJ45 back-panel connectors have the same spacing and you can just pull one out and replace it easily. Punch-down panels make that spot completely unusable if the cable goes dead.
I feel like you must have already made changes to the votes of each node to hold quorum... just removing a node without changing the votes (either the required # of votes or a node's vote value) to offset the lost node would still cause problems, no? I feel like I have gotten myself into a situation over quorum before.
Quorum is a situation. Remember: don't think in terms of decreasing votes; add votes to the active nodes to offset. Yes, doing a cert refresh and replacing with the same hostname is ideal. Make sure to do the cert refresh or the node will seem like a zombie.
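The "add votes instead of removing them" idea maps to the quorum_votes field in /etc/pve/corosync.conf. A rough sketch of that edit, not a drop-in tool; it assumes a cluster where every node currently has 1 vote, and corosync only picks up the change if config_version increases, so try it on a lab cluster first:

# bump_votes.py: give surviving nodes extra quorum votes after losing one.
import re
from pathlib import Path

CONF = Path("/etc/pve/corosync.conf")
text = CONF.read_text()

# Give the remaining node blocks 2 votes each (edit selectively as needed).
text = text.replace("quorum_votes: 1", "quorum_votes: 2")

# corosync ignores the new file unless config_version is incremented.
text = re.sub(r"config_version:\s*(\d+)",
              lambda m: f"config_version: {int(m.group(1)) + 1}", text)

CONF.write_text(text)

For a one-off recovery there is also pvecm expected <N>, which temporarily lowers the expected vote count instead.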
Love it! Yep...got a hodgepodge of cables too that I have made great efforts to clean up, but the clutter bug in me always finds a way around that 😂. Good content.
My problem is that it's fast to just pop in a replacement, but hard and time-consuming to fully remove one. I'm taking steps to fix this, but yeah, clutterbug is me also.
19:30 Looks like you could use a PoE-powered switch there; it would solve the power problem. 21:01 How do you have so many bad punches? Are you using solid-conductor cable and a proper punch-down tool??? The keystone route is a great idea, it just costs $$$.
I've got a replacement panel. It's all going to get a repunch, and if there are issues they'll be easy to fix from now on. I think I had too little slack on the back for some of them. I don't know why I didn't make a proper service loop. Okay, I do know why, but it was a poor reason.
@@DigitalSpaceport Yee, I learned the hard way to leave a service loop. It's not fun re-pulling an 80ft line under fiberglass insulation because it's 1ft short LOL (looking at you, porch IP camera....). Along the same idea, I'm running some OM3 to a few locations and trying to decide if I should get LC-LC keystones so I can patch on the front of the rack, or if that's a waste of time/money and I should just run the OM3 directly to the switch. Patches would look nice, but at +$10/ea, really.