Exactly why would you risk it, unless it was hugely discounted and for non-important use? The only things that matter are the NAND quality and the controller, not that it's new or big.
Most of the bad Chinese stuff has been found to be label-stripped and replaced with flat-out fraudulence. Storage has become ridiculously cheap, BUT you still gotta stay vigilant. Used to be you could trust WD benchmarking. Now... not so much. And Intel? WTF's goin' on over there. That big green Nvidia splash is about to roll out a tsunami bigger than the other big green wave of Linux Mint 22 (aka hurricane Wilma), which is rapidly corroding the stupidity of MS snapshotting your every keystroke. Wow....
I actually just made a similar move from my 500GB M.2 boot drive to a 2TB 980 EVO Pro. I booted into a live CD and used dd to clone my boot drive to the new drive. After the clone was complete, I booted into GParted to expand the local-lvm partition. Once booted back into PVE on the new drive, I expanded local-lvm from the CLI.
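A minimal sketch of that clone-and-expand flow, assuming the default LVM-thin install (device names here are examples; verify with lsblk first, since swapping if= and of= destroys the source):

```shell
# Identify source and target disks first (names below are assumptions)
lsblk

# From the live CD: clone the old drive to the new one, block for block
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=1M status=progress conv=fsync

# After growing partition 3 in GParted, back in PVE:
# grow the PV to the new partition size, then grow the thin pool
pvresize /dev/nvme1n1p3
lvextend -l +100%FREE pve/data
```

The thin pool (pve/data) has no filesystem of its own, so on a stock install growing the pool is the whole job; a root filesystem resize would only be needed if pve/root were also being expanded.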
Three stupid questions: 1) Do you have a blog post with all of the commands (specifically, the syntax for the zfs detach command)? 2) I am guessing that this really only works if you're going from a smaller drive to a bigger drive, but not the other way around? 3) You mentioned that if you are using EFI, to leave the grub part out. But I thought that after the EFI loads, it will still go to the GRUB menu in Proxmox, no? Your help is greatly appreciated. Thank you.
Answers: 1. openzfs.github.io/openzfs-docs/man/master/8/zpool-detach.8.html is the man page. The short syntax is 'zpool detach <pool> <device>', where device is exactly what 'zpool status' shows. 2. It works as long as the amount of space consumed by the zpool will fit on the new drive, since it's done by a zfs resilver and not by copying the block device. Similar to replacing a zfs drive with a smaller one. 3. It depends. For legacy BIOS booting, the grub loader is in the first 1M partition, and then the grub loader loads the kernel / initrd from the EFI partition. For EFI booting without secure boot, grub isn't used at all; systemd-boot is loaded straight out of the EFI partition. For EFI booting with secure boot, grub is stored on the EFI partition. Basically, EFI stores the loader (grub or not) as a file in the FAT partition instead of a dedicated partition. In any case I would copy the 1M partition, whether it's empty or not.
ZFS does 3 things: redundancy and pooling (merging disks into one pool), volume management (splitting the pool into sub-parts), and a filesystem. You can still use the volume manager and filesystem features, and all of the benefits of zfs, on a single-disk system.
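Even on one disk you still get datasets, snapshots, and compression; a quick sketch, with the pool and dataset names made up for illustration:

```shell
# Create a single-disk pool (ashift=12 assumes 4K-sector flash)
zpool create -o ashift=12 tank /dev/disk/by-id/nvme-EXAMPLE-SERIAL

# Carve the pool into independent datasets
zfs create -o compression=lz4 tank/vms
zfs create tank/backups

# Snapshot a dataset before risky changes
zfs snapshot tank/vms@before-upgrade
```

None of this needs a second disk; redundancy is the only pillar you give up.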
Clonezilla requires me to keep the system down during the whole process, and it also won't expand the partition table unless I do the same process from Clonezilla instead of from the booted system.
@@JoseOcampo-g5m No need for that, since "fick" in German is exactly the word you thought of. "F*k what?" is exactly what came to my mind when I heard of using a cheap Chinese SSD as the boot drive of a machine running stuff that might be important. Now it could work, who knows, but Proxmox is not ESXi and tends to write a bit more to the boot drive. A large overprovisioning area might help. Still wouldn't do that. I'd use a brand-name SSD for that, and preferably in a mirror.
Good enough for a test rig. I think I'll stick with established brands for anything that I put into production, especially when you can get Western Digital, Crucial, and Samsung for less or just a tiny bit more money. At least you know the company will be around if and when you ever need to make a warranty claim. That being said, the migration info was solid!
Always feels like talking to a friend when watching your videos. Thanks for the explanation. Don't need this now but good to have for future reference.
When I created the partition table, the partitions were expanded to fill the whole drive (since sfdisk was instructed not to use the last-lba and partition 3 size from the old drive), so zfs sees the full space. ZFS will then limit itself to the space of the smallest mirror in the pool while both are attached, but as soon as I detach the smaller drive the full space is available (even without rebooting). I just rebooted to physically remove the old drive and make sure the new drive is properly bootable (it is).
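That partition-table copy can be sketched with sfdisk (device names are placeholders): dropping the last-lba header line and partition 3's size field lets both default to the new drive's full capacity.

```shell
# Dump the old drive's partition table to an editable text file
sfdisk --dump /dev/sda > table.txt

# Drop the last-lba line so sfdisk recalculates it for the bigger disk
sed -i '/^last-lba:/d' table.txt
# (also remove the "size=" field from partition 3's line by hand,
#  so that partition grows to fill the remaining space)

# Write the edited table to the new drive
sfdisk /dev/nvme0n1 < table.txt
```

Review table.txt before writing it; sfdisk will happily overwrite whatever disk you point it at.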
Recently I bought a couple of older unused U.2 800GB SK Hynix SSDs for $40 each, along with $15 AliExpress PCIe adapters. Not the fastest SSDs out there, but ~4PBW endurance and power-loss protection are nice to have on a server.
Boy oh boy would this video have helped me a couple of months back when I had my primary drive start to fail in my Proxmox backup server. I tried and tried to use Clonezilla to duplicate the failing drive to my new drive but failed miserably. I ended up backing up some key files from /etc and just doing a complete reinstall of PBS on the new drive.
@@blakecasimir It seemed to complete OK when I cloned them, but no matter what I tried, the cloned drive would not boot. It's like the grub stuff didn't transfer over or something.
One of the reasons why I have a pair of 500GB SSDs in a ZFS mirror for boot in Proxmox. There are even special instructions on how to deal with a failed ZFS boot drive.
Prices on SSDs jumped up again; they should fall again at some point. My request for you would be to make a dual low-power NAS, but with some special qualities: mega RAM, good NVMe caching layers, and all-flash arrays, plus a dual-port 40G card. It's a little pie in the sky, but maybe you could do something with older hardware, even a Z420 board. Mega RAM for a NAS is very important, but the weakest link is probably networking for the SMB sector and homelabbers/prosumers. Having a dual NAS is worth the time and money too: you can do point-to-point with the dual-port cards and also sync up NASes quickly. The 56G cards are like 40 bucks.
I'm rather stuck. I don't know what to host. I have a NAS running TrueNAS with mismatched drives, and a single Proxmox node with 16GB of RAM, a 240GB normal SSD, and a 512GB SSD. I also have a VPS. The reason I am stuck is that I can't open ports, and I don't know how I can expose things to my domain on the Internet. Some people have said to use a VPN, but I'm not sure.
Great video! Thanks for making it. Quick question: could you teach us how you power your MegaLab? What parts did you use to MacGyver your way through it? I would love to copy that from you, since having a real power supply is bulky for a lab setup. Thanks!
It's this - www.mini-box.com/picoPSU-150-XT-150W-Adapter-Power-Kit I don't remember if I got the 120W or 150W but it's one of those. Not very powerful.
Proxmox is such an unpolished product. Given how long solid-state drives have been around, you'd think it wouldn't be as bad as it is at destroying SSDs. It's like the SSD Terminator.
Thanks, good to learn other ways to do things. But wouldn't it be more correct to zpool attach the NVMe disk by the disk's ID (the same way the first one was attached)?
I love how you're a tenth-dan wizard in storage tech, but you tape down your SSD and boot by bridging header pins with a twiddler just like scrubs such as I. Also, I feel I need to raise the pedantry by pointing out you said "cat", but never actually ran /usr/bin/cat.
cat is in /bin, not /usr/bin. Actually, the norm recently is a merged /bin and /usr/bin, so both work, but /bin is the traditional location. IMO if you say "cat" you should show a feline.
How to build a reliable virtualization host: 1) a desktop board of unknown hardware & driver provenance running Proxmox; 2) a FLIGGIDII 2TB SSD without integrated power-loss protection. K.
My home-lab and I say thanks. I'd be interested in a deep dive in boot loaders if you are looking for video ideas. I use grub (because that's what Debian installs) but I have a feeling I should really be switching to EFI.
The wearout is a bit annoying with Proxmox. I wish they would implement a RAM disk for logs like in OpenMediaVault; I assume Proxmox is too professional 😉 Meanwhile I use 2 very cheap small SATA SSDs in a btrfs RAID0. That's quite performant for the OS, and I can replace the SSDs without regret. My VMs reside on my NVMe. I am very happy with the setup.
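If you want something like OMV's log-to-RAM behavior, journald can be told to keep its logs in memory only. A hedged sketch (file path and size are my choices; this trades log persistence across reboots for fewer SSD writes, and it only covers the journal, not Proxmox's own rrd/pmxcfs writes):

```shell
# Keep the systemd journal in RAM instead of on the SSD
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/volatile.conf <<'EOF'
[Journal]
Storage=volatile
RuntimeMaxUse=64M
EOF

# Apply without a reboot
systemctl restart systemd-journald
```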
But...Is MegaLab sitting on top of MegaBox? Also, there was the odd SSHD technology, where a mechanical drive had something like 8 GB of flash storage that worked as cache.
MegaLab is on the exact box it came in. Also, Apple sold a 'Fusion Drive' for a while that did that, but for consumer stuff it's cheaper, smaller, and easier to have a flash-only drive now.
I've used Clonezilla in the past with great success, copying NVMe SSDs (sometimes dual-boot Win/Linux systems) to each other... does the ZFS partition cause problems with Clonezilla?
I tried today replacing the second disk in TrueNAS. All was OK, but when I tried to boot the system from the new disk, it failed. So if you could make a video on how to replace a boot-pool disk in TrueNAS, it would be great. Probably something with boot/EFI was not done; apparently "zpool replace..." was not enough to boot from the new disk. In Proxmox there is a command that does the job, but how to do it in TrueNAS?
@@apalrdsadventures This part I am not sure about. TrueNAS deals nicely with replacing disks that are in user-created pools, but boot-pool is created by the installer, and I was not able to find "replace disk" in the menu; I might be wrong, though. I will try again, as it's good to practice while everything still works, not after s..t has already happened ;-) But I tried from the terminal and all was OK, except the new disk was not bootable.
@@apalrdsadventures But the magic of TrueNAS is: you download the backup, install from scratch, upload the backup, and everything is back except SSH keys, so it's a 15-minute job.
Man, I am just about to replace the SSD in my Proxmox; I will follow your guide, let's see where we are in a few minutes ;-)
EDIT: Job done, all successful. I had a slightly more complicated setup because:
1) it was a replacement of a broken SATA SSD that works in a mirror with an NVMe SSD in my mini PC
2) 3 partitions belonged to Proxmox, but the 4th one was passed through to a VM and used there as TrueNAS storage
3) because of the replacement I had to use "zpool replace pool old_partition new_partition" rather than attach
4) exactly the same later in TrueNAS
5) after resilvering all is OK, and I checked booting from the new disk only; that works as well
One comment on your video: attaching "sda1" or "nvme1" to a ZFS pool is not the best option. Better to use the disk ID or at least the partuuid; your life can get complicated if you just use /dev/sdaX ;-) Perfect video, thank you a million!
Glad it's working well for you! ZFS isn't super particular about dev names like some other filesystems on Linux, but using the by-id/uuid path is still the best practice.
@@apalrdsadventures Yes, but sdX can be renumbered. My TrueNAS has 6 HDDs, and the sdX assignments are different on every reboot. But if we use the UUID or disk ID, it remains the same.
You can convert single drives to/from mirrors, or add more disks to a mirror, using zpool attach and zpool detach. In this example I attach the one new drive and then detach the one old drive (so it goes single -> 2-way mirror -> single), but you could just as easily prep the 2 new drives (using the same boot / EFI partition process on each drive) and then attach both (now a 3-way mirror). Once resilvering is done on both drives, you can detach the first (leaving a 2-way mirror).
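The single -> mirror -> single migration described above, as a command sketch (pool and device names are placeholders; use the exact names 'zpool status' prints, and don't detach until the resilver reports complete):

```shell
# Attach the new drive's zfs partition to the existing single-disk vdev,
# turning it into a 2-way mirror
zpool attach rpool ata-OLDDRIVE-SERIAL-part3 \
    /dev/disk/by-id/nvme-NEWDRIVE-SERIAL-part3

# Watch the resilver; wait for "resilvered ... errors: No known data errors"
zpool status rpool

# Once resilvered, drop the old drive; the pool is single-disk again
zpool detach rpool ata-OLDDRIVE-SERIAL-part3
```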
@@apalrdsadventures Thanks for the information. I have started to do it now, and when I write the partition file back to the first new disk (I haven't tried the other yet) it completes, but I get this error too: "Partition 1 does not start on physical sector boundary." Is this OK?
Hrm, I wonder if your existing drive is using 512-byte sectors and the new drive is using 4096-byte sectors? Usually we partition everything assuming 4096-byte sectors even if the drive claims 512-byte. As to the zpool: is the zpool combined with more disks, or is it not using ZFS at all?
@@apalrdsadventures Yes, it's using 512. If I recall, when I installed (just with the Proxmox installer GUI) I remember thinking "why would I use ZFS with only 1 drive?", so I didn't. Now that I'm learning, I'm moving over to 2 drives in a ZFS pool. Do I have a way around this?
How long did the old SSD last? I've read some comments saying that Proxmox eats consumer-grade SATA/NVMe SSDs. Any tips for prolonging the life of SSDs used as a Proxmox boot drive? Any issues storing VMs and ISOs on the boot drive?
I don't think it's any more aggressive with boot drives by itself than other server systems. It has the usual system logging, which is not a massive amount of data, but add in the VM disks on top of that and it can add up to a lot of background writes.

Generally, for longer SSD life, using a larger drive and filling it less means each flash cell gets programmed/erased less frequently. The old wisdom was to overprovision (leave empty space in the partition table), but using a modern fs like zfs that supports discard/trim will let the drive know which blocks can be discarded, and the free space on the fs is basically the overprovision space. Some zfs tuning can be done as well (like increasing the block size). Enabling discard support for the VMs also means their free space passes up to the drive.

I'm using this drive to store the VMs/CTs on my test system, so it does see all of the use of the VMs in addition to the Proxmox system itself. It's not doing a whole lot, but the VMs do get created/destroyed often, as I often walk through my tutorials on a new VM each time.
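Enabling discard on a VM disk so guest trims pass down to the host can be sketched like this (the VM ID, storage name, and volume name are examples, not from the video):

```shell
# Pass discard/trim from the guest through to host storage;
# ssd=1 also advertises the virtual disk to the guest as an SSD
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1
```

Inside the guest, something still has to issue the trims; on Debian/Ubuntu guests the periodic fstrim.timer usually handles that.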
@@apalrdsadventures From what I heard, Proxmox writes quite a lot to the boot drive, as opposed to, say, ESXi (RIP), which could rather safely be run from a USB flash drive. Particularly if you're using it with a couple of other Proxmox machines in an HA setup.
You didn't touch on overprovisioning. I typically leave some NVMe free space, either by namespace or by partition, to give the drive some more spare area, although I mostly use old enterprise drives with high endurance, where that isn't as important since they usually have plenty of reserved spare space.
ZFS will properly use discard/trim, so unused space will be free for the wear leveling algorithm to use. In my case, the drive was less than half full before, so now it's less than 1/8 full, and has plenty of empty space for flash endurance.
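If you want to be explicit about the trim behavior mentioned above, it can be enabled pool-wide or run as a one-shot (the pool name is a placeholder):

```shell
# Let zfs issue discards continuously as blocks are freed
zpool set autotrim=on rpool

# Or run a one-shot trim and check its progress
zpool trim rpool
zpool status -t rpool
```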
I found that by far the biggest contributors to SSD wear were the HA services, which you can safely disable if you aren't in a cluster or don't need them: pve-ha-crm & pve-ha-lrm.
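On a standalone node those are ordinary systemd units, so disabling them is a one-liner (re-enable both before joining the node to a cluster):

```shell
# Stop and disable the HA manager daemons on a non-clustered node
systemctl disable --now pve-ha-crm.service pve-ha-lrm.service
```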