The live migration should have happened without a single ping being dropped. The disconnect you saw was just the serial console cutting over to the other hypervisor. If you had done it over SSH, you should have seen no dropped pings, or at most one, depending on the speed of your switch.
Just a thought: a nicer test of the HA might be to run your ping command from another machine rather than the one you migrated. That way you can see whether the service really is fully available to external clients.
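For anyone wanting to try this, a minimal sketch of that external check using the Linux iputils ping (`<vm-ip>` is a placeholder for the VM's address, not something from the video):

```shell
# Run from a separate client machine while the migration happens.
# -D prefixes each reply with a Unix timestamp, -O prints a line for
# every missed reply, so any gap during cutover is visible in the log.
ping -D -O <vm-ip>
```

A single missed-reply line around the cutover would match the "at most one dropped ping" expectation.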
A few issues to think about when you do a migration (live or offline):
1. Try to use the same CPU brand and hardware generation on the nodes. Live migration from Ryzen to older AMD CPUs does not work flawlessly: the destination VM will spike at 100% CPU and become unresponsive, and you will have to restart the VM, so no live migration in that case. Maybe it has been fixed in Proxmox 8; I used Proxmox 7.
2. Live migration between different processor brands is not possible, so no live migration between AMD and Intel CPUs.
3. Migration (live or offline) of VMs with USB-attached devices is not possible. That ruined my idea of having a Home Assistant VM with failover, sigh.
Thanks for another good video. I like the short and sweet approach. Mostly just teasing at what the system can do and driving me to investigate other videos and to review the documentation. I've arrived late to the game and only started using Ceph in the last few months, but I'm all in now. I just love the concepts that Ceph is based upon: replication across hosts, across drives, and erasure coding for economy. It's really great stuff! I'm not in the business, but to think that I can play with these tools at home and learn the latest tech for servers and storage just blows my mind. And I'm doing it with "trash" hardware... What fun!
@@Jims-Garage It's been going a treat! I've put together a four-node cluster with OS-oriented Ceph pools for the VMs. I run the requisite Home Assistant, Minecraft, Pihole and Unbound servers. I built the OS-oriented Ceph FS using largish NVMe partitions for replicated block storage spread across hosts. I also built a "fast" working-data file system using smallish NVMe partitions for metadata and 2TB SATA SSDs for data pools. Those live on four nodes with one NVMe and one SATA drive each. Last week I set up a 5-drive SATA USB enclosure and built a "slow" second-tier store with a replicated NVMe pool for metadata and an erasure-coded pool spread across the USB HDDs. That is the media library. I've got both Linux and Windows clients using the Ceph file systems. I'm just amazed that I got this all working as easily as I did. It's largely due to you and a few other YouTubers who inspire me. (And a wife who lets me geek out and spend too much time playing.) My next project is to get a cloud backup scheme in place for off-site storage. I've got an iDrive S3 store good for 5TB and want to get that earning its keep. Keep up the good work inspiring and teaching the rest of us!
Every time I go to build out a project, you put out a similar video covering it. If you somehow put out a video on how to use the ZFS over iSCSI storage option in Proxmox, I'll be floored.
Great content as usual. Just some notes on the Ceph cluster itself: you want to set the global flags noout, noscrub and nodeep-scrub when rebooting Ceph nodes, because if you don't, the cluster will start to rebalance once the first node has been down long enough for its OSDs to be marked out.
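For reference, those flags map to standard `ceph osd set`/`unset` calls (a sketch of the usual reboot routine, run from any node with admin credentials):

```shell
# Before rebooting a node: tell Ceph not to mark down OSDs "out"
# (which triggers rebalancing) and pause scrubbing work.
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub

# ...reboot the node and wait for its OSDs to rejoin...

# Afterwards, clear the flags so normal healing and scrubs resume.
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```

While the flags are set, `ceph -s` will show a HEALTH_WARN reminding you they are active, so it's hard to forget to unset them.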
I bounce nodes up and down, take OSDs out, and do lots of monkey business all the time without turning off scrubs, etc. The system starts doing what it's supposed to and heals itself all nice and tidy, then reCephifies when things come back online. Why bother?
@@Jims-Garage Lol, but really, it would help us. A friend and I have around 20-24 VMs on VMware Workstation, and I want to migrate them all to Proxmox.
Why do you go to the effort of cloning and then moving the disks? You can choose the storage at the time you do the clone. Does that not work with Ceph?
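For what it's worth, the `qm clone` CLI does let you pick the target storage directly when doing a full clone (the VM IDs and the "ceph-vm" storage ID here are made-up examples, not from the video):

```shell
# Full clone of VM 100 into new VM 101, writing the disks straight to
# a Ceph RBD storage instead of cloning first and moving disks after.
# --full is required: only full clones can target a different storage.
qm clone 100 101 --name test-clone --full --storage ceph-vm
```

Linked clones are the exception, since they must stay on the same storage as their base image.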
You know... HA, 25 Gbit and all sorts of things: although cute and nice to play around with, for me personally they are among the least interesting topics ever. I mean, talking from a homelab perspective, they're nice to play around with but absolutely not needed in that setting. OK, firewall HA would be useful, but all the Ceph/HA stuff... Just my personal $0.01. Would love to see a video on some of the "promises" you made earlier, like installing TrueNAS on that NAS you reviewed last time.