
Proxmox High Availability With Ceph 

Jim's Garage
12K views

Published: 29 Sep 2024

Comments: 32
@ketiljo 3 months ago
Thanks for another straight to the point video!
@Jims-Garage 3 months ago
Thanks!
@LampJustin 3 months ago
The live migration should have happened without a ping being dropped. The disconnect you saw was only the serial console cutting over to the other hypervisor. If you had done it over SSH, you would have seen no dropped pings, or at most one, depending on the speed of your switch.
@Jims-Garage 3 months ago
Thanks, yes I did check the output again and saw no dropouts. The next test is to HA the firewall, wish me luck.
@rjarmitag 3 months ago
Just a thought: a nicer test of the HA might be to run your ping command from another node rather than the one you migrated. That way you can see if the service really is fully available to external clients.
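A minimal sketch of that kind of test, assuming a VM with ID 100, a target node called pve2 and a client machine elsewhere on the network (all placeholders):

# From a separate client (not a cluster node), watch for dropped packets
ping -i 0.2 <vm-ip>

# Meanwhile, trigger the live migration (standard Proxmox CLI)
qm migrate 100 pve2 --online

If the ping never misses a beat, the service stayed reachable to external clients throughout the cutover.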
@hanscarlsson7276 3 months ago
A few issues to think about when you do migration (live or offline): 1. Try to use the same hardware CPU generation and brand on the nodes. Live migration from Ryzen to older AMD CPUs does not work flawlessly; the destination VM will spike at 100% CPU and be unresponsive. You will have to restart the VM, so no live migration in this use case. Maybe it has been fixed in Proxmox 8, I used Proxmox 7. 2. Live migration between different processor brands is not possible, so no live migration between AMD and Intel CPUs. 3. Migration (live or offline) of VMs with USB-attached devices is not possible. That ruined my idea of having a Home Assistant VM with failover, sigh.
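One mitigation worth knowing for point 1 (a sketch, not a guarantee): pin the VM to a generic baseline CPU model that every node can provide, instead of "host", so the guest only ever sees flags common to all nodes. This helps across generations of the same vendor; cross-vendor (AMD/Intel) live migration remains unsupported. The VM ID is a placeholder.

# Baseline model shipped with Proxmox 8; pick one that all nodes support
qm set 100 --cpu x86-64-v2-AES

# Older, lowest-common-denominator alternative
qm set 100 --cpu kvm64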
@TheMaksimSh 3 months ago
Can you show Proxmox High Availability with Home Assistant containers (LXCs or VMs) and a Zigbee stick?
@Jims-Garage 3 months ago
It's possible, but complex without multiple Zigbee sticks.
@TheMaksimSh 3 months ago
@Jims-Garage These sticks are cheap. Having to wait a few days for parts without working home automation is much worse.
@cossierob6143 2 months ago
Try a network-connected coordinator like the SLZB-06; that way there's no reliance on USB for Zigbee.
@TheMaksimSh 2 months ago
@cossierob6143 Nice, that's something new to me. LAN access would make this much easier, probably.
@TheMaksimSh 2 months ago
@cossierob6143 Which dongle would you recommend for PoE? Too many models.
@lawrencerubanka7087 1 month ago
Thanks for another good video. I like the short and sweet approach: mostly just teasing at what the system can do and driving me to investigate other videos and to review the documentation. I've arrived late to the game and just started using Ceph in the last few months, but I'm all in now. I just love the concepts that Ceph is based upon. Replication across hosts, across drives, and erasure coding for economy. It's really great stuff! I'm not in the business, but to think that I can play with these tools at home and learn the latest tech for servers and storage just blows my mind. And I'm doing it with "trash" hardware... What fun!
@Jims-Garage 1 month ago
@lawrencerubanka7087 Thanks. Exactly, that's what homelab is all about! Let me know how you get on!
@lawrencerubanka7087 1 month ago
@Jims-Garage I've been getting on a treat! I've put together a four-node cluster with OS-oriented Ceph pools for the VMs. I run the requisite Home Assistant, Minecraft, Pi-hole and Unbound servers. I built the OS-oriented Ceph storage using largish NVMe partitions for replicated block storage spread across hosts. I also built a "fast" working-data file system using smallish NVMe partitions for metadata and 2TB SATA SSDs for data pools. Those live on four nodes with one NVMe and one SATA drive each. Last week I set up a 5-drive SATA USB enclosure and built a "slow" second-tier store with a replicated NVMe pool for metadata and an erasure-coded pool spread across the USB HDDs. That is the media library. I've got both Linux and Windows clients using the Ceph file systems. I'm just amazed that I got this all working as easily as I did. It's largely due to you and a few other YouTubers who inspire me. (And a wife who lets me geek out and spend too much time playing.) My next project is to get a cloud backup scheme in place for off-site storage. I've got an iDrive S3 store good for 5TB and want to get that earning its keep. Keep up the good work inspiring and teaching the rest of us!
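For reference, an erasure-coded media tier like the one described above can be built with commands roughly like these (pool names, k/m values and the CRUSH failure domain are illustrative, not taken from the comment):

# Erasure-code profile spread across hosts
ceph osd erasure-code-profile set ec-media k=3 m=2 crush-failure-domain=host

# Replicated metadata pool plus erasure-coded data pool
ceph osd pool create media-meta
ceph osd pool create media-data erasure ec-media
ceph osd pool set media-data allow_ec_overwrites true

# CephFS needs --force to accept an erasure-coded data pool
# (the usual recommendation is a replicated default data pool plus
#  ceph fs add_data_pool for the EC pool)
ceph fs new media media-meta media-data --force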
@ewenchan1239 3 months ago
Great video! Just as a heads-up: instead of initiating the migration via the command line, you can just click the Migrate button in the GUI.
@jeefonyoutube 3 months ago
Every time I go to build out a project you put out a similar video going over it. If you somehow put out a video on how to use the ZFS over iSCSI storage option in Proxmox I'll be floored.
@tactoad 3 months ago
Great content as usual. Just some notes on the Ceph cluster itself: you want to set global flags like noout, noscrub and nodeep-scrub when rebooting Ceph nodes, as the cluster will start to rebalance as soon as the first node goes down if you don't.
@lawrencerubanka7087 1 month ago
I bounce nodes up and down, take OSDs out, and do lots of monkey business all the time without turning off scrubs, etc. The system starts doing what it's supposed to and heals itself all nice and tidy, then reCephifies when things come back online. Why bother?
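For reference, the flags discussed in this thread are ordinary Ceph commands; the pattern below reflects general practice, not anything shown in the video:

# Before rebooting a Ceph node: stop OSDs being marked out and pause scrubbing
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub

# ...reboot the node and wait for it to rejoin...

# Afterwards, clear the flags again
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub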
@rogerthomas7040 3 months ago
The fun project to cover would be how to shut down a Proxmox cluster with Ceph, as it does not seem to have an out-of-the-box solution.
@Jims-Garage 3 months ago
I would always perform a full backup first, just in case.
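There is indeed no single built-in command for a full cluster shutdown; a rough manual sequence (assumed from general Ceph/Proxmox practice, not from the video; IDs are placeholders) could look like:

# Stop all guests cleanly and keep HA from restarting them
ha-manager set vm:100 --state stopped     # repeat per HA-managed guest
qm shutdown 100                           # or pct shutdown for containers

# Freeze Ceph so the outage is not treated as a failure
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance

# Power the nodes off one at a time; after power-up, wait for the cluster
# to become healthy again, then unset the three flags.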
@muhammadabidsaleem7048 2 months ago
Hi Jim, thank you for this video. We are waiting for your advanced SDN video, please. Thank you.
@iron-man1 3 months ago
Now just make a video on migrating from VirtualBox/VMware Workstation/bare metal/ESXi to Proxmox.
@Jims-Garage 3 months ago
I did hyper-v, does that count? 😂
@iron-man1 3 months ago
@Jims-Garage lol, but it really would help us. Me and one of my friends have around 20-24 VMs on VMware Workstation and I want to migrate them all to Proxmox.
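For anyone in the same position, one manual path is to import each VMware disk into a fresh Proxmox VM. A rough sketch (VM ID, disk path and storage name are placeholders; newer Proxmox releases also ship a dedicated ESXi import wizard):

# Create an empty VM shell, then import the VMware disk into it
qm create 200 --name imported-vm --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 200 /path/to/disk.vmdk ceph-pool

# Attach the imported disk and make it the boot device
# (the resulting volume name, e.g. vm-200-disk-0, depends on the storage)
qm set 200 --scsi0 ceph-pool:vm-200-disk-0 --boot order=scsi0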
@AdrianuX1985 3 months ago
+1
@BenjaminBenStein 3 months ago
🎉
@jacobburgin826 3 months ago
@magnificoas388 3 months ago
A small hiccup and voilà.
@rjarmitag 3 months ago
Why do you go to the effort of cloning and then moving the disks? You can choose the storage at the time you do the clone. Does that not work with Ceph?
@Jims-Garage 3 months ago
Agreed, and I mentioned it on screen. It's for people with existing VMs that want to move to the new Ceph storage.
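Both routes exist on the command line as well (VM IDs, disk slot and storage name are placeholders):

# Clone straight onto the Ceph storage in one step
qm clone 100 101 --full --storage ceph-pool

# Or move the disk of an existing VM afterwards
qm disk move 100 scsi0 ceph-pool --delete 1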
@JPEaglesandKatz 3 months ago
You know... HA, 25 Gbit and all sorts of things: although cute and nice to play around with, for me personally they are among the least interesting topics ever. I mean, talking from a homelab perspective, nice to play around with but absolutely not needed in that setting. OK, firewall HA will be useful, but all the Ceph/HA... Just my personal $0.01. Would love to see a video on some of the 'promises' you made earlier, like installing TrueNAS on that NAS you reviewed last time.