
High Availability Cluster Configuration in Linux | Configure Cluster Using Pacemaker in CentOS 8 

Nehra Classes
47K subscribers · 43K views

High Availability Cluster Configuration in Linux | Configure Cluster Using Pacemaker in RHEL 8:
===
For detailed steps please check the pinned comment.
===
====
My i5 10th Gen Laptop with 512 GB SSD & 8 GB DDR4:
amzn.to/30amhRt
====
My DSLR Camera:
amzn.to/36954Ml
===
My Boya Microphone:
amzn.to/3mZavTS
===
Join this channel to get access to perks:
/ @nehraclasses
===
Thanks for watching the video. If it helped you, please like & share it with others. Feel free to post your queries & suggestions; we will be glad to answer them.
If you like our hard work, subscribe to our channel & turn on the bell notification for the latest updates.
===
Contact Us:
Follow all our social media accounts @NehraClasses
Vikas Nehra's Twitter Handle: bit.ly/VikasNehraTwitterHandle
Vikas Nehra's FB Account: / er.vikasnehra
Vikas Nehra's Instagram Handle: / er.vikasnehra
Registration Form: bit.ly/NehraClassesRegForm
Twitter Handle: bit.ly/NehraClassesTwiiterHandle
Facebook Page: nehraclasses
Instagram: / nehraclasses
Telegram Channel: t.me/NehraClasses
WhatsApp Us: bit.ly/2Kpqp5z
Email Us: nehraclasses@gmail.com
===
©COPYRIGHT. ALL RIGHTS RESERVED
#NehraClasses #LinuxTraining #HACluster

Published: 21 Sep 2020

Comments: 81
@NehraClasses · 3 years ago
HA Cluster Configuration in RHEL 8 (CentOS 8):
==============================================
A High Availability cluster, also known as a failover cluster or active-passive cluster, is one of the most widely used cluster types in production environments, providing continuous availability of services even if one of the cluster nodes fails. In technical terms, if the server running an application fails for some reason (e.g. a hardware failure), the cluster software (Pacemaker) restarts the application on a working node. Failover is not just restarting an application; it is a series of associated operations, like mounting filesystems, configuring networks, and starting dependent applications.

Environment:
Here, we will configure a failover cluster with Pacemaker to make the Apache (web) server a highly available application, configuring the Apache web server, a filesystem, and networks as resources for our cluster. For the filesystem resource, we will use shared storage coming from iSCSI storage.

CentOS 8 High Availability Cluster Infrastructure:
Host Name                        IP Address      OS        Purpose
node1.nehraclasses.local         192.168.1.126   CentOS 8  Cluster Node 1
node2.nehraclasses.local         192.168.1.119   CentOS 8  Cluster Node 2
storage.nehraclasses.local       192.168.1.109   CentOS 8  iSCSI Shared Storage
virtualhost.nehraclasses.local   192.168.1.112   CentOS 8  Virtual Cluster IP (Apache)

Shared Storage:
Shared storage is one of the critical resources in a high availability cluster, as it stores the data of the running application; all the nodes in the cluster have access to it for the latest data. SAN storage is the most widely used shared storage in production environments. Due to resource constraints, for this demo we will configure the cluster with iSCSI storage.

[root@storage ~]# dnf install -y targetcli lvm2 iscsi-initiator-utils

Let's list the available disks on the iSCSI server using the below command.
[root@storage ~]# fdisk -l | grep -i sd

Here, we will create an LVM on the iSCSI server to use as shared storage for our cluster nodes.

[root@storage ~]# pvcreate /dev/sdb
[root@storage ~]# vgcreate vg_iscsi /dev/sdb
[root@storage ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi

Get the initiator name of each cluster node:

cat /etc/iscsi/initiatorname.iscsi

Node 1: InitiatorName=iqn.1994-05.com.redhat:121c93cbad3a
Node 2: InitiatorName=iqn.1994-05.com.redhat:827e5e8fecb

Enter the below command to get an interactive iSCSI CLI prompt.

[root@storage ~]# targetcli

Output:

Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb49
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd /backstores/block
/backstores/block> create iscsi_shared_storage /dev/vg_iscsi/lv_iscsi
Created block storage object iscsi_shared_storage using /dev/vg_iscsi/lv_iscsi.
/backstores/block> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/acls
acls> create iqn.1994-05.com.redhat:121c93cbad3a
acls> create iqn.1994-05.com.redhat:827e5e8fecb
acls> cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18/tpg1/luns
/iscsi/iqn.20...e18/tpg1/luns> create /backstores/block/iscsi_shared_storage
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:827e5e8fecb
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:121c93cbad3a
/iscsi/iqn.20...e18/tpg1/luns> cd /
/> ls
o- / .............................................................. [...]
  o- backstores ................................................... [...]
  | o- block ............................................. [Storage Objects: 1]
  | | o- iscsi_shared_storage . [/dev/vg_iscsi/lv_iscsi (10.0GiB) write-thru activated]
  | |   o- alua ..................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | o- fileio ............................................ [Storage Objects: 0]
  | o- pscsi ............................................. [Storage Objects: 0]
  | o- ramdisk ........................................... [Storage Objects: 0]
  o- iscsi ....................................................... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 .... [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ..................................................... [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:121c93cbad3a ............. [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ............. [lun0 block/iscsi_shared_storage (rw)]
  |     | o- iqn.1994-05.com.redhat:827e5e8fecb .............. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ............. [lun0 block/iscsi_shared_storage (rw)]
  |     o- luns ..................................................... [LUNs: 1]
  |     | o- lun0 .. [block/iscsi_shared_storage (/dev/vg_iscsi/lv_iscsi) (default_tg_pt_gp)]
  |     o- portals ............................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................ [OK]
  o- loopback .................................................... [Targets: 0]

/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json

Enable and restart the target service.

[root@storage ~]# systemctl enable target
[root@storage ~]# systemctl restart target

Configure the firewall to allow iSCSI traffic.

[root@storage ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@storage ~]# firewall-cmd --reload
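The interactive targetcli session above can also be scripted. Below is a minimal non-interactive sketch of the same storage-node setup; it assumes the logical volume, target IQN, and initiator IQNs shown above, and relies on targetcli's path-plus-command invocation style:

```shell
#!/bin/sh
# Sketch: non-interactive equivalent of the interactive targetcli
# session above. Run as root on the storage node.
# Assumes /dev/vg_iscsi/lv_iscsi already exists and that the target
# IQN below matches the one generated on your system.
TARGET_IQN="iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18"

# Create the block backstore from the shared LV
targetcli /backstores/block create iscsi_shared_storage /dev/vg_iscsi/lv_iscsi

# Create the iSCSI target (default portal on 0.0.0.0:3260 is auto-added)
targetcli /iscsi create "${TARGET_IQN}"

# Allow both cluster-node initiators
targetcli "/iscsi/${TARGET_IQN}/tpg1/acls" create iqn.1994-05.com.redhat:121c93cbad3a
targetcli "/iscsi/${TARGET_IQN}/tpg1/acls" create iqn.1994-05.com.redhat:827e5e8fecb

# Export the backstore as LUN 0 (mapped into both ACLs automatically)
targetcli "/iscsi/${TARGET_IQN}/tpg1/luns" create /backstores/block/iscsi_shared_storage

# Persist the configuration
targetcli saveconfig
```

This is a configuration sketch, not a drop-in script; it needs root, the targetcli package, and the LV from the steps above to actually run.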
@santoshbolar3485 · 2 years ago
Sir, your classes are superb; I have learned so many things in Linux. Please upload the pcs clustering document also.
@NehraClasses · 2 years ago
Thank you, all the documents are available in our telegram channel.
@Nk-gaming106 · 2 years ago
@@NehraClasses Hi sir, I am a working professional and need help in setting up a Pacemaker cluster in our lab. I am open to paying a fee.
@udayarpandey3937 · 3 years ago
This is the lecture I was waiting for, bro! I hope you will make a series on this with proper explanation rather than just walking through the commands. Thank you for the video.
@NehraClasses · 3 years ago
Thanks, I will definitely make a series of videos. I uploaded this video on the demand of one of our subscribers; it is public for today only. Tomorrow it will be visible to members only. 🙏
@mushfigmustafazade8880 · 9 months ago
This man's videos are precious. Thank you Nehra
@kalyanb6995 · 3 years ago
Awesome session ji, thanks for the video.
@NehraClasses · 3 years ago
Thanks
@lokeshkumar1365 · 2 years ago
This video is informative. Could you please create a detailed video on an HA NFS server at production level? Thanks.
@balaji276 · 3 years ago
Thanks brother
@smrutiranjandas4766 · 3 years ago
Super tutorial, sir.
@NehraClasses · 3 years ago
Thanks 🙏
@54_amol_more95 · 2 years ago
Thanks sir
@rakeshverma1707 · 3 years ago
Great bro, superb!
@NehraClasses · 3 years ago
Thanks 🤗
@gam3955 · 2 years ago
Great, thanks a lot for sharing. Do you have a video or web page on the same setup with Oracle database servers and Pacemaker?
@NehraClasses · 2 years ago
No
@tvscprtrade338 · 1 year ago
Can you make a video on a MySQL DB cluster with Pacemaker?
@KMCreations24 · 4 days ago
I gained more knowledge on clusters; could you provide the document on it, please? 🙏
@pankajnara6528 · 3 years ago
Nice
@NehraClasses · 3 years ago
Thanks 😊
@Myyutubee · 1 year ago
Thank you so much, sir. I have two questions. Why does LVM need to be created again on node1 or node2 when the LVM had already been created on the iSCSI storage server? And why do we need another machine for the virtual IP?
@NehraClasses · 1 year ago
The first LVM was created on the iSCSI server; however, you can share the physical disk directly with the clients instead of an LVM if you want. This shared storage is block storage and needs to be partitioned and formatted, so it is better to use LVM so that we can extend it later if required. The virtual IP doesn't require any physical machine unless you want to use NAT to hide the IP address of your actual machine.
@Myyutubee · 1 year ago
@@NehraClasses Sir, thank you for your quick response. I agree with the advantage of LVM's extending capability, but why create an LVM again on the nodes when we have already created it on the iSCSI server? Doesn't it get reflected on the nodes? Can't we format it directly on the iSCSI server?
@mohitbaluka8094 · 2 years ago
Sir, please provide the documentation for pcs clustering as well.
@NehraClasses · 2 years ago
Please join the channel membership to access all documents.
@davidghitis2586 · 1 year ago
Hello! After setting up all the iSCSI and hosts files on nodes 1 and 2, you said we need to run "yum config-manager --set-enabled HighAvailability", but I'm getting "No matching repo to modify: HighAvailability". Any idea? I'm using RHEL 8.7. Thanks!
@NehraClasses · 1 year ago
The repository name may be different on RHEL 8; make sure you have an active Red Hat subscription for it. Kindly list the available repos first.
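For anyone hitting the same error, a quick way to check what the HA repository is actually called on a given system is to list the repos and filter. This is a sketch of standard dnf/subscription-manager usage, not a step from the video:

```shell
# List every repo dnf knows about (enabled or not) and look for HA names.
# On CentOS 8 the repo is typically "ha" or "HighAvailability".
dnf repolist --all | grep -i -e high -e '^ha '

# On RHEL with an active subscription, list the repos you can enable:
subscription-manager repos --list | grep -i highavailability

# On RHEL 8 the HA repo is typically named:
#   rhel-8-for-x86_64-highavailability-rpms
subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
```

These commands require a RHEL/CentOS host (and, for subscription-manager, an entitled system), so treat them as a diagnostic recipe rather than guaranteed output.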
@chillySauceMind · 3 years ago
Hello sir, I'm using a RHEL EC2 instance and I am not able to install pacemaker, corosync, and pcs. Are there any other steps to install them on an EC2 instance?
@NehraClasses · 3 years ago
Configure the EPEL repository.
@ManikandanK-lp6yy · 2 years ago
16:49 - didn't understand how it's executed.
@davidchang5862 · 2 years ago
Hi. Can you also provide a video detailing how to set up fencing for RHEL?
@NehraClasses · 2 years ago
OK, will upload soon.
@davidchang5862 · 2 years ago
Thanks bro.
@fahadalhajri5144 · 2 years ago
Hi Nehra, I followed the steps and everything ran successfully, but I don't see my new disk on the storage VM, "nvme0n2", on node1 and node2 when running lsblk. This is where I'm stuck. Any advice?
@NehraClasses · 2 years ago
Check your iSCSI target configuration.
@vmalparikh · 2 years ago
Hi, the physical and logical volume groups are not being displayed on the other node. I have run all three commands: pvscan, vgscan, and lvscan. Please help.
@NehraClasses · 2 years ago
Please join our channel's platinum membership and our Telegram channel for support.
@vmalparikh · 2 years ago
@@NehraClasses Sir, could you share the documentation? Also, can you show fencing through SBD?
@IlsaTaibani · 5 months ago
Hey, I have the same issue. Can you please help me out?
@IlsaTaibani · 5 months ago
@@vmalparikh I have the same issue. How did you solve it?
@sainiamit4911 · 3 years ago
Hi sir, it seems like some steps are missing for the virtual IP server.
@NehraClasses · 3 years ago
No dear, please check again.
@vipulgajbhiye569 · 7 months ago
The pcsd service is unable to start - PCS GUI error.
@podishettivikram8681 · 2 years ago
How do I set this up in a lab environment?
@NehraClasses · 2 years ago
First create these four machines; you should have sufficient hardware resources to run all of them together.
@LINUXGURU08 · 1 year ago
stonith-enabled=false is not a real-world example; requesting you to share a fencing configuration video.
@supriyomukherjee4030 · 1 year ago
Please share the command details, sir.
@NehraClasses · 1 year ago
It's available for members on Google Drive.
@vipin_mishra19 · 2 years ago
16:37 - Getting an error while authenticating the nodes...
@NehraClasses · 2 years ago
Send us an error screenshot on Telegram.
@TheAdventureAwaitsTV · 11 months ago
Where is the comment with the steps? I might be blind or cross-eyed - I can't see the documentation.
@mayank7616 · 2 years ago
Where can I get the rpm for that?
@NehraClasses · 2 years ago
Which rpm?
@manjeetgupta8462 · 2 years ago
Sir, please provide its documentation too.
@NehraClasses · 2 years ago
Please join our Telegram channel for the same.
@manjeetgupta8462 · 2 years ago
@@NehraClasses I am already connected with your Telegram channel and tried to find the document on the above topic around the same date, but did not find it. I am watching your recent Hindi sessions on Linux to brush up on my concepts.
@NehraClasses · 2 years ago
I will search and let you know once I find it.
@happy7450 · 2 years ago
Sir, it would have been better if you had explained it in Hindi.
@CROAbomb · 1 year ago
What if the storage server stops?
@user-hu3om1jo8q · 1 year ago
multipath - active-active storage
@MrRamu143 · 3 years ago
Dear Nehra, please provide the documentation also, thank you.
@NehraClasses · 3 years ago
Will upload soon 🙂
@NehraClasses · 3 years ago
Please check the comments; it's already uploaded there.
@MrRamu143 · 3 years ago
@@NehraClasses Unable to follow the steps via the video, so please post the steps.
@NehraClasses · 3 years ago
Please check the comment section of this video; the steps are already provided there. See the pinned comment first 🙏
@ragupcr · 3 years ago
Hi, I am unable to install the Pacemaker cluster packages. I have created the repo, but I still cannot install them. Can you help me with the repo configuration for the Pacemaker cluster?
@NehraClasses · 3 years ago
Which flavour are you using, RHEL or CentOS? For better support, join our Telegram channel 🙏
@gam3955 · 2 years ago
Does this run on OEL 8?
@NehraClasses · 2 years ago
Yes, it will.
@gam3955 · 2 years ago
@@NehraClasses Thanks for your reply.
@piyushshipraagarwal996 · 1 year ago
Please share the second notepad file.
@NehraClasses · 1 year ago
Already available to members on Google Drive.
@Vinutha-xv2kb · 1 year ago
What is the name of this course, please?
@NehraClasses · 1 year ago
Servers Training
@Vinutha-xv2kb · 1 year ago
@@NehraClasses How can I get this course from Red Hat?
@NehraClasses · 3 years ago
Discover Shared Storage:
On both cluster nodes, discover the target using the below command.

iscsiadm -m discovery -t st -p IP address

Now, log in to the target storage with the below command.

iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.eac9425e5e18 -p IP address -l

systemctl restart iscsid
systemctl enable iscsid

Create the LVM and filesystem on node1:

[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_apache /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_apache
[root@node1 ~]# mkfs.ext4 /dev/vg_apache/lv_apache

Rescan on node2:

[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan

Finally, verify that the LVM we created on node1 is available to you on the other node (e.g. node2) using the below commands.

ls -al /dev/vg_apache/lv_apache
[root@node2 ~]# lvdisplay /dev/vg_apache/lv_apache

Make a host entry for each node on all nodes; the cluster will use the hostnames to communicate with each other.

vi /etc/hosts

Host entries will be something like below:

192.168.1.126 node1.nehraclasses.local node1
192.168.1.119 node2.nehraclasses.local node2

On CentOS 8, enable the High Availability repository:

dnf config-manager --set-enabled HighAvailability

RHEL 8:
Enable a Red Hat subscription on RHEL 8 and then enable the High Availability repository to download cluster packages from Red Hat.

subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms

Install the cluster packages:

dnf install -y pcs fence-agents-all pcp-zeroconf

Add a firewall rule to allow all high availability applications proper communication between nodes. You can skip this step if the system doesn't have firewalld enabled.

firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability
firewall-cmd --reload

Set a password for the hacluster user.

passwd hacluster

Start the cluster service and enable it to start automatically on system startup.
systemctl start pcsd
systemctl enable pcsd

Authenticate the nodes and set up the cluster:

[root@node1 ~]# pcs host auth node1.nehraclasses.local node2.nehraclasses.local
[root@node1 ~]# pcs cluster setup nehraclasses_cluster --start node1.nehraclasses.local node2.nehraclasses.local

Enable the cluster to start at system startup and check its state:

[root@node1 ~]# pcs cluster enable --all
[root@node1 ~]# pcs cluster status
[root@node1 ~]# pcs status

Fencing Devices:
A fencing device is a hardware device that helps disconnect a problematic node by resetting it or cutting its access to the shared storage. This demo cluster runs on top of VMware and doesn't have an external fence device to set up; however, you can follow a fencing guide to set one up. For the demo we disable it:

[root@node1 ~]# pcs property set stonith-enabled=false

Install Apache on both nodes:

dnf install -y httpd

Edit the configuration file.

vi /etc/httpd/conf/httpd.conf

Add the below content at the end of the file on both cluster nodes:

<Location /server-status>
    SetHandler server-status
    Require local
</Location>

Edit the Apache web server's logrotate configuration so it does not use systemd, as the cluster resource doesn't use systemd to reload the service. Change the below line.

FROM:
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true

TO:
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true

Create the web content on node1:

[root@node1 ~]# mount /dev/vg_apache/lv_apache /var/www/
[root@node1 ~]# mkdir /var/www/html
[root@node1 ~]# mkdir /var/www/cgi-bin
[root@node1 ~]# mkdir /var/www/error
[root@node1 ~]# restorecon -R /var/www
[root@node1 ~]# cat
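The pinned comment above stops before the resource-creation step. As a rough sketch of what typically follows in a Pacemaker/Apache setup of this shape (the resource names httpd_fs, httpd_vip, httpd_srv and the group name apachegroup are our own examples, not necessarily the ones used in the video):

```shell
# Sketch: create the filesystem, floating IP, and Apache resources in one
# resource group so they start, stop, and fail over together, in order.
# Assumes the vg_apache/lv_apache LV and the 192.168.1.112 virtual IP
# from the steps above.
pcs resource create httpd_fs Filesystem \
    device="/dev/vg_apache/lv_apache" directory="/var/www" fstype="ext4" \
    --group apachegroup

pcs resource create httpd_vip IPaddr2 \
    ip=192.168.1.112 cidr_netmask=24 \
    --group apachegroup

pcs resource create httpd_srv apache \
    configfile="/etc/httpd/conf/httpd.conf" \
    statusurl="http://127.0.0.1/server-status" \
    --group apachegroup

# Check where the group is running, then exercise a failover by putting
# the active node in standby (and unstandby it afterwards):
pcs status
pcs node standby node1.nehraclasses.local
pcs node unstandby node1.nehraclasses.local
```

This is cluster configuration, so it must run as root on a node where pcsd is up and the cluster from the steps above has been set up.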
@babugowda1683 · 1 year ago
Everything was great, but please stop dragging your screen recorder console over the screen.
@NehraClasses · 1 year ago
Oh, really?