Sir, fantastic, really amazing to learn everything, and very nice and easy to gather all the information. Could you please also share all of the above as videos and a single document uploaded to your Google Drive? It would be great to download the videos for offline viewing and to keep the document to read in future.
Thanks for posting this, it's REALLY helpful. @9:25 you state that each data SVM has one root volume and one data volume... but data1_vol1 and data2_vol1 are both in n1_aggr1. I see they're each listed as being in their respective SVMs, but wouldn't you more typically have data2_vol1 in n2_aggr1?
Hi, I understand what you're saying, but since SVMs are cluster-wide, in principle a volume can be in any aggregate. Imagine an 8-node cluster: SVMs can have volumes in any of the data aggregates.
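As a quick sketch (the SVM, volume, and aggregate names here are hypothetical, not from the video), you could place one of svm2's volumes on node 2's aggregate and another on node 1's aggregate in the same way:

```
cl1::> volume create -vserver svm2 -volume data2_vol1 -aggregate n2_aggr1 -size 10g
cl1::> volume create -vserver svm2 -volume data2_vol2 -aggregate n1_aggr1 -size 10g
```

The SVM does not care which node's aggregate backs a volume; placement is an admin choice, usually driven by capacity and load balancing.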
If I have 4 nodes in a cluster, then how many cluster management LIFs will there be: 4 LIFs or something else? Another question: if node1 in the cluster is hosting the cluster management LIF and node1 fails or collapses, then how can the entire cluster be managed? You said something about failover, but it is a bit fuzzy to me. Could you please explain it for the 4 nodes mentioned in my question?
It does not matter how many nodes you have in the cluster: you always have one, and only one, cluster management LIF. If the node that hosts the cluster management LIF fails, that LIF will automatically fail over to another node in the cluster (unless, of course, you have a single-node cluster). Next to the one cluster management LIF per cluster, every node in the cluster has a node-management LIF. Node-management LIFs do not fail over to another node. So in summary: if you have 6 nodes in the cluster, you have 1 cluster management LIF and 6 node-management LIFs.

In the example below you see the output of a 2-node cluster running "net int show -fields role,curr-node" (short for "network interface show"). Nodes cl1-01 and cl1-02 both have a node-mgmt LIF, and there is one cluster-mgmt LIF that currently runs on cl1-01. The 4 LIFs with the 'cluster' role are cluster interconnect LIFs.

cl1::> net int show -fields role,curr-node
vserver  lif           role          curr-node
-------  ------------  ------------  ---------
Cluster  cl1-01_clus1  cluster       cl1-01
Cluster  cl1-01_clus2  cluster       cl1-01
Cluster  cl1-02_clus1  cluster       cl1-02
Cluster  cl1-02_clus2  cluster       cl1-02
cl1      cl1-01_mgmt1  node-mgmt     cl1-01
cl1      cl1-02_mgmt1  node-mgmt     cl1-02
cl1      cluster_mgmt  cluster-mgmt  cl1-01
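Besides the automatic failover, you can also move the cluster management LIF by hand. A minimal sketch using the 2-node cluster names from the output above (assuming the LIF's failover policy allows the destination node):

```
cl1::> network interface migrate -vserver cl1 -lif cluster_mgmt -destination-node cl1-02
cl1::> network interface revert -vserver cl1 -lif cluster_mgmt
```

The migrate command moves the LIF to the other node, and revert sends it back to its home port once the original node is healthy again.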
There is not really a naming convention; it is something you decide yourself. The node root volume is called 'vol0' by default, but you could change that, just like you can rename aggr0 to anything you want. Usually people do not change the name of vol0. aggr0 of node 1 is usually called aggr0; aggr0 of node 2 is automatically renamed by the cluster. I usually rename aggr0 to aggr0-<node name> to keep the aggregate names unique in the cluster.
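A sketch of such a rename (the aggregate and node names here are hypothetical):

```
cl1::> storage aggregate rename -aggregate aggr0 -newname aggr0_cl1_01
```

After this, each node's root aggregate name tells you at a glance which node it belongs to, which helps when you run cluster-wide commands like "storage aggregate show".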
Hello, ONTAP allows non-disruptive upgrades. You upgrade a running node and then reboot it. After the node has rebooted and is back in the cluster, you follow the same procedure for the other node. Best practice is to plan your upgrade using the Active IQ Upgrade Advisor. Hope this answers your question.
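As a rough sketch of what the automated cluster upgrade looks like from the CLI (the URL and target version here are hypothetical placeholders):

```
cl1::> cluster image show
cl1::> cluster image package get -url http://webserver/ontap_image.tgz
cl1::> cluster image validate -version 9.13.1
cl1::> cluster image update -version 9.13.1
```

The update command handles the rolling reboot of the nodes for you, one HA partner at a time, so clients keep access throughout the upgrade.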
Hi, a node is a single controller. Two controllers in the same chassis form a single HA pair. So a node is another name for 'controller'/'system'/'host', et cetera: it is a single piece of hardware or a single VM. An ONTAP cluster can consist of a single node or of sets of two nodes. Every two nodes that you add to a cluster form an HA pair that shares the same disks. Hope this helps...
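You can see the HA pairing on a running cluster with "storage failover show". A sketch of what that looks like for a 2-node cluster (node names reused from the earlier example):

```
cl1::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
cl1-01         cl1-02         true     Connected to cl1-02
cl1-02         cl1-01         true     Connected to cl1-01
```

Each node lists its HA partner, and "Takeover Possible: true" means either node can serve its partner's disks if the partner goes down.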