@Pavan Elthepu, your videos are great. Thank you so much for your efforts. At 12:14, I think "required during scheduling" should be marked as hard in that table. Please correct me if I am wrong.
I tried the same with the nodeSelector field. Please help me resolve this. In my case it is a 4-node cluster: one master and three workers. I created a PV on worker2, and I want to schedule all pods of the application on worker1, so I labeled worker1 with:

kubectl label node worker1 worker1: tmlabs-hp-280-g2-mt

# deployment.yaml
nodeSelector:
  worker1: tmlabs-hp-280-g2-mt

Error:
0/4 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 1 Preemption is not helpful for scheduling, 3 No preemption victims found for incoming pod.
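One thing worth checking: `kubectl label node` takes `key=value` with an equals sign, not `key: value`, so the label in the command above may never have been applied to the node. A minimal sketch of a working pairing, assuming the node is named worker1 and using a hypothetical label key `nodegroup` (pick whatever key/value you like, as long as both sides match exactly):

```yaml
# Apply the label with '=' and then verify it:
#   kubectl label node worker1 nodegroup=tmlabs-hp-280-g2-mt
#   kubectl get node worker1 --show-labels
#
# deployment.yaml -- nodeSelector goes under the pod template spec
# and must repeat the exact same key and value:
spec:
  template:
    spec:
      nodeSelector:
        nodegroup: tmlabs-hp-280-g2-mt
```

If the label shows up in `--show-labels` and the key/value in nodeSelector matches it character for character, the scheduler should place the pods on worker1.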
Great content Pavan! Thank you! By the way, could you tell me which k8s YAML extension you're using? Mine doesn't have an auto-complete function for the k8s YAML format.
Yes, true. Both work on labels, but the key point is that there are two kinds of labels. One is the pod label, which is declared in the deployment.yaml file under selector and labels. The other is the node label, which is applied with "kubectl label node <nodename> key=value". So there is not just one label in play — the pod label lives in the YAML file, and the node label is set via the command.
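To illustrate the two kinds of labels side by side, here is a minimal sketch (all names — the node, the app, the `disktype` key — are hypothetical placeholders):

```yaml
# Node label, set from the command line on the node object:
#   kubectl label node worker1 disktype=ssd
#
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app        # pod label the Deployment selects its pods by
  template:
    metadata:
      labels:
        app: my-app      # pod label, defined right here in the YAML
    spec:
      nodeSelector:
        disktype: ssd    # must match the node label set via kubectl
      containers:
        - name: my-app
          image: nginx
```

The selector/labels pair ties the Deployment to its pods, while nodeSelector ties those pods to nodes carrying the matching node label — two separate matching mechanisms on two separate objects.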