Hey, my name is Vivek. I am a software engineer working on open source and cloud-native technologies around Kubernetes, primarily using Go as my programming language. On this channel I am going to create videos about containers, Kubernetes, the Go programming language, and software engineering in general. In these videos I plan to actually show you things by doing them, instead of just talking about them, so that you really learn and can use them at your work or elsewhere.
Sir, how can I access my application on my domain? I have added the domain and I can access the application within the cluster, but I can't access it from the external world. What should I add in DNS to access my application on my domain?
There are two processes here: 1) the terminal, and 2) the Go executable. Which process is the memory constraint applied to? If it applies to the terminal, would commenting out the fmt call on line 21 allow the Go process to execute? If it applies to the Go binary, does it inherit the constraint from its parent process, the terminal in this case?
Hi Amarjeet, I might be a bit off here, so please read more about it. But to try to answer your question: the constraint is not really on the terminal itself; it applies to the processes we start from the shell, like the Go program we ran. For the second question, I am not sure which constraints you are talking about. If you mean the memory a process can use, we saw that we can configure that using cgroups. If you are talking about resources that are not configured through cgroups, I am not really sure about that right now. I will have to check.
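To make the inheritance part concrete, here is a minimal cgroup v2 sketch. It assumes a Linux machine with cgroup v2 mounted at /sys/fs/cgroup and root access; the cgroup name `demo` and the binary `./mygoprogram` are illustrative, not from the video.

```shell
# Create a cgroup and cap its memory at 50 MB (cgroup v2, needs root).
sudo mkdir /sys/fs/cgroup/demo
echo $((50 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/demo/memory.max

# Move the current shell into the cgroup. Every process started from
# this shell afterwards (like our Go binary) is created inside the
# same cgroup, so it inherits the memory limit.
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs

# The Go program now runs under the 50 MB cap.
./mygoprogram
```

So the limit is not on "the terminal" conceptually; it is on whatever cgroup the process ends up in, and children start in their parent's cgroup by default.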
Nice explanation. My question is: when you create a cgroup, limit its memory to 50MB, and then start VS Code from that same shell, does VS Code not get killed, since you are using the same shell?
I'm a college student who is just starting out in the Kubernetes world. If it weren't for you guys making such quality content, I would not be able to understand these complicated concepts. Thank you very much 🎉🎉
Hi Vivek, `make run` runs the operator locally and it's working fine. I created the Docker image with `docker build -t <operator-name> .`, but when I try to run it with `docker run <operator-name>`, I get an error like: "unable to get kubeconfig - invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable". Any suggestions please?
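That error usually means client-go cannot find any kubeconfig inside the container: locally `make run` picks up `~/.kube/config`, but the container has no such file. One hedged way around it, when running the image outside a cluster, is to mount the host kubeconfig in; the paths below are illustrative and assume your kubeconfig lives at `$HOME/.kube/config`.

```shell
# Mount the host kubeconfig into the container read-only and point
# the client libraries at it via KUBECONFIG.
docker run \
  -v $HOME/.kube/config:/root/.kube/config:ro \
  -e KUBECONFIG=/root/.kube/config \
  <operator-name>
```

When the operator is deployed inside the cluster instead, client-go falls back to the pod's service account credentials, so no kubeconfig mount is needed there.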
Hi Vivek, thanks for the awesome videos. Need your help: the command for generating code seems to be outdated now. Also, hack/boiler_plate.go.txt does not exist. Can you please rerun the code and share the updated command? A link to an up-to-date blog post would also help.
Hi Sandeep, even though the exact command might not work, it can easily be figured out; for example, paths etc. can be corrected to make things work. I might not have time to make another video, sorry.
@@rocknroll6768 Yeah, and you're not wrong about the docs: there are basically none at all, or none that are up to date. The tool is great, but I would pin it to a version, because I've already had to update my script twice due to breaking changes.
Btw, what is the difference between your method of creating operators and the kubebuilder way of creating them? Kubebuilder generates all the boilerplate (HTTPS server, DeepCopy, and so on) in one command.
Hi Vivek, thanks for the great explanation, but I have a question: why is the pod still in CrashLoopBackOff state even though my container logs show the lists of pods and deployments?
So when we run an application using a Deployment with replicas set to 1, the controllers will always make sure there is one replica of your application running. But once the pods are listed successfully, your application has nothing left to do and it simply completes. Even when it completes/terminates, the controllers try to run it again, and the same cycle repeats. That is the reason the applications running in pods are most often servers that run forever. Alternatively, we can change the restartPolicy of the pod so that it is not restarted once it terminates.
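As a hedged sketch of that last option: a Deployment's pod template only allows `restartPolicy: Always`, so for a run-to-completion workload you would use a Job or a bare Pod instead. The names and image below are illustrative, not from the video.

```yaml
# A bare Pod that does its work once and is not restarted on a
# clean exit. (A Deployment forces restartPolicy: Always; for
# run-to-completion workloads use a Job or a plain Pod like this.)
apiVersion: v1
kind: Pod
metadata:
  name: pod-lister              # illustrative name
spec:
  restartPolicy: OnFailure      # or Never; no restart on exit code 0
  containers:
  - name: lister
    image: example.com/pod-lister:latest   # illustrative image
```

With `OnFailure`, the kubelet still retries crashes, but a successful one-shot run stays Completed instead of cycling into CrashLoopBackOff.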
Thanks for the video. One question: how would you track informer cache latency (freshness) in the event that a resync fails? For example, I set a pod informer resync to 30 seconds, but that resync may fail, causing my informer cache to be stale (meaning some new pods are missed). Is there a way I can monitor this staleness?
I am not really sure about the answer. But would we really want to implement this, considering that Kubernetes is designed in such a way that you don't have any guarantee about when a particular thing will happen? We can try tuning the resync period, maybe.