One thing should be mentioned in any case: if I store the secret as an environment variable in the deployment, I can access its value in the running container instance via the terminal with printenv or env. There, too, the values appear in plain text. So if a potential attacker gets access to the container, they can easily read the password for the database 🙂
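To illustrate the point, here's a minimal sketch of what that looks like from inside a container (the variable name DB_PASSWORD is hypothetical, simulating a value injected from a Secret):

```shell
# Simulating an env var that a Deployment injected from a Secret
# (DB_PASSWORD is a hypothetical name for this example)
export DB_PASSWORD='s3cr3t-value'

# Either of these prints the secret in plain text:
printenv DB_PASSWORD
env | grep DB_PASSWORD
```

Anyone with `kubectl exec` access to the pod can run these same commands, so env-injected secrets are only as safe as access to the container itself.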
Man, it's my first comment on YouTube. I really love your videos. I'm a beginner, and whenever I have a problem, your channel is my first choice. Keep going!
I configured my ConfigMaps, and they work perfectly with my env values from Vue. But I'm trying to get these values in the frontend pod, and I'm not able to do it... Is there any extra configuration needed? Thanks a lot for your videos.
You can also pre-encode the secret string as Base64 and put that in the secret.yaml file. That way the secret is not stored in plain text in the YAML file itself.
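A quick sketch of that encoding step (the value 'mypassword' is just a placeholder). Worth noting that Base64 is encoding, not encryption, so anyone who can read the YAML can still decode it:

```shell
# Encode the value before pasting it under the `data:` key in secret.yaml
echo -n 'mypassword' | base64     # prints bXlwYXNzd29yZA==

# Base64 is trivially reversible - it only avoids literal plain text:
echo 'bXlwYXNzd29yZA==' | base64 --decode
```

The `-n` matters: without it, echo appends a newline that gets encoded into the secret value.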
Hi Chris, I have a question about the Kubernetes ClusterIP service, which gives pods a single network point that other pods can reach internally. Where does its IP exist if I define one on my cluster, and how does a request travel from another pod to the service to retrieve data? I think the virtual IP address for the service exists on the master and not on the worker nodes, since a worker node can go down and the service is still maintained; the request from the pod goes to the master, which determines the service endpoint and routes the request to that IP. I'm just saying what would logically happen, man, so any clarification or correction would be really appreciated. Thanks for the content.
The network layer is controlled on each node by the kube-proxy service. Once you define a ClusterIP, kube-proxy (in user-space mode) installs iptables rules that capture traffic to the Service's clusterIP and redirect it to a proxy port, which then proxies the traffic to a backend Pod. So the virtual IP isn't tied to the master or any single node. Hope that makes sense.
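To make the ClusterIP idea concrete, here's a minimal sketch of a Service manifest (the names and ports are hypothetical). The clusterIP that Kubernetes assigns to this Service is virtual: it doesn't live on any one machine, and kube-proxy on every node programs the rules for it locally:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db            # hypothetical Service name
spec:
  type: ClusterIP        # the default Service type
  selector:
    app: my-db           # must match the labels on the backend Pods
  ports:
    - port: 5432         # port clients use on the ClusterIP
      targetPort: 5432   # port on the backend Pods
```

Because every node carries the same rules, a client pod on any worker can reach the ClusterIP without the request going through the master.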