Hey Marcel, great video, really appreciate your efforts 👍🏼👍🏼. I think you should take this Prometheus setup to the next level by creating a detailed video on Thanos and its components (humble suggestion). 🙂
You definitely slowed down from the last time I saw your videos! Before, I had to change the playback speed, but now it's perfect for me lol. Keep up the good work, Marcel
Thank you so much for explaining so clearly so many things that are essential for understanding how to work with these new, complex and impressive technologies
Would be great to see a video on how you are using Prometheus, Grafana, and Alertmanager for visualizations and alerting. I am assuming you are customizing the system you generated in this video to some degree. It would also be great to see how you are monitoring workloads. Is it just as simple as creating ServiceMonitors?
I only customise the verbosity of which metrics are sent out using Prometheus remote write. I don't store metrics in-cluster; I send them to a hosted platform, so I don't have to deal with data retention, state and availability. Other than that, no customisation is needed since all cluster metrics work out of the box. Combine this with Fluentd for app logs and developers have all they need to monitor. For custom metrics, only ServiceMonitors are needed, yes
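For reference, a remote-write setup like the one described could be sketched with the Prometheus Operator's `Prometheus` custom resource. The endpoint URL and the relabel regex below are placeholders, not Marcel's actual config:

```yaml
# Sketch: ship metrics out of the cluster via remote write, trimming
# verbosity so only selected metric families are forwarded.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  remoteWrite:
    - url: https://metrics.example.com/api/v1/write   # hosted platform endpoint (placeholder)
      writeRelabelConfigs:
        # only forward metrics whose names match these prefixes (illustrative)
        - sourceLabels: [__name__]
          regex: "(container|kube|node)_.*"
          action: keep
  retention: 2h   # short in-cluster retention, since long-term storage is remote
```

With this shape, in-cluster Prometheus only buffers recent data, and retention/availability become the hosted platform's problem.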
If you could make a video about monitoring Kubernetes using Prometheus without Helm or the Prometheus Operator, it would be appreciated, just to know how Prometheus monitors without CRDs.
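Without the operator or CRDs, Prometheus can discover targets directly through its built-in Kubernetes service discovery. A minimal `prometheus.yml` fragment might look like this (the annotation convention is an assumption — it only works if you add relabel rules like these):

```yaml
# prometheus.yml fragment: pod discovery without the operator or any CRDs.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod          # discover every pod via the Kubernetes API
    relabel_configs:
      # only scrape pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep
      # carry namespace and pod name through as labels on the scraped series
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```

Prometheus also needs RBAC (a ServiceAccount with list/watch on pods) for the discovery to work.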
Thanks Marcel, this video helped me get Prometheus installed. But now how do I monitor my custom apps in their namespaces? I would like to start with just basic scraping.
Nothing fancy, I have an Intel i9 CPU, but any i7 would do. Memory is key, especially when running a bunch of containers, virtual machines and k8s clusters locally. I have 32GB, which is more than enough. And an SSD: I have a 1TB Samsung SSD (one of those tiny PCIe ones 😁)
Love the video... making me think of changing my setup, this looks so much more stitched together... Any chance you can do a similar video, but where Prometheus and Grafana are off the K8S cluster? Imagine an environment with multiple K8S clusters, where Prometheus and Grafana are either #1 on a dedicated cluster, or #2 on dedicated EC2 hosts. PS: you did not mention where the prometheus.yaml file is stored to inform Prometheus about off-cluster targets.
You'll need to run a Prometheus instance in every cluster for the ServiceMonitors to work, and use remote-write to push telemetry out to your central EC2 Prometheus.
@@MarcelDempers hi hi, was starting to think the same. I was actually thinking Thanos might be a good way of pulling it together onto a central Prometheus/Grafana stack. With EKS and multiple AZs, the persistent storage etc. of course becomes a much more interesting discussion. I have one cluster spanning 3 subnets (A-App, B-Database, C-Management), and these are then spread over the 3 AZs, so for Prometheus we want to pin the instances to Management in A, Management in B, and so on...
@@MarcelDempers hi, can you share your experience on how to configure remote-write in Prometheus inside a k8s cluster and get metrics into another central Prometheus?
Great video! One question: do these manifests need to be deployed in the same namespace as our Kubernetes resources? If not, how does Prometheus know which namespaces to look in for those resources?
Hey Marcel, I followed this guide to run Prometheus & Grafana in k3s with an Ingress instead of port-forward, and I did not have to apply the datasources fix; it just works. Not sure if they have fixed it, or if it's because I'm running it in k3s.
Hey, what do you think of using annotations on a Pod/Service/Deployment etc. to tell Prometheus whether and where it should scrape? Any downsides to this over ServiceMonitors?
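For context, the annotation pattern usually looks like the sketch below. Note it's only a convention: it requires a matching relabel rule in the Prometheus scrape config, and the Prometheus Operator does not honour these annotations out of the box, which is one of the main downsides versus ServiceMonitors (the port, path and app name here are hypothetical):

```yaml
# Annotation-based scrape hints on a pod (only effective if the
# Prometheus config has relabel rules that read these annotations).
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # hypothetical app
  annotations:
    prometheus.io/scrape: "true"    # opt this pod in to scraping
    prometheus.io/port: "8080"      # metrics port
    prometheus.io/path: "/metrics"  # metrics path
spec:
  containers:
    - name: my-app
      image: my-app:latest          # placeholder image
      ports:
        - containerPort: 8080
```

ServiceMonitors, by contrast, are typed CRDs selected by labels, and support per-target settings like TLS, auth and scrape intervals declaratively.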
Besides the blackbox exporter, is there an exporter we can use to get the performance of applications? I'm used to Dynatrace and New Relic APMs to get the latency of functions being called, but I'm not sure what to use to get this data inside the container apps
It's a little more complex than dropping New Relic in there, and it's a different type of architecture. Take a look at Jaeger Tracing. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-idDu_jXqf4E.html
I'm wondering which component in this monitoring setup provides network metrics? CPU and RAM are covered, but what about the network? How does this operator get network metrics?
Hi Marcel, what is the difference between Prometheus+Grafana and the Kubernetes Dashboard if we want to monitor a k8s cluster? Or in which scenarios should Prometheus+Grafana be used over the Kubernetes Dashboard?
The Kubernetes Dashboard mainly relies on the basic API metrics used by the HPA, for example. They are highly aggregated and limited, i.e. memory+CPU only. The Prometheus solution provides far more in-depth telemetry via kube-state-metrics
hi hi, I deployed the 0.9 version... my EKS cluster is still on 1.21... so it seems the data source problem does not exist... ;) but now neither do any of the dashboards... any ideas? Do you maybe have a follow-up video to this one that covers how to add more selectors? Trying to instrument a Python Flask app deployed on EKS. Thanks
My Prometheus pods are using increasingly more RAM, up to the point where they get OOMKilled and stop monitoring until I restart them. Is there any way to minimize this? I was wondering if adding some persistence to the deployment would do any good
As far as I know, by default Prometheus keeps recent data in memory, and persistence only writes data to disk to prevent loss during restarts. You may need to research config options to see if it's possible to offload some data to disk to reduce memory usage. Alternatively, you can shard by running more Prometheus instances (one per namespace, etc.) to reduce it too
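One concrete lever, if you're on the Prometheus Operator, is to cap retention and set explicit memory limits; the field names come from the `Prometheus` CRD, but the values below are purely illustrative:

```yaml
# Sketch: limit how much TSDB data is kept and how much memory the
# Prometheus pod may use (values are examples, tune for your cluster).
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  retention: 24h         # drop old blocks sooner
  retentionSize: 10GB    # or cap total TSDB size on disk
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 4Gi        # give the pod headroom before OOMKill
```

Worth noting: Prometheus memory usage is driven mostly by the number of active series, so dropping high-cardinality labels or scraping fewer targets tends to help more than persistence does.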
Interesting! I just noticed that. Need to learn what it means (never used server-side apply). Thanks for pointing it out. Learning something new every day
I've dug into this before, and Windows is a little tricky at the moment since the kubelet and kube-state-metrics do not provide the same metrics for Windows pods as for Linux pods, i.e. they don't export the same metrics. However, metrics-server (used by autoscalers) does get CPU and memory stats for Windows pods. You can use metrics-server and look on GitHub for a project called metric-server-exporter to export pod CPU and memory. Add a ServiceMonitor for that and you can make a custom dashboard for it. That should give you observability into Windows workloads for now, until better support lands
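A ServiceMonitor for such an exporter might look roughly like this; the labels, namespace and port name are assumptions and would need to match whatever Service the exporter actually exposes:

```yaml
# Hypothetical ServiceMonitor picking up a metric-server-exporter Service
# (selector labels and port name must match your actual deployment).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metric-server-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: metric-server-exporter   # assumed Service label
  namespaceSelector:
    matchNames:
      - monitoring                  # assumed namespace of the exporter
  endpoints:
    - port: http                    # assumed named port on the Service
      interval: 30s
```

Once Prometheus scrapes it, the exported pod CPU/memory series can back a custom Windows dashboard in Grafana.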
You can persist data in Prometheus by using a persistent volume, and you can also persist data in Grafana, I believe (have not looked into it). Grafana Loki can definitely persist data too
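With the Prometheus Operator, attaching a persistent volume is a matter of adding a `storage` section to the `Prometheus` resource; the storage class and size here are placeholders for whatever your cluster provides:

```yaml
# Sketch: back the Prometheus TSDB with a PersistentVolumeClaim so data
# survives pod restarts (storage class/size are illustrative).
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: standard   # placeholder storage class
        resources:
          requests:
            storage: 50Gi
```

The operator then creates a PVC per Prometheus replica and mounts it at the TSDB data path automatically.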
Kubernetes still supports Docker images. The deprecation is about the runtime on the kubelet: Kubernetes will use containerd going forward as the container runtime and will not use Docker as a runtime. Docker images are still a standard supported by containerd, so your pods can and will still use Docker images. kubernetes.io/blog/2020/12/02/dockershim-faq/
@@MarcelDempers ty for the answer, seems to me my problem is deeper)) 1. Installed from Kubespray 2. k8s 1.24 3. (major cause) the CRI which I'm using in my cluster is Docker. I think Docker is the major cause, ty again
A comment about the "bug" on the datasource: it's because of those Network Policies they put in the manifests. I personally just remove them all, so you also won't have problems exposing Grafana via a load balancer.