
6 Easy Ways to Improve your Log Dashboards with Grafana and Loki 

Stefan List
31K views

Published: Oct 4, 2024

Comments: 59
@maxlagus9042 · a year ago
Literally the only guide that actually shows how to do stuff! Like from me and my team :)
@condla · a year ago
Thanks for the nice words to you and your team ❤
@lucsteffens · 10 days ago
Great tutorial to start with Grafana Loki! 👍👍
@vnavalianyi · a day ago
Great video! Thanks!
@simonshkilevich3032 · a year ago
Like, seriously, god bless you.
@condla · a year ago
Thank you, sounds like this solved an issue for you 😊
@krzysztofwiatrzyk4260 · a year ago
You have presented ad-hoc filters perfectly! Thank you, dear sir. I was trying to understand them from the Grafana docs, but those are just overwhelming.
@condla · a year ago
Thanks for the feedback... Anything else that's commonly used but needs clarification?
@iggyvillanueva2022 · a year ago
Hi, is there a way to get the difference between timestamps so we can compute an API latency and draw a trend chart? @condla
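(Editor's note: not from the video, but one way to sketch this in LogQL, assuming each log line already carries a numeric duration field — the selector, `duration_ms`, and `endpoint` below are placeholders — is to `unwrap` the field into sample values and chart a quantile over time:)

```logql
# p95 latency per endpoint as a trend, assuming a parseable duration_ms field
quantile_over_time(0.95,
  {job="api"}                # placeholder stream selector
    | logfmt                 # or | json, depending on the log format
    | unwrap duration_ms     # use the numeric field as the sample value
  [$__interval]
) by (endpoint)
```

If the logs only contain raw start/end timestamps rather than a duration, the subtraction has to happen at ingest (e.g. in the collector), since LogQL aggregates per line.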
@bulmust · 8 months ago
It is great. Thanks.
@condla · 5 months ago
Thank you 🙂
@VenkateshMurugadas · 2 months ago
You saved me a lot of time. Great video
@MattFine · a year ago
This was very well done. Thank you. Please continue to make additional videos like this tutorial.
@condla · 9 months ago
Thanks for your kind words! Considering this ;)
@mex0b0y · 11 months ago
Thanks bro! It's an amazing explanation of how to use Loki more effectively.
@condla · 9 months ago
Thanks, I'm happy you found the video useful!
@tobiashelbing1233 · a year ago
Thank you very much! [in German]
@condla · a year ago
Haha, you're welcome 😊
@nguyentuantu7017 · 3 months ago
Very useful and realistic.
@saeedsafavi26 · 6 months ago
It was awesome, I learned a lot ❤
@fumaremigel · a month ago
Great video! Please make another one like this. For Prometheus maybe? Or Tempo?
@DiTsi · 5 months ago
Great! Thank you
@Babe_Chinwendum · a year ago
Thank you so much. This was really helpful!
@condla · a year ago
Thanks for the feedback. I wrote a blog post that accompanies the video, released yesterday: grafana.com/blog/2023/05/18/6-easy-ways-to-improve-your-log-dashboards-with-grafana-and-grafana-loki/
@Babe_Chinwendum · a year ago
@condla Thank you so much! I was able to complete a task thanks to this. My logs were in JSON format, so logfmt was not parsing them, and I guess ad-hoc variables would not work in that case.
@Lars-pi4vx · a year ago
Great video! I wish there was more. I wonder if there is any solution to do such ad-hoc filters with "regex"- or "pattern"-parsed logs?
@condla · a year ago
Thanks 😊. Yes, you can use the regex/pattern parser to do any kind of ad-hoc filtering. Examples depend on the expressions and patterns, of course. What's your log pattern, and what would you like to filter for?
@Lars-pi4vx · a year ago
Hi @condla, thanks for your quick reply! This is my LogQL for a huge file with more or less unstructured log rows, which shows the number of all errors that occurred in the selected period (the named capture groups were eaten by the comment form, reconstructed here from context):

sum by(logMsgMasked) (count_over_time(
  {env=~"$env", job="core-files", filename=~"activities.log"}
    |~ `(WARNING|ERROR)`
    | regexp `^\[(?P<date>.+) (?P<time>.+)\]\[PID:(?P<pid>\d+)\] level\.(?P<level>\S+): (?P<logMsg>.*)`
    | regexp `((a|A)ccount #?(?P<account>\d+))`
    | label_format logMsgMasked="{{regexReplaceAll \"(\\\\d+)\" .logMsg \"\\\\d+\"}}"
    | line_format "{{.logMsgMasked}}"
  [$__range]))

Suppose there was a log message in the "logMsg" pattern match section: "Memory for account 4711 exhausted by 123456 bytes." This will be converted to "Memory for account \d+ exhausted by \d+ bytes." The converted message should appear in an ad-hoc filter panel. Activating the ad-hoc filter on it should display all matching messages in a corresponding raw-message panel below it, regardless of the number of bytes or the account where the error occurred. I hope I have been able to describe my problem clearly enough.
@agpjustordinaryviewer · a year ago
Currently my office is working on some pilot projects to build a centralized logging and metrics dashboard using Grafana Loki. We found that Grafana and Loki are powerful tools; however, it is quite difficult to find references on Google, and this video is very insightful for Grafana Loki. There is one thing that is not working in our Grafana (v10.0.1), though: if we switch to the Instant query type, all the different values in a pie chart get aggregated, so it displays only one value. This issue doesn't happen with the Range query type. Have you ever heard about this issue?
@condla · a year ago
Hi @albogp, thanks for the feedback. I haven't heard about this yet, but you can ask at community.grafana.com or join the community Slack and ask your question there: grafana.slack.com
@joffemannen · 8 months ago
Nice! Got me going. I'm new to LogQL and Grafana, have some Splunk experience, and am struggling to translate what I have. But this was a nice start. Any recommended videos as a next step? I'm still struggling with a few things:
1) The base query is implemented in each panel - a lot of maintenance, and I guess the query costs CPU x number of panels.
2) I have a few regexes; I guess I should consider implementing them in the proxy in front of Loki so they are available as simple filters, for performance and maintenance.
3) The drill-downs with data links - I only manage to do them at one level, and what corresponds to your "cluster" filter gets stuck for some reason. I want to drill down about 4 levels without making 4 dashboards with 8 separate panels and separate queries, because that's a lot of maintenance.
4) Doing some arithmetic - I guess I have to learn transformations - like error rate in %, not in "per second".
5) Combining similar values in the same graph - some of my log entries have 4 timings (time to first byte, request end time, etc.) - right now these sit in 4 panels.
6) Doing the same, but for logs in BigQuery.
I'm sure I'll figure some of this out on my own, but one more kick in the right direction would save me some pulled hairs.
@condla · 5 months ago
Hey, quite a lot of questions for a small comment block 😅 but let's try: 1) Have a look at grafana.com/blog/2020/10/14/learn-grafana-share-query-results-between-panels-to-reduce-load-time/. Loki also does a lot of caching, and take a look at recording rules to speed up high-volume Loki queries: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qGyoJPUIOz8.htmlsi=BtYmT94Bt5_U21O3
@condla · 5 months ago
2) It depends. Generally you want to set as few labels as possible, with rather low cardinality, during ingest; it's also rather bad practice to set as a label something that is already in the log message. At query time, on the other hand, you want to use as many labels as possible to speed up the query.
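(Editor's note: a sketch of that trade-off with placeholder names — keep the stream labels few and low-cardinality, and pull high-cardinality values out of the line at query time instead:)

```logql
# Few low-cardinality stream labels set at ingest;
# high-cardinality fields (user_id, status, ...) extracted on read.
{cluster="prod", app="checkout"}
  | logfmt              # parse fields at query time
  | status = `error`    # filter on an extracted field, not an ingest label
```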
@condla · 5 months ago
3) Grafana Scenes enters the conversation: "Hi there, let me help you." grafana.com/developers/scenes/ Scenes can help you connect multiple dashboards while keeping the context when jumping back and forth.
@condla · 5 months ago
4) Yes, learn transformations, but you can:
* also do arithmetic on queries directly
* depending on the panel type, you will already have suggestions in the Suggestions tab of the visualization section that show %
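(Editor's note: for example, an error rate in % can be computed directly in the query, no transformation needed — a sketch with placeholder selectors and field names:)

```logql
# error rate in percent: error-only count divided by total count
100 *
  sum(count_over_time({app="checkout"} | logfmt | level = `error` [$__interval]))
/
  sum(count_over_time({app="checkout"} [$__interval]))
```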
@condla · 5 months ago
5) Just click "Add query" below the first query of the panel and add as many queries to one panel as you want.
@joffemannen · 2 months ago
A more concrete question, maybe? I was counting top user agents, but now that our traffic has increased we have more than 2000 different user agents per time unit and I run into the max-series issue, with a query like

topk(50, sum by(user_agent_original) (count_over_time({deployment_environment="prod", event_name="request"} [$__range])))

where I naively thought the topk(50, ...) would protect me from that limit. It's an instant query, showing a table view with the values as a gauge. I could parse the user agent harder to get the major browser version and bring the options below 2000, but this is structured metadata, so I can't do that in LogQL; I have to do it in the collector (or in Promtail?). I can't increase the 2000 limit, and I don't want to. Is there any way to rewrite the query to get around this issue?
@girirajb.c3673 · 9 months ago
Can you send me your Promtail configuration for the above dashboard, please?
@condla · 5 months ago
There's nothing notable done in Promtail. What's your challenge?
@AjayKumar-lm4yr · a year ago
How can I store Grafana Loki logs in Azure Blob Storage?
@condla · 9 months ago
There are several ways you can accomplish this: either host your own Loki and use Azure Blob Storage as the storage layer, or ingest the logs into Grafana Cloud Loki and configure an export job (grafana.com/blog/2023/06/08/retain-logs-longer-without-breaking-the-bank-introducing-grafana-cloud-logs-export/).
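(Editor's note: for the self-hosted route, a hedged sketch of the relevant Loki config fragment — account and container names are placeholders, and exact fields may vary by Loki version, so check the storage docs for your release:)

```yaml
# Point Loki's object store at an Azure Blob Storage container (sketch).
storage_config:
  azure:
    account_name: mylokistore          # placeholder storage account
    account_key: ${AZURE_STORAGE_KEY}  # or use a managed identity
    container_name: loki-chunks
schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: azure
      schema: v13
      index:
        prefix: index_
        period: 24h
```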
@photographymaniac2529 · 8 months ago
This is not working for unstructured logs, where we use a pattern to match.
@condla · 5 months ago
Hey, generally speaking it works the same way. You just need to define the pattern or a regex first to extract the information you want to visualize. Which metrics do you want to extract from which type of logs?
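(Editor's note: for instance, assuming nginx-style access logs — the selector and field names below are placeholders — the pattern parser can extract fields that then feed a metric query just like logfmt/json would:)

```logql
# count 5xx responses per path from unstructured access-log lines
sum by (path) (
  count_over_time(
    {job="nginx"}
      | pattern `<ip> - - <_> "<method> <path> <_>" <status> <_>`
      | status =~ `5..`
    [$__range]
  )
)
```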
@abhishekkhanna1349 · a year ago
Can you please share the application you used to create this dashboard?
@condla · a year ago
You mean the application as in Grafana for creating dashboards? And Grafana Loki as the solution to store and query logs? I'm confused, because I put this in the title. If you search online you should find tons of resources for both.
@abhishekkhanna1349 · a year ago
@condla I wanted the application code that was generating the logs and traces. Thanks a ton for the video!!
@condla · a year ago
Ahhhh 😁 I used a dummy observability application that can be deployed to test things like this. Follow the link for more information: microbs.io
@bhagyashrighuge4170 · a year ago
Can we use it for JSON data?
@condla · a year ago
Yes, of course. You would just use the json parser instead of the logfmt one.
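(Editor's note: the swap is literally one token — a sketch with placeholder names:)

```logql
# logfmt version:  {job="app"} | logfmt | level = `error`
# JSON version:
{job="app"} | json | level = `error`
```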
@bganesh3413 · a year ago
Please don't add music, it is very distracting.
@condla · a year ago
Thanks for the feedback.
@wildflowers465 · 9 months ago
It didn't bother me.
@iyiempire4667 · a year ago
Just type a simple query: fields @message | filter @message like /$Filter/ | limit 100. Don't make it hard.
@condla · 5 months ago
Hi, thanks for the comment. This comes with a trade-off, and any query language has a certain learning curve; I'm trying to reduce Loki's with this video. In the near future you will see Grafana implement an Explore UI that lets you query and aggregate logs without any query language at all, while users can still use the full power of LogQL if they want.
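(Editor's note: for comparison, that kind of grep-style filter is also a one-liner in LogQL — a sketch with a placeholder selector, `$Filter` being a dashboard variable; Grafana's panel line limit takes the place of `limit 100`:)

```logql
{job="app"} |~ `$Filter`
```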
@utpxxx · a year ago
Will this work with the json parser for the aggregation if the fields are not already labels in Loki?
@condla · a year ago
That's correct. Given that your log lines are in JSON format, you can parse them on read (at query time) with Loki's json parser.
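(Editor's note: i.e. you can aggregate on a field that exists only after the parser runs — a sketch with placeholder names, skipping lines that fail to parse via the `__error__` filter:)

```logql
# group by a field extracted at query time, not an ingest label
sum by (status_code) (
  count_over_time(
    {job="app"} | json | __error__=""
  [$__range])
)
```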