As always, if you liked the video, I would be happy if you leave a thumbs up and subscribe for more DevOps content 🙂
► This video was sponsored by CNCF - www.cncf.io 🙌🏼

▬▬▬▬▬▬ T I M E S T A M P S ⏰ ▬▬▬▬▬▬
0:00 - Intro
0:06 - Why do we need log data?
0:29 - Challenges of Logging
1:21 - Challenges of Logging in Kubernetes
3:21 - How does Fluent Bit work?
6:47 - Fluent Bit in Kubernetes
7:21 - Advantages of Fluent Bit
8:26 - Fluent Bit vs. Fluentd
It wasn't clear to me near the end what the difference between Fluent Bit and Fluentd is. The initial comparison makes it seem like Fluent Bit is superior in terms of resource usage and overall footprint, but then we can combine them, and why? Is it that Fluent Bit sacrifices feature set to enable this better performance, and therefore we should prefer it until we need something more advanced?
They do overlap in functionality quite a bit, and you can use fluent-bit by itself much of the time. The difference is that fluent-bit is designed as a log forwarder, whereas fluentd is designed as a log aggregator: fluent-bit is meant to sit with the services producing the logs and preprocess them before forwarding them on, whereas fluentd is intended to receive log data over a network.
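To sketch that forwarder role: a minimal Fluent Bit config (classic INI format) that tails local log files, adds the hostname, and forwards them to a central Fluentd over the forward protocol could look roughly like this. The paths, tag, and hostname `fluentd.internal` are placeholders, not anything from the video:

```ini
[SERVICE]
    Flush        5
    Log_Level    info

[INPUT]
    Name         tail
    Path         /var/log/app/*.log
    Tag          app.*

[FILTER]
    # light preprocessing before forwarding: stamp each record with the host
    Name         record_modifier
    Match        app.*
    Record       hostname ${HOSTNAME}

[OUTPUT]
    # ship to a central fluentd aggregator using the forward protocol
    Name         forward
    Match        *
    Host         fluentd.internal
    Port         24224
```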
@@talideon Hmm, I think I see what you're saying. From what I understand, when I use fluentd I have td-agent next to the service, which sends the actual logs to fluentd. It sounds like fluent-bit can essentially replace td-agent, in that it sits on the machine where the service is to get, process, and then forward the logs.

So in my scenario I have: services + (td-agent on each machine the services run on) -> fluentd -> outputs like Elasticsearch. With fluent-bit, it sounds like you can do either:

A) services + (fluent-bit on each machine the services run on) -> outputs like Elasticsearch, or
B) services + (fluent-bit on each machine the services run on) -> fluentd (to collect the logs all in one spot) -> outputs like Elasticsearch.

It would seem like B would be better if you are sending to multiple outputs, since then you have just one place to change. I.e. if you wanted to also send info to Datadog, you'd just change it in fluentd, whereas if you had 5 different machines each with their own fluent-bit, you'd have to change the configuration of all of them to start sending to the new service. That may or may not be a big effort depending on how your architecture is set up, but either way it's messier, since you have to make the change and ensure it's consistent across more machines. The extra step of aggregating/collecting all the logs in one place at fluentd is extra work/processing, though. Fluent-bit sounds a lot lower-cost in terms of memory use. I'd be curious how fluent-bit compares to td-agent.

Edit: I read that "td-agent is a stable distribution package of Fluentd", which I think means that td-agent actually is fluentd, or a packaging of it. That said, it sounds like fluent-bit is preferred over td-agent for the forwarding portion, at least on each machine that has the service you wish to get logs from.
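Option B's "one place to change" point can be sketched with a hypothetical Fluentd aggregator config: it receives forwarded records on port 24224 and fans them out to multiple backends via the `copy` output. The hostnames and API key are placeholders, and the `elasticsearch` and `datadog` stores require the `fluent-plugin-elasticsearch` and `fluent-plugin-datadog` plugins to be installed:

```
# fluentd aggregator: single place to add or remove destinations
<source>
  @type forward
  port 24224
</source>

<match app.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch.internal
    port 9200
  </store>
  <store>
    # adding Datadog here changes nothing on the fluent-bit machines
    @type datadog
    api_key YOUR_API_KEY
  </store>
</match>
```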
I'm curious about the reliability, as I heard fluentd can store logs on the hard drive and use that as a means to hold and send the logs later if something goes down. I wonder if fluent-bit has that feature.
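For what it's worth, Fluent Bit does offer filesystem buffering so that data survives backpressure or a destination outage. A minimal sketch, where the storage path, limits, and forward destination are illustrative placeholders:

```ini
[SERVICE]
    # enable on-disk buffering for inputs that opt in below
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 5M

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    # buffer this input's chunks on the filesystem, not just in memory
    storage.type  filesystem

[OUTPUT]
    Name                      forward
    Match                     *
    Host                      fluentd.internal
    Port                      24224
    # cap how much queued data may accumulate on disk for this output
    storage.total_limit_size  1G
```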
@@timothyn4699 I know that this is a year late, but fluent-bit is the way to go now, with OpenTelemetry becoming the industry-standard observability collector and protocol. The best practice would be to have fluent-bit running on your servers or in your k8s clusters and have them forward logs and metrics to an OTel collector cluster over HTTP. The OTel collector can then send data to any backend tool that supports it (which is basically all of the big ones), so you can visualize them with Grafana, New Relic, etc. The method of running a log scraper and then sending to an aggregator like Fluentd works fine, but it is much easier to maintain an observability stack across thousands of apps with the fluent-bit -> OTel collector -> observability backend pattern.
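The fluent-bit -> OTel collector leg of that pattern can be sketched with Fluent Bit's `opentelemetry` output plugin. The collector hostname is a placeholder, and 4318 is the conventional OTLP/HTTP port:

```ini
[OUTPUT]
    # ship logs and metrics to an OpenTelemetry collector over OTLP/HTTP
    Name         opentelemetry
    Match        *
    Host         otel-collector.internal
    Port         4318
    logs_uri     /v1/logs
    metrics_uri  /v1/metrics
    tls          on
```

From the collector onward, exporters handle fan-out to whichever observability backend you use, so the per-node Fluent Bit configs stay untouched when backends change.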
Hi Nana, I would like to learn more about troubleshooting pod errors such as CrashLoopBackOff, evicted pods, and probe failures. Please make a video on that.
I will add something to it. As you mentioned, it is implemented as a DaemonSet, but I feel it has more use cases when implemented as a sidecar. I have implemented this in a few projects.
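A rough sketch of that sidecar pattern: the app writes logs to a shared volume, and a Fluent Bit container in the same pod reads them. The image names, mount path, and pod name are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: my-app:latest            # placeholder application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # app writes its log files here
  - name: fluent-bit                # sidecar tails the same volume
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared between app and sidecar
```

Compared to the DaemonSet approach, a sidecar gives per-workload log pipelines at the cost of running one extra container per pod.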
Great video as always! Thanks, Nana. Can you please make a video where you give a quick demo of how fluent bit works, how to create config files, etc. :)
You are the true DevOps guru!! I like the way you explain. 🙌 The DevOps tool of the month series is awesome ❤️ Btw, I have a query, if you can help answer it: considering a large amount of data to be ingested and analyzed for metrics purposes, would you suggest Fluent Bit over Loki (from Grafana Labs)? To me Fluent Bit is much more appealing in terms of its low footprint and ability to serve a variety of different landscapes.
Please kindly make us a video on how to install and configure Fluent Bit in a k8s cluster, and how to link the collected logs and deposit them in an S3 bucket ...
Dear Nana, good afternoon. Can you please make a video explaining how I can use Fluent Bit to store data in an S3 bucket, and then visualize it in Grafana?
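As a starting point for the S3 part, Fluent Bit ships an `s3` output plugin. A hedged sketch, where the bucket name, region, size limits, and key format are placeholders to adapt (the plugin picks up AWS credentials from the usual environment/instance sources):

```ini
[OUTPUT]
    Name              s3
    Match             *
    bucket            my-log-bucket
    region            us-east-1
    # buffer records locally and upload in batches
    total_file_size   50M
    upload_timeout    10m
    # date/tag-partitioned object keys, gzip-compressed
    s3_key_format     /logs/%Y/%m/%d/$TAG/$UUID.gz
    compression       gzip
```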
It is very useful for me. I updated the fluent-bit Splunk Helm chart from version 0.12.3 to 0.16.2, but the container is not coming up; I am getting an error message like "backoff". How do I resolve this issue? Can you please help me out with this?