Confluent
Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides industries ranging from retail, logistics, and manufacturing to financial services and online social networking with a scalable, unified, real-time data pipeline that enables applications from large-volume data integration to big data analysis with Hadoop to real-time stream processing. Backed by Benchmark, Data Collective, Index Ventures and LinkedIn, Confluent is based in Mountain View, California. To learn more, please visit www.confluent.io.
Ignite Series
6:02
4 hours ago
confluent investor testimonials
6:18
2 months ago
Introducing Gitpod for Confluent Developer
6:00
2 months ago
The Confluent Q1 ‘24 Launch
4:31
2 months ago
4 Key Types of Event-Driven Architecture
9:19
2 months ago
Comments
@thisismissem 15 hours ago
What was the pattern of an adapter testing a new service & collecting results? I feel like I vaguely heard of it first from (I might be getting the name wrong here) GitHub Scientist?
@Fikusiklol 1 day ago
Is this different from ksqlDB and the stream processing happening there? Sorry for being ignorant, just genuinely confused :)
@B-Billy 1 day ago
Absolutely amazing explanation!! Could you make a video on Kafka and why it is so fast and reliable?
@Zaibacu 1 day ago
Didn’t expect much, but found it extremely useful for our case. Great presentation!
@ConfluentDevXTeam 2 days ago
Wade here. The first time I ever decomposed a monolith, it was an eye-opening experience. We had this live system that needed to stay running, and yet we needed to replace parts of it at the same time. It's the old joke of trying to replace the wings on an airplane while it's in flight. Actually, that's a fun thought experiment that is kind of relevant to this video. How would you replace the wings of a plane while it's in flight? Step 1: Build temporary wings on top of the existing wings (call them "proxy wings" if you like). Step 2: Remove the old wings. Step 3: Build new wings in their place. Step 4: Remove the temporary wings. There are a lot of similarities to the process I describe in this video. But what do you think? If you had to decompose a monolith, would you do it differently? Would you replace the entire monolith at the same time? Would you alter the process I've outlined, or add any additional steps? I'd love to hear your thoughts.
@ConfluentDevXTeam 2 days ago
Hey, it's Lucia! Hopping in here to say that if you're working on transforming, cleaning, and connecting data, we've got resources for you. Head over to Confluent Developer and check out more of our demos: cnfl.io/3X9niaV PS- like I said in the video, I'm happy to answer questions in the comments! Chime in.
@shintaii84 2 days ago
I'm sorry to say, but I did not learn anything, besides that we can create a cloud account… I think I can build a script that does this in a few hours with any DB. So what is unique here? Why not just a cron job running my script, saving to Postgres, or even, without saving, just putting events on the topic line by line?
@DaveTroiano 2 days ago
Hello, demo developer Dave here :) Thanks for tuning in! I would call out these uniqueness points, particularly compared to the "script into any DB" idea:

1. Ease of use, both in terms of development and deployment. Connector config plus SQL running in a cloud engine is easier than scripting and needing to run that script reliably. To your point, neither approach is very difficult if the goal is to get to a demo point, but the runtime aspect in particular has many big, hard things lurking underneath if you go beyond a demo (next point...).

2. All of the other solution hardening that you would face doing this for real is a lot easier with this approach compared to rolling your own: resilience with respect to REST API flakiness, fault tolerance with respect to connector infrastructure, logging, etc. Where do you want to build all of these features that you'll probably need post-demo, and how much time do you want to spend developing and maintaining them on your own? In other words, many people could get to the equivalent demo point via a script and Postgres pretty quickly, but the marginal effort needed to harden it would be significantly higher.

3. Ad hoc streaming / real-time analytics. This is mostly a response to the question "why not use any DB?" This is more a demo about getting started, but it then enables real-time answers to ad hoc questions, say a QoS-type question like "how many aircraft are taxiing *currently*, and what's the avg / max taxiing time in the past minute?" A cron job and Postgres might work for batch and for answering these kinds of questions after the fact, but the streaming aspect is unique and the reason to be looking at technologies like Kafka and Flink (many more details on the benefits of Flink in this Confluent Developer course: developer.confluent.io/courses/apache-flink/intro/ ). In the case of this example, it's seconds-type latency to get from "data available via API" to "data reflected in a streaming aggregation", given the latency delay inherent in this particular API. Still pretty snappy, and a difficult "time to live" bar to achieve with a cron job and Postgres.

4. This is more of a "coming soon", but I would expect data quality rules to become available for connectors (not supported as of today). This feature would unlock data quality at the source and help developers proactively address REST APIs changing under the rug. (In my experience, REST APIs can be a bit of a wild west when it comes to format reliability.) More here: docs.confluent.io/cloud/current/sr/fundamentals/data-contracts.html. This would be a demo enhancement when that feature becomes available, but I'm thinking ahead to yet another problem that developers would face in building a pipeline like this in production, opting for a managed quality-control feature rather than having to implement it themselves.

Cheers 🙂 Dave
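To make point 3 concrete, here is a minimal sketch of the kind of windowed aggregation Dave describes, written with Flink's Java Table API. The topic name, field names, broker address, and the 'TAXIING' phase value are assumptions invented for illustration, not the demo's actual schema:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TaxiingAggregation {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.inStreamingMode());

        // Hypothetical source: flight-status events arriving on a Kafka topic.
        tEnv.executeSql(
            "CREATE TABLE flight_status (" +
            "  aircraft_id STRING," +
            "  phase STRING," +        // e.g. 'TAXIING'
            "  taxi_seconds INT," +
            "  event_time TIMESTAMP(3)," +
            "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'flight-status'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'scan.startup.mode' = 'latest-offset'," +
            "  'format' = 'json'" +
            ")");

        // One-minute tumbling window: how many aircraft are taxiing,
        // plus avg/max taxi time, updated continuously as events arrive.
        tEnv.executeSql(
            "SELECT window_start," +
            "       COUNT(DISTINCT aircraft_id) AS taxiing," +
            "       AVG(taxi_seconds) AS avg_taxi_seconds," +
            "       MAX(taxi_seconds) AS max_taxi_seconds" +
            "  FROM TABLE(TUMBLE(TABLE flight_status, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))" +
            " WHERE phase = 'TAXIING'" +
            " GROUP BY window_start, window_end").print();
    }
}
```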
@prashantpathak7711 3 days ago
Can you please share the code link? I was trying to follow along and ran into some issues.
@ConfluentDevXTeam 2 days ago
Wade here. All of the content presented in the video, and more, can be found on Confluent Developer in the "Building Apache Flink Applications in Java" course: developer.confluent.io/courses/flink-java/overview/
@jacobwwarner 3 days ago
Nice video. I never thought that general REST API Request-Response systems were different from EDA Microservices.
@123oof 3 days ago
Tim is such a good presenter OMG, I just want to learn more....
@quentinautomation 4 days ago
I have learned so much from these videos. Thank you.
@SagarD-sg8ej 6 days ago
The 4 minutes on the producer and 5 minutes on the consumer gave me good knowledge.
@Nominal_GDP 7 days ago
I feel like he's a bit biased
@ConfluentDevXTeam 5 days ago
👀 I try not to be. But there are many valid ways to solve a problem; I just tend to prefer event-driven architectures because I've had a lot of success with them.
@Sulerhy 7 days ago
Incredibly well-visualized video. Thank you so much!
@ajitnandakumar 8 days ago
Hi Adam, on reactivity: I understand the difference between async and request/response, but what is the conclusion, and what is the difference in reactivity between the two architectures? This was not clear.
@GreatTaiwan 8 days ago
Can we use it with RabbitMQ instead?
@RaushanKumar-co3wj 8 days ago
Awesome.
@user-ev9jg6ts6e 8 days ago
Excellent as always. Thanks, Wade, I highly appreciate it.
@ConfluentDevXTeam 5 days ago
Wade here. Glad you enjoyed it.
@user-ev9jg6ts6e 8 days ago
The original isFraudulent method also seems to be a query method with a side effect of saving a transaction to the database, thus violating the CQS principle.
@ConfluentDevXTeam 5 days ago
Wade here. Yes indeed. Definitely not a great implementation.
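For anyone who hasn't run into the term: CQS (command-query separation) says a method should either answer a question or change state, never both. Here is a minimal sketch of the violation and one way to split it; the class, method, and repository names are hypothetical, not the code from the video:

```java
public class FraudChecks {

    record Transaction(String id, long amountCents) {}

    interface TransactionRepository {
        void save(Transaction tx);
    }

    // Violates CQS: a query that also mutates state by saving the transaction.
    boolean isFraudulentAndSave(Transaction tx, TransactionRepository repo) {
        repo.save(tx); // hidden side effect inside a "question"
        return tx.amountCents() > 1_000_000; // placeholder fraud rule
    }

    // CQS-friendly split: a pure query with no side effects...
    boolean isFraudulent(Transaction tx) {
        return tx.amountCents() > 1_000_000;
    }

    // ...and a separate command that performs the write.
    void recordTransaction(Transaction tx, TransactionRepository repo) {
        repo.save(tx);
    }
}
```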
@andresdigi25 8 days ago
I know all the smart people use Kafka or other systems like SNS, SQS, etc. But something always bothered me: why can't you use a database to do that? A well-tuned Postgres in RDS, or Dynamo? I mean, store events and let consumers and producers read/write from that DB. Why are Kafka and all these systems preferred?
@ConfluentDevXTeam 8 days ago
Hey there, Adam here. A lot comes down to business needs and your willingness to support custom solutions. People like to use Kafka because it handles a lot of problems out of the box, without you having to customize anything. You can scale to millions of events per second, get high availability that survives individual node failures without degradation or halting of services, and get automatic expiry, cleanup, compaction, schema management, multi-topic transactions, security, access controls, rate limiting, and quotas, all in one package. There's no custom code required, and just as you would have a managed DB on RDS, for example, you can also get managed or serverless Kafka.

Another big reason people like to use Kafka is the Kafka Connect framework. When your data is scattered across several systems, it can be problematic to access it in near real-time from other locations. Kafka Connect lets you pipe your data from your databases to Kafka directly, so that you can then read it or route it however you want, wherever you want. Accessing the data is simple, as you have a wide range of Kafka clients in different languages to integrate into your applications.

Not to turn this into an essay, but the gist is that Kafka offers a lot of functionality that you're not going to get if you try to roll your own message broker using Postgres or DynamoDB. That isn't to say you can't do it, but it's one of those things where it may just be simpler to get the tool built for the specific job (near-real-time event brokering) and use that. But again, it depends heavily on your business needs!
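As a small taste of the "accessing the data is simple" point, here is a minimal Java producer sketch using the standard Kafka client. The topic, key, payload, and broker address are placeholders for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumes a local test broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                 // wait for the full ISR to acknowledge
        props.put("enable.idempotence", "true");  // no duplicates on client retries

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace(); // broker/network failures surface here
                } else {
                    System.out.printf("wrote to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any outstanding records
    }
}
```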
@andresdigi25 8 days ago
@ConfluentDevXTeam Adam, thanks for taking the time to give such an in-depth response. BTW, I am not saying Kafka is a bad idea. In my company I am one of the people who advocates a lot for using EDA with AWS services such as SNS, SQS, and EventBridge. But sometimes I question myself: are we taking all the juice from these tools, and do they work better under heavy conditions than a well-customized solution like a DB in RDS? At the end of the day, brokers have a persistence layer to store simple messages. Also, I guess this comes down to the people you have, whether they have experience with these modern tools or are more aligned to classic DBMS systems. And also the money you have to spend on a solution, and how big it will become in the future. Thanks again.
@AftercastGames 7 days ago
I also prefer databases whenever possible, but the question is, what happens when the database goes down? Event queues have their advantages, especially when money is involved.
@ConfluentDevXTeam 9 days ago
Hi everyone, Gilles here, I'm thrilled about the new Data Portal and the time it will save for developers. Do you have any suggestions for features or improvements we should add?
@debabhishek 10 days ago
One little detail I am searching for about fetch / consumer poll: a consumer subscribed to one or more topics can own several partitions, and the leaders of those partitions live on different brokers (I don't know whether you read from leaders or from the ISR list, but even the ISRs may fall on different brokers). What does the consumer do in such cases: forward fetch requests to multiple brokers, then collate the results and present them to the client? What if one broker is responding slowly, or not responding at all? If a slow broker is ignored, it may keep responding slowly and be silently ignored. Can you please write a line or two about consumer fetch?
@debabhishek 10 days ago
All the points are interesting. I was wondering whether, after the consumer fetch, we can explore the thread-pool option to speed up processing, and I got validation for that here. Another interesting point is over-committing by consumers. Does that mean I don't need to commit (or ack) every record? Suppose my consumer is reading from topics A and B (both having two partitions); is it enough to commit the last offset for each of A1, A2, B1, and B2? Even though I am processing many records from these topic partitions, I am committing (acking) only the last offset for each partition. @confluent please correct me if I am wrong.
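That reading is correct: Kafka commits are tracked per partition, and committing offset N acknowledges every record below N. Here is a minimal sketch with the standard Java consumer; the topic names A and B come from the comment above, while the group id and broker address are placeholders. One caveat if you add a thread pool: only commit an offset once everything below it has actually finished processing.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PerPartitionCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false"); // we commit manually below
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("A", "B"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    // Keep only the highest offset seen per partition; the +1 means
                    // "the next record I want", which is the commit convention.
                    toCommit.put(new TopicPartition(record.topic(), record.partition()),
                                 new OffsetAndMetadata(record.offset() + 1));
                }
                if (!toCommit.isEmpty()) {
                    consumer.commitSync(toCommit); // one commit covering A1/A2/B1/B2
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%s-%d@%d: %s%n",
            record.topic(), record.partition(), record.offset(), record.value());
    }
}
```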
@ConfluentDevXTeam 10 days ago
Wade here. In this video, I present a fairly simple API. I'd love to know how you might make it better. Would you use things like lambdas instead of a callback? Would you always return a value so methods can be chained together? Would you use Futures or some other async construct? Let me know what you think.
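To anchor the question, here is a hypothetical sketch of two of the styles Wade mentions: the same operation exposed once as a callback and once as a CompletableFuture. The class and method names are invented, not the API from the video:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class EventClient {

    // Callback style: simple to start with, but nested callbacks get awkward.
    public void send(String event, Consumer<String> onAck) {
        onAck.accept("acked:" + event); // stand-in for real async I/O
    }

    // Future style: composable and chainable, and easier to test.
    public CompletableFuture<String> sendAsync(String event) {
        return CompletableFuture.supplyAsync(() -> "acked:" + event);
    }

    public static void main(String[] args) {
        EventClient client = new EventClient();

        client.send("order-created", ack -> System.out.println(ack));

        client.sendAsync("order-created")
              .thenApply(String::toUpperCase)  // operations chain naturally
              .thenAccept(System.out::println)
              .join();
    }
}
```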
@sefumies 10 days ago
Started with good intentions but soon turned into a Confluent marketing guide. Also, I'm certain it wasn't Confluent who came up with the terms serialization and deserialization! 😂😂😂
@moinkhan 10 days ago
What a great overview! Thank you.
@rosius846 11 days ago
Watching this video makes me think hard about how Dapr enables devs to build secure and reliable microservices with less friction.
@Algoritmik 11 days ago
A series of tubes...
@ConfluentDevXTeam 10 days ago
Wade here. Well, it's definitely not a big truck. That would be more like a batch process.
@Algoritmik 9 days ago
@ConfluentDevXTeam I thought it was a reference to en.wikipedia.org/wiki/Series_of_tubes :D
@jairajsahgal7101 12 days ago
Thank you
@marcialabrahantes3369 12 days ago
Curious what you implied by "Protobuf is not human-readable like JSON" in the context of renaming a field? In my experience, any rename in Protobuf will be a breaking change, as all your clients will need to be updated. So most changes are made to be backwards compatible in the way you mentioned (a new field is added; the existing field gets ignored going forward). So I'm not sure what the contrasting argument is.
@ConfluentDevXTeam 11 days ago
Wade here. Protobuf is a binary encoding. When you write a Protobuf message, if you try to inspect it, you end up with the encoded result, which isn't human-readable. This is different from JSON, which is encoded as plain text and is easy to read and understand. JSON is designed to be read and understood by humans. Protobuf is not.

As for field names, Protobuf doesn't store field names in the encoded results. Instead, fields are identified by what is essentially a number indicating the position of the field. This has the advantage of reducing the size of the data, but it also means that you should be able to change the name without any issues. As long as the number stays the same, Protobuf doesn't care what name you give the field. Renaming a field is considered a wire-compatible change. If you have found yourself with breaking changes after renaming a field in Protobuf, then your change was probably more complicated than a simple rename, or you may have changed the number assigned to the field.
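A minimal schema sketch of what Wade describes; the message and field names are hypothetical. Only the field numbers travel on the wire, which is why the rename stays compatible:

```proto
syntax = "proto3";

// Version 1 of a hypothetical message. On the wire, a Transaction is just
// (field number, wire type, value) entries; the names below never leave
// the .proto file.
message Transaction {
  string id     = 1;
  int64  amount = 2;
}

// Version 2: "amount" renamed to "total_cents" but still tagged 2, so old
// and new readers stay wire-compatible. Re-numbering the field (e.g. = 3)
// is what would actually break decoding.
//
// message Transaction {
//   string id          = 1;
//   int64  total_cents = 2;
// }
```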
@rvb3939 13 days ago
Hi Wade, great job! Thank you for the high-quality video and explanation. I believe you've nailed this topic and use-case. Can't wait to see your next videos. Cheers, Roberto
@ConfluentDevXTeam 11 days ago
Wade here. Thanks for the feedback. Glad you enjoyed the video. There's another five or so coming with this case study, so keep an eye out for those. And if you haven't already, check the playlist for the rest: ru-vid.com/group/PLa7VYi0yPIH0IpUKXb3q7NSjpJGO9GGGZ
@QuanVuHongVN 13 days ago
Any difference between the Apache Flink watermark and the Apache Beam watermark mechanism, or are they the same?
@sorvex9 13 days ago
Stop making so many god damn services dude, maybe then your life would be easier
@JitenPalaparthi 14 days ago
What is the device that lets you write like that? Is it just a camera, or...
@ConfluentDevXTeam 12 days ago
A lightboard!
@user-dj4rw3ck7v 14 days ago
Great work
@flosrv3194 16 days ago
No way to reproduce what you do, sir. I would like to see one day a tutorial where nothing is hidden and everything is shown clearly. Well, I have time to die three times before that happens...

ImportError Traceback (most recent call last)
Cell In[7], line 3
1 import logging
2 import sys, requests
----> 3 from config import config
ImportError: cannot import name 'config' from 'config' (c:\Users\flosr\Engineering\Data Engineering\RU-vid API Project\config.py)
@abdirahmanburyar 16 days ago
That was a great and interesting topic; glad to have it.
@awkomo 16 days ago
What is in the configuration file "ca.cnf"?
@ConfluentDevXTeam 16 days ago
Here is a link to the GitHub repo that goes along with the course. It has a sample ca.cnf, but if you are using Kafka in a production environment, you'll want to use a trusted certificate authority rather than the self-signed certificate that was used in this course. github.com/confluentinc/learn-kafka-courses/blob/main/fund-kafka-security/ca.cnf You can find more instructions on how things are set up by following along with the GitHub repo and the guide that goes along with this video: developer.confluent.io/courses/security/hands-on-setting-up-encryption/
@desmontandolaweb 16 days ago
How and where can I get that shirt???