Hi! So there is usually a middleman between the DB and Kafka, something called a connector; Debezium is a good example. The connector reads from the database's log and writes the changes to Kafka. Most databases (if not all) have some kind of log where they record every DB operation, and you can replay all the changes by reading this file. For Postgres it's the WAL (write-ahead log); for MySQL it's the binlog. So the connector tails this log and writes an event to Kafka for every change you make to your data.
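To make that concrete: a Debezium source connector is mostly just configuration you register with Kafka Connect. Here's a rough sketch of what a Postgres connector config might look like — all hostnames, credentials, and table names are placeholders, and the key names follow recent Debezium releases, so check the docs for your version:

```json
{
  "name": "orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "secret",
    "database.dbname": "shop",
    "topic.prefix": "shop",
    "table.include.list": "public.orders",
    "plugin.name": "pgoutput"
  }
}
```

You'd POST this to the Kafka Connect REST API, and from then on every insert/update/delete on `public.orders` shows up as an event in a Kafka topic.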
Let's say I have an order booking system with an `order` table and an `order_details` table. Now, one order's details have changed. I want to send a complete order event, comprising both `order` and `order_details`, to Kafka so that it can be consumed and stored in a time-series database as a complete order model. Where exactly will the order details be fetched from, since CDC will only tell me that `order_details` has changed?
Hi! Thank you for asking. This is a very good use case for Kafka Streams. Let's say you have one CDC stream for `order` and one CDC stream for `order_details`; these should be in two different Kafka topics. Using Kafka Streams or KSQL, you can join the two streams whenever either changes. Do the join on `order_id`. Check this out: supergloo.com/kafka-streams/kafka-streams-joins-examples/
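Real Kafka Streams code is Java, but the join semantics can be sketched in plain Python: keep the latest state of each stream keyed by `order_id`, and re-emit a joined record whenever either side changes. All names and event shapes below are made up for illustration — this just shows what a KTable-style inner join does for you:

```python
# Toy sketch of joining two CDC streams on order_id.
# Kafka Streams maintains these tables and emits for you;
# this only illustrates the semantics.

orders = {}         # latest order row per order_id
order_details = {}  # latest order_details row per order_id
joined_events = []  # what would be written to the output topic

def emit_if_joinable(order_id):
    # An inner join only emits once both sides have a value for the key.
    if order_id in orders and order_id in order_details:
        joined_events.append({**orders[order_id], **order_details[order_id]})

def on_order_change(event):
    orders[event["order_id"]] = event
    emit_if_joinable(event["order_id"])

def on_order_details_change(event):
    order_details[event["order_id"]] = event
    emit_if_joinable(event["order_id"])

# CDC events arriving over time:
on_order_change({"order_id": 1, "status": "BOOKED"})
on_order_details_change({"order_id": 1, "item": "keyboard", "qty": 2})
on_order_details_change({"order_id": 1, "item": "keyboard", "qty": 3})

print(joined_events[-1])  # latest complete order model for order_id 1
```

Note that a change on either side re-emits the complete joined record, which is exactly what you want for keeping a "complete order model" downstream up to date.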
@irtizahafiz Thank you so much for the explanation. I went through the KStreams join example. Let's say my Kafka topics store only the last 7 days' worth of data. Now say that 20 days later the order details change, so an event is sent to the order-details topic. When I use KStreams to join it with the order topic, it won't find the order, because that record has been cleared out. How is this handled in the above case?
Hi, great explanation! Could you make a video on how Microsoft SQL Server-based CDC pushes messages to Kafka? I mean the implementation details! Thank you!
There should be a Kafka connector you can utilize. I know Debezium has a few of them, but Google might also offer it as a service. One option might be to use GCP's MySQL equivalent, if you want native integration with BigQuery.
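For SQL Server specifically, the usual flow (as I understand it) is: you first enable CDC on the database and table with the built-in `sys.sp_cdc_enable_db` and `sys.sp_cdc_enable_table` procedures, which makes SQL Server copy changes into internal change tables; Debezium's SQL Server connector then reads those change tables and publishes events to Kafka. A hedged sketch of the connector config — all hostnames, credentials, and table names are placeholders, and the key names follow recent Debezium releases:

```json
{
  "name": "mssql-orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "localhost",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "secret",
    "database.names": "shop",
    "topic.prefix": "shop",
    "table.include.list": "dbo.orders",
    "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
    "schema.history.internal.kafka.topic": "schema-changes.shop"
  }
}
```

As with the other connectors, you'd register this via the Kafka Connect REST API, but do verify the exact property names against the Debezium docs for the version you run.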
Is it near real-time? Say you have a master DB on RDS where the writes happen, and you want search backed by Elasticsearch, but the data needs to be streamed in real time — will this be real time?
Depends on "how" real-time your application needs to be. If you are feeding the CDC data into ES, I believe you will need to re-index, which will take time. Personally, I haven't used that pipeline before, so I don't have too much context.