The Kafka Connect JDBC Sink can be used to stream data from a Kafka topic to a database such as Oracle, Postgres, MySQL, or DB2. This video explains how to configure it to handle primary keys based on your data, using the `pk.mode` and `pk.fields` configuration options.
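As a taster, here is a minimal sketch of a JDBC sink configuration using those two options. The connection details, topic, and field names are placeholders, not values from the video:

```json
{
  "name": "jdbc-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://postgres:5432/",
    "connection.user": "postgres",
    "connection.password": "secret",
    "topics": "orders",
    "auto.create": "true",
    "pk.mode": "record_value",
    "pk.fields": "order_id"
  }
}
```

With `pk.mode` set to `record_value`, the connector takes the primary key from a field in the message value, and `pk.fields` names which field(s) to use.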
✍️ [Blog] Kafka Connect JDBC Sink deep-dive: Working with Primary Keys rmoff.net/2021/03/12/kafka-co...
💾 Kafka Connect JDBC Connector download: www.confluent.io/hub/confluen...
📑 Kafka Connect JDBC Sink documentation: docs.confluent.io/kafka-conne...
----
☁️ Confluent Cloud (fully managed Apache Kafka, Kafka Connect, ksqlDB, Schema Registry): www.confluent.io/confluent-cl...
🤔 Questions? Join the Confluent Community at confluent.io/community/ask-th...
----
Learn more about Kafka Connect here:
🏃‍♂️ Quick: rmoff.dev/what-is-kafka-connect
🚶 More detail: rmoff.dev/zero-to-hero
----
🕐 Timecodes:
00:00:00 Introduction
00:00:12 What is Kafka Connect?
00:00:20 Introduction to the Kafka Connect JDBC sink connector
00:00:42 Primary Keys - introduction
00:01:16 Creating a connector - no primary key in the target database
00:06:12 Creating a connector - using a field from the message value as primary key in the target table
00:12:35 Configuring UPSERT operations for Kafka Connect JDBC sink connector
00:16:17 Creating a connector - composite key from fields in the value of the message
00:19:25 Keys in Kafka Messages
00:20:45 Primitives and Complex data types in Kafka messages keys
00:21:37 Creating a connector - setting the primary key using a Kafka message's primitive key
00:29:26 Configuring DELETEs from tombstone messages
00:32:05 Using structured keys from the Kafka message as the primary key of the target table
00:36:08 Selecting particular fields from a structured key to use as the primary key
00:37:42 Retaining a field from a structured key as a non-key field in the target table
00:41:34 Recap - pk.mode and pk.fields
00:42:23 Recap - Serialization
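The UPSERT and DELETE sections above rely on two further settings, `insert.mode` and `delete.enabled`; deletes from tombstone messages require the key to come from the Kafka message itself (`pk.mode=record_key`). A hedged sketch of just those options (the `pk.fields` value is a placeholder):

```json
{
  "insert.mode": "upsert",
  "delete.enabled": "true",
  "pk.mode": "record_key",
  "pk.fields": "id"
}
```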
3 Aug 2024