Use a special Engine and Kafka utility

Connect CDC (SQData) Kafka Quickstart

Copyright: 2024
First published: 2000
Last published: 2024-07-30
  1. Start a special version of the Apply Engine whose target URL has been modified to write JSON formatted data to files. Depending on the nature of your normal target URL (a single topic, an "*" dynamic topic, or SETURL-specified topics), one or more files will be created, possibly with timestamped file names.
  2. When Kafka becomes operational again, use a Kafka utility (such as Kcat) to read the JSON file(s) and publish all the data to Kafka.
  3. Once the last file has been processed, start the normal Apply Engine to resume operation.
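Step 2 can be sketched as a small script that replays the captured JSON files into Kafka with kcat, oldest file first so events stay in sequence. This is a hedged sketch, not part of the product: the broker address, topic name, capture directory, and the assumption that each file holds one JSON record per line are all placeholders you would replace with your own values.

```shell
#!/bin/sh
# replay_cdc_files DIR TOPIC: publish each captured JSON file in DIR to the
# given Kafka topic with kcat, in name (i.e. timestamp) order.
# Set DRY_RUN=1 to print the kcat commands instead of executing them.
# BROKER defaults to localhost:9092 -- an assumption, not a product default.
replay_cdc_files() {
    dir="$1"
    topic="$2"
    broker="${BROKER:-localhost:9092}"
    # The glob expands in collation order, so timestamped names replay oldest first.
    for f in "$dir"/*.json; do
        [ -e "$f" ] || continue
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "kcat -P -b $broker -t $topic -l $f"
        else
            # -P: produce mode; -l: send each line of the file as one message
            kcat -P -b "$broker" -t "$topic" -l "$f" || return 1
        fi
    done
}

# Example (dry run):
#   DRY_RUN=1 replay_cdc_files /var/sqdata/capture MY_TOPIC
```

Replaying files in timestamp order matters: applying change records out of sequence can leave the target inconsistent.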
Some things to consider:
  • None of this may be necessary, given both Kafka's fault tolerance and the capacity of the source to hold the unpublished data. Consult your Kafka support team before going down either path.
  • The files created by the SQDUTIL approach may require more or less space than the JSON formatted files created by the second approach. This depends on the source data volume and on how many (often significantly fewer) target fields are derived from each source segment.
  • The number of files created by the second approach will vary with the style of your normal target URL and with any need to split files by size or time constraints, potentially complicating the scheduling and operation of the Kafka utility.
  • If the normal destination uses AVRO rather than traditional JSON formatting and no Confluent Schema Registry is available, only the SQDUTIL option can be used.
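When the normal target URL uses an "*" dynamic topic or SETURL-specified topics, the special engine may produce one capture file per topic, which is the scheduling complication the considerations above mention. A hedged sketch of one way to handle it, assuming (purely for illustration) that each capture file is named `<topic>.<timestamp>.json` so the topic can be recovered from the file name; adjust the parsing to match how your engine actually names its output:

```shell
#!/bin/sh
# print_replay_cmds DIR: for each capture file in DIR, derive the Kafka topic
# from the file name and print the kcat command that would replay it.
# File-naming convention and broker address are assumptions for this sketch.
print_replay_cmds() {
    dir="$1"
    broker="${BROKER:-localhost:9092}"
    for f in "$dir"/*.json; do
        [ -e "$f" ] || continue
        base=$(basename "$f" .json)   # e.g. ORDERS.20240730.120000
        topic=${base%%.*}             # topic = text before the first dot
        echo "kcat -P -b $broker -t $topic -l $f"
    done
}

# Example:
#   print_replay_cmds /var/sqdata/capture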