Engines - connect_cdc_sqdata - 4.1

Connect CDC (SQData) Architecture

Product family: Connect > Connect CDC (SQData)
Product name: Connect CDC (SQData)
Topic type: How Do I

Connect CDC (SQData) provides two complementary Engines: the Apply Engine and the Replicator Engine.

The Apply Engine is a multi-faceted, multi-functional component that can read from and write to virtually any type of datastore, be it a file, a hierarchical or relational database, a VSAM file, a message queue, or a TCP/IP port. It can interpret virtually any type of data structure, including all forms of relational DDL, COBOL copybooks, JSON, and comma-delimited files. The most common function performed by an Apply Engine, however, is to process data published by a Change Data Capture (CDC) component, applying business rules to transform that data if necessary, and efficiently writing (applying) it to a target datastore of any type on any operating platform.

The Apply Engine is controlled by an SQL-like scripting language capable of a wide range of operations, from replication of identical source and target structures using a single command to complex, business-rule-based transformations. Connect CDC (SQData) Apply Engine commands and functions provide full procedural control of data filtering, mapping, and transformation, including manipulation of data at its most elemental level if required.
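The flavor of that scripting language can be suggested with a hypothetical sketch of a single-command replication script. The datastore URLs, job name, alias, and group names below are illustrative placeholders, not a copy-paste configuration; the exact syntax for each source and target type is defined in the Apply Engine reference documentation.

```
-- Hypothetical Apply Engine script: replicate relational CDC data to Kafka.
-- All names and URLs below are placeholders for illustration only.
JOBNAME DB2TOKAFKA;
DESCRIPTION 'REPLICATE DB2 CDC RECORDS TO KAFKA AS JSON';

-- Source: CDC records published by a capture/publisher component
DATASTORE cdc://capture_host/publisher_agent/engine_alias
          OF UTSCDC
          AS CDCIN
          DESCRIBED BY GROUP SOURCE_TABLES;

-- Target: Kafka, formatted as JSON
DATASTORE kafka:///*/key
          OF JSON
          AS TARGET
          DESCRIBED BY GROUP SOURCE_TABLES;

-- Identical source and target structures replicated with a single command
PROCESS INTO TARGET
SELECT { REPLICATE(TARGET) }
FROM CDCIN;
```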

The Replicator Engine is controlled by a configuration file that merely identifies the source and target datastores. It operates in two modes: as a high-performance relational-source Replicator, and as a Distributor for parallel processing of IMS source data.

In Relational Replication mode, it automatically generates industry-standard JSON or AVRO formatted data, including a seamless interface with Confluent's Schema Registry that further simplifies administration while boosting performance.
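To make the JSON output concrete, the sketch below builds an illustrative change record and serializes it. The field names and operation codes here are placeholders assumed for illustration; the actual schema generated by the Replicator Engine (and registered with Confluent's Schema Registry when AVRO is used) is defined by the product, not by this sketch.

```python
import json

# Illustrative shape of a CDC change record serialized as JSON.
# Every field name and code below is a placeholder, not the documented schema.
change_record = {
    "object_name": "INVENTORY.PRODUCTS",  # source table (placeholder)
    "change_op": "U",                     # assumed encoding: I/U/D
    "timestamp": "2024-01-15T10:23:45Z",
    "before_image": {"PRODUCT_ID": 1001, "QTY_ON_HAND": 57},
    "after_image": {"PRODUCT_ID": 1001, "QTY_ON_HAND": 42},
}

# Serialize for delivery to a target such as a Kafka topic
payload = json.dumps(change_record)
print(payload)
```

A consumer on the target side can parse the payload back into a structure and inspect the before/after images to decide how to apply the change.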

When operating as a Parallel Processing Distributor, IMS CDCRAW records are streamed to Kafka topics partitioned by Root Key for processing by Apply Engines configured as Kafka consumer groups. Splitting the stream of published data in this way allows multiple Apply Engines to consume it in parallel and write (apply) that data to target datastores of any type.
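The effect of partitioning by Root Key can be sketched generically: because equal keys always hash to the same partition, all changes for a given IMS root segment land on one partition and are seen in order by a single consumer in the group, while different keys spread across the group for parallelism. The hash below is a simple MD5-based stand-in, not Kafka's actual murmur2 partitioner, and the partition count is an assumed example value.

```python
import hashlib

NUM_PARTITIONS = 8  # illustrative partition count


def partition_for(root_key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a root key to a partition, as a keyed partitioner would.

    Kafka's default partitioner uses murmur2; a stable MD5-based hash is
    used here only to illustrate that equal keys always map to the same
    partition, preserving per-root-key ordering.
    """
    digest = hashlib.md5(root_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


# Every change for the same root key lands on the same partition, so the
# Apply Engine consuming that partition sees those changes in order.
assert partition_for("CUST0001") == partition_for("CUST0001")

# Different root keys spread across partitions, letting the consumer group
# process the stream in parallel.
partitions_used = {partition_for(f"CUST{i:04d}") for i in range(100)}
print(sorted(partitions_used))
```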

Note: Apply Engines writing to relational targets support what is referred to as SmartApply, a self-correcting synchronization technology that provides a second level of conflict detection and resolution.