The Replicator Engine runs only on Linux and is controlled by a configuration file that simply identifies the source and target datastores. It operates in two modes: as a high-performance relational source Replicator, and as a Distributor for parallel processing of IMS source data.
In Relational Replication mode, it automatically generates industry-standard JSON- or AVRO-formatted data, including a seamless interface with Confluent's Schema Registry that further simplifies administration while boosting performance.
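The catalog-driven AVRO generation described above can be sketched as follows. This is an illustrative assumption of how a schema might be derived from relational column metadata, not SQData's actual logic; the table name, column list, and type mapping are all hypothetical.

```python
import json

# Hypothetical catalog metadata for a Db2 table:
# (column name, Db2 type, nullable) -- illustrative values only.
CATALOG_COLUMNS = [
    ("ORDER_ID", "INTEGER", False),
    ("CUSTOMER", "VARCHAR", True),
    ("AMOUNT", "DECIMAL", True),
]

# Assumed Db2-to-AVRO type mapping for this sketch.
DB2_TO_AVRO = {"INTEGER": "int", "VARCHAR": "string", "DECIMAL": "double"}

def build_avro_schema(table, columns):
    """Derive an AVRO record schema from catalog column metadata.

    Nullable columns become AVRO unions with "null"; required
    columns keep their plain AVRO type.
    """
    fields = []
    for name, db2_type, nullable in columns:
        avro_type = DB2_TO_AVRO[db2_type]
        fields.append({
            "name": name,
            "type": ["null", avro_type] if nullable else avro_type,
        })
    return {"type": "record", "name": table, "fields": fields}

schema = build_avro_schema("ORDERS", CATALOG_COLUMNS)
print(json.dumps(schema, indent=2))
```

A schema built this way could then be registered with the Schema Registry, so that each ALTER TABLE on the source produces a new registered schema version.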
The Replicator Engine is much simpler to use than the Apply Engine, and because it is multi-threaded it can handle a higher volume of change records.
Use the replicator engine if:
- the source is Db2/z
- the target is Kafka or the file system
- default mapping logic is acceptable (every column in the source table maps to a field of the same name in the Kafka message)
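The default mapping rule in the last bullet can be sketched as a one-to-one pass-through. The row values below are hypothetical; the point is only that every source column appears unchanged, under its own name, in the message payload.

```python
import json

def default_map(row):
    """Default column-to-field mapping: each source column maps 1:1
    to a message field of the same name. No columns are renamed,
    dropped, or transformed. (Illustrative sketch, not SQData code.)
    """
    return json.dumps(row)

# Hypothetical source row from an ORDERS table.
source_row = {"ORDER_ID": 42, "CUSTOMER": "ACME", "AMOUNT": 19.99}
message = default_map(source_row)
```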
The replicator has no filtering or transformation capabilities; it replicates every field of every record.
The Replicator Engine uses a configuration script syntax with none of the additional "Parts" files used in an Apply Engine script. No source or target DESCRIPTIONS are required, because the SQData relational capture agent (initially only Db2 12 for z/OS) uses the source relational database catalog, together with subsequent "ALTER TABLE" maintenance activity, to provide source table schema information to the Replicator Engine. The Replicator Engine can similarly interact with the Confluent Schema Registry to maintain the time-sequenced Kafka AVRO schemas. No "CDCPROCs" (change data mapping procedures) are required, because the entire before and after content of the source table is replicated.