Connect CDC (SQData) Kafka Quickstart

Parse engine

Scripts can be parsed on the Linux platform either at the command line, inheriting the working directory and other previously established environment settings, or by using a shell script.

Syntax

        SQDPARSE <engine>.sqd <engine>.prc [LIST=ALL|SCRIPT] [<parm1> <parm2> … <parmn>] [> <engine>.prt]
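
For instance, the optional LIST parameter and output redirection shown in the syntax can be combined with the runtime parameters used in Example 1 below. This is a minimal sketch, assuming LIST=ALL selects the more detailed of the two listing options:

        sqdparse ./ENGINE/DB2TOKAF.sqd ./ENGINE/DB2TOKAF.prc LIST=ALL ENGINE=DB2TOKAF SHOST=ZOS21 SPORT=2626 PUBNM=DB2TOKAF > ./ENGINE/DB2TOKAF.prt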

Example 1

Parse, at the command line, an engine script named DB2TOKAF that will perform near real-time replication to Kafka Topics.
sqdparse ./ENGINE/DB2TOKAF.sqd ./ENGINE/DB2TOKAF.prc ENGINE=DB2TOKAF SHOST=ZOS21 SPORT=2626 PUBNM=DB2TOKAF 2>&1 | tee ./ENGINE/DB2TOKAF.prt
Note: The last part of the command, "2>&1 | tee ./ENGINE/DB2TOKAF.prt", causes a copy of the parser report to be displayed on the screen and also written to a file.
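
Because the parser report is captured through a pipe, the exit status seen by the shell is that of tee rather than sqdparse. The following is a minimal sketch of checking the parser's own return code, assuming a bash shell (PIPESTATUS is bash-specific) and assuming sqdparse sets a non-zero exit status when the parse fails:
#!/bin/bash
# Parse the engine script; display the parser report and save it to a file
sqdparse ./ENGINE/DB2TOKAF.sqd ./ENGINE/DB2TOKAF.prc ENGINE=DB2TOKAF SHOST=ZOS21 SPORT=2626 PUBNM=DB2TOKAF 2>&1 | tee ./ENGINE/DB2TOKAF.prt

# PIPESTATUS[0] holds the exit status of sqdparse, not tee
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    echo "sqdparse ended with errors; review ./ENGINE/DB2TOKAF.prt" >&2
    exit 1
fi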

Example 2

Shell scripts can be used to invoke the Parser in the Linux environment. Shell scripts are useful for automating Parser processing and/or for parsing several scripts within the same command. The following shell script, named Parse_DB2TOKAF.sh, uses the same command line as the previous example.
#!/bin/sh
sqdparse ./ENGINE/DB2TOKAF.sqd ./ENGINE/DB2TOKAF.prc ENGINE=DB2TOKAF SHOST=ZOS21 SPORT=2626 PUBNM=DB2TOKAF 2>&1 | tee ./ENGINE/DB2TOKAF.prt
Note:
  • The shell scripts used to parse and run Apply Engines are usually stored in the <SQDATA_VAR_DIR>/<your_hierarchy_here> directory described previously, making it the "Working Directory". That allows relative paths to be specified for the Engine script file ./ENGINE/DB2TOKAF.sqd as well as for other "parts" files referenced in the main engine script, such as the source descriptions ./DB2DDL/EMP.ddl.
  • At this time, IMSTOKAF Apply Engines that will produce AVRO formatted data require TWO parses if one or more of the Source/Target Datastore DESCRIPTIONS have changed, because the AVRO schemas will change and the new schemas will need to be added to the Confluent Schema Registry, as illustrated in the sketch following this list.
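
The following is a minimal sketch of such a double parse, assuming an IMSTOKAF engine laid out like the DB2TOKAF examples above; the working directory path and the runtime parameters are illustrative rather than taken from this Quickstart:
#!/bin/sh
# Hypothetical working directory under the <SQDATA_VAR_DIR> hierarchy described previously
cd /var/opt/sqdata/kafka || exit 1

# First of the two parses required after Source/Target Datastore DESCRIPTIONS change
sqdparse ./ENGINE/IMSTOKAF.sqd ./ENGINE/IMSTOKAF.prc ENGINE=IMSTOKAF 2>&1 | tee ./ENGINE/IMSTOKAF.prt

# Second parse of the same Engine, required by the note above because the AVRO schemas
# have changed and the new schemas must be added to the Confluent Schema Registry
sqdparse ./ENGINE/IMSTOKAF.sqd ./ENGINE/IMSTOKAF.prc ENGINE=IMSTOKAF 2>&1 | tee ./ENGINE/IMSTOKAF.prt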