Connect CDC (SQData) Release 4.0 - connect_cdc_sqdata - Latest

Connect CDC (SQData) Release Notes

Product type
Software
Portfolio
Integrate
Product family
Connect
Product
Connect > Connect CDC (SQData)
Version
Latest
ft:locale
en-US
Product name
Connect CDC (SQData)
ft:title
Connect CDC (SQData) Release Notes
Copyright
2026
First publish date
2018
ft:lastEdition
2026-03-10
ft:lastPublication
2026-03-10T11:41:18.328000

This release contains the following key changes and new features:

  • Modules now Distributed as Program Objects (vs Object Modules)
  • Db2/z Log Capture (SQDDB2C): Improvements and New features including Schema Evolution
  • Replicator Engine (SQDRPL): "The Replicator," a utility-like, set-and-forget Engine that dynamically accommodates Db2/z Schema changes
  • Improved connection management between Publishers and Engines
Release Component Description
4.0.56 Apply (SQDENG) and Replicator (SQDRPL) Engines
  • Backport from V4.1.25. Corrected AVRO encoding of Double and Single precision floating point numbers.
  • Backport from V4.1.20. Default the conversion of source code page to UTF-8 for special characters (specifically bad string conversion from IMS to Kafka).
4.0.55 Apply (SQDENG) and Replicator (SQDRPL) Engines Correct issue with the length returned from CONVERT CCSID when converting UTF-16 to another code page.
4.0.54 Replicator Engine (SQDRPL) Add TCP/IP soft keep-alive messages to Publishers to avoid socket disconnection during periods of inactivity.
4.0.53 Apply Engine (SQDENG) Corrected an issue where IMS Root Segment Keys were lost when using REMAP.
4.0.52   Version bump to correct packaging error causing 4.0.51 to report itself as 4.0.50.
4.0.51 Apply Engine (SQDENG) The z/OS System Authorization Facility (SAF) capability added in V4.0.48 contained a bug that has been corrected. See V4.0.48 for details.
4.0.50 Apply Engine (SQDENG) IMS DBD Parser updated to support additional syntax elements defined as metadata for external applications. See the IBM documentation Defining DBD and PSB metadata to the generation utilities.
4.0.48 z/OS System Authorization Facility (SAF) capability has been added for utility programs SQDCONF and SQDUTIL. These utilities can perform a number of functions ranging from Modification and Status Display of a Capture agent to the cleaning of a z/OS System LogStream. This enhancement provides for the optional restriction of specific commands using SAF and the customer's underlying security system (RACF, ACF2 or Top Secret) using classes and profiles. See the two Utility reference manuals for more details regarding z/OS Security Options under Operational Considerations.
4.0.47 IMS z/OS TM Exit Capture SQD0526I - IMS CDC exit globally disabled messages. GENCDC must be run again for all exits using JCL similar to sample member GENICDCL included in the distribution. See Generate IMS TM EXIT Capture Agent for more details. It is critical that the XPARM is loaded into ECSA when an exit is disabled to prevent the SQD0526I message from being issued each time the Exit is called. Precisely highly recommends loading the XPARM into ECSA as noted, regardless of whether the capture is disabled or not. Finally, ALL of the ESS modules must be re-linked and updated in the IMS RESLIB or JOBLIB/STEPLIB before IMS is brought up.
  IMS Log Capture (SQDIMSC) Fix an invalid safe-restart LSN, which existed under obscure conditions at normal termination, to prevent an IMSC_BAD_RECORD error during capture restart.
Note: The following comments about applying product maintenance on z/OS are repeated from the Installation Guide.

While maintenance releases generally involve only a subset of Connect CDC SQData modules, updates are packaged as a full installation. Migration of an installation from one environment to the next (development to production, for example) may involve little more than copying the LOADLIB members to the production libraries. There is nothing that ties the linked modules directly to the LOADLIB.

If, however, your environments have different versions of Db2 or IMS, then the safest course is to re-link in the upper environments, because the product includes certain IMS, Db2, CICS and MQ modules. If you are using the product with those modules in lower environments and they are compatible with those in the upper environments, then no re-link in the upper environments is needed. If they are not compatible, you must re-link.
Important: It is highly recommended that you back up your current object and load module libraries before applying any maintenance release. This allows you to quickly back out the maintenance if any problems arise.
4.0.46 Replicator Engine (SQDRPL) (only) Ignore the CDCOP value 'z' created by a Db2/z capture when performing a capture based target refresh. The 'z' is intended to trigger target Delete ALL processing which is meaningless for JSON/AVRO targets.
  Apply (SQDENG) and Replicator (SQDRPL) Engines When connecting to a Confluent Registry, the Engines will use the user-id and password specified in the .netrc file unless directly overridden in the Confluent Registry URL (userid/password@...).
  Db2/z Log Capture (SQDDB2C) Enhancement supporting EDITPROC processing for captured data. An edit procedure assigned to a table by an EDITPROC clause on the CREATE Table statement can be used to transform an entire row and is often used for encryption of data. A new startup parameter, --editproc instructs the capture to request IFI to invoke an EDITPROC, if applicable to the source table, to perform decryption of the log record.
Syntax
apply start <capture_cab_file_name> --editproc
Note: --editproc is dependent on IBM's Db2/z APAR PK79207, delivered in 2009, which provided EDITPROC support in IFCID 306. This will become a standard part of Db2/z Version 13 (Apollo).
  Db2/z Log Capture (SQDDB2C) Enhancement allowing the normal Metadata scan at the start of capture to be skipped. The Metadata scan is used for tracking DDL changes and is required to support Schema Evolution in subscribing Replicator Engines. When all subscribers to a Db2/z Capture are Apply Engines, the initial scan can be skipped. While this eliminates one scan of the Db2/z logs going back to the Safe Restart point, the time saved will only be noticeable when a Capture has been stopped long enough to generate a very large number of logs.
Syntax
apply start <capture_cab_file_name> --no-ddl-tracking
Note: Verification that DDL tracking has been disabled will be visible in the first few lines of the SQDLOG DD output when the Db2/z Log Capture is started.
4.0.45 IMS Batch Feed Exit Added for IMS Log Capture
  Replicator Engine (SQDRPL) Corrected a malformed AVRO timestamp issue.
  Apply Engine (SQDENG) Corrected OPTIONS METADATA irregularities in the Confluent Schema Registry.
  Db2/z Log Capture (SQDDB2C) Corrected MS SQL ODBC Server Time data type formatting issues.
4.0.43 Db2/z Log Capture (SQDDB2C) Enhancement supporting capture on individual members of a Db2/z data sharing group.
  Replicator Engine (SQDRPL) Correction to JSON representation of Db2 TIME.
4.0.42 Apply Engine (SQDENG) Parser updated to support additional IMS DBD syntax elements defined as metadata for external applications. See the IBM documentation Defining DBD and PSB metadata to the generation utilities.
  Apply (SQDENG) and Replicator (SQDRPL) Engines and the sqdutil and sqdmon utilities on Linux TLS connections to z/OS will fail when the GnuTLS library is not found. These components now report the error and terminate without crashing. A non-standard installation location is the most likely reason the libgnutls.so file cannot be found.
The RHEL7 Linux distribution and its default package manager, for example, place it in /usr/lib64:
lrwxrwxrwx  1 root root       20 Jul  8  2020 libgnutls.so -> libgnutls.so.28.43.3
lrwxrwxrwx  1 root root       20 Sep 11  2019 libgnutls.so.28 -> libgnutls.so.28.43.3
-rwxr-xr-x  1 root root  1300504 Mar 14  2019 libgnutls.so.28.43.3 

In this case the libgnutls.so link is installed in a location included in the 'default library path': /usr/lib64. If the library has been installed in a non-standard location, the following environment variable can be used, but Precisely only recommends doing so in a test environment: SQDATA_GNUTLS_LIBRARY=<path_to_softlink>/libgnutls.so

4.0.41 Db2/z Log Capture (SQDDB2C) Correction addressing corner case that affected a request to LogMiner at Capture initiation. In more technical terms the Db2/z Capture now tolerates a Db2/z IFI 306 request not repositioning LRSN after the metadata discovery phase.
Note: AIX, Linux and Windows platforms were not produced as a part of this release.
4.0.40 Oracle LogMiner Capture (SQDLOGM) Two optional parms were added to the capture configuration CAB file to control the wait interval following an "all caught up" scenario. The two parameters, with their corresponding default values are --small-stall=250 (ms) and --large-stall=1000 (ms). Both values represent fixed wait intervals, that is to say they are not incremental and do not produce longer and longer waits.

Previously the default was 10 seconds, which came into use when the Oracle LogMiner continuous mine option was lost due to the requirements of both Oracle RAC and Oracle 19.

The new parms may only be used with sqdconf modify and become effective immediately following a sqdconf stop and sqdconf start sequence. No apply is required.

The --small-stall parm value applies to the first "wait" after capture is caught up. If, after the --small-stall wait, the next request also returns no changed data, the --large-stall parm value is used for subsequent "waits" until LogMiner once again returns data to the capture.

The effectiveness or efficiency of the wait interval is solely dependent on transaction patterns, which are hard if not impossible to analyze. These two parameters are "tuning" values. Precisely recommends all implementations begin with the defaults and then tune for your environment. If --small-stall is too short, there is an increased potential for wasting CPU on LogMiner calls, current LSN queries, and other overhead.

The sqdconf display shows current values only if you have overridden the defaults.

Example:
SQDF803I Instance (Version) : orcl19c (19.0.0.0.0)
SQDF800I Display as of: 2021-01-04 15:11:03 -00:00
SQDF901I Configuration file : /home/sqdata/oracdc/oracdc.cab
SQDF902I Status : MOUNTED,STARTED,STALLED
SQDF903I Configuration key : cab_033B41F7447D5EDA
SQDF904I Allocated entries : 31
SQDF905I Used entries : 5
SQDF906I Active Database : localhost/orcl19c
SQDF907I Start Log Point : 25915451
SQDF908I Last Log Point : 26667017
SQDF940I Last Log Timestamp : 2021-01-04-15.11.03
SQDF981I Safe Restart Point : 26667014
SQDF986I Safe Remine Point : 26637169
SQDF910I Active User : SQDATA
SQDF913I Fix Flags : RETRY,DICT_FROM_LOG
SQDF914I Retry Interval : 30
SQDF919I Active Flags : CDCSTORE,RAW LOG
SQDF980I Pending Flags : CDCSTORE,RAW LOG
SQDF915I Active store name : /home/sqdata/oracdc/oracdc_store.cab
SQDF916I Active store id : cab_061EA9E047296655
SQDF957I small stall (ms) : 50
SQDF958I large stall (ms) : 150
SQDF920I Source Entry : # 0
SQDF930I Key : SQD.ADDRESS
SQDF923I Active Flags : ACTIVE
SQDF928I Last Log Point : 22388290
SQDF950I session # insert : 0
SQDF951I session # delete : 0
SQDF952I session # update : 0
SQDF960I cumul # of insert : 1389
SQDF961I cumul # of delete : 1200
SQDF962I cumul # of update : 72
SQDF925I Active Datastore : cdc:///ORATOKAF
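The wait-interval selection described above can be sketched as follows. This is a minimal illustration, not the actual capture implementation; the function name and the empty-poll counter are hypothetical, while the defaults mirror --small-stall=250 and --large-stall=1000:

```python
def next_wait_ms(consecutive_empty_polls: int,
                 small_stall: int = 250,
                 large_stall: int = 1000) -> int:
    """Return the fixed wait (ms) before the next LogMiner request.

    0 empty polls  -> no wait: data is still flowing
    1 empty poll   -> small_stall: the first wait after catching up
    2+ empty polls -> large_stall: used until LogMiner returns data again
    """
    if consecutive_empty_polls <= 0:
        return 0
    if consecutive_empty_polls == 1:
        return small_stall
    return large_stall
```

Note that both waits are fixed, matching the documented behavior: repeated empty polls keep using large_stall rather than growing the interval.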
4.0.39 Apply Engine (SQDENG) Correction affecting Relational targets where compensation was failing on multi-key column tables.
4.0.37 Db2/z Log Capture (SQDDB2C) Capture will be more responsive when in a caught-up situation. When the Db2/z Log Miner returns a complete buffer, more data is immediately requested. When there are no outstanding UOWs to publish and no new data returned by the Db2/z Log Miner, the capture gradually extends the wait period between Log Miner requests. While that will still occur, it will be more aggressive and use a shorter initial wait. Previously, if a partial buffer was received, Capture would wait 3 seconds before requesting more data from the log and, if no data was received, increase the wait by 1 second up to a 5 second maximum. The initial wait following a partial buffer has been reduced to 250 ms, followed by 1 second increments up to the maximum 5 second wait. The change provides sub-second latency on lightly loaded systems.
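The revised wait schedule above can be sketched as follows; the function name and the exact increment sequence are assumptions for illustration, grounded only in the stated values (250 ms initial wait, 1 second increments, 5 second cap):

```python
def db2_capture_wait_ms(empty_requests: int) -> int:
    """Wait before the next Log Miner request.

    empty_requests counts consecutive requests that returned no new data.
    """
    if empty_requests <= 0:
        return 0            # complete buffer: request more data immediately
    if empty_requests == 1:
        return 250          # shorter initial wait (previously 3000 ms)
    # 1 second increments, capped at the 5 second maximum
    return min(250 + (empty_requests - 1) * 1000, 5000)
```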
  Oracle LogMiner Capture (SQDLOGM) Corrected corner case in a very low activity RAC environment where the Oracle SCN (system change number) of the last log record processed has still not advanced by the time the Capture makes a subsequent request to the Oracle LogMiner.
  Replicator Engine (SQDRPL) Corrected key-hash inconsistencies in JSON/AVRO topics that resulted in different partition assignments for insert/update/delete activity for records having the same source key(s).
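The corrected behavior is that identical source keys always hash to the same partition, so insert/update/delete activity for one record stays in order on a single partition. A sketch of such deterministic key-hash partitioning follows; the hash choice (MD5) is an assumption for illustration, not the Replicator's actual algorithm:

```python
import hashlib

def partition_for_key(key: bytes, num_partitions: int) -> int:
    """Map a source key to a Kafka partition deterministically."""
    digest = hashlib.md5(key).digest()
    # Use the first 4 bytes of the digest as an unsigned integer
    return int.from_bytes(digest[:4], "big") % num_partitions
```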
4.0.36 AVRO
  • Support for logical decimal type with precision <= 18 : New OPTIONS block item:

    ,LOGICAL DECIMAL TYPE (produces schemas using LOGICAL DECIMAL TYPE for Decimal source data <= 18 digits; otherwise the data will still be written as a string)

  • To ensure backward compatibility in schema generation, a change was made to add a default value to each field in the schema.
  • Allow precision for ISO8601 timestamps - New OPTIONS block item: ,ISO8601 PRECISION <n> (where <n> represents the number of digits of precision requested; 6, for example, specifies microseconds. The actual result is dependent on source data precision)
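The logical-decimal rule in the first bullet above can be sketched as a schema choice; the helper name and dictionary shape are hypothetical, though the bytes/logicalType form follows the Avro specification for logical decimals:

```python
def avro_decimal_schema(precision: int, scale: int) -> dict:
    """Schema fragment for a decimal source column.

    Precision <= 18 digits: Avro logical decimal type.
    Wider decimals: still written as a string, as described above.
    """
    if precision <= 18:
        return {"type": "bytes", "logicalType": "decimal",
                "precision": precision, "scale": scale}
    return {"type": "string"}
```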

  IMS Log Capture (SQDIMSC)
  • For DL/I Batch, get the log data set name from the batch feed zlog.
  • Fix DL/I Batch log restart error after abnormal termination.
  IMS Batch log Feeder Exit SQDAIBLF must be regenerated via GENXPARM/GENCDC if it was created before release 4.0.10.
4.0.35 Replicator Engine (SQDRPL) Correction when replicating Db2 DECIMAL data type to AVRO.
  Apply (SQDENG) and Replicator Engine (SQDRPL) Correction when replicating COBOL binary fields with implied decimal places, eg: PIC S9(7)v99 BINARY value of 50.00 was output as 5000.
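The corrected behavior above can be illustrated by decoding a COBOL binary field with implied decimal places: for PIC S9(7)V99 BINARY, the stored integer 5000 with two implied decimal places represents 50.00 (the bug emitted the raw 5000). This is a sketch only; real COBOL decoding also handles sign and usage variations:

```python
def decode_implied_decimal(raw: bytes, scale: int) -> str:
    """Decode a big-endian binary (COMP) field with an implied decimal point."""
    value = int.from_bytes(raw, byteorder="big", signed=True)
    if scale == 0:
        return str(value)
    digits = str(abs(value)).rjust(scale + 1, "0")  # ensure a digit before the point
    sign = "-" if value < 0 else ""
    return f"{sign}{digits[:-scale]}.{digits[-scale:]}"
```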
4.0.34 IMS Log Capture (SQDIMSC) Fix the configuration maintenance process to correctly process adding a "feed":
Syntax
$ sqdconf modify <imscdc.cab_file_name>
   --add-feed='<log_stream_name>'
Note: Some modify arguments are global (apply at the cab level), and some apply to a specific ssid. The --add-feed, --remove-feed, and --filter-exit parameters are global and don't accept --ssid. They will fail if --ssid is specified. The --zlog parameter can be global or can apply to a specific ssid. The --lsn, --inactive, and --active all require --ssid and will fail without it.
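The global-versus-ssid argument rules in the note above can be expressed as a small validation sketch; the function and its dictionary interface are hypothetical, but the rules mirror the note (--add-feed/--remove-feed/--filter-exit reject --ssid; --lsn/--inactive/--active require it):

```python
GLOBAL_ONLY = {"--add-feed", "--remove-feed", "--filter-exit"}
SSID_REQUIRED = {"--lsn", "--inactive", "--active"}

def validate_modify(args: dict) -> list:
    """Return a list of error messages for an sqdconf modify argument set."""
    errors = []
    has_ssid = "--ssid" in args
    for name in args:
        if name in GLOBAL_ONLY and has_ssid:
            errors.append(f"{name} is global and does not accept --ssid")
        if name in SSID_REQUIRED and not has_ssid:
            errors.append(f"{name} requires --ssid")
    return errors
```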
4.0.33 Oracle LogMiner Capture (SQDLOGM)
  • Oracle RAC for Oracle 11, 12 and 19 - Oracle 11 and 12 captures must have the --disable-continuous-mining parameter added to their Capture Configuration (.cab) files:
    Syntax
    $ sqdconf modify <cab_file_name>
    --disable-continuous-mining
    Keyword and Parameter Descriptions
    <cab_file_name>  - Path and name of the Capture/Publisher Configuration (.cab) file

    --disable-continuous-mining - Oracle RAC capture only. This option instructs the Capture Agent not to use the Oracle LogMiner CONTINUOUS_MINE option which is not supported by Oracle 11 or 12 on a RAC system. The --disable-continuous-mining option is the default for Oracle version 19 on both RAC and non-RAC systems.

  • Oracle Capture now requires SELECT authority on V$THREAD in addition to the other existing LogMiner requirements.
  • Additional Oracle 19 stability improvements including tolerating ORA-1013 generated by user cancellation of the Capture agent.
4.0.32 Apply Engine (SQDENG) Correct issue with Db2/z CDC records with VARCHAR columns containing binary zeros. Data in such a source column was truncated from the first occurrence of binary zeros.
  Oracle LogMiner Capture (SQDLOGM) Oracle 19 stability improvements.
4.0.31 Oracle LogMiner Capture (SQDLOGM) Correct issue related to Oracle V19 continuous mining.
  Apply Engine (SQDENG) and SQDAEMON Correct issue related to handling of sqdmon commands processed by the daemon that prevented the agent's context from being found, making the Engine appear unresponsive.
  Apply Engine (SQDENG) New DATASTORE keyword QUERY added to support SELECT clause.

When the source DATASTORE is an RDBMS and it is being read remotely for an initial load or refresh process, a Select statement and WHERE clause can now be specified to limit the source data being selected. There are no other changes required.

Example
DESCRIPTION SQLDDL /home/sqdata/employee.ddl AS S_EMPLOYEE;
DATASTORE RDBMS
          QUERY
          /+
          SELECT EMPNO, LAST_NAME, FIRST_NAME,... FROM PROD.EMP
          WHERE EMPNO = 12345 OR
                EMPNO = 12346
          +/  

         OF RELATIONAL
         AS CDCIN
         DESCRIBED BY S_EMPLOYEE;
Note:
  • The description referenced in the DESCRIBED BY clause must match the DESCRIPTION AS (alias_name) specified for the source table DESCRIPTION.
  • The characters "/+" and "+/" must be used to mark the beginning and ending of the in-line SELECT which may span as many lines as required.
  • Each column defined in the Source description, which may represent a subset of the actual columns in the table (in this example, employee.ddl), must be explicitly listed in the select statement. The best way to do this is to copy and paste the columns from the DESCRIPTION file, removing the DATATYPE specification and other constraints. We do not recommend using SELECT * because the order of the results is unpredictable; a mismatch may not cause the ENGINE to fail and can result in a corrupted Target.
4.0.29 Oracle LogMiner Capture (SQDLOGM) Improvements in log handling to address ORA-1291 errors.
4.0.28 IMS TM Exit Capture XPARM capacity limit expanded, tripling the number of objects that can be specified for capture by a single instance of the Exit.
  IMS Log Capture (SQDIMSC) Updated to handle very large cascade delete transactions.
  IMS Apply Engine (SQDENG) Improved delete performance.
  Oracle LogMiner Capture (SQDLOGM) Improved handling of off-line logs (ORA-1291).

4.0.26

  • Precisely Rebranding
  • TLS Support - Security concerns are increasingly leading to the implementation of security protocols between customer data centers and in some cases even within those data centers. Transport Layer Security (TLS) is supported between all components on z/OS and, with this release, between Connect CDC SQData clients on Linux and Change Data Capture components on z/OS only.
  • Oracle LogMiner Capture (SQDLOGM): Added support for the Oracle 19 which no longer supports the continuous_mine option.
TLS Support on z/OS

Connect CDC SQData already operates transparently on z/OS under IBM's Application Transparent Transport Layer Security (AT-TLS). Under AT-TLS, no changes were required to the base code; only port numbers in the configuration need to be changed, as described below. For more information regarding AT-TLS, see your z/OS Systems Programmer.

Once IBM's AT-TLS has been implemented on z/OS, the following steps are all that are required for the Daemon, Capture and Publisher components and, on z/OS only, the SQDATA Apply Engine and SQDUTIL to be in compliance with TLS:

Note: There are no changes to connection URLs for clients on z/OS.
  1. Request the new secure port to be used by the Daemon.
  2. Request Certificates for MASTER, Daemon and APPLY Engine Tasks.
  3. Stop all SQDATA tasks.
  4. Update APPLY Engine source scripts with the new Daemon port. Note: ports are typically implemented using a Parser parameter, so script changes may not be required.
  5. Update SQDUTIL JCL and/or SQDPARM lib members, if any, with the new Daemon port.
  6. Run Parse Jobs to update the Parsed Apply Engines in the applicable library referenced by Apply Engine Jobs.
  7. Update the Daemon tasks with the new port.
  8. If using the z/OS Master Controller, update the SQDPARM Lib members for the MASTER tasks with the new Daemon port.
  9. Start all the SQDATA tasks.
TLS Support on Linux

Linux clients connecting to z/OS Daemons running under IBM's (AT-TLS) and servicing z/OS Change Data Captures now support the TLS handshake. TLS connections to Change Data Capture components running on AIX and Linux are not supported at this time.

The only external prerequisite to enabling TLS on Linux is the GnuTLS secure communications library which implements TLS, DTLS and SSL protocols and technologies including the C language API used by Connect CDC SQData on Linux. On RPM-based Linux distributions, YUM (Yellowdog Updater Modified) can be used to install GnuTLS. For more information regarding YUM or other Package Managers see your Linux Systems Administrator.

Linux clients making TLS connections to z/OS will by default perform the "typical TLS handshake", where the client uses the server's certificate for authentication and then proceeds with the rest of the handshake process. Specific changes to connection parameters are described below.

The following steps are all that are required on the client side to implement TLS on Linux for the "typical" client side handshake performed by an Engine:
  1. Request the new Port number that was assigned to the z/OS Daemon.
  2. Stop all running Connect CDC SQData Linux Engines, the local Daemon need not be stopped.
  3. Update Engine source DATASTORE URL to use the "cdcs://" URL syntax type to specify that a secure TLS connection is requested (changed from "cdc://" to "cdcs://").
  4. Update Engine source DATASTORE URL to use the TLS z/OS Daemon port.
    Note: The port number is typically implemented using a Parser parameter so script changes may not be required.
  5. Parse the Apply Jobs to create a new <engine>.prc file in the applicable directory.
  6. Start the Connect CDC SQData Linux clients.
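The URL change in step 3 can be sketched as a simple rewrite; the helper name is hypothetical and the host/agent values used in the test are placeholders:

```python
def to_secure_url(url: str) -> str:
    """Rewrite a DATASTORE URL from cdc:// to cdcs:// to request TLS."""
    if url.startswith("cdc://"):
        return "cdcs://" + url[len("cdc://"):]
    return url  # already cdcs:// or another scheme: leave unchanged
```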
Note:
  • If the SQDMON utility is used to connect to a remote z/OS Daemon running under IBM's AT-TLS, for example to request an "inventory" or to "display" the status of a publisher, the new --tls parameter must be specified:
    Syntax
    sqdmon inventory //<host_name> [-s port_num | --service=port_num] [--identity=<path_to_nacl_id/nacl_id>] [--tls]
  • If the SQDUTIL is used to connect to a remote Publisher running under IBM's (AT-TLS), to copy/move CDC records to a file, the "cdcs://" URL syntax type must be specified:
    Syntax
    sqdutil copy | move cdcs://<host_name>:<port_num>/<agent_name> <target_url> | DD:<dd_name>
  • If the GnuTLS library is not installed in a standard location that is included in the "default library path", the product will be unable to locate the library. The best option in that case is to set the following environment variable to the full path and file name of libgnutls.so:
    SQDATA_GNUTLS_LIBRARY=<path to>/libgnutls.so
  • Although uncommon, if yours is a Mutual Authentication (mutual auth) implementation, in which the server also authenticates the client, then two additional environment variables must be used to identify the client certificate and key. The server will use the client-side certificate to authenticate the client before proceeding with the rest of the handshake.
    SQDATA_TLS_CERT=</directory/file_name>
    SQDATA_TLS_KEY=</directory/file_name>
    The Linux client will by default use the system installed Certificate Authority (CA). If a local CA file is used, it must be specified using a third environment variable:
    SQDATA_TLS_CA=</directory/file_name>
Release Component Description
4.0.25 Db2/z Log Capture (SQDDB2C), Db2/LUW Capture and Oracle Capture Optimize processing of compensation records in the undo/redo log to improve performance of full or partial transaction rollbacks.
  IMS Log Capture (SQDIMSC) Filter Exit: suppress commits for transactions where all cdc records were omitted by the filter exit.
  IMS Log Capture (SQDIMSC) Performance Optimization - buffer multiple IMS CDC records into a single zlog message to reduce the number of IXGWRITE requests.
  CDCzLog Publisher (SQDZLOGC) Related to SQDIMSC performance optimization - support multiple IMS CDC records in a single zlog message.
  Db2/z Log Capture (SQDDB2C) Correct issue with monitoring WTOs not being displayed when the commands are entered.
  Apply Engine (SQDENG) Support auto-replication into a delimited type format used by the Syncsort Mimix Share component.
4.0.23 Replicator Engine (SQDRPL)
  • Correct issue with converting dec(n:0) type columns to JSON format.
  • Support columns defined as IDENTITY type columns.
  • Correct issue with default Kafka topic not being recognized.
  CDCzLog Publisher (SQDZLOGC) Correct issue with incorrect display of WTO monitoring messages.
  Db2/z Log Capture (SQDDB2C)
  • Enhancement: add --ptk=13 to suppress issuing Db2/z commits during log processing where archive logs are on tape devices.
  • Do not allow any Field Specification statements (INVALID, IFSPACE, etc.) for fields typed as DTBINARY.
  • Correct issue with timestamp columns being formatted incorrectly during a table REFRESH operation.
4.0.19 Db2/z Log Capture (SQDDB2C) Correct abend 0C4 when performing a REFRESH twice in rapid succession for the same table.
4.0.18 IMS Log Capture (SQDIMSC)
  • Correct issue with incorrect change operation being used (U vs R) to check for updates that do not have a before image, resulting in the before image pointer being set to the beginning of the concatenated key.
  • Add a diagnostic PTK (--ptk=15) to disable logstream writes.
  Db2/z Log Capture (SQDDB2C) Correct issue with incorrect transient storage file system (vFILE) values being displayed when capture is not mounted.
4.0.17 Apply Engine (SQDENG) Correct issue with ODBC driver initialization if the Db2 library could not be loaded.
  Replicator Engine (SQDRPL) Include ICU in the build to accommodate CCSID conversions that were not from 1047, 819 or 1208.
4.0.16 ISPF Add product installer dialogs.
  Apply Engine (SQDENG) Correct issue with integer to string conversion truncating the low-order bytes when the field is too short to fit the converted value.
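The corrected principle above can be sketched as follows: a too-short target field should be reported rather than silently resolved by dropping low-order digits. The function name and the padding convention are assumptions for illustration:

```python
def int_to_fixed_str(value: int, width: int) -> str:
    """Convert an integer to a fixed-width string field, refusing to truncate."""
    s = str(value)
    if len(s) > width:
        raise ValueError(f"value {value} does not fit in {width} characters")
    return s.rjust(width)
```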
  Db2/z Log Capture (SQDDB2C) Add support for GRAPHIC and VARGRAPHIC data types.
4.0.15 Apply Engine (SQDENG)
  • Tolerate IDENTITY keywords in the DDL that occur without any other options.
  • Accommodate oversized char/varchar columns to deal with UTF-8 conversion.
4.0.13 IMS Unload Utility (SQDIMSU) Add support for a record filter exit to the unload process.
  IMS Log Capture (SQDIMSC)
  • Add support for a record filter exit in the IMS log capture process.
  • Allow capture to be stopped when in paused state.
  Db2/z Log Capture (SQDDB2C)
  • Correct issue with detecting an out of space condition in the transient storage file system (vfile).
  • Correct issue with Db2/LUW Capture header out of sync with Db2 Log Capture header.
  Apply Engine (SQDENG) (Kafka Apply) Correct issue with engine deadlocking when a non-zero return code is encountered in an acknowledgement from Kafka.
4.0.12 Apply Engine (SQDENG)
  • Add user metadata support for JSON/AVRO formatted messages.
  • Correct deadlock condition with forked engine (super loader) caused by a double delete on a serialization latch.
  • Correct issue with CCSID_CONVERT function not handling NULL data correctly.
  Keyed File Compare (Differ) (SQDDFCDC) Correct issue with buffer overrun with records approaching 32K in length.
  Db2/z Log Capture (SQDDB2C)
  • Add support for GRAPHIC and VARGRAPHIC data types.
  • Correct issue with capture hanging when transient file system is temporarily in FULL status.
  Replicator Engine (SQDRPL)
  • Many fixes and tweaks related to replication to JSON and AVRO.
  • Make SQD/CDC message selection based on runtime, not compile time.
  • Correct bug in decoding /key and /root key in the Kafka URL.
  • Correct issue with missing --name / -n option in command line syntax.
4.0.9 Apply Engine (SQDENG) Add user metadata support for JSON/Avro formatted messages.
  Keyed File Compare (Differ) (SQDDFCDC) Correct issue with buffer overrun with records approaching 32K in length.
  Db2/z Log Capture (SQDDB2C)
  • Add support for GRAPHIC and VARGRAPHIC data types.
  • Correct various issues with RENAME table for schema evolution.
  • Support VARGRAPHIC data types.
  • Continuation of the schema evolution feature started in 4.0.1 – RENAME, DROP COLUMN.
  Replicator Engine (SQDRPL)
  • Correct issue with gathering stats from worker threads for report consolidation.
  • Continuation of Db2 schema evolution feature for Kafka and Hadoop targets.
4.0.6 All Components Suppress the Private Key I/O error message, which is informational and has no impact on the component operation.
  Db2/z Log Capture (SQDDB2C) Continuation of the schema evolution feature for V4
  CDCzLog Publisher (SQDZLOGC) Correct issue with excessive sleep times when current in the logstream.
  Replicator Engine (SQDRPL) Continuation of Db2 schema evolution feature for V4.

4.0.1

  • Modules now Distributed as Program Objects (vs Object Modules)
  • Db2/z Log Capture (SQDDB2C)
  • Replicator Engine (SQDRPL)
    • Db2/z Sources (Initially)
    • Parallel apply threads for higher throughput when applicable (Kafka)
    • No DDL required on target side – metadata sent from capture
    • JSON/AVRO messages adjusted automatically when source schemas change
    • Full integration with the Confluent Schema Registry
JCL / PROC Changes z/OS
  • V4 JCL, ISPF, PROC, PARM, DBRM and Sample Files Distributed as: SQDATA.V400nn.<type>.TERSE
  • V4 Program Object Files Distributed as: SQDATA.V400nn.PGMOBJ.TERSE
  • JCL Changes
    • ALLOCDS - allocate V4 program object library
    • UNTERLIB - unterses all Connect CDC SQData V4 z/OS libraries
    • UNTEROBJ - unterses the program object modules only
    • BINDSQD - binds new plan/package SQDV4000
    • All SQDLINK jobs - support linking of program object modules
  • PROC Changes
    • SQDLINK - supports linking of program object modules
Db2/z Schema Evolution
  • Requires Db2 V12
  • Sends Metadata Updates to Apply side
    • Replicator dynamically changes target JSON/AVRO to match altered table
    • Apply Engine with DDL will stop if incompatible with altered table
  • Supported Alter Events
    • Add Column
    • Drop Column
    • Rename Column
    • Alter Column Length
    • Alter Column Type → Integer to DECIMAL, CHAR to VARCHAR, etc.
    • Create Table (allows for preregister of tables that have not yet been created)
    • Drop Table
    • Rename Table
  • Requires DATA CAPTURE CHANGES on Two (2) Catalog Tables
    • SYSIBM.SYSTABLES
    • SYSIBM.SYSCOLUMNS
Adding a Non-Existent Table to Capture CAB
  • Via Batch Job
    • Use --pending keyword in SQDCONF add command
    • Table changes will be captured
  • Via ISPF Panels
    • Must enter Table manually - cannot select from catalog if it does not exist
    • Set Pending to Y (default is N)
    • Table changes will be captured when table is created
Replicator Engine
  • High-Performance / High-Throughput
  • Parallel Apply to Kafka or Relational
  • Transactions Kept in Order
  • Schema-Less - No DDL Required
  • Supports Schema Evolution - Automatic update of AVRO schemas in Confluent Repository for Kafka and Hadoop