This release contains key changes and new features.
- Modules now Distributed as Program Objects (vs Object Modules)
- Db2/z Log Capture (SQDDB2C): Improvements and New features including Schema Evolution
- Replicator Engine (SQDRPL): "The Replicator", a utility-like, set-and-forget Engine that dynamically accommodates Db2/z Schema changes
- Improved connection management between Publishers and Engines
| Release | Component | Description |
|---|---|---|
| 4.0.56 | Apply (SQDENG) and Replicator (SQDRPL) Engines | |
| 4.0.55 | Apply (SQDENG) and Replicator (SQDRPL) Engines | Correct issue with the length returned from CONVERT CCSID when converting UTF-16 to another code page. |
| 4.0.54 | Replicator Engine (SQDRPL) | Add TCP/IP soft keep-alive messages to Publishers to avoid socket disconnection during periods of inactivity. |
| 4.0.53 | Apply Engine (SQDENG) | Corrected an issue where IMS Root Segment Keys were lost when using REMAP. |
| 4.0.52 | | Version bump to correct a packaging error causing 4.0.51 to report itself as 4.0.50. |
| 4.0.51 | Apply Engine (SQDENG) | z/OS System Authorization Facility (SAF) capability, added in V4.0.48, contained a bug that has been corrected. See V4.0.48 for details. |
| 4.0.50 | Apply Engine (SQDENG) | IMS DBD Parser updated to support additional syntax elements defined as metadata for external applications. See the IBM documentation Defining DBD and PSB metadata to the generation utilities. |
| 4.0.48 | SQDCONF and SQDUTIL utilities | z/OS System Authorization Facility (SAF) capability has been added for the utility programs SQDCONF and SQDUTIL. These utilities can perform a number of functions, ranging from modification and status display of a Capture agent to the cleaning of a z/OS System LogStream. This enhancement provides for the optional restriction of specific commands using SAF and the customer's underlying security system (RACF, ACF2 or Top Secret) using classes and profiles. See z/OS Security Options under Operational Considerations in the two utility reference manuals for more details. |
| 4.0.47 | IMS z/OS TM Exit Capture | Addresses SQD0526I (IMS CDC exit globally disabled) messages. GENCDC must be run again for all exits using JCL similar to the sample member GENICDCL included in the distribution. See Generate IMS TM EXIT Capture Agent for more details. It is critical that the XPARM is loaded into ECSA when an exit is disabled to prevent the SQD0526I message from being issued each time the Exit is called. Precisely highly recommends loading the XPARM into ECSA as noted, regardless of whether the capture is disabled or not. Finally, ALL of the ESS modules must be re-linked and updated in the IMS RESLIB or JOBLIB/STEPLIB before IMS is brought up. |
| | IMS Log Capture (SQDIMSC) | Fix an invalid safe-restart LSN, which existed under obscure conditions at normal termination, to prevent an IMSC_BAD_RECORD error during capture restart. Note: The following comments about applying product maintenance on z/OS are repeated from the Installation Guide. While maintenance releases generally involve only a subset of Connect CDC SQData modules, updates are packaged as a full installation. Migration of an installation from one environment to the next, development to production for example, may involve little more than copying the LOADLIB members to the production libraries. There is nothing that ties the linked modules directly to the LOADLIB. If, however, your environments have different versions of Db2 or IMS, then the safest course is to re-link in the upper environments. This is because we include certain IMS, Db2, CICS and MQ modules. If you are using the product with those modules in lower environments and they are compatible with those in upper environments, then no re-link in the upper environments is needed. If they are not compatible, then you must re-link. Important: It is highly recommended that you back up your current object and load module libraries before applying any maintenance release, so you can quickly back out the maintenance if any problems arise. |
| 4.0.46 | Replicator Engine (SQDRPL) (only) | Ignore the CDCOP value 'z' created by a Db2/z capture when performing a capture based target refresh. The 'z' is intended to trigger target Delete ALL processing which is meaningless for JSON/AVRO targets. |
| | Apply (SQDENG) and Replicator (SQDRPL) Engines | When connecting to a Confluent Registry, the user-id and password specified in the .netrc file will be used unless directly overridden in the Confluent Registry URL (userid/password@...). |
| | Db2/z Log Capture (SQDDB2C) | Enhancement supporting EDITPROC processing for captured data. An edit procedure, assigned to a table by an EDITPROC clause on the CREATE TABLE statement, can be used to transform an entire row and is often used for encryption of data. A new startup parameter, --editproc, instructs the capture to request that IFI invoke an EDITPROC, if applicable to the source table, to perform decryption of the log record. Note: --editproc is dependent on IBM's Db2/z APAR PK79207, delivered in 2009, which provided EDITPROC support in IFCID 306. This will become a standard part of Db2/z Version 13 (Apollo). |
| | Db2/z Log Capture (SQDDB2C) | Enhancement allowing the normal Metadata scan at the start of capture to be skipped. The Metadata scan is used for tracking DDL changes and is required to support Schema Evolution in subscribing Replicator Engines. When all subscribers to a Db2/z Capture are Apply Engines, the initial scan can be skipped. While this eliminates one scan of the Db2/z logs going back to the Safe Restart point, the time saved will only be noticeable when a Capture has been stopped long enough to generate a very large number of logs. Note: Verification that DDL tracking has been disabled will be visible in the first few lines of the SQDLOG DD output when the Db2/z Log Capture is started. |
| 4.0.45 | IMS Batch Feed Exit | Added for IMS Log Capture |
| | Replicator Engine (SQDRPL) | Corrected a malformed AVRO timestamp issue. |
| | Apply Engine (SQDENG) | Corrected OPTIONS METADATA irregularities in the Confluent Schema Registry. |
| | Db2/z Log Capture (SQDDB2C) | Corrected MS SQL Server ODBC Time data type formatting issues. |
| 4.0.43 | Db2/z Log Capture (SQDDB2C) | Enhancement supporting capture on individual members of a Db2/z data sharing group. |
| | Replicator Engine (SQDRPL) | Correction to JSON representation of Db2 TIME. |
| 4.0.42 | Apply Engine (SQDENG) | Parser updated to support additional IMS DBD syntax elements defined as metadata for external applications. See the IBM documentation Defining DBD and PSB metadata to the generation utilities. |
| | Apply (SQDENG) and Replicator (SQDRPL) Engines and the sqdutil and sqdmon utilities on Linux | Attempts to make a TLS connection to z/OS fail when the GnuTLS library is not found; these components will now report the error and terminate without crashing. A non-standard installation location is the most likely reason the libgnutls.so file cannot be found. The RHEL7 Linux distro and its default package manager, for example, place it in /usr/lib64, a location included in the default library path. If the library has been installed elsewhere, the following environment variable can be used, though Precisely recommends doing so only in a test environment: SQDATA_GNUTLS_LIBRARY=<path_to_softlink>/libgnutls.so |
| 4.0.41 | Db2/z Log Capture (SQDDB2C) | Correction addressing a corner case that affected a request to the Log Miner at Capture initiation. In more technical terms, the Db2/z Capture now tolerates a Db2/z IFI 306 request not repositioning the LRSN after the metadata discovery phase. Note: AIX, Linux and Windows builds were not produced as part of this release. |
| 4.0.40 | Oracle LogMiner Capture (SQDLOGM) | Two optional parms were added to the capture configuration CAB file to control the wait interval following an "all caught up" scenario. The two parameters, with their corresponding default values, are --small-stall=250 (ms) and --large-stall=1000 (ms). Both values represent fixed wait intervals; they are not incremental and do not produce longer and longer waits. Previously the default was 10 seconds, which came into use when the Oracle LogMiner continuous mine option was lost due to the requirements of both Oracle RAC and Oracle 19. The new parms may only be set with sqdconf modify and become effective immediately following an sqdconf stop and sqdconf start sequence; no apply is required. The --small-stall parm value applies to the first "wait" after capture is caught up. If, after the --small-stall wait, the next request also returns no changed data, the --large-stall parm value is used for subsequent "waits" until LogMiner once again returns data to the capture. The effectiveness of the wait intervals is solely dependent on transaction patterns, which are hard if not impossible to analyze; these two parameters are "tuning" values. Precisely recommends all implementations begin with the defaults and then tune for your environment. If --small-stall is too short, there is an increased potential for wasting CPU performing LogMiner calls, current LSN queries and other work. The sqdconf display shows the current values only if you have overridden the defaults. |
| 4.0.39 | Apply Engine (SQDENG) | Correction affecting Relational targets where compensation was failing on multi-key column tables. |
| 4.0.37 | Db2/z Log Capture (SQDDB2C) | Capture will be more responsive when in a caught-up situation. When the Db2/z Log Miner returns a complete buffer, more data is immediately requested. When there are no outstanding UOWs to publish and no new data returned by the Db2/z Log Miner, the capture gradually extends the wait period between Log Miner requests. While that will still occur, it will be more aggressive and use a shorter initial wait. Previously, if a partial buffer was received, Capture would wait 3 seconds before requesting more data from the log and, if no data was received, increase the wait by 1 second up to a 5 second maximum. The initial wait following a partial buffer has been reduced to 250 ms, followed by 1 second increments up to the maximum 5 second wait. The change provides sub-second latency on lightly loaded systems. |
| | Oracle LogMiner Capture (SQDLOGM) | Corrected corner case in a very low activity RAC environment where the Oracle SCN (system change number) of the last log record processed has still not advanced by the time the Capture makes a subsequent request to the Oracle LogMiner. |
| | Replicator Engine (SQDRPL) | Corrected key-hash inconsistencies in JSON/AVRO topics that resulted in different partition assignments for insert/update/delete activity for records having the same source key(s). |
| 4.0.36 | AVRO | |
| | IMS Log Capture (SQDIMSC) | |
| | IMS Batch Log Feeder Exit | SQDAIBLF must be regenerated via GENXPARM/GENCDC if it was created before release 4.0.10. |
| 4.0.35 | Replicator Engine (SQDRPL) | Correction when replicating Db2 DECIMAL data type to AVRO. |
| | Apply (SQDENG) and Replicator (SQDRPL) Engines | Correction when replicating COBOL binary fields with implied decimal places, e.g. a PIC S9(7)V99 BINARY value of 50.00 was output as 5000. |
| 4.0.34 | IMS Log Capture (SQDIMSC) | Fix the configuration maintenance process to correctly process adding a "feed". Note: Some modify arguments are global (apply at the cab level), and some apply to a specific ssid. The --add-feed, --remove-feed, and --filter-exit parameters are global and do not accept --ssid; they will fail if --ssid is specified. The --zlog parameter can be global or can apply to a specific ssid. The --lsn, --inactive, and --active parameters all require --ssid and will fail without it. |
| 4.0.33 | Oracle LogMiner Capture (SQDLOGM) | |
| 4.0.32 | Apply Engine (SQDENG) | Correct issue with Db2/z CDC records with VARCHAR columns containing binary zeros. Data in such a source column was truncated from the first occurrence of binary zeros. |
| | Oracle LogMiner Capture (SQDLOGM) | Oracle 19 stability improvements. |
| 4.0.31 | Oracle LogMiner Capture (SQDLOGM) | Correct issue related to Oracle V19 continuous mining. |
| | Apply Engine (SQDENG) and SQDAEMON | Correct issue related to handling of sqdmon commands processed by the daemon that prevented the agent's context from being found, making the Engine appear unresponsive. |
| | Apply Engine (SQDENG) | New DATASTORE keyword QUERY added to support a SELECT clause. When the source DATASTORE is an RDBMS being read remotely for an initial load or refresh process, a SELECT statement with a WHERE clause can now be specified to limit the source data being selected. No other changes are required. |
| 4.0.29 | Oracle LogMiner Capture (SQDLOGM) | Improvements in log handling to address ORA-1291 errors. |
| 4.0.28 | IMS TM Exit Capture | XPARM capacity limit expanded, tripling the number of objects that can be specified for capture by a single instance of the Exit. |
| | IMS Log Capture (SQDIMSC) | Updated to handle gigantic cascade delete transactions. |
| | IMS Apply Engine (SQDENG) | Improved delete performance. |
| | Oracle LogMiner Capture (SQDLOGM) | Improve handling of off-line logs (ORA-1291). |
4.0.26
- Precisely Rebranding
- TLS Support - Security Concerns are increasingly leading to the implementation of security protocols between customer data centers and in some cases even within those data centers. Transport Layer Security (TLS) is supported between all components on z/OS and with this release between Connect CDC SQData clients on Linux and Change Data Capture components on z/OS, only.
- Oracle LogMiner Capture (SQDLOGM): Added support for Oracle 19, which no longer supports the continuous_mine option.
Connect CDC SQData already operates transparently on z/OS under IBM's Application Transparent Transport Layer Security (AT-TLS). Under AT-TLS no changes were required to the base code and only port numbers in the configuration need to be changed, as described below. For more information regarding AT-TLS see your z/OS Systems Programmer.
Once IBM's AT-TLS has been implemented on z/OS, the following steps are all that is required for the Daemon, Capture and Publisher components (and, on z/OS only, the SQDATA Apply Engine and SQDUTIL) to be in compliance with TLS:
- Request the new secure port to be used by the Daemon.
- Request Certificates for MASTER, Daemon and APPLY Engine Tasks.
- Stop all SQDATA tasks
- Update APPLY Engine source scripts with the new Daemon port. Note: ports are typically implemented using a Parser parameter, so script changes may not be required.
- Update SQDUTIL JCL and/or SQDPARM lib members, if any, with the new Daemon port.
- Run Parse Jobs to update the Parsed Apply Engines in the applicable library referenced by Apply Engine Jobs.
- Update the Daemon tasks with new port.
- If using the z/OS Master Controller, update the SQDPARM Lib members for the MASTER tasks with new Daemon port.
- Start all the SQDATA tasks.
Linux clients connecting to z/OS Daemons running under IBM's AT-TLS and servicing z/OS Change Data Captures now support the TLS handshake. TLS connections to Change Data Capture components running on AIX and Linux are not supported at this time.
The only external prerequisite to enabling TLS on Linux is the GnuTLS secure communications library which implements TLS, DTLS and SSL protocols and technologies including the C language API used by Connect CDC SQData on Linux. On RPM-based Linux distributions, YUM (Yellowdog Updater Modified) can be used to install GnuTLS. For more information regarding YUM or other Package Managers see your Linux Systems Administrator.
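As a sketch of that prerequisite setup (package and library names assume a RHEL-style, RPM-based system; your distribution and package manager may differ):

```shell
# Sketch only: install the GnuTLS runtime on an RPM-based distribution
# (requires root privileges).
yum install -y gnutls

# Confirm the shared library is visible in the default library path,
# e.g. /usr/lib64 on RHEL7:
ldconfig -p | grep libgnutls
```

If the second command prints nothing, the library is outside the default library path and the SQDATA_GNUTLS_LIBRARY variable described below may be needed.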
Linux clients making TLS connections to z/OS will by default perform the "typical TLS handshake", where the client uses the server's certificate for authentication and then proceeds with the rest of the handshake process. Specific changes to connection parameters are described below.
- Request the new Port number that was assigned to the z/OS Daemon.
- Stop all running Connect CDC SQData Linux Engines, the local Daemon need not be stopped.
- Update Engine source DATASTORE URLs to use the "cdcs://" URL syntax type to specify that a secure TLS connection is requested (changed from "cdc://" to "cdcs://").
- Update Engine source DATASTORE URLs to use the TLS z/OS Daemon port. Note: The port number is typically implemented using a Parser parameter, so script changes may not be required.
- Parse the Apply Jobs to create a new <engine>.prc file in the applicable directory.
- Start the Connect CDC SQData Linux clients.
- If the SQDMON utility is used to connect to a remote z/OS Daemon running under IBM's AT-TLS, for example to request an "inventory" or to "display" the status of a publisher, a new --tls parameter must be specified. Syntax: sqdmon inventory //<host_name> [-s port_num | --service=port_num] [--identity=<path_to_nacl_id/nacl_id>] [--tls]
- If SQDUTIL is used to connect to a remote Publisher running under IBM's AT-TLS, to copy/move CDC records to a file, the "cdcs://" URL syntax type must be specified. Syntax: sqdutil copy | move cdcs://<host_name>:<port_num>/<agent_name> <target_url> | DD:<dd_name>
- If the GnuTLS library is not installed in a standard location included in the "default library path", Connect CDC SQData will be unable to locate it. The best option in that case is to set the following environment variable to the full path and file name of libgnutls.so: SQDATA_GNUTLS_LIBRARY=<path to>/libgnutls.so
- Although uncommon, if yours is a Mutual Authentication (Mutual Auth) implementation, in which the server also authenticates the client, then two additional environment variables must be used to identify the client certificate and key: SQDATA_TLS_CERT=</directory/file_name> and SQDATA_TLS_KEY=</directory/file_name>. The server will then use the client-side certificate to authenticate the client before proceeding with the rest of the handshake.
- The Linux client will by default use the system-installed Certificate Authority (CA). If a local CA file is used, it must be specified using a third environment variable: SQDATA_TLS_CA=</directory/file_name>
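Taken together, a minimal sketch of the Linux client environment might look like the following; all file paths are hypothetical placeholders for your actual installation locations.

```shell
# All paths below are hypothetical placeholders; substitute the actual
# locations from your installation.

# GnuTLS library outside the default library path
# (Precisely recommends this override only in a test environment):
export SQDATA_GNUTLS_LIBRARY=/opt/gnutls/lib/libgnutls.so

# Mutual Authentication only: client certificate and private key:
export SQDATA_TLS_CERT=/etc/sqdata/tls/client-cert.pem
export SQDATA_TLS_KEY=/etc/sqdata/tls/client-key.pem

# Optional local Certificate Authority file
# (the system-installed CA is used by default):
export SQDATA_TLS_CA=/etc/sqdata/tls/ca-cert.pem
```

These exports would typically be placed in the profile of the user that starts the Connect CDC SQData Linux clients so they are set before the Engines connect.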
| Release | Component | Description |
|---|---|---|
| 4.0.25 | Db2/z Log Capture (SQDDB2C), Db2/LUW Capture and Oracle Capture | Optimize processing of compensation records in the undo/redo log to improve performance of full or partial transaction rollbacks. |
| | IMS Log Capture (SQDIMSC) | Filter Exit: suppress commits for transactions where all CDC records were omitted by the filter exit. |
| | IMS Log Capture (SQDIMSC) | Performance Optimization: buffer multiple IMS CDC records into a single zlog message to reduce the number of IXGWRITE requests. |
| | CDCzLog Publisher (SQDZLOGC) | Related to the SQDIMSC performance optimization: support multiple IMS CDC records in a single zlog message. |
| | Db2/z Log Capture (SQDDB2C) | Correct issue with monitoring WTOs not being displayed when the commands are entered. |
| | Apply Engine (SQDENG) | Support auto-replication into a delimited format used by the Syncsort MIMIX Share component. |
| 4.0.23 | Replicator Engine (SQDRPL) | |
| | CDCzLog Publisher (SQDZLOGC) | Correct issue with incorrect display of WTO monitoring messages. |
| | Db2/z Log Capture (SQDDB2C) | |
| 4.0.19 | Db2/z Log Capture (SQDDB2C) | Correct abend 0C4 when performing a REFRESH twice in rapid succession for the same table. |
| 4.0.18 | IMS Log Capture (SQDIMSC) | |
| | Db2/z Log Capture (SQDDB2C) | Correct issue with incorrect transient storage file system (vFILE) values being displayed when capture is not mounted. |
| 4.0.17 | Apply Engine (SQDENG) | Correct issue with ODBC driver initialization if the Db2 library could not be loaded. |
| | Replicator Engine (SQDRPL) | Include ICU in the build to accommodate CCSID conversions that were not from 1047, 819 or 1208. |
| 4.0.16 | ISPF | Add product installer dialogs. |
| | Apply Engine (SQDENG) | Correct issue with integer-to-string conversion truncating the low-order bytes when the field is too short to fit the converted value. |
| | Db2/z Log Capture (SQDDB2C) | Add support for GRAPHIC and VARGRAPHIC data types. |
| 4.0.15 | Apply Engine (SQDENG) | |
| 4.0.13 | IMS Unload Utility (SQDIMSU) | Add support for a record filter exit to the unload process. |
| | IMS Log Capture (SQDIMSC) | |
| | Db2/z Log Capture (SQDDB2C) | |
| | Apply Engine (SQDENG) (Kafka Apply) | Correct issue with engine deadlocking when a non-zero return code is encountered in an acknowledgement from Kafka. |
| 4.0.12 | Apply Engine (SQDENG) | |
| | Keyed File Compare (Differ) (SQDDFCDC) | Correct issue with buffer overrun with records approaching 32K in length. |
| | Db2/z Log Capture (SQDDB2C) | |
| | Replicator Engine (SQDRPL) | |
| 4.0.9 | Apply Engine (SQDENG) | Add user metadata support for JSON/Avro formatted messages. |
| | Keyed File Compare (Differ) (SQDDFCDC) | Correct issue with buffer overrun with records approaching 32K in length. |
| | Db2/z Log Capture (SQDDB2C) | |
| | Replicator Engine (SQDRPL) | |
| 4.0.6 | All Components | Suppress the Private Key I/O error message, which is informational and has no impact on the component operation. |
| | Db2/z Log Capture (SQDDB2C) | Continuation of the schema evolution feature for V4. |
| | CDCzLog Publisher (SQDZLOGC) | Correct issue with excessive sleep times when current in the logstream. |
| | Replicator Engine (SQDRPL) | Continuation of the Db2 schema evolution feature for V4. |
4.0.1
- Modules now Distributed as Program Objects (vs Object Modules)
- Required for Metal C - used for zIIP enablement
- Unterse job(s) modified to unterse program objects
- Linkedit JCL and PROCs updated
- Db2/z Log Capture (SQDDB2C)
- Db2 Schema Evolution introduced to allow DBA to alter tables without having to worry about impact on capture
- New Db2 Package/Plan → SQDV4000 → allows for coexistence with V3
- New "Pending" Tables
- Replicator Engine (SQDRPL)
- Db2/z Sources (Initially)
- Parallel apply threads for higher throughput when applicable (Kafka)
- No DDL required on target side – metadata sent from capture
- JSON/AVRO messages adjusted automatically when source schemas change
- Full integration with the Confluent Schema Registry
- V4 JCL, ISPF, PROC, PARM, DBRM and Sample Files Distributed as: SQDATA.V400nn.<type>.TERSE
- V4 Program Object Files Distributed as: SQDATA.V400nn.PGMOBJ.TERSE
- JCL Changes
- ALLOCDS - allocate V4 program object library
- UNTERLIB - unterses all Connect CDC SQData V4 z/OS libraries
- UNTEROBJ - unterses the program object modules only
- BINDSQD - binds new plan/package SQDV4000
- All SQDLINK jobs - support linking of program object modules
- PROC Changes
- SQDLINK - supports linking of program object modules
- Requires Db2 V12
- Sends Metadata Updates to Apply side
- Replicator dynamically changes target JSON/AVRO to match altered table
- Apply Engine with DDL will stop if incompatible with altered table
- Supported Alter Events
- Add Column
- Drop Column
- Rename Column
- Alter Column Length
- Alter Column Type → Integer to DECIMAL, CHAR to VARCHAR, etc.
- Create Table (allows for preregister of tables that have not yet been created)
- Drop Table
- Rename Table
- Requires DATA CAPTURE CHANGES on Two (2) Catalog Tables
- SYSIBM.SYSTABLES
- SYSIBM.SYSCOLUMNS
- Via Batch Job
- Use --pending keyword in SQDCONF add command
- Table changes will be captured
- Via ISPF Panels
- Must enter Table manually - cannot select from catalog if it does not exist
- Set Pending to Y (default is N)
- Table changes will be captured when table is created
- High-Performance / High-Throughput
- Parallel Apply to Kafka or Relational
- Transactions Kept in Order
- Schema-Less - No DDL Required
- Supports Schema Evolution - Automatic update of AVRO schemas in Confluent Repository for Kafka and Hadoop
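The batch pre-registration described above (the --pending keyword on the SQDCONF add command) might look like the following sketch. The cab file path and table name are hypothetical placeholders, and the surrounding argument list may differ in your installation; see the SQDCONF reference for the authoritative syntax.

```shell
# Hypothetical sketch: pre-register a table that does not yet exist in
# the Db2 catalog so that its changes are captured once it is created.
# The cab file path and table name below are placeholders only.
sqdconf add /home/sqdata/db2cdc/db2cdc.cab \
        --table=SQDATA.NEW_TABLE \
        --pending
```

After the table is created and DATA CAPTURE CHANGES is in effect, the capture begins publishing its changes without a restart of the registration step.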