The OPTIONS command is used to specify certain global script behaviors. The applicable options depend on the nature of the source and target datastores involved. Options that do not apply to a particular datastore are silently ignored.
OPTIONS <option> [, <option>];
The following options are available:
**CDCOP ( '<insert_char>', '<update_char>', '<delete_char>' )**
When dealing with Change Data, one piece of relevant information is the "Change Op", which indicates the kind of operation being processed. For historical reasons, the Change Op has been 'I' for insert, 'D' for delete and 'R' for update (or "replace" in IMS parlance). When dealing with relational data it is more natural to refer to an update as 'U'; still, for backward compatibility reasons, the default cannot be changed. This option allows you to choose the three characters used to represent the three different operations. If you choose to use it, it is strongly recommended that you stick to CDCOP('I', 'U', 'D').
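For example, a script adopting the relational convention for updates would specify the recommended values:
OPTIONS CDCOP('I','U','D');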
**CONFLUENT REPOSITORY <registry_url>**
Applies only to AVRO formatted Kafka targets. Specifies the URL of the Confluent Schema Registry used to retrieve and register AVRO schemas. The presence of this option also triggers the generation of a Confluent header in front of each AVRO encoded Kafka topic written.
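A sketch, assuming a hypothetical registry host listening on the default Confluent port 8081:
OPTIONS CONFLUENT REPOSITORY 'http://schema-registry.example.com:8081';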
**USE AVRO COMPATIBLE NAMES**
JSON Type Datastore targets only. The Apply Engine automatically transforms source datastore Description field/column names into what is referred to as "lower snake_case": every alphabetic character is lowercased and every non-alphanumeric character becomes '_' (underscore). While the AVRO Type Datastore specification excludes only the '-' (dash) character, Mainframe Datastore Descriptions, such as COBOL Segment Descriptions and DB2 Column Names, tend to use all caps, which is frowned upon in other environments and is therefore changed to lower case. Testing and validating results formatted as AVRO can be a challenge because the schema is separated from the data and the formatting that separates individual data points is cryptic. Using this option together with the temporary specification of JSON rather than AVRO formatting produces typical JSON output, but with the same literal names that will be present in the AVRO schemas, making it much easier to validate output before moving to AVRO formatted target Datastores.
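For example, with the option below in effect, a COBOL field named DEPT-NAME (an illustrative name) would appear in the JSON output as dept_name:
OPTIONS USE AVRO COMPATIBLE NAMES;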
**NAMESPACE '<name_space>'**
Optional name_space assigned to the objects created; it appears in the JSON/AVRO (header/payload) as the "namespace":"<name_space>" pair.
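A sketch, assuming a hypothetical namespace value:
OPTIONS NAMESPACE 'com.example.cdc';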
**METADATA (<metadata_name> [AS <metadata_alias>] [, <metadata_name> [AS <metadata_alias>]] [, ...])**
Metadata header tags for JSON or AVRO target datastores ONLY. See METADATA Header Tags and examples.
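A sketch using the "timestamp" and "stck" tags discussed under ISO8601 PRECISION below; the alias is illustrative:
OPTIONS METADATA(timestamp AS event_time, stck);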
**APPLICATION ENCODING SCHEME = <ccsid>**
By default the Engine assumes that character encoding on z/OS is Code Page 1047, on Linux and UNIX Code Page 819, and on Windows Code Page 1252. Note: Code page translation performed by the database communications interface will normally be sufficient and is the recommended configuration.
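For example, to override the platform default with UTF-8 (CCSID 1208, shown for illustration):
OPTIONS APPLICATION ENCODING SCHEME = 1208;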
**ISO8601 PRECISION <n>**
Specifies the precision of the fractional seconds of the METADATA "timestamp" option, where <n> must be less than or equal to the precision of the original source log timestamp, the z/OS Storeclock. The range of <n> is 0-6: zero yields whole seconds with no fractional precision and 6 yields microseconds. The z/OS Storeclock itself is more precise at 64 bits, though its resolution is technically both hardware and application dependent. If more precision is desired, use the METADATA "stck" value, which contains the actual Storeclock.
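For example, to request microsecond precision:
OPTIONS ISO8601 PRECISION 6;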
**LOGICAL DECIMAL TYPE**
Applies to AVRO formatted target data payloads and specifies that the AVRO "Logical Decimal Format" will be used instead of the "string" datatype, which is the default for decimal numbers.
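For example, to emit AVRO logical decimals rather than strings:
OPTIONS LOGICAL DECIMAL TYPE;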
**ATOMIC UOW YES | NO**
For historical reasons, when processing some large Units-of-Work it was necessary to treat every CDC record as an individual UOW. Note: This option will be deprecated in a future version of the product.
**DB2 ODBC BUGGY TIMESTAMP**
Some DB2 ODBC drivers have been found to reject nonconforming timestamps, in particular rejecting records containing a space between the date and the time in the timestamp. This option instructs the Engine to accept such timestamps. Note: This option has been replaced by the DATEFORMAT ISOIBM specification.
**PSEUDO NULL = YES | NO**
IMS and VSAM Datastores only. For historical reasons, when processing binary source records the Engine would treat some values as 'NULL' indicators, even though there is no such thing as NULL in either IMS or VSAM data. For backward compatibility the default cannot be changed, but this option changes the product behavior. It is strongly recommended to use PSEUDO NULL = NO and to reserve NULL for the handling of invalid data, for instance with an INVALID <source_field> SETNULL; statement (see the sketch following the example below). Pseudo null has no impact at all on non-binary record input. Note: "PSEUDO NULL" will be deprecated in a future version of the product.
Example: the following statement replaces a NULL value in the source DEPT_NAME field with spaces:
IFNULL CDCIN.DEPT_NAME SETSPACE;
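Extending the example above, a minimal sketch of the recommended PSEUDO NULL configuration with explicit invalid-data handling, reusing the same CDCIN.DEPT_NAME source field:
OPTIONS PSEUDO NULL = NO;
INVALID CDCIN.DEPT_NAME SETNULL;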
Notes:
- Access to the Confluent Schema Registry is accomplished through specific Registry URLs. Connect CDC SQData assumes that support for the command line URL tool (cURL) and the Confluent Schema Registry has been installed and configured somewhere in your environment for use by both Apply and Replicator Engines. The URL and port number of the Registry are required in order to read and register schema information; a connectivity check is sketched after these notes.
- Cloud-based Schema Registries may require URL encoding of some characters; see the HTML URL Encoding Reference for examples.
- The external library libcurl is required to support communication with the Confluent Schema Registry. Libcurl can be installed using a distribution-provided package, if available, or built from source downloaded from https://curl.se/download.html.
- The Confluent Platform is available at https://www.confluent.io/download/.
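As a quick connectivity check, the standard Schema Registry REST API can be queried with cURL; the host and port below are illustrative:
curl -s http://schema-registry.example.com:8081/subjects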