Changed product behavior in 9.13
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Using DMXMFSRT with JCL that contains hex constants | Hex constants are considered to be in EBCDIC encoding and may need to be re-encoded. | Hex constants are considered to be in ASCII encoding, as expected by MFJSORT. | | 9.13.13 | DMX-40388 |
Designing/running new or existing tasks that contain a regular expression with a character class that includes an ambiguous hyphen character. | No design-time or run-time error messages. | Task Editor error message that includes "You have supplied an invalid pattern. invalid range in character class", or the run-time error "(RESYNTAXERR) invalid range in character class". | The PCRE library used by Connect ETL for regular expression support was upgraded to avoid security vulnerabilities. The new version has stricter pattern validation that may cause existing tasks that ran successfully in the past to fail. To prevent the error message(s), escape the hyphen in the regular expression character class with a backslash, or move the hyphen to the beginning of the character class (see the sketch following this table). | 9.13.10 | DMX-37456 |
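The following is a minimal illustration of the stricter validation, using Python's re module, which rejects the same ambiguous-hyphen construct as the upgraded PCRE library; the pattern and the matched value are hypothetical.

```python
import re

# "_-." reads as a range from "_" (0x5F) down to "." (0x2E), which
# stricter validation rejects as an invalid range in a character class.
try:
    re.compile(r"[a-z_-.]")
except re.error as exc:
    print(f"invalid pattern: {exc}")

# Fix 1: escape the hyphen so it is treated as a literal character.
assert re.fullmatch(r"[a-z_\-.]+", "my-field.name")

# Fix 2: move the hyphen to the beginning of the character class.
assert re.fullmatch(r"[-a-z_.]+", "my-field.name")
```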
Changed product behavior in 9.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
File specified by the environment variable DMX_HADOOP_CONF_FILE does not exist | When running in Spark, Connect for Big Data aborts. When running in MapReduce, Connect for Big Data runs without generating warning messages. | When running in Spark or in MapReduce, Connect for Big Data generates a warning message and continues to run. | | 9.5.1 | DMX-21085 |
Custom/extended tasks | When all Connect for Big Data tasks of an IX job are cluster eligible | When all Connect for Big Data tasks of an IX job are cluster eligible, | | 9.4.28 | DMX-21405 |
File specified by the environment variable DMX_HADOOP_CONF_FILE does not exist | When running on Spark in client deploy mode, Connect for Big Data continues to run without aborting. | When running on Spark in client or cluster deploy mode, Connect for Big Data aborts if the file specified by the environment variable DMX_HADOOP_CONF_FILE does not exist (see the pre-flight sketch following this table). | | 9.4.3 | DMX-20140 |
Execution units in the Connect ETL/Connect for Big Data Job Editor renamed to restartability units. | In the Connect ETL/Connect for Big Data Job Editor, a group of tasks that share data in the form of pipes, previous/next task pairs, or direct data flows is visible through the "Show Execution Units" context menu item and View menu item. | In the Connect ETL/Connect for Big Data Job Editor, a group of tasks that share data in the form of pipes, previous/next task pairs, or direct data flows is visible through the "Show Restartability Units" context menu item and View menu item. | | 9.2.2 | DMX-19884 |
External Metadata dialog in the Connect ETL/Connect for Big Data Task Editor | The Connect ETL/Connect for Big Data Task Editor displays errors for invalid metadata files when you select a file in the External Metadata dialog without specifying a metadata type. | The Connect ETL/Connect for Big Data Task Editor no longer displays errors for invalid metadata files when you select a file in the External Metadata dialog without specifying a metadata type. | Specify the metadata type before or after selecting the file name and select the Refresh button. | 9.0.9 | DMX-18997 |
External Metadata dialog in the Connect ETL/Connect for Big Data Task Editor | The Connect ETL/Connect for Big Data Task Editor detects the metadata type and prompts you to switch to this type when you specify a metadata file name with an incompatible metadata type. | The Connect ETL/Connect for Big Data Task Editor no longer detects the metadata type, nor does it prompt you to switch to this type when you specify a metadata file name with an incompatible metadata type. Connect ETL/Connect for Big Data generates the errors that result from parsing the metadata file with the wrong metadata type. | Specify the correct metadata type for your metadata file. | 9.0.9 | DMX-18997 |
Selecting a COBOL copybook in the External Metadata dialog in the Task Editor | Connect ETL/Connect for Big Data Task Editor defaults to Micro Focus COBOL data format | Connect ETL/Connect for Big Data Task Editor defaults to VS COBOL II Release 4 format. | Select the appropriate COBOL copybook format if the default is not correct. Existing tasks are not affected. | 9.0.7 | DMX-6841 |
IX job that is not eligible to run on the cluster | The job runs by default on a single cluster node | The job runs by default on the edge node | | 9.0.4 | DMX-18771 |
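Because the handling of a missing DMX_HADOOP_CONF_FILE changed twice in this series (abort in 9.4.3, warn-and-continue in 9.5.1), a wrapper script may want to verify the variable itself before submitting a job. A minimal sketch in Python; the environment variable name comes from the rows above, the script itself is illustrative:

```python
import os
import sys

# Pre-flight check: 9.4.3-9.5.0 aborts when the file is missing, while
# 9.5.1+ only warns and continues, so failing fast here keeps behavior
# predictable across product versions.
conf_file = os.environ.get("DMX_HADOOP_CONF_FILE")
if conf_file and not os.path.isfile(conf_file):
    sys.exit(f"DMX_HADOOP_CONF_FILE points to a missing file: {conf_file}")
```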
Changed product behavior in 8.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Driver version number in DataDirect driver file names | Version number was 26. | Version number is 27. For existing ODBC data source configurations, ensure that the version numbers in the DataDirect driver file names, which are listed in the driver properties section in the odbcinst.ini and odbc.ini files, are updated from 26 to 27. | | 8.6.3 | DMX-17942 |
A packed decimal field containing an invalid value (anything but A-F) in the sign nibble (the last 4 bits). | Treated as a valid number; for example, 0x40 is treated as 4 even though the trailing 0 is not a valid sign value. | Treated as an invalid number; an appropriate warning message is issued, and the value is treated as 0 for all computations (see the decoding sketch following this table). | | 8.5.15 | DMX-18267 |
DMXReport output for Connect ETL/Connect for Big Data tasks with Mainframe record formats in the Source Pipe dialog | DMXReport output has the word "record" in the name of the record format (e.g., "Mainframe fixed record length"). | DMXReport output eliminates the word "record" in the name of the record format (e.g., "Mainframe fixed length"). | | 8.5.8 | DMX-17960 |
Tasks that map to COBOL copybooks containing unsigned packed decimal data. Unsigned packed decimal fields are indicated by a USAGE clause of COMP-3 or PACKED-DECIMAL and a PICTURE (PIC) clause that does not contain an "S" (sign). | All packed decimal data is treated as signed. Negative values are allowed and prefer the "D" sign nibble. Positive values prefer the "C" sign nibble. | Unsigned packed decimal fields are now treated as unsigned. Negative values are not allowed and cause validation warnings such as CVROP, CVNEGTOUNSIGN, or NEG2USEX. Valid (positive) values in unsigned packed decimal prefer the "F" sign nibble rather than "C" on output. See the Connect ETL/Connect for Big Data help topic, "Decimal, packed, unsigned number data format." | Modify the COBOL copybook by adding an "S" to the beginning of the PIC clause of each unsigned packed decimal field that you would prefer to treat as signed packed decimal. | 8.5.6 | DMX-17123 |
Connect ETL/Connect for Big Data tasks referencing a variable length array in a reformat. | The variable length array is padded to the maximum number of elements. | The actual number of elements in the input record is output to the variable length array. A task created through the Connect ETL/Connect for Big Data Editor must be resaved to result in the correct behavior. | | 8.4.18 | DMX-16897 |
Connect ETL/Connect for Big Data tasks connecting to HDFS filesystems, where the NameNode hostname specified in the task's HDFS connection does not match the NameNode hostname or HA (High Availability) service name detected at runtime. | The task fails with the following messages: DMExpress : (HDFSCNNF) HDFS connection to Hadoop namenode (<namenode_host>) failed due to the following reason: (HDFSNNM) server name "<namenode_host>" specified for the connection does not match the namenode host name "<hdfs_server>" in your Hadoop configuration | The task no longer fails, and the default filesystem (specified by the Hadoop configuration property "fs.defaultFS" or "fs.default.name") takes precedence over the value specified in the task. If the filesystem is an HDFS-compatible filesystem other than HDFS, no messages are issued. If the filesystem is HDFS, the following informational message is issued without affecting the successful completion of the task: DMExpress : (HDFSNNM) an HDFS connection specifies a server name that does not match the namenode hostname "<hdfs_server>" specified in your Hadoop configuration; the value specified in the Hadoop configuration will be used | | 8.4 | DMX-16131 |
Connect Data Connector API. | The struct dmx_connector_FieldProperties in the Connect Data Connector API had eight fields and did not provide a way to indicate field usage. | dmx_connector_FieldProperties now has an additional field "int m_usageIndicator" for indicating field usage. Existing connectors will need to be modified to initialize the new field. | | 8.4 | DMX-15153 |
Connect Data Connector Java API | The FieldProperties class in the Connect Data Connector Java API did not have a field usage indicator. | FieldProperties now has an additional private data member "usageIndicator" for indicating field usage. There is also an additional member function "getUsageIndicator()" to return its value and an additional constructor to set its value. | | 8.4 | DMX-15153 |
Connect for Big Data jobs run in Hadoop MapReduce | The default MapReduce version was MRv1. To run in YARN (MRv2), you needed to set DMX_HADOOP_MRV to 2. | The default MapReduce version is YARN (MRv2) if MRv1 is not auto-detected. To ensure running in MRv1, set DMX_HADOOP_MRV to 1. | 8.2.1 | DMX-17621 | |
Connect ETL/Connect for Big Data Kerberos environment variables | Connect ETL/Connect for Big Data used the following Kerberos environment variables to manage Kerberos tickets: DMX_HADOOP_KERBEROS_PRINCIPAL, DMX_HADOOP_KERBEROS_KEYTAB, and DMX_HADOOP_KERBEROS_CACHE. | With the introduction of Kerberos authentication for Teradata databases, Connect ETL/Connect for Big Data Kerberos environment variables are no longer Hadoop specific. The environment variables are renamed as follows: DMX_KERBEROS_PRINCIPAL, DMX_KERBEROS_KEYTAB, and DMX_KERBEROS_CACHE. In addition, the following new environment variable is introduced: DMX_KERBEROS_CACHE_PREFIX. Backward compatibility for the deprecated environment variables is retained (see the fallback sketch following this table). See the Connect ETL/Connect for Big Data help topic, "Kerberos Authentication in Connect ETL/Connect for Big Data jobs." | | 8.0.6 | DMX-15296 |
Defining Teradata utility settings | Define parameters through the Teradata Utility Settings dialog. | Define parameters through the Source and Target Database Table dialogs. | 8.0.1 | DMX-12226 |
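To make the packed decimal rows above (DMX-18267, DMX-17123) concrete, here is a sketch of how a COMP-3 value decodes, showing why a sign nibble below 0xA (as in 0x40) is invalid. This is an illustrative Python decoder, not the product's implementation:

```python
def unpack_comp3(data: bytes) -> int:
    """Decode an IBM packed decimal: two digits per byte, with the
    low nibble of the final byte reserved for the sign (A-F)."""
    nibbles = []
    for byte in data:
        nibbles.extend((byte >> 4, byte & 0x0F))
    *digits, sign = nibbles
    if sign < 0xA:
        # e.g. 0x40: digit 4 followed by sign nibble 0 -- invalid;
        # 8.5.15+ warns and treats such a value as 0 in computations.
        raise ValueError(f"invalid sign nibble 0x{sign:X}")
    if any(d > 9 for d in digits):
        raise ValueError("invalid digit nibble")
    value = int("".join(map(str, digits)) or "0")
    return -value if sign in (0xB, 0xD) else value  # B/D mean negative

print(unpack_comp3(b"\x01\x2C"))  # 12  (C: preferred positive, signed)
print(unpack_comp3(b"\x01\x2F"))  # 12  (F: preferred unsigned, 8.5.6+)
print(unpack_comp3(b"\x01\x2D"))  # -12 (D: negative)
```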
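For the Kerberos variable rename (DMX-15296), the table notes that the deprecated names remain honored. A hedged sketch of the fallback logic a site-local wrapper might use; the helper function is hypothetical, and only the variable names come from the row above:

```python
import os

def kerberos_env(suffix: str) -> str | None:
    # Prefer the new DMX_KERBEROS_* names (8.0.6+) and fall back to
    # the deprecated Hadoop-specific DMX_HADOOP_KERBEROS_* names.
    return (os.environ.get(f"DMX_KERBEROS_{suffix}")
            or os.environ.get(f"DMX_HADOOP_KERBEROS_{suffix}"))

principal = kerberos_env("PRINCIPAL")
keytab = kerberos_env("KEYTAB")
cache = kerberos_env("CACHE")
```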
Changed product behavior in 7.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Netezza load validation when a Connect ETL/Connect for Big Data CHAR field with n characters is mapped to a Netezza CHAR column with fewer than n characters. | Record is truncated and loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the rejected records log file. Connect ETL/Connect for Big Data issues a warning message. Additional messages may be listed in the rejected records exception messages log file (see the validation sketch following this table). | | 7.15.3 | DMX-6556 |
Netezza load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Netezza INTEGER column with data that forces a numeric overflow. | Value is truncated to the maximum integer and loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the rejected records log file. Connect ETL/Connect for Big Data issues a warning message. Additional messages may be listed in the rejected records exception messages log file. | | 7.15.3 | DMX-6556 |
Netezza load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Netezza NUMERIC column with data that exceeds precision. | Precision is truncated and the record loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the rejected records log file. Connect ETL/Connect for Big Data issues a warning message. Additional messages may be listed in the rejected records exception messages log file. | | 7.15.3 | DMX-6556 |
Netezza load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Netezza NUMERIC column with data that exceeds scale. | Field is truncated and the record loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the rejected records log file. Connect ETL/Connect for Big Data issues a warning message. Additional messages may be listed in the rejected records exception messages log file. | | 7.15.3 | DMX-6556 |
Connect ETL/Connect for Big Data silent installation. | The installation does not prompt the user to select between a trial and a licensed version, so the recorded installation response file does not include that response. | The installation prompts the user to select between a trial and a licensed version. | Re-record the installation to capture the response to this new option and use the new response file for silent installations. | 7.14.12 | DMX-12907 |
Tasks that extract Unicode encoded character data from Teradata sources using the TTU access method. | Character data from Teradata sources was extracted in the default Teradata client character set and interpreted as ASCII. | For new tasks or tasks resaved in the current release of Connect ETL/Connect for Big Data, the default extraction format of Teradata character data changes. For Teradata sources that do not have any column defined as Unicode, character data is extracted in the default Teradata client character set and interpreted in the system locale (instead of ASCII). For Teradata sources that have one or more columns defined as Unicode, character data is extracted as UTF-8. | Tasks that are not resaved in the current release of the Task Editor will not change. To resave affected tasks and maintain existing behavior, select the option to keep existing behavior when prompted at task open, or when running the Connect ETL/Connect for Big Data Application Upgrade utility (see the Connect ETL/Connect for Big Data Application Upgrade help topic for details). Alternatively, manually change each character column to extract in Locale encoding and treat as ASCII. | | |
Greenplum load validation when a Connect ETL/Connect for Big Data CHAR field with n characters is mapped to a Greenplum CHAR column with fewer than n characters. | Record is truncated and loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the error table. Connect ETL/Connect for Big Data issues a warning message. | | 7.14.7 | DMX-10983 |
Greenplum load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Greenplum INTEGER column with data that forces a numeric overflow. | Value is truncated to the maximum integer and loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the error table. Connect ETL/Connect for Big Data issues a warning message. | | 7.14.7 | DMX-10983 |
Greenplum load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Greenplum NUMERIC column with data that exceeds precision. | Precision is truncated and the record loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the error table. Connect ETL/Connect for Big Data issues a warning message. | | 7.14.7 | DMX-10983 |
Greenplum load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Greenplum NUMERIC column with data that exceeds scale. | Field is truncated and the record loaded. Connect ETL/Connect for Big Data issues a warning message. | Record is rejected and sent to the error table. Connect ETL/Connect for Big Data issues a warning message. | | 7.14.7 | DMX-10983 |
Connect for Big Data tasks with HDFS targets whose disposition is "Overwrite file". | Task aborts if the HDFS target exists. | The HDFS target (and its contents if it is a directory) is removed, and the task runs. | | 7.14.5 | DMX-10616 |
Connect for Big Data Sort accelerator. | Connect for Big Data calculated the input datasize to mappers/reducers based on the HDFS input split size and the number of mappers/reducers. | New options, dmx.map.datasize.useHDFSsplits and dmx.reduce.datasize.useHDFSsplits, when false (the default), allow the user to specify the datasize using the options dmx.map.datasize and dmx.reduce.datasize, eliminating the need for each mapper/reducer to query the namenode for the input split size. See the Connect for Big Data Sort Edition User Guide for details. | | 7.12.11 | DMX-12349 |
Vertica load validation when a Connect ETL/Connect for Big Data CHAR field with more characters is mapped to a Vertica CHAR column with fewer. | Record inserted with truncation and a warning message (for Vertica 5 and earlier). | Record inserted with silent truncation (for Vertica 6 and later). | | 7.12.8 | DMX-12122 |
Vertica load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Vertica INTEGER column with data that forces a numeric overflow. | Value is truncated to the maximum integer and loaded with a warning (for Vertica 5 and earlier). | Record is rejected with a warning and added to the rejected records log file (for Vertica 6 and later). | | 7.12.8 | DMX-12122 |
Vertica load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Vertica NUMERIC column with data that exceeds precision. | Excess precision is truncated with a warning (for Vertica 5 and earlier). | Excess precision is silently truncated (for Vertica 6 and later). | | 7.12.8 | DMX-12122 |
Vertica load validation when a Connect ETL/Connect for Big Data EN field is mapped to a Vertica NUMERIC column with data that exceeds scale. | Field is truncated with a warning (for Vertica 5 and earlier). | Record is rejected with a warning and added to the rejected records log file (for Vertica 6 and later). | | 7.12.8 | DMX-12122 |
Database Connection dialog. | The DBMS drop-down list displayed ODBC as an option. | The DBMS drop-down list displays Other as the option that replaces ODBC. From the Access method drop-down list, you can select ODBC. | | 7.11.6 | DMX-11432 |
Running DMXJob from the command line with incorrect options. | Errors were output using the settings of the /LOG option. | Errors are now output in TEXT format. | | 7.4.5 | DMX-7282 |
Running DMXJob from the command line. | The /LOG option could be placed anywhere on the command line, even in an illegal location. | The placement of the /LOG option must follow the documented syntax. | | 7.4.5 | DMX-7282 |
Connect ETL/Connect for Big Data tasks that use time zone columns from Oracle sources. | The TZNOTRET message is issued and time zone information is not extracted. | The TZNOTRET message no longer appears, and time zone information is extracted. | | 7.4.2 | DMX-7632 |
Connect ETL/Connect for Big Data installation and licensing. | The license feature was named "Micro Focus Enterprise Server Integration". | The license feature is now called "Mainframe Re-hosting Integration". | | 7.2.6 | DMX-7978 |
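The Netezza and Greenplum rows above (DMX-6556, DMX-10983) replace truncate-and-load with rejection. Below is a sketch of client-side checks that reproduce the old tolerance by fitting values before the load; the function names and column parameters are illustrative, not a product API:

```python
from decimal import Decimal, ROUND_DOWN

def fit_char(value: str, width: int) -> str:
    # Old behavior: truncate an over-long CHAR value instead of letting
    # the database load reject the whole record.
    return value[:width]

def fit_numeric(value: Decimal, precision: int, scale: int) -> Decimal:
    # Truncate excess scale, then verify the precision still fits;
    # values failing this check would now land in the reject log/table.
    fitted = value.quantize(Decimal(1).scaleb(-scale), rounding=ROUND_DOWN)
    if len(fitted.as_tuple().digits) > precision:
        raise ValueError(f"{value} exceeds NUMERIC({precision},{scale})")
    return fitted

print(fit_char("PRECISELY", 4))               # PREC
print(fit_numeric(Decimal("12.3456"), 6, 2))  # 12.34
```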
Changed product behavior in 6.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Arithmetic calculations that result in fractional numbers. | The fractional part of the result is not precise. | The fractional part of the result is precise in most cases (see the illustration following this table). | | 6.1.14 | DMX-1738 |
Saving multiple text logs in the Connect ETL/Connect for Big Data Server dialog. | Logs were concatenated into a single text file. | Each text log is saved in a separate file. | | 6.1.6 | DMX-6223 |
Connect ETL/Connect for Big Data tasks that update existing rows in Teradata targets with triggers, foreign key constraints, or non-primary indexes. | The task would run, but the transaction order would not be preserved. | The task now aborts. To update a Teradata target having the above properties, customize the load to use TPump. See the topic "Teradata Utility Settings dialog" in the Connect ETL/Connect for Big Data help. | | 6.0.4 | DMX-6305 |
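The 6.1.14 row (DMX-1738) reflects the classic binary floating point issue: most decimal fractions have no exact binary representation, so fractional results drift. An illustrative comparison in Python (not the product's internal arithmetic):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly, so the
# fractional part of the result is imprecise:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal arithmetic keeps the fractional part exact:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```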
Changed product behavior in 5.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Connect ETL/Connect for Big Data tasks that use the text functions IfContainsAny, IfEqualsAny, FindContainedValue, and URLDecode. | These functions were categorized under the product licensing feature "String Functions". | These functions are now categorized under the product licensing feature "Advanced Text Processing". See "Licensing" in the Connect ETL/Connect for Big Data help. | Contact dmlicense@precisely.com if you are currently licensed for the Advanced Data Management license package and use any of these functions. You will need new license keys that enable the Advanced Text Processing feature to use these text functions. | 5.5.4 | DMX-6291 |
Filters that compare a delimited character field with a string shorter than the field length in a non-ASCII locale. | The shorter string is incorrectly not padded to the length of the field with the pad byte. | The shorter string is padded to the length of the field with the pad byte (see the sketch following this table). | | 5.4.9 | DMX-6143 |
Encoded source files with array fields. | The source file encoding was not passed to individual array fields; as a result, array fields were always treated as ASCII. | The source file encoding is passed to individual array fields. | Do not assign an encoding to the source file; instead, change each individual field to have the encoding of the source file, and leave the array fields as ASCII encoded. | 5.3.7 | DMX-7878 |
User defined SQL statement in Connect ETL/Connect for Big Data Task Editor. | The '\' character had to be escaped with another '\', if it was part of a SQL statement defined in the Source or Target Database dialogs. | There is no longer a need for escaping, so '\\' will actually be treated as two '\' characters. | If you have any SQL statements with '\\', change them to '\'. | 5.3.5 | DMX-5891 |
Definition of Teradata source or target in Connect ETL/Connect for Big Data Task Editor. | In the Source and Target Database dialogs, unqualified table names were searched under the user name defined in the Teradata connection. | Unqualified table names are now searched under the default database for the Teradata connection. | Use fully qualified table names. | 5.2.17 | DMX-5456 |
Tasks with a Teradata target that were saved with Connect ETL/Connect for Big Data version 4.7.12.10 or lower. | Task may have been running successfully. | Task aborts with the following message: (DBMSABM) abort if error is mandatory for the "Teradata" DBMS. | Resave the task with the new version of Connect ETL/Connect for Big Data. | 5.2.4 | DMX-4865 |
Non-English versions of Connect ETL/Connect for Big Data running tasks with the environment variable PRECISELY_INSERT defined. | The job log contained the English messages "Contents of ...". | The job log contains the localized messages "Contents of ...". | | 5.2 | DMX-4784 |
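The 5.4.9 row (DMX-6143) describes pad-then-compare semantics for fixed-length character comparisons. A minimal sketch of the corrected comparison, assuming a single-byte pad; the function itself is illustrative:

```python
def equals_padded(field: bytes, literal: bytes, pad: bytes = b" ") -> bool:
    # Pad the shorter comparison string with the pad byte to the field
    # length before comparing, as the 5.4.9 fix does.
    return field == literal.ljust(len(field), pad)

print(equals_padded(b"ABC   ", b"ABC"))  # True: "ABC" is padded to width 6
print(equals_padded(b"ABC  X", b"ABC"))  # False
```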
Changed product behavior in 4.x
Area affected | Old behavior | New behavior | Solutions to preserve old behavior | Release changed in | Reference number |
---|---|---|---|---|---|
Composite/array fields used in a reformat. | The entire composite/array field was treated as a single field. | Each elementary field inside the composite/array field is created separately. For example, if a fixed position composite field is reformatted into a delimited layout: Input: 01312008abc; Output before the change: 01312008,abc; Output after the change: 01,31,2008,abc (see the sketch following this table). | In the Reformat Target Layout dialog, use the "Set target format" option to turn the composite/array field into a single field. Consider using enclosing characters for the target file if you need to reuse the target layout in subsequent tasks. See the topic "Target File dialog" in the online help to learn more about using enclosing characters. | 4.7.3 | DMX-3932 |
Tasks with an encoding specified for fields in a record layout which is associated with a source that has a different encoding. Tasks with an encoding specified for fields in a reformat which is applied to a target that has a different encoding. | Field level encoding would determine the field encoding and override file level encoding. | File level encoding determines the field encoding and overrides field level encoding. | When field level encoding is desired, leave the file level encoding unspecified. | 4.7.3 | DMX-4040 |
The Connect service used for communication between the Connect ETL/Connect for Big Data client and the Connect ETL/Connect for Big Data server. | The Connect service only used ports for the port mapper and an arbitrary port provided by the port mapper. | The Connect service requires the use of a new port, 32636, in addition to the previously used ports. This new port must be accessible through any firewalls and/or security policies present between the systems running the Connect ETL/Connect for Big Data client and the system running the Connect ETL/Connect for Big Data server. | | 4.7.1 | DMX-4049 |
Collation and comparison of Unicode encoded values. | Values are collated and compared according to the Unicode collation algorithm (http://www.unicode.org/unicode/reports/tr10/), using a collation strength of three levels for comparisons. | Values are collated and compared in Unicode code point order. | Please contact Precisely Support. You can find contact information for your locale on the Precisely website. | 4.4.12 | DMX-4370 |
IsValidDate(datetime) function when datetime evaluates to NULL. | Return value was NULL. | Return value is 0. | Replace IsValidDate(datetime) with the expression IfThenElse(datetime=NULL, NULL, IsValidDate(datetime)) in your tasks. | 4.3.6 | DMX-1590 |
Any tasks that perform transformations that are impacted by nullability. For example, tasks with source database tables, target database tables, comparisons with empty strings, join fields, new values, etc. | NULL Handling options for a data type in the Task Settings impacted nullability attributes for fields or values and, if checked, changed fields and values of that data type to be nullable. When the NULL Handling settings in a task were checked for a data type, all empty fields or values of that data type in the task would be treated as NULL. When the NULL Handling settings in a task were not checked for a data type, all empty values of that data type in the task would be treated as NULL or NOT NULL depending on the nullability attribute of the individual field. | Nullability attributes of fields and values are no longer overridden by NULL Handling options in Task Settings. Only fields whose nullability setting is set to "Use Task Setting" are impacted by NULL Handling options in Task Settings. Nullability attributes of fields are determined by the field's settings defined at the source field's origin, which may have been defined in a preceding task if the field comes from linked external metadata. Nullability settings of new values are derived from the nullability attributes of the source fields used to create the value. All constants, except the constant NULL, are considered not nullable. | Resaving your tasks in the newest release may correct all of the problems. Alternatively, change the nullability attribute to "Indicated by empty field" for the target field in a task preceding the task that uses the field in a transformation that relies on NULL behavior. If you have no preceding task, add a conditional value that checks whether the field or value is empty and, if so, uses the NULL constant; otherwise, uses the field or value. | 3.4.9, 4.3.5 | DMX-3542, DMX-3905 |
Tasks that use the GetExternalFunctionValue() function. | The libraryname parameter could contain an absolute or relative path. | The libraryname should contain only the file name; any path specification in the library name is ignored. The path of the library should be set in the appropriate library path environment variable. For detailed information, see the topic "GetExternalFunctionValue function" in the Connect ETL/Connect for Big Data help. | | 4.3.3 | DMX-3794 |
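To make the composite-field row above (DMX-3932, 4.7.3) concrete, here is a sketch that splits the fixed position composite field from the table's example into its elementary fields for a delimited layout; the field names and widths are hypothetical:

```python
# Elementary fields inside the composite field (names are illustrative).
LAYOUT = [("month", 2), ("day", 2), ("year", 4), ("code", 3)]

def to_delimited(record: str) -> str:
    fields, offset = [], 0
    for _name, width in LAYOUT:
        fields.append(record[offset:offset + width])
        offset += width
    return ",".join(fields)

print(to_delimited("01312008abc"))  # 01,31,2008,abc  (output after 4.7.3)
# Before 4.7.3 the composite field stayed whole: 01312008,abc
```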