The following problems have been fixed since Release 9.13 of Connect.
9.13.22
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Character columns with data longer than 512 bytes are truncated during extraction. | Running a job or task that extracts a Teradata column as ASCII using an ODBC connection on AIX 7.2. | DMX-41589 | 9.13.22 |
9.13.19
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Trailing pad bytes (which default to NULL) appear in text output fields; value warnings/errors such as CVROP occur in downstream tasks. | Running a task or job where a concatenated text constant must be converted to an encoding in which its length may change. | DMX-40103 | 9.13.19 |
Connect ETL crashes when using Report Writer options. | Running a task that uses Report Writer options such as /SECTION and /SECTIONFOOTER. | DMX-41513 | 9.13.19 |
Unable to insert data to target table '<table_name>_dmx_<date_time><random_number>' due to error '[Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS, sqlState:HY000, errorMessage:AnalysisException: ALTER TABLE not supported on transactional (ACID) table. | Running a job with an Impala target that uses a Hive staging table. | DMX-40897 | 9.13.19 |
9.13.18
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
/tmp/DMX_<timestamp>/.setup: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory. | Running the Connect ETL installer on RHEL 9 or higher, or a trial installation on RHEL 8 or higher. | DMX-41449 | 9.13.18 |
dmxjob: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory, or dmxjob: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory | Running dmxjob on RHEL 8 or higher after a Connect ETL trial installation. | DMX-41449 | 9.13.18 |
Java exception "java.lang.NoSuchMethodError: org.apache.hadoop.io.compress.SnappyCodec.isNativeCodeLoaded()Z at com.syncsort.dmexpress.hadoop.ao.a(NativeSnappyLoadChecker.java:22)" | Running Connect for Big Data jobs on Cloudera CDP v7.1.9 and higher. | DMX-41454 | 9.13.18 |
9.13.17
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Connect : (ASSERT) internal error [Unknown exception] occurred in file ...\partitionwriteactivity.cpp at 153 | Running a Connect ETL task writing to a Snowflake target table through a JDBC connection. | DMX-41372 | 9.13.17 |
Task Editor takes a long time to verify database connection | Connect ETL tasks accessing Snowflake tables using a JDBC connection. | DMX-41372 | 9.13.17 |
Incorrect output | Mainframe target files with floating point numeric fields have some incorrect zero (0) values. | DMX-41367 | 9.13.17 |
Inconsistent list of search results in Connect Global Find | Repeatedly searching directories that contain many tasks and jobs when "Look in subfolders" is checked. | DMX-40882 | 9.13.17 |
Connect : (CEOFLRECFMT) output file record format inconsistent with expected format (file "<output file>") | Running DMXMMSRT after release 9.13.5 when both input and output record formats are specified as LN (line sequential) in the map file. | DMX-41098 | 9.13.17 |
9.13.16
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Connect : (RPADD) record 1 to target 1 (length 80) is padded to the minimum length (84), or similar warning indicating a minimum length that is 4 bytes greater than expected. | DMXMFSRT invocation where the corresponding JCL source data set is Micro Focus Line Sequential, and the JCL target data set is fixed with no length specified. | DMX-40786 | 9.13.16 |
9.13.15
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Open pipes to /dev/null reported by lsof for the cassi32/cassi64 process. | Submitting JCL to a Micro Focus Enterprise Server when Connect DMXMFSRT is configured as the sort replacement. | DMX-40803 | 9.13.15 |
The Task Editor dialog hangs and becomes unresponsive when attempting to map layouts for, or sample, source and target data files. | Sampling data files or mapping layouts in the Task Editor in release 9.13.14. | DMX-40909 | 9.13.15 |
9.13.14
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Excessive time spent in the validation and initialization (the steps preceding record processing) of a task definition, approaching and possibly exceeding an hour. | Task definitions that include variable-length arrays with a maximum size exceeding 5000 elements, denoted in record layouts by "occurrences determined by" in the Task Editor field definition, "repeat determined by" in the SDK or DTL command line syntax, or OCCURS DEPENDING ON (ODO) in COBOL copybooks. | DMX-40536 | 9.13.14 |
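The DMX-40536 entry involves variable-length arrays sized by another field. A minimal illustrative COBOL copybook fragment (field names are hypothetical) showing an OCCURS DEPENDING ON array whose maximum exceeds 5000 elements, the layout pattern that triggered the slowdown:

```cobol
       01  INPUT-REC.
           05  ENTRY-COUNT        PIC 9(5) COMP-3.
           05  DETAIL-ENTRY       OCCURS 0 TO 6000 TIMES
                                  DEPENDING ON ENTRY-COUNT.
               10  DETAIL-VALUE   PIC X(10).
```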
9.13.13
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Incorrect output | Running multiple tasks/jobs with mainframe VSAM sources that start at the same time. | DMX-40140 | 9.13.13 |
No target file created on Azure Storage | Running a Connect ETL task or job with an empty source. | DMX-39496 | 9.13.13 |
Connect : (COLDSNAV) information about the columns could not be retrieved for "columndata" | Running DataFunnel with a Databricks target. | DMX-37518 | 9.13.13 |
9.13.12
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Task Editor crash | Editing a task that contains an Excel database connection in release 9.9.1 or later. | DMX-39582 | 9.13.12 |
9.13.10
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
[main] ERROR com.syncsort.datafunnel.Configuration : In map file <source>.databricks.metamap.json: Invalid target_dbms value | Running DataFunnel with a Databricks target when mapping metadata from Netezza, Oracle, or Redshift sources. | DMX-37391 | 9.13.10 |
9.13.9
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
dmxjob returns success even when there are warnings | Running a job on multiple nodes of a Databricks cluster. | DMX-25591 | 9.13.9 |
(CRASHRPT) Crash report archive saved at "<filename>" or Aborted (core dumped) | Running a task with more than 32768 sources, typically from using wildcards; the supported limit is 65536 sources. Note: Memory usage increases with the number of sources, which may be an issue when available memory is limited or when using 32-bit releases. | DMX-38375 | 9.13.9 |
9.13.8
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Connect Job : (CREATEDIRECTORYERROR) an error occurred while creating directory <directory> () std::exception Job has aborted | Running a Connect ETL job on Databricks when the job contains a task with multiple targets writing to Azure storage. | DMX-38186 | 9.13.8 |
9.13.7
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
(SRPTGEN) DMXReport encountered an error and needs to exit. | Running DMXReport on a task that contains a connection with a repository password. | DMX-36782 | 9.13.7 |
9.13.5
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
(CEOFLRECFMT) output file record format inconsistent with expected format (file "<text>") | Running a DMXMMSRT job which includes a format conversion from variable to fixed or fixed to variable | DMX-35288 | 9.13.5 |
Connect : (CREAT) unable to create "<file>" () (S3IM) Amazon S3 HTTP failed with response code AccessDenied - Anonymous users cannot initiate multipart uploads. Please authenticate. Connect has stopped. | Connect tasks running on an Amazon EC2 machine accessing files through an Amazon S3 connection. | DMX-35563 | 9.13.5 |
Pop-up error message in the Connect ETL Task Editor GUI: Please supply an integer greater than 0. | The Connect ETL Task Editor, when clicking "OK" to close the Source Salesforce.com Object dialog while extracted fields of data type address, complex value, json, or location are in their default format (fixed length with a length of 0). | DMX-34738 | 9.13.5 |
(SFDCWSER) Failed to read request. Exceeded max size limit of 10000000 Connect : (SFBNTSUB) unable to submit batch to bulk job "<record id>" | Connect ETL tasks with a source data size approaching or greater than 10 MB and with a Salesforce object target using the bulk load method. | DMX-35163 | 9.13.5 |
Error messages about truncated JCL | Running a task or job with a remote mainframe VSAM source whose name exceeds 42 characters. | DMX-35213 | 9.13.5 |
Connect : (MAXDS) Increasing the maximum data segment size may improve Connect performance (MEMVL) Connect memory usage is 0 bytes Connect : (INMEM) there is insufficient memory available to Connect to execute (1 in sssmpln) Connect has aborted | Running a sort or merge task with a large number of output files (500+). | DMX-35049 | 9.13.5 |
(ORACLE_ERROR) oracle error. ORA-03137: malformed TTC packet from client rejected: [kpoal8Check-3] [32768] [0] [0x000000000] [789544] [] [] []. or (ORACLE_ERROR) oracle error. ORA-28714: OCI_BATCH_ERRORS or OCI_RETURN_ROW_COUNT_ARRAY mode can only be specified for INSERT, UPDATE, DELETE or MERGE statement. | Running a task that accesses an Oracle 12cR2 or higher database through OCI with a user-defined SQL statement that contains PL/SQL with non-DML statements (e.g. CREATE, DROP, TRUNCATE). | DMX-27264 | 9.13.5 |
Incorrect output | Running DMXMMSRT with EBCDIC data and JCL that contains both text and hex constants in COND statements | DMX-34412 | 9.13.5 |
Job Editor does not correctly display a lookup source; Distributed execution results in error: Connect ETL: (INERR) an internal error has occurred (419 in /dmxprod/release_up/dmx/UnifiedProduct/Server/dmxjob/modules/SyncsortJobStageThroughSshglue.cpp) | Designing or running a job where one of the tasks contains a source that was changed to a lookup source after the job was initially created. | DMX-28582 | 9.13.5 |
Task hangs indefinitely | Running a task with a remote FTP VSAM source file when the username is exactly 8 characters | DMX-28564 | 9.13.5 |
(LIBHDFSE) following libhdfs error has been encountered: () Cannot allocate memory | Running a job on a Hadoop cluster where more than 256MB of heap space is required by the Java VM to access large files on HDFS. | DMX-19895 | 9.13.5 |
(SRVCANNOTOPFLWR) unable to open the file /usr/tmp/<filename> or () failed to create or access a temporary file (No such file or directory) or other messages referencing the directory /usr/tmp | Running Connect ETL tasks/jobs on a Linux or Unix system where /usr/tmp does not exist. | DMX-13123 and DMX-7804 | 9.13.5 |
Incorrect output | Sampling data or running a task that contains a link to an external COBOL copybook with one or more fields using the OCCURS 0 TO 1 DEPENDING ON <field> clause | DMX-21013 | 9.13.5 |
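The DMX-13123/DMX-7804 entry above was caused by assuming a fixed temporary path (/usr/tmp). As a hedged illustration (not Connect code), Python's tempfile module shows the portable approach of resolving the temporary directory from the environment and platform defaults:

```python
# Illustrative sketch (not Connect code): resolve a temporary directory
# portably instead of assuming a fixed path such as /usr/tmp.
import os
import tempfile

# tempfile.gettempdir() consults TMPDIR, TEMP, and TMP, then platform
# defaults such as /tmp, and only returns a directory it can use.
tmp = tempfile.gettempdir()
print(tmp, os.path.isdir(tmp))
```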
9.13.x
Symptom | Area Affected | Reference Number | Release fixed in |
---|---|---|---|
Incorrect output | Running a task that contains a Lookup function using a condition with multiple "look in" fields that also uses one or more of the operators <, <=, >, or >=. | DMX-34632 | 9.13.4 |
(CLOSE) unable to close /DMX_FTP_MVS/<file_path>() 451 Transfer aborted: send error. | Running a task that contains a mainframe source accessed through FTP, where bulk filtering is specified for all sources. | DMX-29964 | 9.13.3 |
./ss_functions: line 4289: /tmp/DMX_<datetime>/.setup: Permission denied | Installing Connect ETL on a Linux or Unix system where the temporary directory (/tmp or specified through the environment variable TMPDIR) was mounted with the noexec option. | DMX-24899 | 9.13.2 |
Apache Log4j CVE-2021-44228 and CVE-2021-42550 | CVE-2021-44228 and CVE-2021-42550 have been remediated by upgrading to Log4j 2.12.4 for Java 7 builds | DMX-32594 | 9.13.2 |
Connect : (INERR) an internal error has occurred (990 in /release_up/dmx/UnifiedProduct/SSLibsshDriver/implementation/LibsshWrapperImp.cpp) | Intermittent error when running jobs or tasks that write large amounts of data through an SFTP connection | DMX-17976 | 9.13.2 |
Installation error: "./ss_functions: line <line number>: strings: command not found" and/or "/<temp or install dir>/DMX_<datetime>/.setup: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory" | Installing Connect ETL on RHEL 8 or higher. | DMX-29393 | 9.13.2 |
(JSIDEIV) side qualifier is not expected | Running a join task developed using the Task Editor that contains a named value that is both used as a key and referenced in another named value defined before it. | DMX-31352 | 9.13.2 |
Incorrect output | A Connect ETL task writing to a Hive table via JDBC, mapping only some of the table columns, running from the cluster edge node and staging data. | DMX-33557 | 9.13.1 |
Connect : (CRASHRPT) a crash report archive has been saved at "<filename>". Connect : (INERR) an internal error has occurred (-1073741819 in SSSERREQ) | Running a Connect ETL Aggregate task with a 64-bit build when the option "Sort aggregated records on the group by fields for target" is not checked. | DMX-11807 | 9.13.1 |
Apache Log4j CVE-2021-44832 | CVE-2021-44832 has been remediated by having the JndiLookup.class removed from all Log4j 2.x packages in the product. | DMX-31953 | 9.13 |
Error message from the Task Editor: "Failed to load SSAmazonS3Driver_32 library. The specified module could not be found" | In the Task Editor, opening or creating an Amazon S3 remote connection from 64-bit DMX. | DMX-20976 | 9.13 |
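The DMX-24899 entry (installer failure when the temporary directory is mounted noexec) can be anticipated by checking mount options before installing. A hedged Python sketch, not part of Connect, that parses /proc/mounts-style lines for the noexec flag:

```python
# Illustrative sketch (not part of Connect): check whether the mount
# holding a directory carries the "noexec" option, the condition that
# triggered DMX-24899. Demonstrated against sample /proc/mounts lines.

def mount_options(mounts_text: str, path: str) -> list[str]:
    """Return mount options for the longest mount-point prefix of path."""
    best_point, best_opts = "", []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        point, opts = fields[1], fields[3].split(",")
        if path.startswith(point) and len(point) > len(best_point):
            best_point, best_opts = point, opts
    return best_opts

sample = (
    "/dev/sda1 / ext4 rw,relatime 0 0\n"
    "tmpfs /tmp tmpfs rw,nosuid,nodev,noexec 0 0\n"
)
print("noexec" in mount_options(sample, "/tmp"))   # True
print("noexec" in mount_options(sample, "/home"))  # False
```

On a live system, the same function can be fed the contents of /proc/mounts and the value of TMPDIR.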