Implementing Pre- and Post-Processing of Scheduled Imports and Exports - EnterWorks - EnterWorks Process Exchange (EPX) - Precisely EnterWorks - 10.5

EnterWorks Classic Administration Guide


The Scheduled Imports feature provides the option to pre-process files before they are imported. The Scheduled Exports feature provides the option to post-process files after they have been exported. In both cases, the actual processing is handled by a Java class that is an extension of the BaseCustomProcessFile class found in the Services.jar file (or in an application-specific JAR file).
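The extension pattern described above can be sketched as follows. This is a minimal illustration only: the stand-in base class, the `processFile` method name, and its signature are assumptions for the sketch; the actual contract of BaseCustomProcessFile in Services.jar may differ.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the BaseCustomProcessFile contract; the real
// base class ships in Services.jar and its method names may differ.
abstract class CustomProcessFileSketch {
    // Receives the staged file; returns the file the job should continue with.
    abstract Path processFile(Path input) throws Exception;
}

// Example pre-processor: normalizes the CSV header row to upper case
// before the import runs.
class HeaderUppercasePreprocessor extends CustomProcessFileSketch {
    @Override
    Path processFile(Path input) throws Exception {
        List<String> lines = new ArrayList<>(Files.readAllLines(input));
        if (!lines.isEmpty()) {
            lines.set(0, lines.get(0).toUpperCase());
        }
        Path output = input.resolveSibling("processed_" + input.getFileName());
        Files.write(output, lines);
        return output;
    }
}
```

A class written against the real base class would be packaged in Services.jar or an application-specific JAR and referenced by its full class path in the Scheduled Import or Export configuration.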

When a processing block is being configured within a Scheduled Import or Scheduled Export, details on the block's function and configurable parameters are shown. The blocks are organized under Exports (<class>) or Imports (<class>) based on how they are predominantly used, but some modules can be used for either pre-processing or post-processing.

The table below lists available pre-defined pre/post-processing blocks.

Classpath Description

Generates an update file, setting the desired set of columns to specific values for each primary key in the source file. The resulting file can be submitted to an import. This provides a way to update all records that were included in an export (for example, to indicate the records have been syndicated).

Creates a fixed position file using one or more export files as a source and one or more mapping files to define the format of the output file. One format file is defined for each format record appearing in the file. If multiple files are defined, records in each file must be related by a common key and sorted on that same key. This allows the file processing to complete the file merge in a single pass. The order of the records is determined by the order the file mappings are defined. If there is a one-to-one mapping of the different records, then the same file can be used as the source for each format. The mapping files must be comma-delimited files with the following columns:

  • Description - user-description for field (not used in processing)

  • Type - datatype for field:

    • N - numeric with leading zeros for padding.

    • A - alphanumeric with trailing spaces for padding.

  • Length - number of character columns for field

  • Start - starting column position with first column being 1

  • End - ending column position with first column being 1

  • Value - value for field or export file column reference (denoted by double-pipe characters). A single space can be denoted with: [SPACE].

Each mapping file is validated, ensuring the Start and End positions match the accumulated length of each field.
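The padding and position rules above can be sketched in a few lines. The field types ("N" and "A") and the accumulated-length validation come from the description; the method names here are invented for illustration.

```java
// Sketch of fixed-position field formatting and mapping-file validation.
class FixedWidthSketch {
    // Formats one field per a mapping row: type "N" pads numeric values
    // with leading zeros, type "A" pads alphanumerics with trailing spaces.
    static String formatField(String type, int length, String value) {
        if (value.length() > length) {
            throw new IllegalArgumentException("value exceeds field length");
        }
        if ("N".equals(type)) {
            return "0".repeat(length - value.length()) + value;
        }
        return value + " ".repeat(length - value.length());
    }

    // Mirrors the documented validation: each field's Start must continue
    // where the previous field's End left off (first column is 1), and the
    // End position must match the accumulated length.
    static void validate(int[][] startEndLength) {
        int expectedStart = 1;
        for (int[] f : startEndLength) {
            int start = f[0], end = f[1], length = f[2];
            if (start != expectedStart || end != start + length - 1) {
                throw new IllegalStateException("field positions do not match accumulated lengths");
            }
            expectedStart = end + 1;
        }
    }
}
```

For example, a numeric field of length 5 holding "42" formats as "00042", and an alphanumeric field of length 4 holding "AB" formats as "AB" followed by two spaces.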

Generates a Taxonomy Template in XLSX format using the exportTemplate for global attributes and the category-specific attributes for the designated taxonomyNode. Attributes are shaded if they are mapped in the designated Taxonomy Node in the designated Publication Template.

Reads the taxonomy template entries in the file and launches a Template Taxonomy export for each one, setting:

  • Parameter1 to the publication name

  • Parameter2 to the taxonomy node

  • Parameter3 to the name of the saved set for each job in the form: 'TaxonomyTemplate_<taxonomyNode>_<datetime>'

  • Parameter4 to the batch number.

Each launched job should use the ProcessTaxonomyExportTemplate post-processing block to generate the corresponding template. The collection of template jobs can be consolidated into a single file using the TaxonomyTemplateExportZip post-processing block.

Removes the first line of the CSV file.

Splits a CSV file into multiple parts, each no larger than the specified maximum number of records. Each part will be named <baseFileName>_<partNumber>.csv. The collection of files will be placed in a ZIP file which is returned.
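The part-naming rule can be sketched as follows; this is an illustration of the documented `<baseFileName>_<partNumber>.csv` convention only, with invented method names, and it elides the actual row writing and ZIP packaging.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the split bookkeeping for a CSV part split.
class CsvSplitSketch {
    // Computes the part file names: recordCount data rows (excluding the
    // header) split into parts of at most maxRecords each, named
    // <baseFileName>_<partNumber>.csv as documented.
    static List<String> partNames(String baseFileName, int recordCount, int maxRecords) {
        int parts = (recordCount + maxRecords - 1) / maxRecords; // ceiling division
        List<String> names = new ArrayList<>();
        for (int p = 1; p <= parts; p++) {
            names.add(baseFileName + "_" + p + ".csv");
            // a real block would write the header plus the p-th slice of
            // rows here, then add each part file to the returned ZIP
        }
        return names;
    }
}
```

For instance, 5 records with a maximum of 2 per part yields three part files.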

Splits a delta export into multiple export jobs, each including up to the maximum number of records per job. This can be used in situations where the target system cannot process large files. The delta date and time are specified, along with any optional additional conditions. The processing uses this information to generate a separate saved set for each batch of the specified size, then launches Scheduled Export jobs using the designated Scheduled Export as a template for each job, updating only the Parameter1 attribute (set to the name of the saved set for the part), Parameter2 (set to the part number), and optionally Parameter3 through Parameter5 with any additional data. This allows the target Scheduled Export full control over the file naming convention and how the saved set is used (for example, in the 'Saved Set' or 'Root Repository Saved Sets' attributes). This processing can be used in conjunction with any export type and format that can operate on a saved set.

Packages the TaxonomyTemplateExport files for the same batch into a single .zip file. The ProcessTaxonomyTemplateNodeList processing block launches a separate TaxonomyTemplateExport job for each taxonomy node listed in the seed file, each identified as part of the same batch in Parameter4. This post-processing block collects the files from each job for the same batch and packages them in a single .zip file.

Concatenates a set of files in the designated source directory that match the designated file name pattern, using the header from the designated import template for all files. If the source files are not identical in structure and the import template contains a superset of attributes, some columns may be padded in each appended file. To prevent those attributes from being cleared, the keepRepoValues import option should be set to true.

Copies the import file to the designated file name, then processes the original file; the copy can then be processed by a second import job (for example, for another repository or with different pre-processing).

Converts the import file from one encoding to another.

Generates a delta file using the current file and the previous one that was processed. The current and previous files must be in CSV format. Requires that the EnterworksDiff utility be installed and configured on the Enable server. The generated delta file includes the column il_modification_status, indicating whether each record is new, updated, or removed. If there is no previous file, the current file is processed in full without the il_modification_status column added. If new records need a specific status, the corresponding status attribute should have that value as its default.
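The comparison a delta of this kind performs can be sketched as follows. This is an illustration of the new/updated/removed classification only, not the EnterworksDiff utility itself; the method name and the exact status labels are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of delta classification between two record sets keyed by primary key.
class DeltaSketch {
    // Compares previous vs. current records (primary key -> row content) and
    // labels each change the way an il_modification_status column would.
    // Status labels here are illustrative; the utility's values may differ.
    static Map<String, String> delta(Map<String, String> previous, Map<String, String> current) {
        Map<String, String> status = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : current.entrySet()) {
            String old = previous.get(e.getKey());
            if (old == null) {
                status.put(e.getKey(), "New");        // key absent from previous file
            } else if (!old.equals(e.getValue())) {
                status.put(e.getKey(), "Updated");    // key present, content changed
            }
            // unchanged records are omitted from the delta
        }
        for (String key : previous.keySet()) {
            if (!current.containsKey(key)) {
                status.put(key, "Removed");           // key absent from current file
            }
        }
        return status;
    }
}
```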

Converts a CSV file containing multiple attributes as key/value/UOM triplets into one or more vertical files containing a separate line for each triplet. No more than 500,000 lines are saved in each target file, using the naming convention vertical_<fileNumber>_<sourceFilename>. For example, consider an input file containing the following columns:


This will be converted into multiple rows, one row per attribute, with the following headers:


Any global attributes (MFG_PART, STATUS) and extra columns (GROUP_*, DIFFERENTIATOR_*) are ignored.

Note: This class returns the source filename. It does not return the vertical files. Separate jobs must be run to process the generated files.
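The horizontal-to-vertical reshaping can be sketched as follows. This is an illustration only: the output column order and the handling of empty values are assumptions, and the real block reads the triplet columns from the file's own headers.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of converting one record's key/value/UOM triplets into
// one vertical output row per triplet.
class VerticalizeSketch {
    // Each triplet is {attributeName, value, uom}; emits
    // primaryKey,attributeName,value,uom per non-empty value.
    static List<String> verticalize(String primaryKey, String[][] triplets) {
        List<String> rows = new ArrayList<>();
        for (String[] t : triplets) {
            if (t[1] == null || t[1].isEmpty()) {
                continue; // skip attributes with no value
            }
            rows.add(primaryKey + "," + t[0] + "," + t[1] + "," + t[2]);
        }
        return rows;
    }
}
```

A row whose attribute has no value produces no output line, which is why the vertical files contain only populated triplets.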

Imports updates to one or more existing code sets from a file. If a single code set is imported, the expected columns are the same as those required when importing a code set through the UI. If multiple code sets are imported, the first column must be the code set name, and all codes for a given code set must be consecutive. For multiple code sets, all options apply to each code set and the file type must be CSV. Each code set must already be defined in EnterWorks; the import will fail if a code set does not exist.

Initiates a 'Save and Send' work item on the designated workflow and starting point for the designated saved set and the specified properties. Several reserved words can be specified for the property values:

  • %savedSetId% - indicates to use the ID for the saved set identified by the savedSetName property.

  • %userId% - use the ID for the user identified by the userName property.

  • %repositoryId% - use the ID for the repository identified by the repositoryName property
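The reserved-word substitution above amounts to replacing each placeholder with an ID resolved from the companion property. A minimal sketch, in which a lookup map stands in for the actual EnterWorks ID resolution:

```java
import java.util.Map;

// Sketch of reserved-word substitution in property values.
class ReservedWordSketch {
    // Replaces each reserved word (e.g. %savedSetId%, %userId%,
    // %repositoryId%) with the resolved ID. The map stands in for the
    // lookups the real block performs against EnterWorks.
    static String resolve(String value, Map<String, String> resolvedIds) {
        String out = value;
        for (Map.Entry<String, String> e : resolvedIds.entrySet()) {
            out = out.replace(e.getKey(), e.getValue());
        }
        return out;
    }
}
```

Property values that contain no reserved words pass through unchanged.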

Adds columns and values to the import file before loading.

Adds a header line to the CSV import file before loading.

Performs concatenations of data into specific columns within an import template. A formula expression referencing other columns within the template can be used. Assumes that all columns already exist; this block does not create or remove any columns.

Preprocesses a .xlsx file that may contain multiple header rows. Adds the designated columns and their values to the file to facilitate batch processing, and generates a new .xlsx file for import into an EnterWorks repository.

Processes a single file or a zip file containing one or more image files. If the submitted file has the .csv extension, it is passed on for import processing by EPIM. If the submitted file has the .zip extension, the contents of the zip file are processed: any valid image files are copied to the designated image directory. If the submitted file is itself a valid image file, it is copied to the designated image directory.

Splits a multi-repository comma-delimited CSV export file into separate import files based on the Import Template definitions referenced by the designated Scheduled Imports. Duplicate rows (provided they are consecutive in the file) are removed, as are rows containing no values. This module launches a job for each separate scheduled import. The main file contains only those columns in the Import Template assigned to the Scheduled Import launching this pre-processing.

Splits the import file into two parts: The first part is processed and the second part is staged in the designated target directory.

Preprocesses an import file containing dynamic attributes as key/value pairs or, optionally, key/value/UOM triplets. Files may contain explicit attribute names, or pairs/triplets of columns numbered consecutively for each pair/triplet.

When a file is processed, the contents of each record are split into pre-defined parts as defined in the designated import templates and each file is loaded separately. The first part is loaded by this import and subsequent parts are loaded by dependent imports that do not require pre-processing.

If consecutive files contain the same primary key, the values from those lines are combined into a single update (split amongst the defined parts). This allows for vertical files where each row contains the primary key and a single key/value pair or key/value/UOM triplet and multiple rows are for the same repository record.

Except for the last part, any empty rows for a part are filtered out, since they will not make any changes to the target record; this reduces overall import processing time. All records are included in the last part, as it should be the only one that is validated, but this requires the parts to be daisy-chained together to ensure it is truly the final part that is loaded.

Each part import template can have up to 1022 attributes, including the primary key.
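The combining of consecutive rows that share a primary key, described above for vertical files, can be sketched as follows; the row layout (primary key, attribute name, attribute value) and the method name are assumptions for the illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of merging consecutive vertical-file rows with the same primary
// key into a single update.
class VerticalMergeSketch {
    // Each input row is {primaryKey, attributeName, attributeValue}.
    // Consecutive rows sharing a primary key collapse into one update map.
    static List<Map<String, String>> merge(List<String[]> rows) {
        List<Map<String, String>> updates = new ArrayList<>();
        String lastKey = null;
        Map<String, String> current = null;
        for (String[] row : rows) {
            if (!row[0].equals(lastKey)) {
                current = new LinkedHashMap<>();
                current.put("PK", row[0]); // "PK" is a placeholder column name
                updates.add(current);
                lastKey = row[0];
            }
            current.put(row[1], row[2]);
        }
        return updates;
    }
}
```

Note that only consecutive rows are combined, which is why the source file must be grouped by primary key.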

Moves all files that match the specified file name patterns (up to 20 patterns may be specified as separate module arguments, using the asterisk (*) as the wildcard indicator) from the source directory to the target directory passed to the pre-processing module.
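The asterisk-only wildcard matching described above can be sketched by translating the pattern into a regular expression, quoting everything except the wildcards; the method name here is invented for illustration.

```java
import java.util.regex.Pattern;

// Sketch of file name matching where '*' is the only wildcard.
class WildcardSketch {
    // Splits the pattern on '*', quotes each literal segment, and joins
    // the segments with ".*" so only the asterisks act as wildcards.
    static boolean matches(String pattern, String fileName) {
        String[] parts = pattern.split("\\*", -1);
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                regex.append(".*");
            }
            regex.append(Pattern.quote(parts[i]));
        }
        return fileName.matches(regex.toString());
    }
}
```

Quoting the literal segments matters: a pattern such as "export_*.csv" should treat the dot literally rather than as a regex metacharacter.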

Transforms a .csv or .xlsx file into an .xlsx or .csv file containing either:

  • The columns that match the designated import template.

  • Only the valid and transformed columns from the import file.

Optionally, it validates designated columns for required or specific values, and rejects a row if the values are empty or do not match.

Decompresses zip file before processing.

To configure a Scheduled Import or Export with a pre/post-processing block:

  1. Open the Scheduled Import or Scheduled Export repository.

  2. Open the record for the Import or Export.

  3. Open the Import Details or Export Details tab, and open the Import Preprocess Options or Export PostProcess Options sub-tab.

  4. Set Preprocess File or Postprocess File to “Yes”.

  5. Enter the full class path for the processing block class and click the calculate button on the Pre-process Class or Post-process Class field.

  6. The Define Arguments window opens, showing a description of what the block does, the arguments that can be set, and their current values.

  7. Change argument values as needed, then click Update Attributes to save them.