The Scheduled Imports feature provides the option to pre-process files before they are imported. The Scheduled Exports feature provides the option to post-process files after they have been exported. In both cases, the actual processing is handled by a Java class that is an extension of the BaseCustomProcessFile class found in the Services.jar file (or in an application-specific JAR file).
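Conceptually, every block implements the same contract: it receives the staged file, transforms it, and returns the file that the import or export should continue with. The following is a minimal, self-contained sketch of that contract; the class and method names are hypothetical stand-ins, since the real abstract methods are defined by BaseCustomProcessFile in Services.jar:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a processing block: file in, transformed file out.
// The real contract is defined by BaseCustomProcessFile in Services.jar.
public class ExamplePreProcess {
    public Path processFile(Path input) throws IOException {
        Path output = input.resolveSibling("processed_" + input.getFileName());
        List<String> lines = Files.readAllLines(input).stream()
                .map(String::trim)              // example transformation only
                .collect(Collectors.toList());
        Files.write(output, lines);
        return output;                          // this file continues through the job
    }
}
```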
When a processing block is configured within a Scheduled Import or Scheduled Export, details on the block's function and configurable parameters are shown. The blocks are organized under Exports (com.enterworks.services.exports.<class>) or Imports (com.enterworks.services.imports.<class>) based on how they are predominantly used, but some modules can be used for either pre-processing or post-processing.
The table below lists available pre-defined pre/post-processing blocks.
Classpath | Description |
---|---|
com.enterworks.services.exports.CreateUpdateFile | Generates an update file, setting the desired set of columns to specific values for each primary key in the source file. The resulting file can be submitted to an import. This provides a way to update all records that were included in an export (for example, to indicate the records have been syndicated). |
com.enterworks.services.exports.GenerateFixedPositionFile | Creates a fixed position file using one or more export files as a source and one or more mapping files to define the format of the output file. One mapping file is defined for each record format appearing in the file. If multiple files are defined, records in each file must be related by a common key and sorted on that same key, which allows the file processing to complete the merge in a single pass. The order of the records is determined by the order in which the file mappings are defined. If there is a one-to-one mapping of the different records, the same file can be used as the source for each format. The mapping files must be comma-delimited. Each mapping file is validated, ensuring the Start and End positions match the accumulated length of each field (see the fixed-width padding sketch after the table). |
com.enterworks.services.exports.ProcessTaxonomyTemplateExport | Generates a Taxonomy Template in XLSX format using the exportTemplate for global attributes and the category-specific attributes for the designated taxonomyNode. Attributes are shaded if they are mapped in the designated Taxonomy Node in the designated Publication Template. |
com.enterworks.services.exports.ProcessTaxonomyTemplateNodeList | Reads the taxonomy template entries in the file and kicks off a Taxonomy Template export for each one, setting the appropriate parameters for each job. Each launched job should use the ProcessTaxonomyTemplateExport post-processing block to generate the corresponding template. The collection of template jobs can be consolidated into a single file using the TaxonomyTemplateExportZip post-processing block. |
com.enterworks.services.exports.RemoveHeaderRow | Removes the first line of the CSV file. |
com.enterworks.services.exports.SplitCsvFile | Splits a CSV file into multiple parts, each no larger than the specified maximum number of records. Each part is named <baseFileName>_<partNumber>.csv. The collection of files is placed in a ZIP file, which is returned (see the split sketch after the table). |
com.enterworks.services.exports.SplitDeltaExportIntoMultipleParts | Splits a delta export into multiple export jobs, each including up to the maximum records per job. This can be used in situations where the target system cannot process large files. The delta date and time are specified, along with optional additional conditions. The processing uses this information to generate separate saved sets for each batch of the specified size, then launches Scheduled Export jobs using the designated Scheduled Export as a template for each job, updating only the Parameter1 attribute to the name of the saved set for the part, Parameter2 to the part number, and optionally Parameter3 through Parameter5 with any additional data. This allows the target Scheduled Export to have full control over the file naming convention and how the saved set is used (for example, in the 'Saved Set' or 'Root Repository Saved Sets' attributes). This processing can be used in conjunction with any export type and format that can operate on a saved set. |
com.enterworks.services.exports.TaxonomyTemplateExportZip | Packages the TaxonomyTemplateExport files for the same batch into a single .zip file. The ProcessTaxonomyTemplateNodeList processing block launches separate TaxonomyTemplateExport jobs for each taxonomy node listed in the seed file, each identified as part of the same batch in Parameter4. This post-processing block collects the files from each job for the same batch and packages them in a single .zip file. |
com.enterworks.services.imports.ConcatenateCSVFiles | Concatenates a set of files in the designated source directory that match the designated file name pattern, using the header from the designated import template for all files. If the source files are not identical in structure and the import template contains a superset of attributes, some columns may be padded in each appended file. To prevent those attributes from being cleared, the keepRepoValues import option should be set to true. |
com.enterworks.services.imports.CopyImportFile | Copies the import file using the designated file name, then processes the original file so that the copy can be processed by a second import job (for example, for another repository or with different pre-processing). |
com.enterworks.services.imports.EncodeFile | Converts the import file from one encoding to another (see the re-encoding sketch after the table). |
com.enterworks.services.imports.EnterworksFileDiff | Generates a delta file using the current file and the previous one that was processed. The current and previous files must be in CSV format. Requires that the EnterworksDiff utility be installed and configured on the Enable server. The generated delta file includes the column il_modification_status, indicating whether each record is new, updated, or removed. If there is no previous file, the current file is processed in full without the il_modification_status column. If new records need a specific status, the corresponding status attribute should have that default value. |
com.enterworks.services.imports.HorizontalToVerticalAttValUomFileFormat | Converts a .csv file containing multiple attributes as key/value/UOM triplets into one or more vertical files containing a separate line for each triplet. No more than 500,000 lines are saved in each target file, using the naming convention vertical_<fileNumber>_<sourceFilename>. For example, consider an input file containing the following columns: ITEM_ID, MFR_PART, STATUS, GROUP_1, ATTRIBUTE_NAME_1, VALUE_1, UOM_1, DIFFERENTIATOR_1, GROUP_2, ATTRIBUTE_NAME_2, VALUE_2, UOM_2, DIFFERENTIATOR_2, ... This is converted into multiple rows, one row per attribute, with the following headers: ITEM_ID, ATTRIBUTE_NAME, VALUE, UOM. Any global attributes (MFR_PART, STATUS) and extra columns (GROUP_*, DIFFERENTIATOR_*) are ignored (see the reshaping sketch after the table). Note: this class returns the source filename; it does not return the vertical files. Separate jobs must be run to process the generated files. |
com.enterworks.services.imports.ImportCustomCodeSets | Imports updates to existing single or multiple code sets from a file. If a single code set is imported, the expected columns are the same as those required when importing a code set through the UI. If multiple code sets are imported, the first column must be the code set name and all codes for a given code set must be consecutive. For multiple code sets, all options apply to each code set and the file type must be .csv. Each code set must already be defined in EnterWorks; the import will fail if a code set does not exist. |
com.enterworks.services.imports.InitiateSaveAndSendForSavedSet | Initiates a 'Save and Send' work item on the designated workflow and starting point for the designated saved set and the specified properties. Several reserved words can be specified for the property values. |
com.enterworks.services.imports.PreProcessAddFields | Adds columns and values to the import file before loading. |
com.enterworks.services.imports.PreProcessAddHeader | Adds a header line to the CSV import file before loading. |
com.enterworks.services.imports.PreProcessConcatenateColumns | Concatenates data into specific columns within an import template. A formula expression referencing other columns within the template can be used. Assumes that all columns already exist; does not create or remove any columns. |
com.enterworks.services.imports.PreProcessXLSXAddFields | Preprocesses a .xlsx file that may contain multiple header rows. Adds the designated columns and their values to the file to facilitate batch processing, then generates a new .xlsx file for import into an EnterWorks repository. |
com.enterworks.services.imports.ProcessImagePackage | Processes a single file or a .zip file containing one or more image files. If the submitted file has the .csv extension, it is passed on for import processing by EPIM. If the submitted file has the .zip extension, the contents of the zip file are processed and any valid image files are copied to the designated image directory. If the submitted file is a valid image file, it is copied to the designated image directory. |
com.enterworks.services.imports.ProcessMultiRepositoryFile | Splits a multi-repository comma-delimited CSV export file into separate import files based on the Import Template definitions referenced by the designated Scheduled Imports. Duplicate rows (provided they are consecutive in the file) are removed, as are rows containing no values. Jobs for each separate scheduled import are launched by this module. The main file contains only those columns in the Import Template assigned to the Scheduled Import launching this pre-processing. |
com.enterworks.services.imports.SplitImportFile | Splits the import file into two parts: the first part is processed and the second part is staged in the designated target directory. |
com.enterworks.services.imports.SplitKeyValueUomTriplexFile | Preprocesses an import file containing dynamic attributes in key/value pairs or, optionally, key/value/UOM triplets. Files may contain explicit attribute names or pairs/triplets of columns that are numbered consecutively for each pair/triplet. When a file is processed, the contents of each record are split into pre-defined parts as defined in the designated import templates, and each file is loaded separately. The first part is loaded by this import; subsequent parts are loaded by dependent imports that do not require pre-processing. If consecutive lines contain the same primary key, the values from those lines are combined into a single update (split among the defined parts). This allows for vertical files where each row contains the primary key and a single key/value pair or key/value/UOM triplet, with multiple rows for the same repository record. Except for the last part, any empty rows for a part are filtered out, since they would not make any changes to the target record; this reduces overall import processing time. All records are included in the last part, as it should be the only one that is validated, but this requires the parts to be daisy-chained together to ensure it is truly the final part that is loaded. Each part import template can have up to 1022 attributes, including the primary key. |
com.enterworks.services.imports.TransferFiles | Moves all files that match the specified file patterns (up to 20 may be specified as separate arguments for the module) from the source directory to the target directory passed to the pre-processing module, using the asterisk (*) as the wildcard indicator (see the wildcard transfer sketch after the table). |
com.enterworks.services.imports.TransformFile | Transforms a .csv or .xlsx file into an .xlsx or .csv file. Optionally, it validates designated columns for required or specific values, and rejects a row if the values are empty or do not match. |
com.enterworks.services.imports.UncompressZipFile | Decompresses a .zip file before processing. |
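A few of the transformations above are easier to grasp with a small code sketch. For GenerateFixedPositionFile, the essential operation is padding each field to the width implied by its Start and End positions. A hedged, self-contained illustration (the column widths and data here are invented for the example):

```java
import java.util.List;

// Illustration of fixed-position output: each field is padded or truncated
// to the width implied by its Start/End positions in a mapping entry.
public class FixedPositionDemo {
    static String fixed(String value, int width) {
        if (value.length() >= width) return value.substring(0, width);
        return String.format("%-" + width + "s", value); // left-align, pad with spaces
    }

    public static void main(String[] args) {
        // Assumed mapping: SKU occupies positions 1-10, DESCRIPTION 11-40.
        List<String[]> rows = List.of(new String[] {"AB123", "Widget, stainless"});
        for (String[] row : rows) {
            String line = fixed(row[0], 10) + fixed(row[1], 30);
            System.out.println("[" + line + "]"); // always exactly 40 characters
        }
    }
}
```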
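The split performed by SplitCsvFile can be pictured as follows. This is an approximation only: it repeats the header in each part and writes <baseFileName>_<partNumber>.csv files, but omits the final ZIP packaging and assumes no embedded newlines inside quoted CSV fields:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Approximation of SplitCsvFile: chunk data rows into parts of at most
// maxRecords each, repeating the header row in every part.
public class CsvSplitDemo {
    public static void split(Path source, int maxRecords) throws IOException {
        List<String> lines = Files.readAllLines(source);
        String header = lines.get(0);
        String base = source.getFileName().toString().replaceFirst("\\.csv$", "");
        int part = 0;
        for (int i = 1; i < lines.size(); i += maxRecords) {
            part++;
            List<String> chunk = new ArrayList<>();
            chunk.add(header);
            chunk.addAll(lines.subList(i, Math.min(i + maxRecords, lines.size())));
            // Parts follow the <baseFileName>_<partNumber>.csv convention.
            Files.write(source.resolveSibling(base + "_" + part + ".csv"), chunk);
        }
    }
}
```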
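EncodeFile's conversion amounts to reading bytes in one charset and writing them in another. A minimal sketch (the charset choice is just an example):

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal re-encoding sketch: decode the input bytes with one charset and
// encode the result with another, e.g.
// reencode(in, out, Charset.forName("windows-1252"), StandardCharsets.UTF_8);
public class EncodeDemo {
    public static void reencode(Path in, Path out, Charset from, Charset to)
            throws IOException {
        String text = new String(Files.readAllBytes(in), from);
        Files.write(out, text.getBytes(to));
    }
}
```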
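The reshaping done by HorizontalToVerticalAttValUomFileFormat can be illustrated on a single pre-parsed row. The column set is a simplified version of the example in the table, and real CSV quoting is ignored:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the horizontal-to-vertical reshaping: each numbered
// ATTRIBUTE_NAME_n/VALUE_n/UOM_n triplet becomes its own output row.
public class HorizontalToVerticalDemo {
    public static void main(String[] args) {
        String[] header = {"ITEM_ID", "MFR_PART", "STATUS",
                "ATTRIBUTE_NAME_1", "VALUE_1", "UOM_1",
                "ATTRIBUTE_NAME_2", "VALUE_2", "UOM_2"};
        String[] row = {"1001", "X-9", "Active",
                "Weight", "2.5", "kg",
                "Color", "Red", ""};
        List<String[]> vertical = new ArrayList<>();
        for (int i = 0; i < header.length; i++) {
            if (header[i].startsWith("ATTRIBUTE_NAME_")) {
                // Emit ITEM_ID, ATTRIBUTE_NAME, VALUE, UOM for each triplet;
                // global attributes (MFR_PART, STATUS) are ignored.
                vertical.add(new String[] {row[0], row[i], row[i + 1], row[i + 2]});
            }
        }
        for (String[] v : vertical) {
            System.out.println(String.join(",", v)); // e.g. 1001,Weight,2.5,kg
        }
    }
}
```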
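Finally, the wildcard matching used by TransferFiles behaves like a file-system glob. A small sketch using standard Java NIO (the pattern shown is an invented example):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of wildcard-based file transfer: move every file in sourceDir that
// matches a glob pattern such as "PROD_*.csv" into targetDir.
public class TransferDemo {
    public static void transfer(Path sourceDir, Path targetDir, String pattern)
            throws IOException {
        try (DirectoryStream<Path> matches =
                     Files.newDirectoryStream(sourceDir, pattern)) {
            for (Path file : matches) {
                Files.move(file, targetDir.resolve(file.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```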
To configure a Scheduled Import or Scheduled Export with a pre/post-processing block:

1. Open the Scheduled Import or Scheduled Export repository.
2. Open the record for the Import or Export.
3. Open the Import Details or Export Details tab, then open the Import Preprocess Options or Export PostProcess Options sub-tab.
4. Set Preprocess File or Postprocess File to “Yes”.
5. Enter the full class path for the processing block class and click the calculate button on the Pre-process Class or Post-process Class field. The define arguments window opens, showing a description of what the block does, the arguments that can be set, and their current values.
6. Change the argument values as needed and click Update Attributes to save them.