DTL jobs are executed with the same dmxjob command that is used to run jobs developed in and executed through the IDE.
dmxjob run_spec | restart_report_spec

where

run_spec = run_option [log_option] [compress_option] [runon_option | execution_profile_option] [export_option] [report_option]
run_option = /RUN job_name [ENABLERESTART RUNSTATEDIRECTORY run_state_dir]
log_option = [/LOG FORMAT XML|TEXT]
compress_option = [/COMPRESSWORKFILES OFF|ON|DYNAMIC]
runon_option = /RUNON framework
framework = {MAPREDUCE [SINGLECLUSTERNODE]} | {SPARK sparkMasterURL [SINGLECLUSTERNODE]} | {LOCALNODE}
execution_profile_option = /PROFILE execution_profile
execution_profile = {REPOSITORY} | {FILE execution_profile_file}
export_option = /EXPORT env_var[=value] …
report_option = /REPORT HADOOPEXECUTION
restart_report_spec = /RESTARTREPORT job_name RUNSTATEDIRECTORY run_state_dir
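For example, a run with restart enabled, XML-format logging, Spark execution, and one exported environment variable could be launched and later queried for a restart report as shown below. This is a sketch built only from the options above; the job name daily_sales, the run-state directory, the Spark master URL, and the environment variable are hypothetical placeholders, and the line continuations assume a Unix-style shell.

```sh
# Launch the DTL job "daily_sales" with restart support, XML-format logging,
# execution on Spark, and one exported environment variable.
# All names, paths, and URLs below are illustrative placeholders.
dmxjob /RUN daily_sales ENABLERESTART RUNSTATEDIRECTORY /opt/dmx/runstate \
       /LOG FORMAT XML \
       /RUNON SPARK spark://master-node:7077 \
       /EXPORT SOURCE_DIR=/data/incoming

# Request a restart report for the same job using its run-state directory.
dmxjob /RESTARTREPORT daily_sales RUNSTATEDIRECTORY /opt/dmx/runstate
```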
For additional information on the dmxjob command, see "Running a job from the command prompt" in the Connect help.
Statistics
Job execution statistics are written to the job log by default.
Connect Job Completion Status
The exit status value indicates the completion status of Connect jobs.
| Exit Value | Completion Status |
|---|---|
| 0 | Job completed successfully. |
| 100 | Job completed but the generated output may be incorrect or incomplete. |
| 111 | Job terminated prematurely. |
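A calling script can branch on these exit values. The following is a minimal sketch, assuming a Unix-style shell and a hypothetical job named nightly_load:

```sh
# Run a Connect job and react to the documented exit values.
dmxjob /RUN nightly_load
status=$?

if [ "$status" -eq 0 ]; then
    echo "Job completed successfully."
elif [ "$status" -eq 100 ]; then
    echo "Job completed but the generated output may be incorrect or incomplete." >&2
elif [ "$status" -eq 111 ]; then
    echo "Job terminated prematurely." >&2
else
    echo "Unexpected exit status: $status" >&2
fi
```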