Orb Data

Orb Data's Blog Site

Workload Automation – Part 2: Inbound Files

Thankfully, the days when a bank had to courier magnetic tapes by taxi across the city to another bank so that account transfers and clearing could be processed are long gone. Most data transfer between companies is now performed electronically, which inevitably involves some sort of transfer of files between them.

In this blog I will look at this common business process, which I refer to as inbound files, and examine the options within TWS to fully automate it.

Generally these inbound files are sent either by 3rd party companies, e.g. suppliers, or generated internally by some other business process, and frequently contain variable information, e.g. a timestamp within the file name, to support multiple inbound files. In this blog I won't differentiate between how the files arrive or their naming conventions, but will concentrate on how TWS can automate their processing once they do arrive.

Older versions of TWS (pre 8.4) relied upon the use of a file OPENS dependency within the job stream to prevent the job from executing before the inbound file was available. Usually some sort of script was also developed to deal with the possibility of multiple files arriving each with different timestamps or to ensure that files were processed sequentially.
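As a reminder of that older approach, a file OPENS dependency in composer syntax might look like the sketch below. The workstation, job stream and job names here are illustrative, not taken from a real environment.

```
SCHEDULE MASTER#ORB_INBOUND_FILE
ON REQUEST
OPENS MASTER#"/tmp/Transferred_File.zip"
:
MASTER#ORB_PROCESS_FILE
END
```

The job stream instance must already be in the plan and waiting for the dependency to be released, which is one of the limitations the event-driven approach removes.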

TWS 8.4 introduced support for Event Driven Workload Automation (EDWA), which amongst other things provided the capability for TWS to wait for a file matching a specified name or pattern and then take some action when it arrived. This functionality is provided by a TWS File Created event rule and has some advantages over the file OPENS dependency, not least that the name of the file that actually arrived can be captured and passed on to the jobs that process it.

TWS 8.5 introduced support for variable tables, allowing global and job stream specific variables to be specified and then resolved by TWS at job submission time. Using a job stream specific variable table, the name of the created file can be passed directly from the TWS event rule monitoring for the file creation to the job (or jobs) that must process the file. And there is no need to write scripts to support this, as it uses out-of-the-box (OOTB) TWS functionality.

So how do we put all of this together? The steps below show an example of an event rule monitoring for a file named /tmp/Transferred_File.zip and the submission of a job stream ORB_INBOUND_FILE to process the file once it has been created.

Event Rule for File Creation

Create an event rule to monitor for the creation of the file Transferred_File.zip in the path /tmp. You will need to adjust the filename and path for your environment and operating system. Use the Dynamic Workload Console (DWC) Tivoli Workload Scheduler-> Design-> Create Event Rules function to create the event rule.

The screenshot below shows the rule, ORB_FTP_FILE_CREATED, in the Event Rule editor of the DWC.


The Properties section shows the path to the file and the filename. It also lists the frequency that TWS will check for the existence of the file and which TWS workstation should perform the check. In order for the check to be successful, the path to the file system where the file is created must be accessible by the ssmagent process executing on the workstation specified.

The ssmagent process is started automatically by the monman process, which in turn is started with the command “conman startmon”. If the file system where the file is created is a remote file system, e.g. NFS mounted or a Windows file share, you will need to enable the remote file system option in the ssmagent configuration – edit <TWSHOME>/ssm/config/init.cfg, specify “MonitorRemoteFS= on” and then restart monitoring.
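For reference, the relevant fragment of the ssmagent configuration file would look something like the sketch below; the exact location depends on your TWS installation directory.

```
# <TWSHOME>/ssm/config/init.cfg
# Enable monitoring of remote (NFS-mounted or shared) file systems
MonitorRemoteFS= on
```

Monitoring can then be restarted with “conman stopmon” followed by “conman startmon”.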

The Action section of the event rule defines the job stream to be submitted as shown in the screenshot below.


Notice that the Custom parameter 1 is set to ORB_FILENAME=%{FileCreated.FileName}, which identifies the name of the TWS variable (ORB_FILENAME) we will define for the job used to process the incoming file and the variable used to represent the incoming filename from the event rule. This latter variable can be inserted by clicking the Variable… button next to the field and selecting the File name variable from the list as shown below.


Save the event rule, ensuring that it is active, either by removing the Draft setting before saving or by selecting the rule from the Event Rule list and clicking Set as Complete. The rule should activate automatically within 5 minutes, but if it does not, try issuing the command “planman deploy” from the MDM to speed things up a bit!

Variable Table for Job Stream

The next step is to create a variable table definition that is used to “pass” the name of the file created that caused the file creation event rule to trigger directly to the job that needs to process the incoming file. Use the DWC Tivoli Workload Scheduler-> Design-> Create Workload Definitions option to create the Variable Table.

The screenshot below shows the Workload Designer creation of the Variable Table named ORB_INBOUND_FILE, which has two variable entries: one for the path to the script to be executed, and a “placeholder” variable named ORB_FILENAME that will be used to contain the name of the created file.
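In composer syntax, the same variable table might look like the sketch below. The script path value is an assumed example for this walkthrough, and the ORB_FILENAME value is just a placeholder that the event rule will override at submission time.

```
VARTABLE ORB_INBOUND_FILE
MEMBERS
  TWSSCRIPT_PATH "/opt/tws/scripts"
  ORB_FILENAME "placeholder"
END
```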


Save the variable table definition before continuing to create the job stream definition.

Job and Job Stream

The last step is to create the definition for the job to process the incoming file along with a job stream definition that brings all of the components together. Use the DWC to create the job definition – in my example, the job that will process the incoming file needs to execute the script named process_file.sh, which is located in the scripts directory. The scripts directory is identified by the variable TWSSCRIPT_PATH that was defined in the previous section.

The screenshot below shows my example job definition in the DWC.


Notice that the script accepts a parameter of file=^ORB_FILENAME^, which is the standard syntax for definition of a variable (^ORB_FILENAME^) value to be substituted automatically by TWS at job submission time. Note that it also matches the name of the variable that was defined within the variable table in the previous section.
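A composer sketch of such a job definition is shown below; the workstation name (MASTER) and logon user are assumptions for illustration, and only the SCRIPTNAME line with its ^variable^ references reflects the example above.

```
MASTER#ORB_PROCESS_FILE
 SCRIPTNAME "^TWSSCRIPT_PATH^/process_file.sh file=^ORB_FILENAME^"
 STREAMLOGON twsuser
 DESCRIPTION "Process the inbound file"
 RECOVERY STOP
```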

Now create a job stream definition within the DWC to tie all these things together. The example below shows a job stream named ORB_INBOUND_FILE with the job defined earlier.


Notice that we have added the variable table defined earlier to connect the job stream to the variable table. This allows the name of the file created that causes the event rule to trigger to be passed directly to the job via the variable table definition.

Also note that the job stream has a dependency upon itself, which causes each job stream submitted by the event rule to queue up behind the previous job stream of the same name, thereby forcing the files that were created to be processed one at a time in a sequential manner. The resolution criteria option “Closest preceding” must be selected on the Dependency resolution tab for this to function correctly. If the files can be processed in parallel, there is no need to set the job stream dependency on itself.
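Put together, the job stream might look roughly like the composer sketch below. This is only a sketch: the workstation name is an assumption, and the FOLLOWS line relies on the default closest-preceding matching criteria to queue each instance behind its predecessor.

```
SCHEDULE MASTER#ORB_INBOUND_FILE
VARTABLE ORB_INBOUND_FILE
ON REQUEST
FOLLOWS MASTER#ORB_INBOUND_FILE.@
:
MASTER#ORB_PROCESS_FILE
END
```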

Save the job stream definition and we can start to test the process out.

Testing the solution

To test the solution, keep in mind that the event rule will only trigger if there is a state change. This means that if the file name we are monitoring for already exists at the time the rule is activated, the event rule will not trigger. The file should not exist prior to activating the event rule – if it does exist, delete or rename the file and wait at least 60 seconds before creating the file again.
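A simple way to exercise the rule from the monitored workstation is sketched below, assuming the example path /tmp/Transferred_File.zip used throughout.

```shell
# Simulate the arrival of the inbound file to exercise the event rule.
rm -f /tmp/Transferred_File.zip   # remove any stale copy so a state change can occur
# ...wait at least 60 seconds here before recreating the file...
echo "test payload" > /tmp/Transferred_File.zip
```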

If everything has been configured correctly, you will see the event rule action trigger in the DWC under Tivoli Workload Scheduler-> Monitor-> Workload Events-> Monitor Triggered Actions as shown in the screenshot below.


Now check in the DWC under Tivoli Workload Scheduler-> Monitor-> Monitor Job Streams, selecting the job stream and job that was submitted – in my example ORB_INBOUND_FILE, as shown in the screenshot below.


Note that the file=^ORB_FILENAME^ parameter in the job has been substituted with the actual file name causing the event rule to trigger.

The submitted job stream can of course contain any number of jobs required to process the incoming file, such as decompressing the zip file, followed by another job to decrypt the file and lastly the job to process the actual file contents.
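A minimal sketch of what the first job's script might look like is shown below. Only the file=<path> argument convention matches the file=^ORB_FILENAME^ parameter above; the script name and the processing steps are hypothetical placeholders.

```shell
#!/bin/sh
# process_file.sh - hypothetical sketch of the job script that receives
# the inbound file name via a file=<path> argument from TWS.

# Extract the path from a file=<path> argument and print it on stdout.
parse_file_arg() {
  case "$1" in
    file=*) printf '%s\n' "${1#file=}" ;;
    *)      return 2 ;;
  esac
}

main() {
  INFILE=$(parse_file_arg "$1") || { echo "Usage: $0 file=<path>" >&2; return 2; }
  [ -r "$INFILE" ] || { echo "Cannot read inbound file: $INFILE" >&2; return 1; }
  echo "Processing $INFILE"
  # unzip / decrypt / load steps would go here, or in follow-on jobs
}

# Only run main when executed directly, not when sourced.
if [ "$(basename "$0")" = "process_file.sh" ]; then
  main "$@"
fi
```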

Workaround for TWS 8.4

The above process only works if you have TWS 8.5 or higher, due to the use of variable tables. If you are currently using TWS 8.4 then, apart from recommending that you upgrade to a newer version because of the upcoming end of service for TWS 8.4, you can use the same approach, but instead of submitting a job stream, submit an ad hoc job definition.

The event rule definition is exactly the same as that shown above – only the event action section needs to be changed as shown below to use an ad hoc job submission. In this case the name of the file causing the event rule to trigger is passed directly from the event rule to the ad hoc job being submitted.

Notice that the Job Task field contains the path to the script and the name of the script to be executed. The name of the incoming file is passed using the file=%{FileCreated.FileName} parameter, which is replaced with the name of the incoming file automatically. This variable can be found using the Variable… button as described above.
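For example, the Job Task field might contain something like the line below; the script path is an assumed example here, since variable tables (and hence TWSSCRIPT_PATH) are not available in 8.4.

```
/opt/tws/scripts/process_file.sh file=%{FileCreated.FileName}
```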


After the event rule triggers and the ad hoc job is submitted, verify that the file name has been passed correctly by selecting the job in the DWC and checking the Task Command field as shown in the screenshot below.


Variations on the theme

The above examples are intended to illustrate the process of accepting an inbound file automatically within TWS and passing the file through to a target job used to process it. While the example is accurate, it is perhaps a little simplistic in its approach.

Best practice for inbound files is to use a trigger file that is sent only when the actual file has been transferred successfully to indicate that the data file should now be processed. As there are numerous possibilities when using trigger file mechanisms it is impossible to cover them all here, but the above process can usually be easily adapted to monitor the trigger file and then submit a job to execute the associated data file. Common naming conventions can help reduce the complexity and avoid the necessity of writing scripts e.g. trigger and data files have the same name except for the file type of “trg” for trigger and “dat” for data.
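With a convention like that, no elaborate scripting is needed to go from the trigger file name to the data file name, as the small shell sketch below illustrates (the convention itself, .trg vs .dat with otherwise identical names, is assumed from the example above).

```shell
# Derive the data file name from a trigger file name, assuming the
# "same name, .trg vs .dat" naming convention described above.
trigger_to_data() {
  case "$1" in
    *.trg) printf '%s\n' "${1%.trg}.dat" ;;
    *)     return 1 ;;
  esac
}
```

A job submitted for the trigger file /tmp/orders.trg would then process /tmp/orders.dat.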

In this article I looked at “pushed” inbound files; in the next article I will look at using built-in TWS functionality to process “pulled” inbound files.

