Updated: 09/10/2018
IBM Transformation Extender ships a Docker image starting with the 22.214.171.124 release.
Basic information is published in the Transformation Extender release notes,
section ‘Loading and running the Launcher Docker image’.
This blog assumes:
1. Basic Docker knowledge.
2. Docker support already installed on your Linux distribution.
3. You are ready for a hands-on experience.
If you run the commands highlighted with CMD: and use the provided artifacts, you should be able to execute these examples yourself. The first example, ‘Hello Transformation Extender’, shows how to load the Transformation Extender Docker image, start the container and execute a Transformation Extender map inside it. The second example shows how to create a scalable Launcher solution that triggers off files using a cooperative listener.
‘Hello Transformation Extender’ Docker example:
Let’s see how we can run the Transformation Extender example map ‘sinkmap’ using the Transformation Extender Docker image.
I downloaded the 126.96.36.199 Docker image itx-Launcher-188.8.131.52.21-ubuntu_Docker_image.tar.gz to my local Ubuntu Linux system.
To load this Docker image and execute the sinkmap.mmc example, do the following:
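The load step might look like this (the archive name is the one downloaded above; adjust it to match your download):

```shell
# Load the Transformation Extender image from the downloaded archive:
docker load -i itx-Launcher-188.8.131.52.21-ubuntu_Docker_image.tar.gz

# Verify the image is available and note its IMAGE ID:
docker images
```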
Loading the image takes some time, but the good news is, you only need to load the image once.
The next step is to start the image with image ID 5a128d8d2583 in interactive mode with a terminal attached. We will use this image ID throughout the blog.
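Starting the container interactively might look like this (using bash as the shell is an assumption; the image may define its own entry point):

```shell
# -i keeps stdin open, -t attaches a terminal:
docker run -it 5a128d8d2583 bash
```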
We will use the ‘root’ user to perform all actions in the image. This should not be confused with root privileges on the host: the root privileges apply only inside the Docker container, so no harm can be done to the host itself.
Transformation Extender is installed in /opt/ibm/wsdtx.
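Inside the container, one way to locate and run the sinkmap example is sketched below; the example’s location under the install tree and the command-server binary name (mercator) are assumptions — check them against your release:

```shell
cd /opt/ibm/wsdtx

# Find the compiled example map somewhere under the install tree:
find . -name 'sinkmap.mmc'

# Run it with the command server (assumed to be on the PATH);
# substitute the path reported by find above:
mercator <path-to>/sinkmap.mmc
```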
To finish this interactive Docker session, type ‘exit’ at the shell prompt.
The Docker container is now destroyed. Obviously, if the goal was to keep the files that sinkmap creates, those output files are lost at this point. Directories on the host machine can be mapped into the Docker container; that is covered in the next section.
Scalable Launcher Solution
Let’s try something way more interesting. How can we set up a scalable Launcher solution that triggers off files, runs inside a Docker container, and writes its logs and outputs to persistent storage?
We will need to map directories from the host into the container, enable the Launcher cooperative listener, and enable additional logging.
To enable the cooperative listener, modify dtx.ini:
To enable some Launcher logging, edit dtx.ini:
LogInfo=1 - enables the Info level in the Launcher compound log
LauncherLog=ewsc - Launcher start info and errors
and enable periodic status monitoring:
HeartbeatFileInterval=60 - creates a new JSON file every 60s reporting Launcher activity.
We will set the InitPendingHigh/Low values in dtx.ini to ensure uniform triggering and load balancing:
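Taken together, the dtx.ini changes described above might look like the fragment below. The section name, the CooperativeListener value and the InitPending thresholds are assumptions — the exact values are in the dtx.ini attached to the blog:

```ini
[Launcher]
CooperativeListener=Y      ; share the trigger directory between Launchers
LogInfo=1                  ; Info level in the Launcher compound log
LauncherLog=ewsc           ; errors, warnings, start info
HeartbeatFileInterval=60   ; write a JSON activity file every 60 s
InitPendingHigh=...        ; thresholds as in the attached dtx.ini
InitPendingLow=...
```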
The dtx.ini file that we just changed will be used from the host machine; it will be mapped when the Docker container is started. The file is attached to the blog.
In addition, we need to map the directory where Launcher maps and systems are deployed, the input trigger directory, and the directories where output files and logs will be created.
We create the directory /nfshome/vperic/dockerdemo:
(please swap /nfshome/vperic with path appropriate on your host system)
and these sub directories:
config logs maps systems inputs outputs
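The directory setup can be scripted as follows (swap /nfshome/vperic for a path appropriate on your host):

```shell
# Create the demo base directory and its six subdirectories:
BASE=/nfshome/vperic/dockerdemo
mkdir -p "$BASE/config" "$BASE/logs" "$BASE/maps" \
         "$BASE/systems" "$BASE/inputs" "$BASE/outputs"
```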
The modified dtx.ini goes in the config subdirectory. The Simple.mmc map is deployed to the maps folder, and dockerdemo.msl to the systems folder. The Launcher system triggers off files in the inputs folder, writes results to the outputs folder and produces logs in the logs folder.
All files required to deploy this Launcher system are attached. Be sure to modify the server options before deploying to your host. Opened in the IFD tool, the system looks like this:
The Simple map has only one rule: it converts the content of the input file to upper case.
The -v option allows files and directories on the host to be mapped inside the container.
Let’s first try to map dtx.ini file and start the Docker container using Transformation Extender installation directory as a starting work directory and check CooperativeListener value in dtx.ini file:
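A first test of the mapping might look like this; it assumes dtx.ini lives directly in the /opt/ibm/wsdtx install directory:

```shell
# Map the modified dtx.ini over the one inside the image and print
# the CooperativeListener setting (-w sets the working directory):
docker run -it -w /opt/ibm/wsdtx \
  -v /nfshome/vperic/dockerdemo/config/dtx.ini:/opt/ibm/wsdtx/dtx.ini \
  5a128d8d2583 grep CooperativeListener dtx.ini
```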
If we do not map directories (no -v switch), we will see the default value of the CooperativeListener option retrieved from the dtx.ini saved in the Docker image:
We can now map other folders and start the container in interactive mode to get shell prompt and verify mappings:
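Mapping the whole demo tree to the same path inside the container keeps host and container paths identical (a convenience, not a requirement):

```shell
docker run -it \
  -v /nfshome/vperic/dockerdemo/config/dtx.ini:/opt/ibm/wsdtx/dtx.ini \
  -v /nfshome/vperic/dockerdemo:/nfshome/vperic/dockerdemo \
  5a128d8d2583 bash
```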
If we list the /nfshome/vperic/dockerdemo folder from inside the container, we can confirm that it is mapped to the folder on the host machine where our maps and systems are deployed.
The only missing part is the Launcher startup configuration, which we can build with the launcheradmin.sh command-line tool.
The minimal options needed to start the Launcher are ‘-auto’ (start systems automatically) and ‘-addir’ (add the deployment directory where the Launcher looks for msl files).
launcheradmin.sh -auto -addir /nfshome/vperic/dockerdemo/systems
LauncherAdmin.bin is the file where the Launcher options are saved. We can create
the LauncherAdmin.bin file inside the container, copy it to a host directory, e.g.
and then map the file using the -v option when we start the container. Another approach is to invoke the launcheradmin.sh command from the Docker run command when we start the Launcher. We will use the latter approach.
The script logs startup info into Docker.log, executes the launcheradmin.sh command to set up the Launcher, and starts the Launcher using the launcher.sh script. The last ‘tail’ command ensures that the script never returns; without the blocking tail, the Docker instance would shut down as soon as the entry-point script returned.
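The attached entry-point script is not reproduced in the blog; a minimal sketch matching the description above could be written like this (the paths, the Docker.log name and the exact launcheradmin.sh/launcher.sh invocations are assumptions):

```shell
# Write a sketch of the Launcher entry-point script:
cat > entrypoint.sh <<'EOF'
#!/bin/bash
LOG=/nfshome/vperic/dockerdemo/logs/Docker.log
echo "$(date) starting Launcher on $(hostname)" >> "$LOG"

# Configure the Launcher: auto-start systems, add the deployment dir.
launcheradmin.sh -auto -addir /nfshome/vperic/dockerdemo/systems

# Start the Launcher.
launcher.sh

# Block forever; if this script returned, the container would exit.
tail -f /dev/null
EOF
chmod +x entrypoint.sh
```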
We will use the docker run -h switch to set the Docker instance hostname and the --name option to define the Docker instance name. The -d switch ensures that the Docker instance runs detached from our shell.
The final command to start the Transformation Extender Launcher container is:
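(This sketch assumes the attached entry-point script was copied to /nfshome/vperic/dockerdemo/entrypoint.sh inside the mapped demo directory.)

```shell
docker run -d -h Launcher1 --name Launcher1 \
  -v /nfshome/vperic/dockerdemo/config/dtx.ini:/opt/ibm/wsdtx/dtx.ini \
  -v /nfshome/vperic/dockerdemo:/nfshome/vperic/dockerdemo \
  5a128d8d2583 /nfshome/vperic/dockerdemo/entrypoint.sh

# Confirm it is running:
docker ps
```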
The docker ps command lists active Docker containers:
Launcher logs are created in the /nfshome/vperic/dockerdemo/logs folder. Log file names are namespaced with the hostname, Launcher1 in our case. E.g.:
Now, we can create the first trigger file to verify Launcher triggering:
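Creating the trigger file is a one-liner (the mkdir -p is included only so the command works standalone):

```shell
mkdir -p /nfshome/vperic/dockerdemo/inputs
echo "hello transformation extender" > /nfshome/vperic/dockerdemo/inputs/in1.txt
```

Once the Launcher fires, outputs/out1.txt should contain the upper-cased text.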
The Launcher triggers off in1.txt and creates out1.txt in the outputs directory.
As we can see, the map triggered and converted the file content to uppercase.
To examine the Launcher interactively, use the docker exec command, e.g. to open a bash session inside the active Launcher container:
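For example:

```shell
# Attach an interactive bash session to the running Launcher1 container:
docker exec -it Launcher1 bash
```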
This gives you a bash prompt inside the container:
You can browse directories in the container and check logs. Use ‘exit’ to finish this interactive session.
Generically, use: docker exec -it Launcher1 <command> to execute commands in the running container.
So, at the moment, we have one active ITX Launcher triggering off files in the inputs directory.
Without any additional configuration, we are ready to spawn another Launcher container that monitors the same directory. The only differences from the previous docker run command line are the Launcher name and the Docker container hostname.
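For example (same assumptions as for Launcher1, with only the names changed):

```shell
docker run -d -h Launcher2 --name Launcher2 \
  -v /nfshome/vperic/dockerdemo/config/dtx.ini:/opt/ibm/wsdtx/dtx.ini \
  -v /nfshome/vperic/dockerdemo:/nfshome/vperic/dockerdemo \
  5a128d8d2583 /nfshome/vperic/dockerdemo/entrypoint.sh
```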
The docker ps command now lists two running Launcher containers.
Let’s drop 100000 files in the trigger directory. We expect the two Launchers to process about 50000 files each.
The gentriggers.sh script is attached. It creates 100000 sample trigger files named in*.txt. Create /nfshome/vperic/dockerdemo/tmp, run gentriggers.sh in it, and once all files are created, move them to the trigger directory with:
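gentriggers.sh itself is attached rather than shown; a minimal sketch of such a generator, plus a move command that avoids the shell argument-length limit for 100000 file names, might look like this:

```shell
# Sketch of gentriggers.sh: create N trigger files in1.txt .. inN.txt
# in the current directory (N defaults to 100000).
cat > gentriggers.sh <<'EOF'
#!/bin/bash
N=${1:-100000}
for i in $(seq 1 "$N"); do
  echo "trigger payload $i" > "in$i.txt"
done
EOF
chmod +x gentriggers.sh

# Moving 100000 files with a plain 'mv in*.txt' can exceed the shell's
# argument limit; find avoids that:
# find /nfshome/vperic/dockerdemo/tmp -name 'in*.txt' \
#      -exec mv {} /nfshome/vperic/dockerdemo/inputs/ \;
```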
Both containers will process files and perform automatic load balancing. The JSON heartbeat activity log files are one way to monitor the Launchers. We enabled this Launcher feature by setting HeartbeatFileInterval=60 in the dtx.ini file.
In my case, after several minutes, when all files were processed, the Launcher1 JSON log file CompoundSystem-2018-09-10-18-54-21_Launcher1.json reported the total mapping activity:
"History Successes": "50013",
"History Failures": "0",
"History Total Maps": "50013",
and the Launcher2 log CompoundSystem-2018-09-10-19-23-08_Launcher2.json reported:
"History Successes": "49987",
"History Failures": "0",
"History Total Maps": "49987",
The complete logs are attached to the blog. Review them to see all the statistics that are reported.
The Delta sections show mapping activity between two scans, as defined by HeartbeatFileInterval.
Considering that we use very small input files and a trivial Transformation Extender mapping, the performance of this Launcher demo system depends greatly on file-system speed. I used an NFS file system, and the Launchers were I/O bound. Triggering off an NFS file system would let me run containers on different hosts and achieve horizontal scaling.
In this example, we ran multiple Docker containers on the same host and demonstrated vertical scaling.
To shut down the Launcher containers, perform:
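(docker stop sends SIGTERM to the container's main process and, after a grace period, SIGKILL.)

```shell
docker stop Launcher1 Launcher2
```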
It may seem confusing that two Launchers running on the same host, in two containers, are using the same port 5015. This would not be possible with two Launchers running outside containers on the same host, due to the port conflict. In the case of Docker containers, ports are not mapped to the host by default, so there is no conflict: ports are private to the Docker containers. To map ports to the host, use the docker run -p switch; the user needs to avoid port conflicts by choosing different host ports. In general, port mapping is needed only to enable Launcher monitoring with external tools such as the Management Console or the Launcher monitoring tool.
Finally, kill the containers, e.g. using the docker kill command:
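(docker kill force-stops a container that is still running; docker rm then removes it so the name can be reused.)

```shell
docker kill Launcher1 Launcher2   # force-stop, if still running
docker rm Launcher1 Launcher2     # remove the stopped containers
```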
Additional Transformation Extender Docker Launcher topics that we hope to cover in future blogs include:
Please use the comments section to let me know if you have any questions or are interested in any other topics related to Transformation Extender Launcher containers. Thank you!
Transformation Extender is a trademark of IBM Corporation in at least one jurisdiction and is used under license.