Ironstream can capture mainframe data and publish it to Apache Kafka clusters, where it can be accessed by Hadoop, Spark, and other open systems. You can configure a new internal Kafka producer that publishes data from z/OS to Kafka brokers on Ironstream topics (defined as a target DESTINATION in the configuration file). You can then write your own Kafka consumer code to read the data from the Kafka brokers and make it available to data analytics platforms.
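As a rough illustration of the consumer side, the following is a minimal sketch using the standard Apache Kafka Java client. The broker address, consumer group, and topic name (IRONSTREAM.SYSLOG) are placeholders, not values defined by Ironstream; substitute the topic and brokers configured for your Ironstream DESTINATION.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IronstreamConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker and group settings; replace with your environment's values.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "ironstream-analytics");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical topic name; use the topic mapped to your Ironstream DESTINATION.
            consumer.subscribe(Collections.singletonList("IRONSTREAM.SYSLOG"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Forward each mainframe record to your analytics pipeline;
                    // this sketch simply prints it.
                    System.out.printf("offset=%d value=%s%n",
                                      record.offset(), record.value());
                }
            }
        }
    }
}
```

In practice, the loop body would hand each record to your analytics platform (for example, writing to HDFS or streaming into Spark) rather than printing it.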
For more information, refer to the “Setting Up Kafka for Ironstream” chapter in the Ironstream Configuration and User’s Guide.