Enabling Message Caching

Ironstream for Splunk®/Kafka®/Elastic® for IBM i Ironstream Integration Components Administration

Product type: Software
Portfolio: Integrate
Product family: Ironstream
Product: Ironstream > Ironstream for Elastic®, Ironstream > Ironstream for Kafka®, Ironstream > Ironstream for Splunk®
Version: 7.4
Language: English
Product name: Ironstream Splunk®/Kafka®/Elastic®
Copyright: 2022
First publish date: 2007
Last updated: 2023-08-25
Published on: 2023-08-28

To store received messages to disk when they cannot be forwarded because the Kafka broker is unavailable, add "cache": true to the kafka.json file.
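For example, a minimal kafka.json with caching enabled might look as follows; the broker and topic values are placeholders taken from the fuller example at the end of this section:

{
  "brokers": "localhost:9092",
  "topic": "topicname",
  "cache": true
}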

When caching is enabled, a cache file named cache_[hostname].db is created for each system. Once this file reaches the maximum specified size, it is renamed to cache_[hostname].db.old and subsequent records are written to a new cache_[hostname].db file. If a cache_[hostname].db.old file already exists when cache_[hostname].db reaches the maximum size, the existing .old file is overwritten and any unacknowledged messages it contains are lost.

By default, each cache file has a maximum size of 1 GB. This can be configured by adding a "max_cache_size" field to the kafka.json file, whose value is the maximum number of bytes each cache file can use. The maximum disk space used for caching is max_cache_size * 2 * number of agents (the factor of 2 accounts for the .db file and its .db.old counterpart kept for each system).
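For example, with the default 1 GB maximum and three agents, caching can occupy up to 1 GB * 2 * 3 = 6 GB of disk space. The 512000000-byte setting shown in the example at the end of this section would cap that at roughly 3 GB for the same three agents.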

The process that removes message IDs acknowledged by Kafka from the .meta file runs every minute by default. The interval can be configured using the cleanup_rate field in the kafka.json file.
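As an illustration, the field sits alongside the caching options in kafka.json. Note that this section does not state the unit of the cleanup_rate value, so the value below is illustrative only, not a verified setting:

{
  "cache": true,
  "cleanup_rate": 5
}

Check the Ironstream reference documentation for the expected unit before changing this value.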

The cache files can be found in the \logs subdirectory of the Ironstream Integration Components installation path.
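For example, with two monitored systems whose hostnames are (hypothetically) SYSTEMA and SYSTEMB, the \logs subdirectory might contain:

cache_SYSTEMA.db
cache_SYSTEMA.db.old
cache_SYSTEMB.db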

To clear the cache for a system, stop the Ironstream MCS service associated with that system, then delete the cache_[hostname].db file and, if it exists, the cache_[hostname].db.old file.

Example kafka.json file when using message caching:

{
  "brokers": "localhost:9092,foo.example.com:9000",
  "topic": "topicname",
  "cache": true,
  "max_cache_size": 512000000
}