To store received messages to disk when they cannot be forwarded because the Kafka broker is unavailable, add "cache": true to the kafka.json file.
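For example, a minimal kafka.json with caching enabled might look like the following; the broker address and topic name are placeholders:
{
    "brokers": "localhost:9092",
    "topic": "topicname",
    "cache": true
}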
When caching is enabled, a cache file of the format cache_[hostname].db is created for each system. Once this file reaches the maximum specified size, it is renamed to cache_[hostname].db.old and new records are written to a fresh cache_[hostname].db file. If a cache_[hostname].db.old file already exists when the cache_[hostname].db file reaches the maximum size, the existing cache_[hostname].db.old file is overwritten and any unacknowledged messages it contains are lost.
By default, each cache file has a maximum size of 1 GB. This can be configured by adding a "max_cache_size" field to the kafka.json file. The value is the maximum number of bytes each cache file can use. The maximum disk space used for caching is max_cache_size * 2 * the number of agents.
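For example, with the default 1 GB maximum cache size and three agents, caching could use up to 1 GB * 2 * 3 = 6 GB of disk space.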
By default, the process that removes message IDs acknowledged by Kafka from the .meta file runs every minute. This rate can be configured using the cleanup_rate field in the kafka.json file.
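As an illustrative sketch, a kafka.json that also overrides the cleanup rate might look like the following; the cleanup_rate value shown is a placeholder, as the unit used by the field is not specified in this section:
{
    "brokers": "localhost:9092",
    "topic": "topicname",
    "cache": true,
    "cleanup_rate": 1
}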
The cache files can be found in the \logs subdirectory of the Ironstream Integration Components installation path.
To clear the cache for a system, stop the Ironstream MCS service associated with that system, then delete the cache_[hostname].db file and the cache_[hostname].db.old file if it exists.
Example kafka.json file when using message caching:
{
    "brokers": "localhost:9092,foo.example.com:9000",
    "topic": "topicname",
    "cache": true,
    "max_cache_size": 512000000
}