The size of the transient storage area is also affected by the frequency with which changed data is applied to a target. Precisely recommends switching to a streaming Apply model for your target, or raising the Apply frequency as high as practical, when capturing rapidly changing tables in high-volume Db2 environments, especially if space is an issue.
For example, changes from a Db2 source to an Oracle, Db2/LUW, or other target may only need to be applied once a day; in that case, the transient storage must be sized large enough to hold all of the changed data accumulated during the one-day period. Often, however, the estimated size proves inadequate. When that happens, the Capture eventually stops mining the Db2 Log and waits for an Engine to connect and publishing to resume. When the Capture finally requests the next log record from Db2, the required Db2 Archive Logs may have become inaccessible. This occurs when the wait period is long enough, or the volume of changed data large enough, that the Archive Log retention period is exceeded.
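As a rough illustration of the sizing arithmetic, the sketch below estimates transient storage for a once-a-day Apply cycle. The change rate, interval, and safety factor shown are hypothetical placeholders, not Precisely-supplied defaults; substitute measured values from your own environment.

    # Back-of-the-envelope transient storage sizing (assumed figures).
    # Size for the peak change rate over the longest expected gap between
    # Apply cycles, plus headroom for periods when no Engine is connected.
    peak_change_rate_mb_per_hour = 2_000  # assumed peak captured-change volume
    apply_interval_hours = 24             # once-a-day Apply, as in the example
    safety_factor = 2.0                   # headroom for bursts and outages

    required_mb = peak_change_rate_mb_per_hour * apply_interval_hours * safety_factor
    print(f"Transient storage needed: {required_mb / 1024:.1f} GB")
    # Prints: Transient storage needed: 93.8 GB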
Best practices for Db2 Archive Log retention will normally ensure that the Archive Logs remain accessible. In some environments, however, this can become an issue. Precisely recommends analyzing the total Db2 workload in all cases: even though only a fraction of the existing tables may be configured for capture, the Db2/z Log Reader capture potentially requires access to every log archived since the last Apply cycle.
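To make that workload analysis concrete, the sketch below checks whether Archive Log retention covers the worst-case gap between Apply cycles. All figures are assumptions for illustration; the key point is that the check must use the total Db2 log volume, not just the volume generated by the captured tables.

    # Archive Log retention sanity check (illustrative assumptions only).
    # Retention must cover TOTAL Db2 log generation since the last Apply
    # cycle, because the Log Reader may need every log archived in that
    # window, not only the records for captured tables.
    total_log_gb_per_day = 500   # assumed total Db2 log volume, all tables
    archive_capacity_gb = 4_000  # assumed space allotted to Archive Logs
    max_apply_gap_days = 3       # assumed worst-case gap between Apply cycles

    retention_days = archive_capacity_gb / total_log_gb_per_day
    if retention_days < max_apply_gap_days:
        print("WARNING: Archive Logs may roll off before Capture reads them")
    else:
        print(f"OK: {retention_days:.1f} days of retention covers the gap")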