During an incremental snapshot, I need live change events to be prioritized and processed in near real time. However, my Kafka consumer has a fixed consumption throughput, so when a large number of snapshot messages are already queued, live change events are delayed because they are processed sequentially alongside the snapshot data. The goal is to keep live changes near real-time without being impacted by the potentially high volume of snapshot events in the queue. I have PostgreSQL tables with over 30 million rows, and I am using debezium-connector-postgres-3.0.1. Is it possible to control or delay the time between incremental snapshot chunks, or is there another way to approach this?
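For context, I have not found a dedicated property for an inter-chunk delay, but the chunk size itself is configurable via `incremental.snapshot.chunk.size` (default 1024), which at least lets you shrink each burst of snapshot rows. A sketch of the relevant part of the connector config (connector name and value are placeholders, not taken from my actual setup):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "incremental.snapshot.chunk.size": "512"
  }
}
```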

The incremental snapshot process itself works correctly; I am only trying to solve the challenge described above.
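One consumer-side idea I have considered (a sketch, not a solution I have validated): since snapshot rows and live changes share the same topic, the consumer could split each polled batch using the `source.snapshot` field, which Debezium sets to `"incremental"` for rows emitted by an incremental snapshot, process the live events immediately, and drain the snapshot events at lower priority. The function below assumes the event values have already been deserialized into dicts:

```python
def split_batch(records):
    """Partition deserialized Debezium event values into live change events
    and incremental-snapshot rows, based on the source.snapshot field.
    Tombstones (None values) are treated as live events."""
    live, snapshot = [], []
    for value in records:
        source = (value or {}).get("source", {})
        if source.get("snapshot") == "incremental":
            snapshot.append(value)
        else:
            live.append(value)
    return live, snapshot
```

The obvious caveat is that deferring snapshot rows breaks the original interleaving, so per-key ordering between a snapshot read and a later live update of the same row would need separate handling.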
