Sentry's primary job is to ingest user errors, which imposes two competing requirements: the event ingestion pipeline must stay responsive and fast under any kind of load, and error data must become readable in near real time. To satisfy both, Sentry processes events asynchronously: events are written to ClickHouse, and post-processing tasks run afterward. This creates a problem, however: a downstream task cannot tell whether an event has been persisted before trying to read it. To address this, Sentry built a system called the Synchronized Consumer, which allows a Kafka consumer to pause itself and wait for another consumer group to commit an offset before consuming the same message. This ensures that events are post-processed only after they have been stored in ClickHouse. Additionally, Sentry relies on ClickHouse's stronger consistency options, such as the in_order load-balancing setting, which makes the client pick healthy replicas in a fixed, configured order, so that reads and writes land on the same replica.
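The synchronization idea can be illustrated with a minimal sketch. This is not Sentry's actual implementation (all class and variable names here are hypothetical): it models a follower consumer that refuses to read past the offset the leader group has committed, pausing until the commit arrives.

```python
# Hypothetical sketch of the Synchronized Consumer idea: the follower
# consumes a message only after the leader group (the one writing to
# ClickHouse) has committed that message's offset.

class CommitLog:
    """Tracks the last offset committed by the leader consumer group."""
    def __init__(self):
        self.committed = -1  # no offsets committed yet

    def commit(self, offset):
        self.committed = offset


class SynchronizedConsumer:
    """Follower that never reads past the leader's committed offset."""
    def __init__(self, commit_log):
        self.commit_log = commit_log
        self.position = 0
        self.paused = False

    def poll(self, partition):
        # Pause if the next message has not yet been committed upstream.
        if self.position > self.commit_log.committed:
            self.paused = True
            return None
        self.paused = False
        msg = partition[self.position]
        self.position += 1
        return msg


partition = ["event-0", "event-1", "event-2"]
log = CommitLog()
follower = SynchronizedConsumer(log)

print(follower.poll(partition))  # None: leader has committed nothing yet
log.commit(0)                    # leader has stored event-0 in ClickHouse
print(follower.poll(partition))  # "event-0": now safe to post-process
print(follower.poll(partition))  # None again: offset 1 not yet committed
```

In the real system the commit signal travels through Kafka itself (a commit-log topic) rather than shared memory, but the gating rule is the same: the follower's position may never exceed the leader's committed offset.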