4.5 Consuming Messages
Consumer groups, offsets, and message consumption patterns.
Overview
This lesson covers consuming messages from Kafka using command-line tools, including consumer groups, offsets, and handling both keyed and non-keyed messages.
Why Command Line?
- Quick testing and debugging
- Simulating data flows
- Verifying broker connectivity
- Testing ACL configurations
Environment Setup
Docker Compose with:
- Zookeeper
- Single Kafka broker
- Schema Registry
Key Concepts
Producer Controls Keys
The producer (not the consumer) decides whether messages are sent with or without keys, and that choice determines how messages are distributed across partitions.
Without Keys
- Round-robin distribution across partitions
- No ordering guarantee
- Even load distribution
With Keys
- Hashing determines partition
- Same key → same partition
- Ordering guaranteed per key
- Related messages grouped together
Creating a Topic
docker exec -it kafka kafka-topics.sh \
  --create --topic payment \
  --bootstrap-server localhost:9092 \
  --partitions 3 \
  --replication-factor 1
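To verify that the topic was created with the expected three partitions, you can describe it (this assumes the same container name and broker address as above):

docker exec -it kafka kafka-topics.sh \
  --describe --topic payment \
  --bootstrap-server localhost:9092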
Producing Messages (For Testing)
Without Keys
for i in {1..100}; do
  echo "Message $i" | docker exec -i kafka \
    kafka-console-producer.sh \
    --bootstrap-server localhost:9092 \
    --topic payment
done
With Keys
for i in {1..100}; do
  echo "Key$i:Message$i" | docker exec -i kafka \
    kafka-console-producer.sh \
    --bootstrap-server localhost:9092 \
    --property "parse.key=true" \
    --property "key.separator=:" \
    --topic payment
done
Consuming Messages
Simple Consumer (From Beginning)
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment \
  --from-beginning
Messages are printed in the order in which they are stored within each partition.
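Ordering is only guaranteed within a single partition. One way to see this is to read one partition from a fixed offset and cap the output; --partition, --offset, and --max-messages are standard console-consumer options:

docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment \
  --partition 0 \
  --offset 0 \
  --max-messages 10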
With Consumer Groups
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment \
  --group group1
Multiple Consumers in Same Group
Open three terminals and run the same command in each. Kafka divides partitions among consumers:
- Window 1: Messages from Partition 0
- Window 2: Messages from Partition 1
- Window 3: Messages from Partition 2
Each consumer processes messages from assigned partitions only. No two consumers in the same group process the same message.
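To check which partitions each consumer owns and how far it has read, describe the group with the consumer-groups tool (output columns vary slightly between Kafka versions):

docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe --group group1

The output shows one row per partition with its current offset, log-end offset, lag, and the consumer it is assigned to.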
Consuming Messages with Keys
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment \
  --property "print.key=true" \
  --property "key.separator=|"
Output format: Key1|Message1
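To confirm that a given key always maps to the same partition, the console consumer can also print the partition number; the print.partition property assumes a reasonably recent Kafka release:

docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment \
  --from-beginning \
  --property "print.key=true" \
  --property "print.partition=true" \
  --property "key.separator=|"

Each printed message then includes its partition, so repeated keys should always report the same number.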
Consumer Groups Explained
How They Work
- Kafka tracks offsets per consumer group
- Each partition assigned to one consumer in the group
- Load balanced automatically
- Fault tolerant (rebalancing on failure)
Benefits
- Parallel processing
- Scalability
- Fault tolerance
- Independent progress tracking
Multiple Groups
Different consumer groups can read the same topic independently:
- Analytics group
- Processing group
- Archive group
Each maintains its own offset.
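The group names above are placeholders. To see every group on the cluster and its committed offsets in one pass, you can list and describe them (--all-groups requires Kafka 2.4 or newer):

docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 --list

docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe --all-groups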
Best Practices
Consumer Groups
- Use meaningful group IDs
- Match consumer count to partition count for optimal parallelism
- More consumers than partitions = idle consumers
- Monitor consumer lag
Offset Management
- Commit offsets regularly
- Handle rebalancing gracefully
- Monitor offset lag (see the CLI sketch after this list)
- Use auto-commit wisely
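As referenced above, offset positions can be inspected and rewound from the command line. A minimal sketch using the group and topic from the earlier examples; always preview with a dry run first:

# Preview a reset to the earliest offset (no changes applied)
docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group group1 --topic payment \
  --reset-offsets --to-earliest --dry-run

# Apply the reset (fails while the group still has active consumers)
docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group group1 --topic payment \
  --reset-offsets --to-earliest --execute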
Performance
- Batch processing when possible
- Tune fetch sizes (see the sketch after this list)
- Configure appropriate timeouts
- Monitor consumer metrics
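These knobs can be exercised from the console consumer through --consumer-property; the values below are illustrative starting points, not recommendations:

docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic payment --group group1 \
  --consumer-property fetch.min.bytes=1048576 \
  --consumer-property fetch.max.wait.ms=500 \
  --consumer-property max.poll.records=500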
Summary
Effective message consumption requires:
- Understanding consumer groups
- Proper offset management
- Matching consumer count to partitions
- Handling keys appropriately
- Monitoring and tuning performance
Command-line tools provide a foundation for understanding Kafka consumers before implementing application-level consumers.