Option 1: Traditional mode (requires Zookeeper)
1. Pull the images
docker pull wurstmeister/zookeeper   # Zookeeper image
docker pull wurstmeister/kafka       # Kafka image
2. Start Zookeeper
docker run -d --name zookeeper \
-p 2181:2181 \
-v /etc/localtime:/etc/localtime \
wurstmeister/zookeeper
3. Start Kafka
docker run -d --name kafka \
-p 9092:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--link zookeeper \
wurstmeister/kafka
4. Create a topic
Note: the path of the Kafka CLI scripts inside the container may differ between image versions, so locate kafka-topics.sh first, for example:
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh
docker exec -it kafka bash
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --create \
--bootstrap-server localhost:9092 \
--replication-factor 1 \
--partitions 1 \
--topic test-topic
5. List topics
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --list \
--bootstrap-server localhost:9092
Quick Java verification that the topic was created and the connection works
Maven dependency
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.5.1</version>
</dependency>
Producer
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class KafkaProducerDemo {
    public static void main(String[] args) {
        // Broker address and key/value serializers
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Send one message to test-topic, then close; close() flushes pending records
        Producer<String, String> producer = new KafkaProducer<>(props);
        ProducerRecord<String, String> record = new ProducerRecord<>("test-topic", "key", "Hello Kafka");
        producer.send(record);
        producer.close();
    }
}
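
The send() call above is asynchronous and returns before the broker has acknowledged the message. If you want this quick test to confirm delivery, one option is to block on the returned Future (this requires main to declare throws Exception or wrap the call in try/catch); a small variant of the send line above, assuming the same record:

// Blocks until the broker acknowledges the write, or throws on failure
RecordMetadata metadata = producer.send(record).get();
System.out.printf("Written to %s-%d at offset %d%n",
        metadata.topic(), metadata.partition(), metadata.offset());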
Consumer
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        // Broker address, consumer group, and key/value deserializers
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Subscribe to test-topic and poll forever, printing each record received
        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test-topic"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("Received: key=%s, value=%s%n", record.key(), record.value());
            }
        }
    }
}
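
One caveat for this verification: with the default auto.offset.reset of latest, a consumer joining a group with no committed offsets only sees messages produced after it subscribes, so running the producer first and this consumer second may print nothing. To replay the topic from the beginning during testing, you can add the following before creating the consumer:

// Start from the earliest offset when the group has no committed position yet
props.put("auto.offset.reset", "earliest");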
Integration with Spring Boot will follow in a later post.