References: http://blog.csdn.net/honglei915/article/details/37563647
Parameter reference: http://ju.outofmemory.cn/entry/119243
Parameter reference / demo:
http://www.aboutyun.com/thread-9906-1-1.html

Kafka+Spark:  
http://shiyanjun.cn/archives/1097.html
http://ju.outofmemory.cn/entry/84636


1. Starting Kafka:
  1. Start ZooKeeper on every node first: go to ZOOKEEPER_HOME/bin and run ./zkServer.sh start
  2. Start Kafka on every node: go to KAFKA_HOME and run bin/kafka-server-start.sh config/server.properties &  (the config path is resolved relative to KAFKA_HOME, not bin)
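
A quick way to check the broker is actually up is to push a message through the console clients. A sketch, assuming Kafka 0.8.2+ (older releases ship kafka-create-topic.sh instead of kafka-topics.sh) and a local single-node setup, run from KAFKA_HOME:

# create a single-partition test topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic smoke-test

# type a message, then read it back from another terminal
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic smoke-test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic smoke-test --from-beginning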

 

2. Parameter reference

2.0 Broker parameters (configured in config/server.properties)

| name | default | description |
| --- | --- | --- |
| broker.id | none | Every broker has a unique id that serves as its name. This allows a broker to be moved to another host/port while consumers can still find it. |
| enable.zookeeper | true | Whether to register with ZooKeeper. |
| log.flush.interval.messages | Long.MaxValue | Maximum number of messages accumulated before they are flushed to disk and become visible to consumers. |
| log.flush.interval.ms | Long.MaxValue | Maximum time a message may sit in the log before being flushed to disk. |
| log.flush.scheduler.interval.ms | Long.MaxValue | Interval at which the flusher checks whether any log needs to be written to disk. |
| log.retention.hours | 168 | How many hours a log is retained. |
| log.retention.bytes | -1 | Maximum size of a log before old segments are deleted (-1 means no limit). |
| log.cleaner.enable | false | Whether log cleaning is enabled. |
| log.cleanup.policy | delete | Either delete or compact. Related tuning parameters include log.cleaner.threads, log.cleaner.io.max.bytes.per.second, log.cleaner.dedupe.buffer.size, log.cleaner.io.buffer.size, log.cleaner.io.buffer.load.factor, log.cleaner.backoff.ms, log.cleaner.min.cleanable.ratio, log.cleaner.delete.retention.ms. |
| log.dir | /tmp/kafka-logs | Root directory for log files. |
| log.segment.bytes | 1024*1024*1024 | Maximum size of a single log segment file. |
| log.roll.hours | 24 * 7 | Maximum time before a new log segment is rolled out. |
| message.max.bytes | 1000000 + MessageSet.LogOverhead | Maximum size of a single message the broker will accept. |
| num.network.threads | 3 | Number of threads handling network requests. |
| num.io.threads | 8 | Number of threads handling disk I/O. |
| background.threads | 10 | Number of background threads. |
| num.partitions | 1 | Default number of partitions per topic. |
| socket.send.buffer.bytes | 102400 | Socket SO_SNDBUF size. |
| socket.receive.buffer.bytes | 102400 | Socket SO_RCVBUF size. |
| zookeeper.connect | localhost:2182/kafka | ZooKeeper connection string, in the form hostname:port/chroot, where chroot is a namespace. |
| zookeeper.connection.timeout.ms | 6000 | Maximum time a client waits to establish a ZooKeeper connection. |
| zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. |
| zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader. |
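
Pulling the most common of these together, a minimal single-node config/server.properties might look like the sketch below (values are illustrative, not tuning advice):

# server.properties -- minimal single-node sketch
broker.id=0
log.dir=/tmp/kafka-logs
num.partitions=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
log.retention.hours=168
log.segment.bytes=1073741824
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000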


2.1 Producer parameters (configured in config/producer.properties or set programmatically)

    # List of kafka brokers used to bootstrap metadata; it does not need to name every broker
    metadata.broker.list=192.168.2.105:9092,192.168.2.106:9092

    # Partitioner class. Defaults to kafka.producer.DefaultPartitioner, which hashes the key to a partition
    #partitioner.class=com.meituan.mafka.client.producer.CustomizePartitioner

    # Compression codec: none/0 for no compression, gzip/1, snappy/2. Compressed messages carry a header
    # naming the codec, so decompression on the consumer side is transparent and needs no configuration.
    compression.codec=none

    # Serializer class (mafka client API docs --> 3. serialization conventions wiki); defaults to kafka.serializer.DefaultEncoder, i.e. byte[]
    serializer.class=com.meituan.mafka.client.codec.MafkaMessageEncoder
    # serializer.class=kafka.serializer.DefaultEncoder
    # serializer.class=kafka.serializer.StringEncoder

    # If compression is enabled, restrict it to these topics. Default is empty, which applies the codec to all topics.
    #compressed.topics=

    ########### request ack ###############
    # When the producer receives an ack for a message. Default 0.
    # 0: the producer does not wait for any ack from the broker
    # 1: the leader sends an ack once it has received the message
    # -1: an ack is sent only after all in-sync replicas have the message
    request.required.acks=0

    # Maximum time the broker may wait before sending an ack back to the producer.
    # On timeout the broker sends an error ack, meaning the last message failed for
    # some reason (for example, a follower failed to replicate it)
    request.timeout.ms=10000
    ########## end #####################

    # Whether to send messages synchronously or asynchronously: "sync" (default) or "async".
    # Async mode raises send throughput, but messages sit in a local buffer and are sent
    # in batches, so buffered-but-unsent messages can be lost
    producer.type=sync

    ############## async mode (the four parameters below are optional) ####################
    # In async mode, messages buffered longer than this are sent to the broker as a batch.
    # Default 5000ms. Works together with batch.num.messages.
    queue.buffering.max.ms=5000

    # In async mode, the maximum number of messages the producer may buffer.
    # If the producer cannot ship messages to the broker fast enough they pile up locally;
    # once this count is reached the producer either blocks or drops messages. Default 10000
    queue.buffering.max.messages=20000

    # In async mode, how many messages to send in each batch. Default 200
    batch.num.messages=500

    # Once queue.buffering.max.messages is reached and, after blocking for a while, the queue
    # still cannot enqueue (the producer has not sent anything out), the producer either keeps
    # blocking or drops the message; this timeout controls how long it blocks.
    # -1: block without limit, messages are never dropped
    # 0: clear the queue immediately, messages are dropped
    queue.enqueue.timeout.ms=-1
    ################ end ###############

    # How many times to resend a message after an error ack, or when no ack arrives at all.
    # The broker has no complete mechanism for avoiding duplicates, so on network problems
    # (e.g. a lost ack) it may receive the same message twice. Default 3
    message.send.max.retries=3

    # Interval at which the producer refreshes topic metadata. The producer needs to know where
    # each partition leader is and the current state of the topic, so it refreshes metadata
    # immediately on certain errors (unknown topic, lost partition, failed leader, ...) and,
    # via this parameter, periodically as well. Default 600000
    topic.metadata.refresh.interval.ms=60000
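
To see the async parameters above in action, here is a minimal sketch of setting them programmatically with the 0.8 Java producer API (broker address and topic name are placeholders):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AsyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.2.105:9092"); // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");         // buffer locally, send in batches
        props.put("queue.buffering.max.ms", "5000"); // flush at least every 5s...
        props.put("batch.num.messages", "500");      // ...or once 500 messages accumulate
        props.put("queue.enqueue.timeout.ms", "-1"); // block instead of dropping when full

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        for (int i = 0; i < 1000; i++) {
            producer.send(new KeyedMessage<String, String>("test-topic", "async message " + i));
        }
        producer.close(); // flushes whatever is still sitting in the async buffer
    }
}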


 

2.2 Consumer parameters (configured in config/consumer.properties or set programmatically)

    # ZooKeeper connection string; the values here are an offline test setup
    # (see the kafka messaging service --> kafka broker cluster production deployment wiki)
    # Example: "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
    zookeeper.connect=192.168.2.225:2181,192.168.2.225:2182,192.168.2.225:2183/config/mobile/mq/mafka

    # ZooKeeper session timeout, default 5000ms. Used to detect dead consumers: when a consumer
    # dies, the others must wait this long before noticing and triggering a rebalance
    zookeeper.session.timeout.ms=5000
    zookeeper.connection.timeout.ms=10000

    # How far a ZooKeeper follower may lag behind the ZooKeeper leader
    zookeeper.sync.time.ms=2000

    # Consumer group id
    group.id=xxx

    # The consumer periodically commits its offsets to ZooKeeper automatically.
    # Offsets are not committed once per message; they are kept locally (in memory) and
    # committed on a timer, so after a crash and restart the consumer may re-read
    # messages it had already processed. Default true
    auto.commit.enable=true

    # Auto-commit interval. Default 60 * 1000
    auto.commit.interval.ms=1000

    # Identifier for this consumer; may be set manually or generated by the system,
    # mainly used to track message consumption for monitoring
    consumer.id=xxx

    # Client id used to distinguish different clients; generated by the client program by default
    client.id=xxxx

    # Maximum number of message chunks buffered on the consumer (default 10)
    queued.max.message.chunks=50

    # When a new consumer joins the group a rebalance runs, after which some partitions
    # migrate to the new consumer. When a consumer obtains a partition it registers a
    # "Partition Owner registry" node in ZooKeeper; the old owner may not have released
    # that node yet, so this value controls how many times registration is retried
    rebalance.max.retries=5

    # Minimum amount of data the broker should return for a fetch request; if not enough data
    # has accumulated, the broker waits (up to fetch.wait.max.ms) before answering.
    # Each fetch returns multiple messages; raising this value costs more consumer-side memory
    fetch.min.bytes=6553600

    # How long the broker may block when there is not yet enough data;
    # on timeout whatever has accumulated is sent to the consumer immediately
    fetch.wait.max.ms=5000
    socket.receive.buffer.bytes=655360

    # What to do when ZooKeeper has no offset, or the stored offset is out of range:
    # smallest resets to the earliest offset, largest to the latest offset,
    # anything else throws an exception. Default largest
    auto.offset.reset=smallest

    # Deserializer class (mafka client API docs --> 3. serialization conventions wiki); defaults to kafka.serializer.DefaultDecoder, i.e. byte[]
    deserializer.class=com.meituan.mafka.client.codec.MafkaMessageDecoder
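
When auto.commit.enable is set to false, the application must commit offsets itself via ConsumerConnector.commitOffsets(). A minimal sketch (connection string, group and topic are placeholders); committing after every message is shown for clarity, in practice you would commit in batches:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.2.225:2181"); // placeholder
        props.put("group.id", "manual-commit-group");         // placeholder
        props.put("auto.commit.enable", "false");             // we commit ourselves

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("test-topic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);

        ConsumerIterator<byte[], byte[]> it = streams.get("test-topic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println("processed: " + new String(it.next().message()));
            connector.commitOffsets(); // commit once the message is fully processed
        }
    }
}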


  

3. Example:

Interface KafkaProperties.java

public interface KafkaProperties {
    final static String zkConnect = "192.168.1.160:2181";
    final static String groupId = "group1";
    final static String topic = "topic1";
    // final static String kafkaServerURL = "192.168.1.160";
    // final static int kafkaServerPort = 9092;
    // final static int kafkaProducerBufferSize = 64 * 1024;
    // final static int connectionTimeOut = 20000;
    // final static int reconnectInterval = 10000;
    // final static String topic2 = "topic2";
    // final static String topic3 = "topic3";
    // final static String clientId = "SimpleConsumerDemoClient";
}

 

Producer KafkaProducer.java

import java.util.Properties;

import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer extends Thread {
    private final kafka.javaapi.producer.Producer<Integer, String> producer;
    private final String topic;
    private final Properties props = new Properties();

    public KafkaProducer(String topic) {
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", "192.168.1.160:9092"); // 配置kafka端口
        producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
        this.topic = topic;
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            String messageStr = "This is a message, number: " + messageNo;
            System.out.println("Send:" + messageStr);
            producer.send(new KeyedMessage<Integer, String>(topic, messageStr));
            messageNo++;
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            }
        }
    }

}
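
Section 2.1 mentioned partitioner.class. The producer above sends unkeyed messages, which the default partitioner scatters across partitions. As a sketch (class name and hash scheme are made up for illustration; the exact Partitioner signature varies slightly between 0.8.x releases), a custom partitioner looks like this:

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class ModuloPartitioner implements Partitioner {
    // The 0.8 client instantiates partitioners reflectively and requires
    // this single-argument constructor.
    public ModuloPartitioner(VerifiableProperties props) {
    }

    // Route every message with the same key to the same partition.
    public int partition(Object key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }
}

It is enabled with props.put("partitioner.class", "ModuloPartitioner") and only takes effect for keyed messages, i.e. new KeyedMessage<Integer, String>(topic, key, message).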

 

Consumer KafkaConsumer.java

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;


public class KafkaConsumer extends Thread {
    private final ConsumerConnector consumer;
    private final String topic;

    public KafkaConsumer(String topic) {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig());
        this.topic = topic;
    }

    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", KafkaProperties.zkConnect); // zookeeper的地址
        props.put("group.id", KafkaProperties.groupId); // 组ID

        //zk连接超时
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap     = consumer.createMessageStreams(topicCountMap);
        
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("receive:" + new String(it.next().message()));
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

 

Main class KafkaConsumerProducerDemo.java

public class KafkaConsumerProducerDemo {
    public static void main(String[] args) {
        KafkaProducer producerThread = new KafkaProducer(KafkaProperties.topic);
        producerThread.start();

        KafkaConsumer consumerThread = new KafkaConsumer(KafkaProperties.topic);
        consumerThread.start();
    }
}

 

 

-----------------------------

Another example: http://www.cnblogs.com/sunxucool/p/3913919.html

Producer-side code

  1) producer.properties: this file goes in the /resources directory

#partitioner.class=
metadata.broker.list=127.0.0.1:9092,127.0.0.1:9093
##,127.0.0.1:9093
producer.type=sync
compression.codec=0
serializer.class=kafka.serializer.StringEncoder
## effective only when producer.type=async
#batch.num.messages=100


  2) LogProducer.java sample code

package com.test.kafka;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class LogProducer {

    private Producer<String,String> inner;
    public LogProducer() throws Exception{
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("producer.properties"));
        ProducerConfig config = new ProducerConfig(properties);
        inner = new Producer<String, String>(config);
    }

    
    public void send(String topicName,String message) {
        if(topicName == null || message == null){
            return;
        }
        KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName,message);
        inner.send(km);
    }
    
    public void send(String topicName,Collection<String> messages) {
        if(topicName == null || messages == null){
            return;
        }
        if(messages.isEmpty()){
            return;
        }
        List<KeyedMessage<String, String>> kms = new ArrayList<KeyedMessage<String, String>>();
        for(String entry : messages){
            KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName,entry);
            kms.add(km);
        }
        inner.send(kms);
    }
    
    public void close(){
        inner.close();
    }
    
    /**
     * @param args
     */
    public static void main(String[] args) {
        LogProducer producer = null;
        try{
            producer = new LogProducer();
            int i=0;
            while(true){
                producer.send("test-topic", "this is a sample" + i);
                i++;
                Thread.sleep(2000);
            }
        }catch(Exception e){
            e.printStackTrace();
        }finally{
            if(producer != null){
                producer.close();
            }
        }

    }

}


Consumer-side code

  1) consumer.properties: this file goes in the /resources directory

zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
##,127.0.0.1:2182,127.0.0.1:2183
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
#consumer group id
group.id=test-group
#consumer timeout
#consumer.timeout.ms=5000


  2) LogConsumer.java sample code

package com.test.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;
public class LogConsumer {

    private ConsumerConfig config;
    private String topic;
    private int partitionsNum;
    private MessageExecutor executor;
    private ConsumerConnector connector;
    private ExecutorService threadPool;
    public LogConsumer(String topic,int partitionsNum,MessageExecutor executor) throws Exception{
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("consumer.properties"));
        config = new ConsumerConfig(properties);
        this.topic = topic;
        this.partitionsNum = partitionsNum;
        this.executor = executor;
    }
    
    public void start() throws Exception{
        connector = Consumer.createJavaConsumerConnector(config);
        Map<String,Integer> topics = new HashMap<String,Integer>();
        topics.put(topic, partitionsNum);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topics);
        List<KafkaStream<byte[], byte[]>> partitions = streams.get(topic);
        threadPool = Executors.newFixedThreadPool(partitionsNum);
        for(KafkaStream<byte[], byte[]> partition : partitions){
            threadPool.execute(new MessageRunner(partition));
        } 
    }

        
    public void close(){
        try{
            threadPool.shutdownNow();
        }catch(Exception e){
            //
        }finally{
            connector.shutdown();
        }
        
    }
    
    class MessageRunner implements Runnable{
        private KafkaStream<byte[], byte[]> partition;
        
        MessageRunner(KafkaStream<byte[], byte[]> partition) {
            this.partition = partition;
        }
        
        public void run(){
            ConsumerIterator<byte[], byte[]> it = partition.iterator();
            while(it.hasNext()){
                MessageAndMetadata<byte[],byte[]> item = it.next();
                System.out.println("partiton:" + item.partition());
                System.out.println("offset:" + item.offset());
                executor.execute(new String(item.message())); // message bytes decoded with the platform default charset
            }
        }
    }
    
    interface MessageExecutor {
        
        public void execute(String message);
    }
    
    /**
     * @param args
     */
    public static void main(String[] args) {
        LogConsumer consumer = null;
        try{
            MessageExecutor executor = new MessageExecutor() {
                
                public void execute(String message) {
                    System.out.println(message);
                    
                }
            };
            consumer = new LogConsumer("test-topic", 2, executor);
            consumer.start();
        }catch(Exception e){
            e.printStackTrace();
        }finally{
            // close() is deliberately not called here: it would run right after
            // start() and tear the consumer down before it consumes anything.
            // See the shutdown-hook sketch below for a cleaner exit.
//            if(consumer != null){
//                consumer.close();
//            }
        }

    }

}
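
Since close() cannot simply go in the finally block, one option (a sketch, not part of the original post) is to register a JVM shutdown hook right after consumer.start(), so Ctrl-C or SIGTERM still stops the worker threads and shuts the connector down cleanly:

// in main(), immediately after consumer.start():
final LogConsumer runningConsumer = consumer;
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        // stops the thread pool, then shuts down the connector
        // so consumed offsets are committed before exit
        runningConsumer.close();
    }
});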


 

Copyright notice: this is an original article by gnivor, published under the CC 4.0 BY-SA license. Please include a link to the original and this notice when reposting.
Original link: https://www.cnblogs.com/gnivor/p/4934265.html