listeners
Listener List - Comma-separated list of URIs we will listen on and their protocols.
Specify hostname as 0.0.0.0 to bind to all interfaces.
Leave hostname empty to bind to default interface.
Examples of legal listener lists:
PLAINTEXT://myhost:9092,TRACE://:9091
PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
advertised.listeners
Listeners to publish to ZooKeeper for clients to use, if different than the listeners above.
In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used.
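To make the distinction concrete, here is a hedged server.properties fragment for a broker in a cloud environment (the hostname broker1.example.com is hypothetical): the broker binds all interfaces, but advertises a DNS name that clients can actually reach.

```properties
# listeners: where the broker actually binds (all interfaces here)
listeners=PLAINTEXT://0.0.0.0:9092

# advertised.listeners: what is registered in ZooKeeper and handed to clients;
# if omitted, the value of listeners is used instead
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```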
listeners
This is the address Kafka actually binds to.
```scala
/**
 * An NIO socket server. The threading model is
 *   1 Acceptor thread that handles new connections
 *   Acceptor has N Processor threads that each have their own selector and read requests from sockets
 *   M Handler threads that handle requests and produce responses back to the processor threads for writing.
 */
class SocketServer(val config: KafkaConfig, val metrics: Metrics, val time: Time) extends Logging with KafkaMetricsGroup {

  private val endpoints = config.listeners

  /**
   * Start the socket server
   */
  def startup() {
    var processorBeginIndex = 0
    endpoints.values.foreach { endpoint =>
      val protocol = endpoint.protocolType
      val processorEndIndex = processorBeginIndex + numProcessorThreads

      for (i <- processorBeginIndex until processorEndIndex)
        processors(i) = newProcessor(i, connectionQuotas, protocol)

      val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
        processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
      acceptors.put(endpoint, acceptor)
      Utils.newThread("kafka-socket-acceptor-%s-%d".format(protocol.toString, endpoint.port), acceptor, false).start()
      acceptor.awaitStartup()

      processorBeginIndex = processorEndIndex
    }
  }
```
In SocketServer you can see that what the server accepts on is indeed the listeners: an Acceptor and a set of Processors are created for every endpoint.
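To illustrate the threading model from the comment above, here is a minimal, self-contained sketch of the 1-Acceptor / N-Processor pattern. It is not Kafka's code: the real Processors each own an NIO Selector and parse Kafka requests, while this sketch only shows how one accept loop fans connections out to processor threads.

```scala
import java.net.InetSocketAddress
import java.nio.channels.{ServerSocketChannel, SocketChannel}
import java.util.concurrent.{ArrayBlockingQueue, Executors}

object MiniSocketServer {
  def main(args: Array[String]): Unit = {
    val numProcessors = 3
    // each processor thread owns a queue of newly accepted connections
    val queues = Array.fill(numProcessors)(new ArrayBlockingQueue[SocketChannel](64))
    val pool = Executors.newFixedThreadPool(numProcessors)

    for (q <- queues) pool.submit(new Runnable {
      def run(): Unit = while (true) {
        val channel = q.take() // block until the acceptor hands over a connection
        // a real Processor would register the channel with its own Selector
        // and read/write requests; here we just close it again
        channel.close()
      }
    })

    // the single acceptor thread: bind, accept, distribute round-robin
    val serverChannel = ServerSocketChannel.open()
    serverChannel.bind(new InetSocketAddress("0.0.0.0", 9092))
    var next = 0
    while (true) {
      val channel = serverChannel.accept()
      queues(next % numProcessors).put(channel)
      next += 1
    }
  }
}
```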
advertised.listeners
These are the listeners exposed to the outside world; if they are not set, listeners is used.
KafkaServer.startup
```scala
/* tell everyone we are alive */
val listeners = config.advertisedListeners.map { case (protocol, endpoint) =>
  if (endpoint.port == 0)
    (protocol, EndPoint(endpoint.host, socketServer.boundPort(protocol), endpoint.protocolType))
  else
    (protocol, endpoint)
}
kafkaHealthcheck = new KafkaHealthcheck(config.brokerId, listeners, zkUtils, config.rack,
  config.interBrokerProtocolVersion)
kafkaHealthcheck.startup()
```
Here the advertisedListeners are read out (with port 0 replaced by the actual bound port) and passed to KafkaHealthcheck.
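The port == 0 branch exists because a listener may be configured with port 0, which asks the OS for an ephemeral port; the real port is only known after binding. A quick JVM snippet (my own illustration, not Kafka code) shows why the bound port must be looked up afterwards:

```scala
import java.net.InetSocketAddress
import java.nio.channels.ServerSocketChannel

val ch = ServerSocketChannel.open()
ch.bind(new InetSocketAddress("0.0.0.0", 0)) // port 0 = "pick any free port"
// only now do we know which port we actually got, e.g. 50432;
// this is what socketServer.boundPort(protocol) reports back
val actualPort = ch.socket.getLocalPort
println(actualPort)
ch.close()
```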
```scala
/**
 * This class registers the broker in zookeeper to allow
 * other brokers and consumers to detect failures. It uses an ephemeral znode with the path:
 *   /brokers/ids/[0...N] --> advertisedHost:advertisedPort
 *
 * Right now our definition of health is fairly naive. If we register in zk we are healthy, otherwise
 * we are dead.
 */
class KafkaHealthcheck(brokerId: Int,
                       advertisedEndpoints: Map[SecurityProtocol, EndPoint],
                       zkUtils: ZkUtils,
                       rack: Option[String],
                       interBrokerProtocolVersion: ApiVersion) extends Logging {
```
As the comment says, KafkaHealthcheck registers the broker in an ephemeral znode in ZooKeeper; when the znode disappears, the broker is known to be dead. So what gets registered in ZooKeeper is always the advertisedListeners.
```scala
/**
 * Register this broker as "alive" in zookeeper
 */
def register() {
  val jmxPort = System.getProperty("com.sun.management.jmxremote.port", "-1").toInt
  val updatedEndpoints = advertisedEndpoints.mapValues(endpoint =>
    if (endpoint.host == null || endpoint.host.trim.isEmpty)
      // if no host is set, fall back to InetAddress.getLocalHost.getCanonicalHostName
      EndPoint(InetAddress.getLocalHost.getCanonicalHostName, endpoint.port, endpoint.protocolType)
    else
      endpoint
  )

  // the default host and port are here for compatibility with older clients
  // only PLAINTEXT is supported as default
  // if the broker doesn't listen on PLAINTEXT protocol, an empty endpoint will be registered
  // and older clients will be unable to connect
  val plaintextEndpoint = updatedEndpoints.getOrElse(SecurityProtocol.PLAINTEXT,
    new EndPoint(null, -1, null)) // a plaintextEndpoint entry is produced for old-version compatibility
  zkUtils.registerBrokerInZk(brokerId, plaintextEndpoint.host, plaintextEndpoint.port,
    updatedEndpoints, // newer versions read only updatedEndpoints
    jmxPort, rack, interBrokerProtocolVersion)
}
```
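The liveness mechanism itself is plain ZooKeeper. Here is a hedged sketch using the same zkclient library that ZkUtils wraps (the path and data string are made up for illustration): one session creates an ephemeral node, another session watches it; when the first session ends, the node vanishes and the watcher learns the "broker" is dead.

```scala
import org.I0Itec.zkclient.{IZkDataListener, ZkClient}

object EphemeralLivenessDemo {
  def main(args: Array[String]): Unit = {
    val brokerZk  = new ZkClient("localhost:2181") // the "broker" session
    val watcherZk = new ZkClient("localhost:2181") // an observer session

    watcherZk.createPersistent("/demo/ids", true) // ensure the parent path exists
    // ephemeral node: lives exactly as long as brokerZk's session
    brokerZk.createEphemeral("/demo/ids/0", "PLAINTEXT://broker1:9092")

    watcherZk.subscribeDataChanges("/demo/ids/0", new IZkDataListener {
      def handleDataChange(path: String, data: Object): Unit =
        println(s"$path updated to $data")
      def handleDataDeleted(path: String): Unit =
        println(s"$path is gone -> broker considered dead")
    })

    brokerZk.close()   // session ends, the znode is deleted, the watcher fires
    Thread.sleep(1000) // give the notification a moment to arrive
    watcherZk.close()
  }
}
```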
The question is which listener broker-to-broker replication actually uses.
ReplicaManager.makeFollowers
This is where the FetchThreads are created:
```scala
val partitionsToMakeFollowerWithLeaderAndOffset = partitionsToMakeFollower.map(partition =>
  new TopicAndPartition(partition) -> BrokerAndInitialOffset(
    metadataCache.getAliveBrokers.find(_.id == partition.leaderReplicaIdOpt.get).get
      .getBrokerEndPoint(config.interBrokerSecurityProtocol),
    partition.getReplica().get.logEndOffset.messageOffset)).toMap
replicaFetcherManager.addFetcherForPartitions(partitionsToMakeFollowerWithLeaderAndOffset)
```
This is the logic when brokers replicate from each other and create the FetchThreads: the broker info still comes out of the metadataCache. The leader's broker entry is looked up there, and getBrokerEndPoint(config.interBrokerSecurityProtocol) is called on it to pick the endpoint matching the inter-broker security protocol.
security.inter.broker.protocol
Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
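So the replication endpoint is chosen purely by protocol. For instance (a hypothetical two-listener setup), setting the inter-broker protocol to SSL makes the replica fetchers connect to the SSL endpoint while clients can keep using PLAINTEXT:

```properties
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
security.inter.broker.protocol=SSL
```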
As for the broker info that clients receive:
KafkaApis.handleTopicMetadataRequest
```scala
val brokers = metadataCache.getAliveBrokers
val responseBody = new MetadataResponse(
  brokers.map(_.getNode(request.securityProtocol)).asJava,
  metadataCache.getControllerId.getOrElse(MetadataResponse.NO_CONTROLLER_ID),
  completeTopicMetadata.asJava,
  requestVersion
)
```
Here the returned addresses depend on the security protocol of the request, request.securityProtocol.
```java
public enum SecurityProtocol {
    /** Un-authenticated, non-encrypted channel */
    PLAINTEXT(0, "PLAINTEXT", false),
    /** SSL channel */
    SSL(1, "SSL", false),
    /** SASL authenticated, non-encrypted channel */
    SASL_PLAINTEXT(2, "SASL_PLAINTEXT", false),
    /** SASL authenticated, SSL channel */
    SASL_SSL(3, "SASL_SSL", false),
    /** Currently identical to PLAINTEXT and used for testing only. We may implement
     *  extra instrumentation when testing channel code. */
    TRACE(Short.MAX_VALUE, "TRACE", true);
```
As you can see, different protocols can have different addresses.
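A minimal sketch of that per-protocol lookup, with simplified stand-ins for Kafka's Broker and EndPoint classes (these are not the real ones): each broker carries one endpoint per security protocol, and both getBrokerEndPoint and getNode reduce to a map lookup keyed by the caller's protocol.

```scala
// simplified stand-ins for Kafka's EndPoint/Broker, for illustration only
case class EndPoint(host: String, port: Int)

case class Broker(id: Int, endpoints: Map[String, EndPoint]) {
  // analogous to getBrokerEndPoint/getNode: pick the address matching
  // the requested security protocol
  def endPointFor(protocol: String): EndPoint =
    endpoints.getOrElse(protocol,
      throw new IllegalArgumentException(s"no $protocol endpoint on broker $id"))
}

val broker = Broker(0, Map(
  "PLAINTEXT" -> EndPoint("broker1.internal", 9092),
  "SSL"       -> EndPoint("broker1.example.com", 9093)))

println(broker.endPointFor("SSL"))       // EndPoint(broker1.example.com,9093)
println(broker.endPointFor("PLAINTEXT")) // EndPoint(broker1.internal,9092)
```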
Broker
```scala
/**
 * Create a broker object from id and JSON string.
 *
 * @param id
 * @param brokerInfoString
 *
 * Version 1 JSON schema for a broker is:
 * {
 *   "version":1,
 *   "host":"localhost",
 *   "port":9092
 *   "jmx_port":9999,
 *   "timestamp":"2233345666"
 * }
 *
 * Version 2 JSON schema for a broker is:
 * {
 *   "version":2,
 *   "host":"localhost",
 *   "port":9092
 *   "jmx_port":9999,
 *   "timestamp":"2233345666",
 *   "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"]
 * }
 *
 * Version 3 (current) JSON schema for a broker is:
 * {
 *   "version":3,
 *   "host":"localhost",
 *   "port":9092
 *   "jmx_port":9999,
 *   "timestamp":"2233345666",
 *   "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"],
 *   "rack":"dc1"
 * }
 */
def createBroker(id: Int, brokerInfoString: String): Broker = {
  if (brokerInfoString == null)
    throw new BrokerNotAvailableException(s"Broker id $id does not exist")
  try {
    Json.parseFull(brokerInfoString) match {
      case Some(m) =>
        val brokerInfo = m.asInstanceOf[Map[String, Any]]
        val version = brokerInfo("version").asInstanceOf[Int]
        val endpoints =
          if (version < 1)
            throw new KafkaException(s"Unsupported version of broker registration: $brokerInfoString")
          else if (version == 1) {
            val host = brokerInfo("host").asInstanceOf[String]
            val port = brokerInfo("port").asInstanceOf[Int]
            Map(SecurityProtocol.PLAINTEXT -> new EndPoint(host, port, SecurityProtocol.PLAINTEXT))
          }
          else {
            val listeners = brokerInfo("endpoints").asInstanceOf[List[String]]
            listeners.map { listener =>
              val ep = EndPoint.createEndPoint(listener)
              (ep.protocolType, ep)
            }.toMap
          }
        val rack = brokerInfo.get("rack").filter(_ != null).map(_.asInstanceOf[String])
        new Broker(id, endpoints, rack)
      case None =>
        throw new BrokerNotAvailableException(s"Broker id $id does not exist")
    }
  } catch {
    case t: Throwable =>
      throw new KafkaException(s"Failed to parse the broker info from zookeeper: $brokerInfoString", t)
  }
}
```
You can see that old versions used only host and port, while newer versions use endpoints, where listeners can be defined for each protocol.
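The endpoint strings have the same URI-like shape as the listeners config. Here is a standalone sketch of the parsing that EndPoint.createEndPoint performs (my own regex, not Kafka's exact implementation):

```scala
// illustrative re-implementation; Kafka's EndPoint.createEndPoint does the same job
val ListenerPattern = """(.+)://(.*):(\d+)""".r

def parseEndPoint(listener: String): (String, String, Int) = listener match {
  case ListenerPattern(protocol, host, port) => (protocol, host, port.toInt)
  case _ => throw new IllegalArgumentException(s"malformed listener: $listener")
}

println(parseEndPoint("PLAINTEXT://host1:9092")) // (PLAINTEXT,host1,9092)
println(parseEndPoint("SSL://:9093"))            // (SSL,,9093) - empty host = default interface
```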
ZkUtils
```scala
/**
 * This API takes in a broker id, queries zookeeper for the broker metadata and returns the metadata for that broker
 * or throws an exception if the broker dies before the query to zookeeper finishes
 *
 * @param brokerId The broker id
 * @return An optional Broker object encapsulating the broker metadata
 */
def getBrokerInfo(brokerId: Int): Option[Broker] = {
  readDataMaybeNull(BrokerIdsPath + "/" + brokerId)._1 match {
    case Some(brokerInfo) => Some(Broker.createBroker(brokerId, brokerInfo))
    case None => None
  }
}
```
ZkUtils simply reads the corresponding data out of ZooKeeper and calls createBroker on it.
Conclusion:
listeners is what the server actually binds to.
advertised.listeners is what is exposed to clients; if it is not set, listeners is used directly.
At the moment Kafka does not distinguish between internal and external traffic: once advertised.listeners is set, all traffic, internal and external alike, uses that configuration, which is clearly unreasonable.
KIP-103 (Separation of Internal and External traffic) will solve this problem.