
Kafka cluster deployment configuration: enabling SASL_PLAINTEXT authentication and ACL permission control

Published: 2020-09-16 05:11:58


SASL_PLAINTEXT authentication, as I understand it, simply means that a client (consumer or producer) must present a username and password when connecting to the broker. ACL permission control means configuring, per user, which operations are allowed: topic read, topic write, group read, topic create, topic delete and so on are all individually controllable permissions.
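
To make the model concrete, here is a small sketch (not from the original article) of how one such permission, "user dataflow may read topic topic-name17", is expressed with Kafka's Java ACL classes. The user and topic names are the ones used later in this article; granting the binding to the cluster is shown further below.

import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class AclModelSketch {
    public static void main(String[] args) {
        // "User dataflow may READ topic topic-name17 from any host"
        AclBinding topicRead = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "topic-name17", PatternType.LITERAL),
                new AccessControlEntry("User:dataflow", "*", AclOperation.READ, AclPermissionType.ALLOW));
        // The other operations listed above use the same shape, e.g. WRITE, CREATE or DELETE
        // on a TOPIC resource, or READ on a GROUP resource.
        System.out.println(topicRead);
    }
}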

server.properties configuration

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=37

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
##listeners=SASL_PLAINTEXT://100.100.184.145:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
inter.broker.listener.name=INTERNAL_LISTENER
listener.security.protocol.map=LOCAL_LISTENER:SASL_PLAINTEXT,INTERNAL_LISTENER:SASL_PLAINTEXT,EXTERNAL_LISTENER:SASL_PLAINTEXT
listeners=LOCAL_LISTENER://127.0.0.1:9092,INTERNAL_LISTENER://100.100.111.111:9093,EXTERNAL_LISTENER://100.100.111.111:17002
advertised.listeners=INTERNAL_LISTENER://100.100.111.111:9093,EXTERNAL_LISTENER://10.28.88.61:17002
# 100.100.111.111 is the real IP address of this Kafka broker. As you can see, the broker exposes
# three ports (9092, 9093, 17002) for incoming requests, and each port gets its own listener name
# and security protocol.
# In advertised.listeners, EXTERNAL_LISTENER://10.28.88.61:17002 is the address that the
# EXTERNAL_LISTENER port registers in the ZooKeeper metadata. When a client first reaches this
# broker on port 17002 and fetches metadata, it receives 10.28.88.61:17002 and uses that address
# for all further connections. This is the usual setup when an nginx reverse proxy sits in front
# of Kafka: 10.28.88.61:17002 is the nginx address that is reverse-proxied to 100.100.111.111:17002,
# and the client simply sets "bootstrap.servers"=10.28.88.61:17002.

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/whtemp/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=101.913.89.166:2128

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

##security.inter.broker.protocol=SASL_PLAINTEXT
##security.inter.broker.protocol=INTERNAL_LISTENER
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
# With allow.everyone.if.no.acl.found=false, any client operation for which no ACL has been
# configured is denied; access is only granted when the SASL-authenticated consumer or producer
# has been given the corresponding permission. Super users are not restricted.
# If it is set to true, an authenticated user with no ACLs at all can still perform operations,
# but a user that does have ACLs configured will get an authorization error when it attempts an
# operation that its ACLs do not cover.
super.users=User:admin
# admin is the super user
##security.protocol=INTERNAL_LISTENER
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

A few notes on the configuration

100.100.111.111 is the real IP address of the Kafka broker being deployed. As you can see, the broker exposes three ports, 9092, 9093 and 17002, for incoming requests, and each port is given its own listener name and security protocol. In advertised.listeners, EXTERNAL_LISTENER://10.28.88.61:17002 is the address that the EXTERNAL_LISTENER port registers in the ZooKeeper metadata: when a client bootstraps against port 17002 and fetches metadata, it receives 10.28.88.61:17002 and uses that address for every subsequent connection to the broker. This is the typical setup when an nginx reverse proxy sits between clients and Kafka: 10.28.88.61:17002 is the nginx address that is reverse-proxied to 100.100.111.111:17002, and the client only has to set "bootstrap.servers"=10.28.88.61:17002.

allow.everyone.if.no.acl.found=false means that any client operation for which no ACL is configured is denied; access is only granted when the consumer or producer that authenticated over SASL has been given the corresponding permission. Super users are not restricted. If the property is set to true, an authenticated user that has no ACLs at all can still perform operations, but a user that does have ACLs configured will get an authorization error when it attempts an operation its ACLs do not cover.
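
To double-check which address clients will actually be told to use, you can fetch the cluster metadata yourself. The sketch below is not from the original article; it assumes the same nginx address and one of the accounts defined in the JAAS section further down (dataflow here), and simply prints the host/port pairs that come back from advertised.listeners.

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class AdvertisedListenerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002"); // nginx address
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");
        try (AdminClient admin = AdminClient.create(props)) {
            // The hosts/ports printed here come from advertised.listeners,
            // i.e. what every client will use after the initial bootstrap.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}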

kafka_server_jaas.conf configuration

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="kafka"
    user_zsh="niubi"
    user_dataflow="dataflow"
    user_crawler="crawler"
    user_taskcenter="taskcenter"
    user_admin="kafka";
};

Here username="admin"/password="kafka" is the account the brokers use for inter-broker communication, and each user_<name>="<password>" entry defines a client account: user_zsh="niubi" is user zsh with password niubi, and user_admin="kafka" is user admin with password kafka.
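
The broker does not pick up this file automatically; its JVM has to be pointed at it with -Djava.security.auth.login.config. A common way to do that (the file path here is just an example) is to export KAFKA_OPTS before starting the broker:

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties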

kafka_client_jaas.conf configuration — for the super user:

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="kafka";
};

It can also be a regular user:

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="dataflow"
    password="dataflow";
};
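
If you want a Java client to read one of these kafka_client_jaas.conf files instead of embedding the credentials in code (as done in the next section), the JVM has to be told where the file is via the java.security.auth.login.config system property. A minimal sketch, with an example path of your choosing:

public class JaasFileClientExample {
    public static void main(String[] args) {
        // Equivalent to passing -Djava.security.auth.login.config=... on the JVM command line.
        // The path is only an example; point it at the kafka_client_jaas.conf shown above.
        System.setProperty("java.security.auth.login.config", "/path/to/kafka_client_jaas.conf");
        // ... then create the KafkaConsumer/KafkaProducer WITHOUT setting "sasl.jaas.config";
        // security.protocol=SASL_PLAINTEXT and sasl.mechanism=PLAIN still have to be set.
    }
}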

In Java code, this corresponds to:

// Of course, you can also use the super user's credentials here instead.
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");

A complete example that uses the AdminClient to grant ACLs to the dataflow user follows:

package zktest.zktest;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeAclsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
//import org.springframework.kafka.core.KafkaAdmin;

public class AclTest {
    public static void main(String[] args) throws Exception {
        Map<String, Object> configs = new HashMap<>();
        // Broker addresses, comma separated. The nginx address is used here;
        // if you do not need nginx, use the Kafka broker IP directly.
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        configs.put("security.protocol", "SASL_PLAINTEXT");
        configs.put("sasl.mechanism", "PLAIN");
        // Account used to log in to the broker; admin is the super user.
        // The mechanism is PLAIN, so use the PlainLoginModule here.
        configs.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"kafka\";");
        AdminClient adminClient = KafkaAdminClient.create(configs);

        // principal: the account being granted permissions (here User:dataflow)
        // host: the host the permission applies to (* for any)
        // operation: the operation being allowed
        // permissionType: the permission type
        AccessControlEntry ace = new AccessControlEntry("User:dataflow", "*", AclOperation.READ, AclPermissionType.ALLOW);
        // resourceType: resource type (topic)
        // name: topic name
        // patternType: resource pattern type
        // The bindings below mean: when a client authenticates over SASL as user dataflow
        // and uses group.id=wwaaaddfw, it only has READ permission on topic-name17.
        ResourcePattern rp = new ResourcePattern(ResourceType.TOPIC, "topic-name17", PatternType.LITERAL);
        ResourcePattern rp1 = new ResourcePattern(ResourceType.GROUP, "wwaaaddfw", PatternType.LITERAL);
        AclBinding ab = new AclBinding(rp, ace);
        AclBinding ab1 = new AclBinding(rp1, ace);
        // Multiple ACLs can be granted at once by passing a list
        List<AclBinding> ablist = Arrays.asList(ab, ab1);
        adminClient.createAcls(ablist).all().get();
        // Inspect all ACLs that have been granted
        DescribeAclsResult b = adminClient.describeAcls(AclBindingFilter.ANY);
        System.out.println(b.values().get());
        adminClient.close();
    }
}
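
The AdminClient can also revoke what was granted above. The following sketch is not part of the original article; it deletes the READ ACL for User:dataflow on topic-name17 using the same connection settings as AclTest.

package zktest.zktest;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;

public class AclRevokeTest {
    public static void main(String[] args) throws Exception {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        configs.put("security.protocol", "SASL_PLAINTEXT");
        configs.put("sasl.mechanism", "PLAIN");
        configs.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"kafka\";");
        try (AdminClient adminClient = AdminClient.create(configs)) {
            // Matches the READ ACL granted to User:dataflow on topic-name17 in AclTest above.
            AclBindingFilter filter = new AclBindingFilter(
                    new ResourcePatternFilter(ResourceType.TOPIC, "topic-name17", PatternType.LITERAL),
                    new AccessControlEntryFilter("User:dataflow", "*", AclOperation.READ, AclPermissionType.ALLOW));
            adminClient.deleteAcls(Collections.singletonList(filter)).all().get();
        }
    }
}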

Example of a client consumer

package zktest.zktest;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HelloWorldConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // The nginx address is used here; if you do not need nginx, use the Kafka broker IP directly.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "wwaaaddfw");
        props.put("auto.offset.reset", "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        //props.put("enable.auto.commit", "false");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");
        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic-name17"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10));
            Thread.sleep(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("partition: " + record.partition()
                        + ", offset: " + record.offset()
                        + ", key: " + record.key());
            }
        }
    }
}

At this point, if the consumer is switched to any other user (apart from the super user admin), authentication/authorization will fail, because no ACLs have been granted to those users.
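
For completeness, here is what the producer side might look like. This is not from the original article: the READ ACLs granted above are not sufficient for producing, so the sketch assumes an additional WRITE ACL has been granted to User:dataflow on topic-name17 (same AclBinding pattern as in AclTest, with AclOperation.WRITE).

package zktest.zktest;

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloWorldProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same nginx address as the consumer example; use the broker IP directly if nginx is not needed.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.28.88.61:17002");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='dataflow' password='dataflow';");
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Requires a WRITE ACL for User:dataflow on topic-name17 (see the assumption above);
            // otherwise the send fails with a TopicAuthorizationException.
            producer.send(new ProducerRecord<>("topic-name17", "key", "hello")).get();
        }
    }
}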
