elastic / beats

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash
https://www.elastic.co/products/beats

Kerberos Authentication for Kafka #5413

Closed AndreAga closed 2 years ago

AndreAga commented 6 years ago

Hi guys, I see the Beats library doesn't support Kerberos authentication for the Kafka output, but the Logstash Kafka input does. Any plans to add this kind of auth?

Thanks.

mellowonpsx commented 6 years ago

I agree with @AndreAga; Logstash's Kafka output plugin also supports Kerberos SASL. Kerberos auth is a must-have feature for the Beats library.

gmoskovicz commented 6 years ago

@urso any news regarding this?

urso commented 6 years ago

@gmoskovicz Sorry, no updates on this ticket.

miko-code commented 6 years ago

+1

mmirabedini commented 6 years ago

+1

giezer commented 6 years ago

+1

ioah86 commented 6 years ago

+1

dounine commented 6 years ago

+1

mayank-mahajan-guavus commented 5 years ago

@jsoriano Is there a plan to add Kerberos Support in beats?

nathanrstacey commented 5 years ago

+1

smaley07 commented 4 years ago

+1

vickhello commented 4 years ago

+1

Lswx2017 commented 4 years ago

+1

Yggdrassil80 commented 4 years ago

+1

mostlyjason commented 4 years ago

One of our customers offered this response on why Kerberos is better than SSL:

In the case of the Confluent article [showing SSL auth], they are using a very loose term for authentication, in saying that it is performing mutual authentication of the certificates and only validates that the certificate is trusted by way of the CA certificates. In other words, this is machine authentication, and it provides no context to the user on that machine. In our security context, SSL authentication is not sufficient.

kvch commented 4 years ago

The feature has been released in v7.7 as a beta.
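For anyone looking for the shape of the new options: a minimal sketch of enabling Kerberos for the Kafka output, with placeholder broker, principal, and paths. The option names match the configs posted later in this thread; check the 7.7 reference docs for the authoritative list.

```yaml
output.kafka:
  hosts: ["broker.example.com:9093"]   # placeholder broker address
  topic: "mytopic"                     # placeholder topic

  # Kerberos settings (beta in 7.7)
  kerberos.enabled: true
  kerberos.auth_type: keytab                      # or "password"
  kerberos.username: "beats"                      # placeholder principal
  kerberos.keytab: "/etc/security/beats.keytab"   # placeholder path
  kerberos.service_name: "kafka"                  # Kafka's Kerberos service name
  kerberos.config_path: "/etc/krb5.conf"
  kerberos.realm: "EXAMPLE.COM"                   # placeholder realm
```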

agarwri commented 4 years ago

I've been running the Filebeat 7.7 Kafka output with the following Kerberos configuration:

```yaml
output.kafka:
  hosts: xxxx
  topic: xxxx
  required_acks: 0
  compression: none
  max_message_bytes: 1000000
  kerberos.enabled: true
  kerberos.auth_type: keytab
  kerberos.username: xxx
  kerberos.keytab: xxx
  kerberos.service_name: kafka
  kerberos.config_path: /etc/krb5.conf
  kerberos.realm: xxxx
```

I've verified that kinit works with my principal/keytab, but I'm getting the following error:

```
2020-05-28T10:54:54.156-0400 DEBUG [kafka] kafka/client.go:290 Kafka publish failed with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
```

I'm sure this is not a Kafka-side problem, because I've used the same principal/keytab to send logs to Kafka via the console Kafka producer, and that was successful. Any ideas what the problem might be, or how I can debug further (debug logs are already enabled)?

kvch commented 4 years ago

There was an issue with the library we use for Kafka, but it has been updated in the repo, and the fix will hopefully ship in v7.7.1. Do you mind testing again when the patch release comes out?

For debugging the issue, I suggest you enable debug logging in Kafka. It will tell you the exact error during authentication.
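On the Beats side, restricting debug output to the Kafka component can also help narrow things down; a minimal sketch (the `kafka` selector matches the `[kafka]` logger names seen in the logs above):

```yaml
# filebeat.yml: verbose logging for the kafka output only
logging.level: debug
logging.selectors: ["kafka"]
```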

agarwri commented 4 years ago

Thanks for getting back to me! Yes, I can test when the patch comes out. Is there a timeline for when that's happening?

kvch commented 4 years ago

It is going to be available in early June.

agarwri commented 4 years ago

Thanks, I will look out for an update on this thread. In the meantime I've been trying to reproduce the error by running a standalone shopify/sarama Kafka producer (which I believe is the library Filebeat is using), and I'm getting the following error:

```
13:53:06 Error while performing GSSAPI Kerberos Authentication: EOF
```

Is this the problem you're referring to?
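For anyone attempting the same standalone reproduction: a minimal sketch of a shopify/sarama producer using GSSAPI with a keytab, assuming placeholder broker, principal, and paths (field names per sarama's GSSAPI config of that era):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Net.TLS.Enable = true // brokers in this thread listen on a SASL_SSL port
	cfg.Net.SASL.Enable = true
	cfg.Net.SASL.Mechanism = sarama.SASLTypeGSSAPI
	cfg.Net.SASL.GSSAPI.AuthType = sarama.KRB5_KEYTAB_AUTH
	cfg.Net.SASL.GSSAPI.ServiceName = "kafka"
	cfg.Net.SASL.GSSAPI.Username = "beats"                        // placeholder principal
	cfg.Net.SASL.GSSAPI.Realm = "EXAMPLE.COM"                     // placeholder realm
	cfg.Net.SASL.GSSAPI.KeyTabPath = "/etc/security/beats.keytab" // placeholder path
	cfg.Net.SASL.GSSAPI.KerberosConfigPath = "/etc/krb5.conf"
	cfg.Producer.Return.Successes = true // required by the sync producer

	producer, err := sarama.NewSyncProducer([]string{"broker.example.com:9093"}, cfg)
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "mytopic", // placeholder topic
		Value: sarama.StringEncoder("hello from sarama"),
	})
	if err != nil {
		log.Fatalf("send failed: %v", err)
	}
	log.Printf("delivered to partition %d at offset %d", partition, offset)
}
```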

agarwri commented 4 years ago

I see you've already merged the fix into shopify/sarama here: https://github.com/Shopify/sarama/pull/1697. I look forward to seeing the patch for Filebeat, thanks!

kvch commented 4 years ago

@agarwri The new version has been released. Let me know if you still have problems with Kerberos for Kafka.

agarwri commented 4 years ago

I'm getting a new error now:

```
2020-06-15T12:55:54.282-0400 DEBUG [kafka] kafka/client.go:290 Kafka publish failed with: kafka server: Request was for a topic or partition that does not exist on this broker.
```

even though the topic definitely exists.

kvch commented 4 years ago

@agarwri Could you please open a separate issue and add the debug logs of Kafka?

moulisea commented 4 years ago

Hi, does Filebeat support the GSSAPI mechanism? I keep getting the error `error initializing publisher: kafka: invalid configuration (Net.SASL.GSSAPI.Username must not be empty when GSS-API mechanism is used)`. I already set this property in the config, but it still errors. Any leads? Thanks.

moulisea commented 4 years ago

@kvch can you please check the above comment? I am testing on Filebeat 7.7.1. Thanks.

kvch commented 4 years ago

Could you share your config?

moulisea commented 4 years ago

Thanks for the response.

Attached below is the filebeat.yml we use; we are testing with Filebeat 7.8.

I receive the error below:

"Exiting: error initializing publisher: kafka: invalid configuration (Net.SASL.GSSAPI.Username must not be empty when GSS-API mechanism is used)"

Thanks, Mouli


```yaml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the
# filebeat.reference.yml sample configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index.
# Loading the dashboards is disabled by default and can be enabled either by
# setting the options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For
# released versions, this URL points to the dashboard archive on the
# artifacts.elastic.co website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana
# API. This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By
  # default, the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud
# (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options. You can find the cloud.id in the Elastic Cloud
# web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either http (default) or https.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ------------------------------- Kafka output ---------------------------------
output.kafka:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The list of Kafka broker addresses from which to fetch the cluster
  # metadata. The cluster metadata contain the actual Kafka brokers events
  # are published to.
  hosts: ["broker.visa.com:9093"]

  # The Kafka topic used for produced events. The setting can be a format
  # string using any event field. To set the topic from document type use
  # %{[type]}.
  topic: 'filetest'

  # The Kafka event key setting. Use format string to create a unique event
  # key. By default no event key will be generated.
  #key: ''

  # The Kafka event partitioning strategy. Default hashing strategy is hash
  # using the output.kafka.key setting or randomly distributes events if
  # output.kafka.key is not configured.
  #partition.hash:
    # If enabled, events will only be published to partitions with reachable
    # leaders. Default is false.
    #reachable_only: false

    # Configure alternative event field names used to compute the hash value.
    # If empty `output.kafka.key` setting will be used.
    # Default value is empty list.
    #hash: []

  # Authentication details. Password is required if username is set.
  #username: ''
  #password: ''

  # Kafka version Filebeat is assumed to run against. Defaults to the "1.0.0".
  #version: '1.0.0'

  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false

    # Configure escaping HTML symbols in strings.
    #escape_html: false

  # Metadata update configuration. Metadata contains leader information
  # used to decide which broker to use when publishing.
  #metadata:
    # Max metadata request retry attempts when cluster is in middle of leader
    # election. Defaults to 3 retries.
    #retry.max: 3

    # Wait time between retries during leader elections. Default is 250ms.
    #retry.backoff: 250ms

    # Refresh metadata interval. Defaults to every 10 minutes.
    #refresh_frequency: 10m

    # Strategy for fetching the topics metadata from the broker. Default is false.
    #full: false

  # The number of concurrent load-balanced Kafka output workers.
  #worker: 1

  # The number of times to retry publishing an event after a publishing
  # failure. After the specified number of retries, events are typically
  # dropped. Some Beats, such as Filebeat, ignore the max_retries setting
  # and retry until all events are published. Set max_retries to a value
  # less than 0 to retry until all events are published. The default is 3.
  #max_retries: 3

  # The maximum number of events to bulk in a single Kafka request. The
  # default is 2048.
  #bulk_max_size: 2048

  # Duration to wait before sending bulk Kafka request. 0 is no delay. The
  # default is 0.
  #bulk_flush_frequency: 0s

  # The number of seconds to wait for responses from the Kafka brokers before
  # timing out. The default is 30s.
  #timeout: 30s

  # The maximum duration a broker will wait for number of required ACKs. The
  # default is 10s.
  #broker_timeout: 10s

  # The number of messages buffered for each Kafka broker. The default is 256.
  #channel_buffer_size: 256

  # The keep-alive period for an active network connection. If 0s, keep-alives
  # are disabled. The default is 0 seconds.
  #keep_alive: 0

  # Sets the output compression codec. Must be one of none, snappy and gzip.
  # The default is gzip.
  #compression: gzip

  # Set the compression level. Currently only gzip provides a compression
  # level between 0 and 9. The default value is chosen by the compression
  # algorithm.
  #compression_level: 4

  # The maximum permitted size of JSON-encoded messages. Bigger messages will
  # be dropped. The default value is 1000000 (bytes). This value should be
  # equal to or less than the broker's message.max.bytes.
  #max_message_bytes: 1000000

  # The ACK reliability level required from broker. 0=no response, 1=wait for
  # local commit, -1=wait for all replicas to commit. The default is 1. Note:
  # If set to 0, no ACKs are returned by Kafka. Messages might be lost
  # silently on error.
  #required_acks: 1

  # The configurable ClientID used for logging, debugging, and auditing
  # purposes. The default is "beats".
  #client_id: beats

  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
  ssl.enabled: true

  # Optional SSL configuration options. SSL is off by default.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: C:\Users\chasrini\Documents\Projects\Kafka\Filebeat\conf\neo4jnonproductionclient.pem

  # Configure SSL verification mode. If none is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # full.
  ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from
  # 1.1 up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # Certificate for SSL client authentication
  ssl.certificate: C:\Users\xxxxx\Documents\Projects\Kafka\Filebeat\conf\nonproductionclient.pem

  # Client Certificate Key
  ssl.key: C:\Users\xxxxx\Documents\Projects\Kafka\Filebeat\conf\nonproductionclient.pem

  # Optional passphrase for decrypting the Certificate Key.
  #ssl.key_passphrase: 'passphrase'

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Authentication type to use with Kerberos. Available options: keytab, password.
  kerberos.auth_type: keytab

  # Path to the keytab file. It is used when auth_type is set to keytab.
  kerberos.keytab: C:\Users\chasrini\Documents\Projects\Kafka\Filebeat\conf\zookeeper_broker.visa.com.keytab

  # Path to the Kerberos configuration.
  kerberos.config_path: C:\Users\chasrini\Documents\Projects\Kafka\Filebeat\conf\krb5.conf

  # The service name. Service principal name is constructed from
  # service_name/hostname@realm.
  kerberos.service_name: kafka

  # Name of the Kerberos user.
  #kerberos.username: kafka

  # Password of the Kerberos user. It is used when auth_type is set to password.
  #kerberos.password: changeme

  # Kerberos realm.
  kerberos.realm: CORPDEV.VISA.COM

# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for
# this Filebeat instance will appear in the Stack Monitoring UI. If
# output.elasticsearch is enabled, the UUID is derived from the Elasticsearch
# cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Note that the settings
# should point to your Elasticsearch monitoring cluster. Any setting that is
# not set is automatically inherited from the Elasticsearch output
# configuration, so if you have the Elasticsearch output configured such that
# it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```

kvch commented 4 years ago

@moulisea ATM `kerberos.username` is commented out. You need to remove the `#` from the beginning of the line.
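In the attached config, the change is roughly this (illustrative before/after):

```yaml
# before: commented out, so the GSSAPI username is empty
#kerberos.username: kafka

# after: active
kerberos.username: kafka
```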

moulisea commented 4 years ago

I did that, and now I see the two errors below. The initial logs say a Kafka connection was established, but later I see:

```
DEBUG [harvester] log/log.go:107 End of file reached: E:\Logs\file.log; Backoff now.
DEBUG [kafka] kafka/client.go:277 finished kafka batch
DEBUG [kafka] kafka/client.go:291 Kafka publish failed with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Kafka publish failed with: circuit breaker is open
```

Thanks, Mouli


moulisea commented 4 years ago

I tried `logging: debug` in the YML file, but it was not helpful in finding the issue. If you are aware of any other debugging mechanism, let me know. Thanks.

Thanks, Mouli


moulisea commented 4 years ago
1. When I set `output.kafka.version: "2.5.0"`, I get the error below (see the sketch after this comment):

   ```
   ERROR instance/beat.go:958 Exiting: error initializing publisher: unknown/unsupported kafka vesion '2.5.0' accessing 'output.kafka.version' (source:'filebeat.yml')
   ```

2. I also see the alert below in Filebeat:

   > Known issue in version 7.8.0
   >
   > The Kafka output fails to connect when using multiple TLS brokers. We advise not to upgrade to Filebeat 7.8.0 if you're using the Kafka output in this configuration.

Do you recommend using 7.8.0 in prod with multiple brokers (we have 3)? I started using 7.8 mainly to get SASL_SSL support with the GSSAPI mechanism.

Thanks, Mouli
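Regarding the first error above: Filebeat validates `output.kafka.version` against the protocol versions its bundled Kafka client knows, so a value newer than the build supports is rejected even if the broker itself runs it. A hedged sketch; `'2.0.0'` is illustrative, and the accepted values depend on the Filebeat release (newer brokers are generally wire-compatible with older protocol versions):

```yaml
output.kafka:
  # Kafka protocol version to assume; must be one of the values this
  # Filebeat build supports (see the Kafka output docs for your release).
  version: '2.0.0'
```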


kvch commented 4 years ago

Do you mind opening a Discuss question for these problems? This issue is about tracking Kerberos authentication for Kafka, not arbitrary Kafka issues.

moulisea commented 4 years ago

Thanks for the response. I am not clear on how to open a Discuss question for these problems. Can you provide a link so I can start a discussion on this?

Yes, the issue is that Filebeat is not able to connect to Kerberized Kafka, so I am exploring Logstash now. But if we could make this work in Filebeat, that would be really great.

Appreciate your help. Thanks.

Thanks, Mouli


kvch commented 4 years ago

I meant opening a question here: https://discuss.elastic.co/c/elastic-stack/beats/28 (I will find someone to help you there). Thanks in advance.

moulisea commented 4 years ago

Thanks. I have created one; the link is below.

https://discuss.elastic.co/t/filebeat-connect-with-kafka-kerberos-sasl-ssl-not-working/246160

Thanks, Mouli


Ghaithjemai commented 3 years ago

Same problem... did you find a solution?

atalukdar commented 3 years ago

I am having the same issue with filebeat-7.11.1.

Any solution for SASL_SSL?

```
Connection to kafka(xxxxx:9093) established
[kafka] kafka/client.go:371 finished kafka batch
[kafka] kafka/client.go:385 Kafka publish failed with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
```

elafontaine commented 3 years ago

Same problem. Searching for a link to a solution.

urso commented 3 years ago

Can you reproduce the issue with 7.12?

elafontaine commented 3 years ago

We were on 7.10 from the standard RPM repository of Red Hat, I believe. I'll ask to try Filebeat 7.12.

I noticed that it's mostly a Kerberos issue; there is just no log about it. I also noticed we're forced to set the service_name parameter, even though authentication is successful without it (and with it, it actually fails). I'm trying to understand why on my side.

elafontaine commented 3 years ago

OK, I confirm that with 7.12 we're able to make it work with the password type of Kerberos authentication (and a clear-text password in the configuration). However, as soon as we switch to auth_type: keytab and pass the keytab option, it stops working. I have confirmed keytab authentication with kinit, using the same service that Filebeat should be using per the config. The two variants are sketched below.

We just tested with 7.13: same result; the clear-text password works, but the keytab doesn't.
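A sketch of the two variants described above, with placeholder principal, realm, and paths (`kinit -kt` is the standard way to sanity-check a keytab outside Filebeat):

```yaml
# Variant A: works on 7.12/7.13 (clear-text password in the config)
kerberos.enabled: true
kerberos.auth_type: password
kerberos.username: svc_filebeat        # placeholder principal
kerberos.password: changeme            # clear-text password
kerberos.service_name: kafka
kerberos.config_path: /etc/krb5.conf
kerberos.realm: EXAMPLE.COM            # placeholder realm

# Variant B: fails for us, although the same keytab works with
#   kinit -kt /etc/security/svc_filebeat.keytab svc_filebeat@EXAMPLE.COM
#kerberos.auth_type: keytab
#kerberos.keytab: /etc/security/svc_filebeat.keytab   # placeholder path
```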

theforcebemay commented 3 years ago

@elafontaine Could you share your filebeat.yml?

JunTaoYuan80 commented 2 years ago

Same problem with 7.15. Searching for a link to a solution.


```
2021-10-09T17:10:45.739+0800    INFO    [file_watcher]  filestream/fswatch.go:137   Start next scan
2021-10-09T17:10:50.838+0800    ERROR   [kafka] kafka/client.go:317 Kafka (topic=common_log_topic): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
```
jaychouuu commented 2 years ago

@JunTaoYuan80 Have you solved it now? I have the same problem.

JunTaoYuan80 commented 2 years ago

> @JunTaoYuan80 Have you solved it now? I have the same problem.

No, but I changed Filebeat to Logstash, and it works.

elasticmachine commented 2 years ago

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

jlind23 commented 2 years ago

@rdner @faec does this issue ring a bell on your end?

rdner commented 2 years ago

Our latest documentation claims Kerberos is supported:

> To use GSSAPI mechanism to authenticate with Kerberos, you must leave this field empty, and use the kerberos options.

https://www.elastic.co/guide/en/beats/filebeat/current/kafka-output.html#_sasl_mechanism

However, we still don't have an integration test to verify that it works: https://github.com/elastic/beats/issues/29430
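Per the documentation quoted above, the intended shape is to leave the SASL mechanism unset and configure the `kerberos.*` options instead; a minimal sketch with placeholder broker, topic, principal, and paths:

```yaml
output.kafka:
  hosts: ["broker.example.com:9093"]   # placeholder broker
  topic: "mytopic"                     # placeholder topic
  # sasl.mechanism is deliberately left unset: with kerberos.* configured,
  # the GSSAPI mechanism is used.
  kerberos.enabled: true
  kerberos.auth_type: keytab
  kerberos.username: "beats"                      # placeholder principal
  kerberos.keytab: "/etc/security/beats.keytab"   # placeholder path
  kerberos.service_name: "kafka"
  kerberos.config_path: "/etc/krb5.conf"
  kerberos.realm: "EXAMPLE.COM"                   # placeholder realm
```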