apache/druid

Apache Druid: a high performance real-time analytics database.
https://druid.apache.org/
Apache License 2.0

Request failed with status code 500 #9781

Closed. majidbijani closed this issue 1 year ago.

majidbijani commented 4 years ago

I had trouble setting up a Druid cluster. The setup was done step by step according to the clustered-deployment guide on the Druid website. Deep storage is set to local, and the default Derby is used for the metadata store. Most settings are defaults, and `common.runtime.properties` is the same on the Master, Query, and Data servers:

```properties
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies

#
# Hostname
#
druid.host=master

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=master
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://master:1527/var/druid/metadata.db;create=true
druid.metadata.storage.connector.host=master
druid.metadata.storage.connector.port=1527

# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
#druid.metadata.storage.type=mysql
#druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
druid.storage.type=local
druid.storage.storageDirectory=var/druid/segments

# For HDFS:
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=noop
druid.emitter.logging.logLevel=info

# Storage type of double columns
# omitting this will lead to double columns being indexed as float at the storage layer
druid.indexing.doubleStorage=double

#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]

#
# SQL
#
druid.sql.enable=true

#
# Lookups
#
druid.lookup.enableLookupSyncOnStartup=false
```

After starting the Master server's services together with ZooKeeper, and then the Query server and the Data server, the following error is displayed:

[screenshot "dr1": the web console showing "Request failed with status code 500"]

Druid versions: 0.18.0 and 0.17.0.

Can anyone help me?
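One way to narrow down which service the console is failing to reach is to probe each service's `/status` endpoint directly. Below is a minimal sketch; the hostnames `master`, `query`, and `data` and the port numbers are assumptions taken from the clustered-deployment defaults, so substitute the values from your own cluster:

```python
# status_check.py: probe each Druid service's HTTP /status endpoint.
# The hostnames (master/query/data) and ports are assumptions based on
# the defaults in the clustered-deployment guide; adjust to your setup.
import json
import urllib.request

SERVICES = {
    "coordinator":   "http://master:8081/status",
    "overlord":      "http://master:8090/status",
    "broker":        "http://query:8082/status",
    "router":        "http://query:8888/status",
    "historical":    "http://data:8083/status",
    "middleManager": "http://data:8091/status",
}

for name, url in SERVICES.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            info = json.load(resp)  # /status returns JSON, including a version field
            print(f"{name:14} OK   version={info.get('version')}")
    except Exception as exc:  # DNS failure, connection refused, HTTP error, timeout
        print(f"{name:14} FAIL {exc}")
```

Any service that reports FAIL here is a likely source of the console's 500.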

mysticaltech commented 4 years ago

Same thing is happening to me; I have no idea what the problem is.

mysticaltech commented 4 years ago

On my side, it was misconfigured hostnames.
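In a clustered deployment, each machine's `common.runtime.properties` needs a `druid.host` that the other machines can resolve and reach (e.g. `druid.host=master` belongs only on the Master server, not copied verbatim to the Query and Data servers), and `druid.zk.service.host` must point at the actual ZooKeeper host. A quick way to check name resolution and port reachability is the sketch below; the host/port pairs are placeholders for whatever appears in your own config:

```python
# host_check.py: verify that the hostnames referenced in
# common.runtime.properties resolve and that their ports accept TCP
# connections. The pairs below are examples; substitute your own values.
import socket

CHECKS = [
    ("master", 2181),  # ZooKeeper (druid.zk.service.host)
    ("master", 1527),  # Derby metadata store (connector.host/port)
    ("master", 8081),  # Coordinator
    ("query", 8888),   # Router (serves the web console)
]

for host, port in CHECKS:
    try:
        addr = socket.gethostbyname(host)
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} ({addr}) reachable")
    except OSError as exc:  # gaierror (DNS) and connection errors both land here
        print(f"{host}:{port} NOT reachable: {exc}")
```

Running this from each machine in the cluster quickly shows whether a hostname resolves differently (or not at all) on some nodes.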

yogrj commented 2 years ago

Hello, I am also facing the same issue: request failed with status code 500.

Can anyone suggest what's going wrong in the config?

github-actions[bot] commented 1 year ago

This issue has been marked as stale due to 280 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@druid.apache.org list. Thank you for your contributions.

github-actions[bot] commented 1 year ago

This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.