big-data-europe / docker-hadoop

Apache Hadoop docker image

Error when try to start #81

Open leosimoesp opened 4 years ago

leosimoesp commented 4 years ago

Hi,

When I execute docker-compose up the following error occurs.

docker-hadoop.log

The resourcemanager is unable to initialize because org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /rmstate/FSRMStateRoot/RMDTSecretManagerRoot. Name node is in safe mode.

Could you help me?

Thanks, Leo

XavierGrool commented 3 years ago

Hi,

I have the same problem after switching to a new .yml file; I didn't change any other files.

Have you solved this problem yet?

XavierGrool commented 3 years ago

I think I see what's going on...

I've tried several times, and every time I use "docker-compose down" to stop the Hadoop cluster, I get this problem the next time I start it with "docker-compose up -d".

I searched the internet and found something related to safe mode in the namenode. It seems that Hadoop enters safe mode right after startup, and uploading/modifying/deleting files is not recommended during that period. So when we stop the cluster at that moment, maybe it causes some inconsistency in the volumes? Or maybe we shouldn't use "docker-compose down" to stop the cluster at all? I don't know.
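
If you just want to check whether the namenode is still in safe mode (or force it out once you are sure the data is healthy), something like this should work; the container name namenode is an assumption based on this repo's docker-compose.yml:

    # Check the current safe mode state inside the namenode container
    docker exec -it namenode hdfs dfsadmin -safemode get

    # Force the namenode to leave safe mode (only if you are sure the data is intact)
    docker exec -it namenode hdfs dfsadmin -safemode leave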

But I know that I can fix this by removing the existing volumes.
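
Concretely, something along these lines; the volume names below are an assumption based on how docker-compose prefixes the hadoop_namenode/hadoop_datanode/hadoop_historyserver volumes for this project, so check docker volume ls for the actual names on your machine:

    # Stop the cluster and see which volumes docker-compose created for it
    docker-compose down
    docker volume ls

    # Remove the stale HDFS/state volumes (names assumed; adjust to your docker volume ls output)
    docker volume rm docker-hadoop_hadoop_namenode docker-hadoop_hadoop_datanode docker-hadoop_hadoop_historyserver

    # Bring the cluster back up with fresh volumes
    docker-compose up -d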

XavierGrool commented 3 years ago

Maybe it's about formatting, I guess.

See the Hadoop docs -> Hadoop Startup:

The first time you bring up HDFS, it must be formatted.
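
Outside of Docker that one-time step looks roughly like this (taken from the Hadoop setup guide; $HADOOP_HOME is wherever Hadoop is installed). In this image the namenode entrypoint appears to handle it automatically when its data directory is empty, which would explain why wiping the volumes fixes the startup:

    # One-time format of a new HDFS filesystem before the first startup
    $HADOOP_HOME/bin/hdfs namenode -format <cluster_name>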

Rustem commented 2 years ago

So what is the issue here? Could someone summarize, please?

dcguim commented 2 years ago

Perhaps this helps: I noticed that my volumes were full, so the datanodes could not replicate the data. That meant the namenode could not reach the minimum percentage of safely replicated blocks and therefore stayed in safe mode. I simply ran docker volume ls | awk '{print $2}' | xargs docker volume rm to delete all the volumes on my machine and re-ran the docker-compose file. You can check whether this is the case with docker system df, to see if you are close to reaching the allocated Docker storage capacity.

gogagum commented 1 year ago

I had the same problem as leosimoesp, and dcguim's recommendation helped. The only thing is that the command docker volume ls | awk '{print $2}' | xargs docker volume rm is not perfect, as it also tries to run docker volume rm on the "VOLUME" header. An answer on Stack Overflow points to a more elegant way to remove the volumes: docker-compose down --volumes.
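
A quick sketch of the cleaner options (the -q flag makes docker volume ls print only volume names, so the "VOLUME" header never reaches xargs):

    # Option 1: take the stack down and delete the named volumes from its docker-compose.yml
    docker-compose down --volumes

    # Option 2: delete every unused volume on the machine (Docker's built-in cleanup)
    docker volume prune

    # Or keep the xargs approach, but feed it only volume names
    docker volume ls -q | xargs docker volume rm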