gmunumel closed this issue 6 years ago.
Changing my core-site.xml solved the issue:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
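With fs.defaultFS pointing at hdfs://localhost:8020, a quick sanity check is whether anything is actually listening on that RPC port before blaming the client. A minimal sketch (the host and port here are assumptions taken from the config above; adjust them to your setup):

```python
import socket

def namenode_port_open(host="localhost", port=8020, timeout=2.0):
    """Return True if something is listening on the given host/port.

    This only checks TCP reachability of the assumed NameNode RPC
    endpoint; it does not verify that the process is a healthy NameNode.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused / timed out: nothing reachable on that port.
        return False

print(namenode_port_open())
```

If this prints False while HDFS is supposedly running, the NameNode either failed to start or is bound to a different port than the one in core-site.xml.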
Now I am facing a problem with:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/gabrielmunumel/.staging/job_1525387543266_0001/job.split could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1559)
But this is another issue I guess.
Looks like your cluster has some issues with the datanodes. Also, we have not used the framework with Hadoop 3 and have no plans to evolve it further; see the notice at https://projects.spring.io/spring-hadoop/
Realized that. I have downgraded Hadoop to version 2.6.
To solve the last issue I removed my tmp dir and recreated it with hdfs namenode -format. It is working now. The only issue is that the job hangs.
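For anyone hitting the same "could only be replicated to 0 nodes" error, the fix described above can be sketched as the following sequence for a default single-node setup. The tmp path is an assumption (it depends on hadoop.tmp.dir in your config), and note that formatting the NameNode destroys all HDFS data:

```shell
# Stop HDFS daemons first (script location assumes a standard tarball install).
stop-dfs.sh

# Remove the stale Hadoop tmp dir; /tmp/hadoop-${USER} is the default
# location, but check hadoop.tmp.dir in core-site.xml for your actual path.
rm -rf /tmp/hadoop-"${USER}"

# Reformat the NameNode (WARNING: wipes all HDFS metadata and data).
hdfs namenode -format

# Start HDFS again and verify a DataNode is reported.
start-dfs.sh
hdfs dfsadmin -report
```

If hdfs dfsadmin -report still shows 0 live datanodes afterwards, check the DataNode log for a clusterID mismatch, which is the usual cause after a partial reformat.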
I will close the issue because it seems not related to Spring.
Hello, I am getting the following error when trying to run the MapReduce sample:
I am running Hadoop version 3 on macOS.
Any idea?