Closed: pilgrimkst closed this issue 7 years ago
In order to access HDFS I needed to start the data nodes (cd /home/ec2-user/hadoop && sbin/hadoop-daemon.sh start datanode)
Hmm, why do you need to do this? Flintrock should start up HDFS automatically for you as long as you specify --install-hdfs (or the equivalent in config.yaml).
Also, does your VPC have an Internet Gateway attached? Flintrock does not currently support private VPCs (#14).
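For example, a launch command with HDFS enabled looks something like this (the cluster name and slave count here are just illustrative):

flintrock launch my-cluster --num-slaves 2 --install-hdfs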
Yes, I am installing HDFS with
launch:
  install-hdfs: True
I will check the VPC settings and write back.
Also, min-root-ebs-size-gb: 100 stops working; I get the standard 30GB root volume for all of my instances. This is my Flintrock config: https://gist.github.com/pilgrimkst/204b000e195e543d54a159cebed63168. I also want to mention that all Spark workers are initialized.
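For reference, a minimal config.yaml sketch combining those two settings might look like this (I'm assuming min-root-ebs-size-gb sits under the ec2 provider section; values are illustrative):

launch:
  install-hdfs: True          # have Flintrock set up HDFS on launch

providers:
  ec2:
    min-root-ebs-size-gb: 100   # request a 100GB root volume instead of the default 30GB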
OK, I think I found the issue: I removed one security group and it started to work, so I am closing the issue.
Ah, so one of the additional security groups you had configured on launch was interfering with Flintrock?
Yeah, we had two Flintrock clusters, and I saw a security group named flintrock on the instances, so I thought it would be a good idea to add it. But I guess that one was preventing Flintrock from creating its own flintrock group.
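In case it helps anyone else, one way to list the flintrock security groups that already exist in a region is the standard AWS CLI (the flintrock* name filter is an assumption based on the group name I saw on the instances):

aws ec2 describe-security-groups \
    --filters Name=group-name,Values="flintrock*" \
    --query "SecurityGroups[].[GroupId,GroupName]" \
    --output table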
Hi, I have problems working with HDFS. In order to access HDFS I needed to start the data nodes (cd /home/ec2-user/hadoop && sbin/hadoop-daemon.sh start datanode). But after I start the datanodes on all instances (both master and slaves), only one datanode is registered with the master (i.e. the one running locally with the namenode). There is connectivity between the nodes, but I have the following error message in both the datanode and namenode logs:
Where 172.33.9.26 is the internal IP of one of my slaves (errors are logged from all slaves, I just added one for reference).
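For anyone hitting the same thing, the sequence I was using to start the datanodes and then check which ones actually registered with the namenode was roughly this (paths match the Flintrock install location mentioned above; hdfs dfsadmin -report is the standard Hadoop way to list live datanodes):

# on every instance, start the datanode daemon
cd /home/ec2-user/hadoop && sbin/hadoop-daemon.sh start datanode

# on the master, ask the namenode which datanodes it can see
cd /home/ec2-user/hadoop && bin/hdfs dfsadmin -report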