Closed by GoogleCodeExporter 9 years ago
Alex, are you working on this? If not, please reassign.
Original comment by kroko...@gmail.com on 19 Mar 2010 at 3:55
lyolik, please give Alex B. access to the Amazon account.
Original comment by alex.s...@gmail.com on 19 Mar 2010 at 10:48
First, we need to port the examples to Hadoop 0.18, since Amazon Elastic MapReduce does not support 0.20 yet.
Original comment by alex.s...@gmail.com on 19 Mar 2010 at 11:03
Execution of command
com.codeminders.hamake.commands.HadoopCommand@1367e28[jar=s3n://hamake/dist/hamake-examples-1.0.jar,main=com.codeminders.hamake.examples.JarListing,parameters=[com.codeminders.hamake.params.PathParam@94af2f[name=path1,ptype=inputfile,number=-1,mask=keep], com.codeminders.hamake.params.PathParam@1797795[name=path2,ptype=outputfile,number=-1,mask=keep]]]
failed: This file system object (hdfs://ip-10-242-58-54.ec2.internal:9000) does not support access to the request path 's3n://hamake/dist/hamake-examples-1.0.jar' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.

Exception in thread "com.codeminders.hamake.commands.HadoopCommand@1367e28[jar=s3n://hamake/dist/hamake-examples-1.0.jar,main=com.codeminders.hamake.examples.JarListing,parameters=[com.codeminders.hamake.params.PathParam@94af2f[name=path1,ptype=inputfile,number=-1,mask=keep], com.codeminders.hamake.params.PathParam@1797795[name=path2,ptype=outputfile,number=-1,mask=keep]]]" java.lang.IllegalArgumentException: This file system object (hdfs://ip-10-242-58-54.ec2.internal:9000) does not support access to the request path 's3n://hamake/build/test/jar-listings' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
    at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
    at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:140)
    at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:408)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:666)
    at com.codeminders.hamake.CommandThread.cleanup(CommandThread.java:68)
    at com.codeminders.hamake.CommandThread.run(CommandThread.java:50)
We are calling getFileSystem(conf) instead of getFileSystem(uri, conf) at CommandThread.java:62, and that is a bug. But it seems to be failing for some other reason, since this code should only be called if the task itself failed.
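The checkPath failure above can be illustrated without Hadoop. This is a minimal sketch in plain Java (the names below are illustrative, not Hadoop's actual implementation): FileSystem.get(conf) returns the cluster's default file system, which is bound to one URI scheme, so it rejects any path from another scheme — here, an s3n:// path handed to the hdfs:// file system object.

```java
import java.net.URI;

public class CheckPathDemo {
    // Sketch of the scheme check that Hadoop's FileSystem.checkPath performs:
    // a file system bound to fsUri only accepts paths whose scheme matches
    // (scheme-less paths are resolved against the default, so they pass).
    static boolean supports(URI fsUri, URI path) {
        String scheme = path.getScheme();
        return scheme == null || scheme.equalsIgnoreCase(fsUri.getScheme());
    }

    public static void main(String[] args) {
        URI hdfs = URI.create("hdfs://ip-10-242-58-54.ec2.internal:9000");
        URI s3Path = URI.create("s3n://hamake/dist/hamake-examples-1.0.jar");
        // The default (HDFS) file system cannot serve an s3n:// path,
        // which is exactly the IllegalArgumentException in the trace.
        System.out.println(supports(hdfs, s3Path)); // prints "false"
    }
}
```

The fix the error message suggests is to resolve the file system from the path's own URI — FileSystem.get(uri, conf) (or equivalently path.getFileSystem(conf)) — rather than from the configuration alone.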
Original comment by alex.s...@gmail.com on 22 Mar 2010 at 10:29
Here is the real error (tip: run hamake with the -v and -t flags):
Starting jar-listings
Scanning s3n://hamake/lib/*.jar
Scanning s3n://hamake/build/test/jar-listings/
Execution of command
com.codeminders.hamake.commands.HadoopCommand@1b4c1d7[jar=s3n://hamake/dist/hamake-examples-1.0.jar,main=com.codeminders.hamake.examples.JarListing,parameters=[com.codeminders.hamake.params.PathParam@221e9e[name=path1,ptype=inputfile,number=-1,mask=keep], com.codeminders.hamake.params.PathParam@83e1e[name=path2,ptype=outputfile,number=-1,mask=keep]]]
failed: This file system object (hdfs://domU-12-31-39-0B-24-07.compute-1.internal:9000) does not support access to the request path 's3n://hamake/dist/hamake-examples-1.0.jar' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
java.lang.IllegalArgumentException: This file system object (hdfs://domU-12-31-39-0B-24-07.compute-1.internal:9000) does not support access to the request path 's3n://hamake/dist/hamake-examples-1.0.jar' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
    at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
    at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:140)
    at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:408)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:666)
    at com.codeminders.hamake.Utils.copyToTemporaryLocal(Utils.java:222)
    at com.codeminders.hamake.commands.HadoopCommand.execute(HadoopCommand.java:27)
    at com.codeminders.hamake.CommandThread.run(CommandThread.java:41)
Original comment by alex.s...@gmail.com on 22 Mar 2010 at 10:45
1. Used the Amazon Elastic MapReduce Ruby Client for testing:
   http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2264&categoryID=266
2. Copied a set of files to S3:
   /hamake/lib/*.jar from /trunk/hamake-j/lib
   /hamake/class-size-s3.xml from /trunk/hamake-j/test/resources
3. Log dir is s3://hamake/log
4. Command for hamake execution:
   ./elastic-mapreduce --create --jar s3n://hamake/hamake-j-1.0.jar --main-class com.codeminders.hamake.Main --args -t,-v,-f,s3n://hamake/class-size-s3.xml

Execution completed successfully; see the jobflow (id: j-2HF2PBTTMY3AE) status at
https://console.aws.amazon.com/elasticmapreduce/home
Results are at s3://hamake/build/test
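The steps above can be sketched as one script. s3cmd is an assumption here (the comment only says the files were "copied to s3" — any S3 upload tool works), and RUN defaults to "echo" so the script prints the commands as a dry run instead of requiring AWS credentials.

```shell
# Dry-run by default: set RUN= (empty) to actually execute the commands.
RUN="${RUN:-echo}"

# Step 2: stage the hamake jars and the S3 hamakefile.
$RUN s3cmd put trunk/hamake-j/lib/*.jar s3://hamake/lib/
$RUN s3cmd put trunk/hamake-j/test/resources/class-size-s3.xml s3://hamake/class-size-s3.xml

# Step 4: launch hamake as an Elastic MapReduce jobflow (logs go to s3://hamake/log).
$RUN ./elastic-mapreduce --create \
    --jar s3n://hamake/hamake-j-1.0.jar \
    --main-class com.codeminders.hamake.Main \
    --args -t,-v,-f,s3n://hamake/class-size-s3.xml
```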
Original comment by abon...@gmail.com on 23 Mar 2010 at 1:23
Original issue reported on code.google.com by kroko...@gmail.com on 15 Mar 2010 at 5:47