taoyingqi opened this issue 6 years ago
I modified the org.springframework.cloud.deployer logging level:

logging:
  level:
    org.apache.hadoop: INFO
    org.springframework.yarn: INFO
    org.springframework.cloud.deployer: INFO
Here are the dataflow-server-yarn logs:
[2018-10-23 23:27:58,285][INFO ][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService getApp :237]Cachekey TASKnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication@44465aed
[2018-10-23 23:27:58,308][INFO ][yarnModuleDeployerTaskExecutor-1][org.apache.hadoop.fs.TrashPolicyDefault initialize :92]Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
[2018-10-23 23:27:58,310][INFO ][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService pushArtifact :93]Pushing artifact file [/tmp/deployer-resource-cache4634791679588973950/hdfs-e768a547cbf7fac8e05cf5339410a54e439ae9a5] into dir /dataflow//artifacts/cache/
[2018-10-23 23:27:58,315][ERROR][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService pushArtifact :97]Error pushing artifact
org.springframework.data.hadoop.HadoopException: Cannot copy resources Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:267) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:254) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
at org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService.pushArtifact(DefaultYarnCloudAppService.java:94) ~[spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
at org.springframework.cloud.deployer.spi.yarn.AbstractDeployerStateMachine$PushArtifactAction.execute(AbstractDeployerStateMachine.java:248) [spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
at org.springframework.cloud.deployer.spi.yarn.AbstractDeployerStateMachine$ExceptionCatchingAction.execute(AbstractDeployerStateMachine.java:310) [spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
at org.springframework.statemachine.state.ObjectState.entry(ObjectState.java:146) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.entryToState(AbstractStateMachine.java:1088) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.entryToState(AbstractStateMachine.java:1054) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:875) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:849) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:939) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.switchToState(AbstractStateMachine.java:752) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine.access$200(AbstractStateMachine.java:72) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.AbstractStateMachine$2.transit(AbstractStateMachine.java:293) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:213) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.processTriggerQueue(DefaultStateMachineExecutor.java:363) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$100(DefaultStateMachineExecutor.java:57) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at org.springframework.statemachine.support.DefaultStateMachineExecutor$1.run(DefaultStateMachineExecutor.java:248) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_181]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_181]
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1628) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1703) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1638) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387) ~[hadoop-hdfs-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:302) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1949) ~[hadoop-common-2.7.1.jar!/:na]
at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:265) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
... 20 common frames omitted
Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar!/:na]
at com.sun.proxy.$Proxy123.create(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296) ~[hadoop-hdfs-2.7.1.jar!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar!/:na]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar!/:na]
at com.sun.proxy.$Proxy124.create(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1623) ~[hadoop-hdfs-2.7.1.jar!/:na]
... 35 common frames omitted
The stack trace shows that the deployer, running as user root, is denied WRITE access to /dataflow/artifacts/cache, which is owned by hdfs:supergroup with mode drwxr-xr-x.

I checked the server.yml configuration documentation at http://docs.spring.io/spring-cloud-dataflow-server-yarn/docs/1.2.2.RELEASE/reference/htmlsingle/ because I want the server to access HDFS as the hdfs user, i.e. to set

HADOOP_USER_NAME=hdfs

but I can't find any configuration property for HADOOP_USER_NAME.
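For what it's worth, HADOOP_USER_NAME is read from the environment by the Hadoop client (in non-Kerberos setups) rather than from server.yml, so one workaround is to export it before launching the server. A minimal sketch; the server jar name below is an assumption, adjust it to your distribution:

# HADOOP_USER_NAME is picked up from the environment by the Hadoop client
# when simple authentication is in use; the jar path is illustrative.
export HADOOP_USER_NAME=hdfs
java -jar spring-cloud-dataflow-server-yarn-1.2.2.RELEASE.jar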
@taoyingqi Change to the "hdfs" user to start the dataflow-server.
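Alternatively, if the server has to keep running as root, the cache directory from the error message can be handed over to root on HDFS. A sketch, assuming the hdfs superuser account is available on the host:

# Give root ownership of the deployer's artifact cache
# (path taken from the error message above).
sudo -u hdfs hdfs dfs -chown -R root /dataflow
# Or, for a quick test only, make the tree world-writable:
sudo -u hdfs hdfs dfs -chmod -R 777 /dataflow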
I am having the same kind of problem on dataflow-server: I copy a jar file into /tmp/apps and successfully register the app, but when I then build a stream using it, I get:

Error: Unable to access jarfile /tmp/apps/simple-processor-0.0.2-SNAPSHOT.jar

even though simple-processor-0.0.2-SNAPSHOT.jar does exist inside my dataflow-server container, as in:
root@e7c8ae26c9c4:/# ls -xl /tmp/apps
total 55920
-rwxrwxrwx 1 root root 57260291 Mar 15 13:38 simple-processor-0.0.2-SNAPSHOT.jar
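A likely explanation is that a /tmp/apps path only exists inside the dataflow-server container, while the stream app is launched in a YARN container on a different node. One workaround, a sketch in which the /apps target path on HDFS is an assumption, is to push the jar to HDFS and register the app with an hdfs URI so the deployer can fetch it from anywhere:

# Push the jar to HDFS (the /apps target path is illustrative)
hdfs dfs -mkdir -p /apps
hdfs dfs -put /tmp/apps/simple-processor-0.0.2-SNAPSHOT.jar /apps/
# Then, in the Data Flow shell, register it with an hdfs URI:
# dataflow:> app register --name simple-processor --type processor --uri hdfs:/apps/simple-processor-0.0.2-SNAPSHOT.jar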
This is with Spring Cloud Data Flow for Apache YARN 1.2.2.RELEASE.
yarn logs -applicationId application_1540258016059_0020
my hdfs directory: