spring-attic / spring-cloud-dataflow-server-yarn

Spring Cloud Data Flow Implementation for Apache YARN
http://cloud.spring.io/spring-cloud-dataflow-server-yarn/
Apache License 2.0

Error: Unable to access jarfile hdfs-27e6693e437b5a81b0ea8e733ec1a9f42ece866f on dataflow-server-yarn #177

Open taoyingqi opened 6 years ago

taoyingqi commented 6 years ago

This is with Spring Cloud Data Flow for Apache YARN 1.2.2.RELEASE. Output of:

yarn logs -applicationId application_1540258016059_0020

Container id: container_1540258016059_0020_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Container exited with a non-zero exit code 1

Container: container_1540258016059_0020_01_000002 on CDH3_8041
================================================================
LogType:Container.stderr
Log Upload Time:Tue Oct 23 17:46:46 +0800 2018
LogLength:78
Log Contents:
Error: Unable to access jarfile hdfs-27e6693e437b5a81b0ea8e733ec1a9f42ece866f

LogType:Container.stdout
Log Upload Time:Tue Oct 23 17:46:46 +0800 2018
LogLength:0
Log Contents:

Commands I ran in the Data Flow shell:

./bin/dataflow-shell
app register --name timestamp --type task --uri hdfs:/dataflow/artifacts/cache/timestamp-task-1.3.0.RELEASE.jar
task create --name printTimeStamp --definition "timestamp"
task launch printTimeStamp

My HDFS directory layout:

[hdfs@CDH2 src]$ hadoop fs -ls -R /dataflow
drwxrwxr-x   - hdfs supergroup          0 2018-10-23 16:43 /dataflow/apps
drwxrwxr-x   - hdfs supergroup          0 2018-10-18 11:18 /dataflow/apps/stream
drwxrwxr-x   - hdfs supergroup          0 2018-10-23 17:15 /dataflow/apps/stream/app
drwxrwxr-x   - hdfs supergroup          0 2018-10-18 11:17 /dataflow/apps/task
drwxrwxr-x   - hdfs supergroup          0 2018-10-23 17:15 /dataflow/apps/task/app
-rwxrwxr-x   3 hdfs supergroup   72918225 2018-10-23 09:47 /dataflow/apps/task/app/spdb-data-splunk-1.0.1.jar
-rwxrwxr-x   3 hdfs supergroup   62115628 2018-10-18 10:46 /dataflow/apps/task/app/spring-cloud-deployer-yarn-tasklauncherappmaster-1.2.2.RELEASE.jar
-rwxrwxr-x   3 hdfs supergroup   22518771 2018-10-18 11:03 /dataflow/apps/task/app/timestamp-task-1.3.0.RELEASE.jar
drwxr-xr-x   - hdfs supergroup          0 2018-10-23 16:44 /dataflow/artifacts
drwxr-xr-x   - hdfs supergroup          0 2018-10-23 17:32 /dataflow/artifacts/cache
-rw-r--r--   3 hdfs supergroup   22518771 2018-10-23 17:32 /dataflow/artifacts/cache/timestamp-task-1.3.0.RELEASE.jar
drwxr-xr-x   - hdfs supergroup          0 2018-10-23 16:45 /dataflow/artifacts/repo
-rw-r--r--   3 hdfs supergroup   72918225 2018-10-23 16:45 /dataflow/artifacts/repo/spdb-data-splunk-1.0.1.jar
taoyingqi commented 6 years ago

I adjusted the logging levels for the deployer packages:

logging:
  level:
    org.apache.hadoop: INFO
    org.springframework.yarn: INFO
    org.springframework.cloud.deployer: INFO

The dataflow-server-yarn logs now show:

[2018-10-23 23:27:58,285][INFO ][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService getApp :237]Cachekey TASKnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication@44465aed
[2018-10-23 23:27:58,308][INFO ][yarnModuleDeployerTaskExecutor-1][org.apache.hadoop.fs.TrashPolicyDefault initialize :92]Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
[2018-10-23 23:27:58,310][INFO ][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService pushArtifact :93]Pushing artifact file [/tmp/deployer-resource-cache4634791679588973950/hdfs-e768a547cbf7fac8e05cf5339410a54e439ae9a5] into dir /dataflow//artifacts/cache/
[2018-10-23 23:27:58,315][ERROR][yarnModuleDeployerTaskExecutor-1][org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService pushArtifact :97]Error pushing artifact
org.springframework.data.hadoop.HadoopException: Cannot copy resources Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:267) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
        at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:254) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
        at org.springframework.cloud.deployer.spi.yarn.DefaultYarnCloudAppService.pushArtifact(DefaultYarnCloudAppService.java:94) ~[spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
        at org.springframework.cloud.deployer.spi.yarn.AbstractDeployerStateMachine$PushArtifactAction.execute(AbstractDeployerStateMachine.java:248) [spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
        at org.springframework.cloud.deployer.spi.yarn.AbstractDeployerStateMachine$ExceptionCatchingAction.execute(AbstractDeployerStateMachine.java:310) [spring-cloud-deployer-yarn-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
        at org.springframework.statemachine.state.ObjectState.entry(ObjectState.java:146) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.entryToState(AbstractStateMachine.java:1088) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.entryToState(AbstractStateMachine.java:1054) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:875) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:849) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.setCurrentState(AbstractStateMachine.java:939) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.switchToState(AbstractStateMachine.java:752) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine.access$200(AbstractStateMachine.java:72) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.AbstractStateMachine$2.transit(AbstractStateMachine.java:293) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:213) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.DefaultStateMachineExecutor.processTriggerQueue(DefaultStateMachineExecutor.java:363) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$100(DefaultStateMachineExecutor.java:57) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at org.springframework.statemachine.support.DefaultStateMachineExecutor$1.run(DefaultStateMachineExecutor.java:248) [spring-statemachine-core-1.1.0.RELEASE.jar!/:1.1.0.RELEASE]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
        at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_181]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_181]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_181]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_181]
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1628) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1703) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1638) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:302) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1949) ~[hadoop-common-2.7.1.jar!/:na]
        at org.springframework.data.hadoop.fs.FsShell.copyFromLocal(FsShell.java:265) ~[spring-data-hadoop-core-2.4.0.RELEASE.jar!/:2.4.0.RELEASE]
        ... 20 common frames omitted
Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=root, access=WRITE, inode="/dataflow/artifacts/cache":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2758)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2676)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2561)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:593)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:111)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:393)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar!/:na]
        at com.sun.proxy.$Proxy123.create(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296) ~[hadoop-hdfs-2.7.1.jar!/:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar!/:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar!/:na]
        at com.sun.proxy.$Proxy124.create(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1623) ~[hadoop-hdfs-2.7.1.jar!/:na]
        ... 35 common frames omitted

I checked the server.yml configuration reference at http://docs.spring.io/spring-cloud-dataflow-server-yarn/docs/1.2.2.RELEASE/reference/htmlsingle/.

I want to set the HDFS user via:

HADOOP_USER_NAME=hdfs

But I can't find any configuration option for HADOOP_USER_NAME.
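
HADOOP_USER_NAME is read from the process environment by the Hadoop client libraries on a non-Kerberos cluster, not from server.yml, so one workaround may be to export it before starting the server. A minimal sketch, assuming the launcher script shipped in the distribution's bin directory:

# HADOOP_USER_NAME tells the Hadoop client which user to act as on an
# insecure cluster; the server launcher script name here is an assumption.
export HADOOP_USER_NAME=hdfs
./bin/dataflow-server-yarn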

NickJFang commented 5 years ago

@taoyingqi Start the dataflow-server as the user "hdfs" instead.
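
For example (a sketch; the exact launcher path and sudo availability depend on your setup):

# Run the server as the hdfs user, which owns /dataflow/artifacts/cache:
sudo -u hdfs ./bin/dataflow-server-yarn

# Or widen the HDFS permissions so the current user can write
# (weaker security; adjust to your policy):
sudo -u hdfs hadoop fs -chmod -R a+w /dataflow/artifacts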

dcguim commented 3 years ago

I am having the same problem on dataflow-server. I copy a jar file into /tmp/apps and register the app successfully, but when I build a stream using this app I get: Error: Unable to access jarfile /tmp/apps/simple-processor-0.0.2-SNAPSHOT.jar, even though simple-processor-0.0.2-SNAPSHOT.jar is present inside my dataflow-server container:

root@e7c8ae26c9c4:/# ls -xl /tmp/apps
total 55920
-rwxrwxrwx 1 root root 57260291 Mar 15 13:38 simple-processor-0.0.2-SNAPSHOT.jar
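
One thing worth checking (a sketch, not a confirmed fix): when an app is registered from a path local to the server container, the URI needs an explicit file:// scheme, with three slashes for an absolute path; a missing or malformed scheme can also surface as "Unable to access jarfile". In the Data Flow shell (the app name and type below are assumptions):

app register --name simple-processor --type processor --uri file:///tmp/apps/simple-processor-0.0.2-SNAPSHOT.jar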