MaibornWolff / dcos-zeppelin

Docker image and marathon app to run zeppelin on DC/OS
Apache License 2.0

Shiro auth problem with shiro.ini from Secrets #10

Open fmarchand opened 5 years ago

fmarchand commented 5 years ago

I've put this configuration in the secret zeppelin/shiro-conf, then verified it and referenced it in the Zeppelin configuration.

Here is my shiro.ini:

[users]
admin = password1, admin
user1 = password2, role1, role2
user2 = password3, role3
user3 = password4, role2

[main]
zeppelinHubRealm = org.apache.zeppelin.realm.ZeppelinHubRealm
zeppelinHubRealm.zeppelinhubUrl = https://www.zepl.com
securityManager.realms = $zeppelinHubRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
role1 = *
role2 = *
role3 = *
admin = *

[urls]
/api/version = anon
/** = authc
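For context, a shiro.ini like the one above is typically stored as the DC/OS secret the package reads. A minimal sketch using the DC/OS Enterprise CLI (the secret path matches this thread; the exact flag names may differ between CLI versions):

```shell
# Store the shiro.ini file as the secret the Zeppelin package expects.
# Assumes the DC/OS Enterprise CLI with the security subcommand installed.
dcos security secrets create --value-file=shiro.ini zeppelin/shiro-conf

# Verify that the secret was created under the expected path.
dcos security secrets list zeppelin
```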

The Zeppelin service from the DC/OS catalog runs well after deployment and the health check is green. But when I try to log in, I get this error:

ERROR [2019-05-29 09:17:08,439] ({qtp152134087-16} LoginRestApi.java[proceedToLogin]:172) - Exception in login: 
org.apache.shiro.authc.AuthenticationException: Authentication failed for token submission [org.apache.shiro.authc.UsernamePasswordToken - fmarchand, rememberMe=false].  Possible unexpected error? (Typical or expected login exceptions should extend from AuthenticationException).

This is what I get: (screenshot attached)

Is there something I missed? I tried the same shiro.ini with Zeppelin 0.8.1 on my computer and it works fine, so I don't see what else to do. I checked your mustache file and the secret is declared. Everything seems fine ...

Do you have an idea ?

fmarchand commented 5 years ago

I solved it. I restarted the Marathon app with two new environment variables:

ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.GitNotebookRepo, org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo"
ZEPPELINHUB_API_ADDRESS="https://www.zepl.com"
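If someone hits the same problem: the two variables above correspond to a fragment like this in the Marathon app definition (sketch only; the surrounding app definition is omitted):

```json
"env": {
  "ZEPPELIN_NOTEBOOK_STORAGE": "org.apache.zeppelin.notebook.repo.GitNotebookRepo, org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo",
  "ZEPPELINHUB_API_ADDRESS": "https://www.zepl.com"
}
```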

I'm going to open a pull request to add an env section in config.json and some additions in marathon.json.mustache, if you don't mind.

swoehrl-mw commented 5 years ago

Hi @fmarchand, glad you solved it. PRs are always welcome, so by all means go ahead.

fmarchand commented 5 years ago

The code for this issue (#10) is ready ... but ...

When I tried to test the Shiro part with the zeppelin-env.sh customization and the placement constraint, everything worked except this:

val (df1, df2, df3) = (spark.read.format("csv").option("header", "true").load("hdfs://name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001/kaggle/taxi/yellow_tripdata_2016-01.csv"),
                      spark.read.format("csv").option("header", "true").load("hdfs://name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001/kaggle/taxi/yellow_tripdata_2016-02.csv"),
                      spark.read.format("csv").option("header", "true").load("hdfs://name-0-node.hdfs.autoip.dcos.thisdcos.directory:9001/kaggle/taxi/yellow_tripdata_2016-03.csv"))

val allDfS =  Seq(df1,df2,df3).reduce(_ union _)

I got a NoSuchMethodException in a constructor of the jackson-databind library. After a long time googling that error, I figured out that using Spark 2.4 was causing a compatibility problem. I compared the jackson versions bundled in the different mesosphere images and had to downgrade to 2.4.0-2.2.1-3-hadoop-2.6.
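One way to check which jackson-databind version a given Spark image ships is to list its jars; a sketch, using the image tag and path mentioned in this thread (tag and path are assumptions, adjust for the image you actually run):

```shell
# Inspect the jackson-databind jar bundled in the mesosphere Spark image.
# The /opt/spark/dist path is where this particular image keeps Spark.
docker run --rm mesosphere/spark:2.4.0-2.2.1-3-hadoop-2.6 \
    sh -c 'ls /opt/spark/dist/jars | grep jackson-databind'
```

Comparing that version against the jackson version Zeppelin compiles against shows whether the two can coexist on the interpreter classpath.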

This Spark image has Spark in the folder /opt/spark/dist, whereas more recent images have it in /opt/spark. Therefore I had to modify SPARK_HOME in the Dockerfile and startup.sh. I would suggest having a branch per package version and proposing a PR to mesosphere with multiple version folders.
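The SPARK_HOME change described above amounts to a fragment like this in the Dockerfile (sketch only, assuming the older mesosphere base image with Spark under /opt/spark/dist):

```dockerfile
# Older mesosphere/spark images ship Spark under /opt/spark/dist,
# not /opt/spark as in more recent tags.
ENV SPARK_HOME=/opt/spark/dist
```

The same path would need to be mirrored wherever startup.sh sets or assumes SPARK_HOME.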

So the version of the zeppelin package is set to 1.1-0.8.1-2.2.1.

Could you create a branch on your repository so I can open a PR from my fork's branch against it?

What do you think ?

swoehrl-mw commented 5 years ago

Hi @fmarchand, unfortunately the Mesosphere Universe is not really designed to handle multiple concurrent versions, and I don't feel comfortable maintaining two versions of the package with different Spark versions. Could you please provide your code and the complete test you ran? I'll try to get it working with the current Spark version.