Closed: zacayd closed this issue 1 year ago.
We do not have any plans to implement Databricks-specific features, although we do our best to ensure that at least core functionality is supported. Nevertheless, we keep thinking about how we can improve Databricks support without introducing any hard dependency on the platform.
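For reference, until something like that exists, those values can be read from the Databricks runtime itself. A minimal sketch, assuming a Databricks Scala notebook where `spark` and `dbutils` are predefined; the conf key and context fields shown are common Databricks conventions, not Spline agent APIs:

```scala
// A sketch only: read the cluster and notebook names on Databricks.
// Runnable in a Databricks Scala notebook where `spark` and `dbutils`
// are predefined. The conf key and the dbutils context fields are
// Databricks conventions (assumptions here), not Spline agent APIs.
val clusterName: String = spark.conf
  .getOption("spark.databricks.clusterUsageTags.clusterName")
  .getOrElse("unknown-cluster")

val notebookPath: String = dbutils.notebook.getContext()
  .notebookPath
  .getOrElse("unknown-notebook")

println(s"cluster=$clusterName notebook=$notebookPath")
```

These values could then be attached to the captured lineage as extra metadata, in whatever way your agent configuration allows.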
Thanks
Do you have a stable version of the Spline agent code? If not, what is the frequency of change? How can we get the Spline code and push it to the cloud?
Every release is as stable as it can be from the developers' point of view. We do not have a dedicated QA team, though. Just use the latest released version and it should be fine; releases are generally backward compatible. If you want to use a development version you can also safely do so, as long as you are not using it in production. You will have to build it yourself, though.
> What is the frequency of change?
There is no fixed release schedule. We release a new version when we are confident and feel comfortable doing so. Usually, when a bug is found in a released version, we release an update ASAP.
> How can we get the Spline code and push it to the cloud?
```
git clone https://github.com/AbsaOSS/spline-spark-agent.git
```
Yes. And how can I push it to the Azure cloud?
I don't understand the question. Are you asking us how to do git push?
Push it to the Azure cloud so I can see it on Databricks.
You can build the agent from source and deploy it on Databricks.
https://github.com/AbsaOSS/spline-getting-started/tree/main/spline-on-databricks
If you want to build it yourself, you just upload the JAR instead of using Maven to fetch it. You can upload any additional artifacts the same way.
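Once the JAR is attached to the cluster, the agent can be initialized from a notebook. A minimal sketch, assuming the producer URL has already been set in the cluster's Spark config:

```scala
// A sketch of programmatic initialization of the Spline agent from a
// Databricks Scala notebook, assuming the agent JAR you built is already
// attached to the cluster and spark.spline.producer.url points to your
// Spline server in the cluster's Spark config.
import za.co.absa.spline.harvester.SparkLineageInitializer._

spark.enableLineageTracking()

// Codeless alternative: instead of the call above, set
// spark.sql.queryExecutionListeners=za.co.absa.spline.harvester.listener.SplineQueryExecutionListener
// in the cluster's Spark config.
```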
But can I build only version 1.0.4?
Pull the 'release/1.0.4' tag from git and build it as normal.
I don't recommend using older versions, since several bugs were fixed in newer ones. Use at least 1.0.7, or ideally 1.1.0.
But all versions are tagged, so you can see them in the repository.
Closing this issue as it doesn't contain any bug report or feature request. Please use the discussion board for questions instead of GitHub issues.
Hi team, I use your solution in my application on Databricks on Azure, and I have an issue: I need to write some Scala code to get the name of the running notebook and cluster into the execution plan.
1. Is there a plan to add code that captures the Databricks parameters into the execution plan?
2. Do you have a stable version of the Spline agent code? If not, what is the frequency of change?
3. How can we get the Spline code and push it to the cloud?
Thanks a lot.