Closed by AceHack 5 years ago
Hi @AceHack, updating to a newer version of Guava would create additional complications in relation to the version that Spark uses by default. We will look into the possibility of removing it altogether.
@cosmincatalin if you actually look at the latest Spark release without Hadoop, it does not include Guava at all. https://www-us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-without-hadoop.tgz
You can see there is no Guava jar in the jars folder.
Thanks, and I would love to see it removed altogether; it causes us so many headaches.
As can be seen here, Guava is still a dependency, even if the distribution without Hadoop does not include it.
Until we decide what to do with Guava going forward, you can try to exclude the Guava dependency when you build your own jar, or you can use shading (although I am not very familiar with that technique).
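For anyone who wants to try the exclusion route in the meantime, here is a minimal sbt sketch. The library coordinates (`com.example`, `my-spark-library`, version) are placeholders for whatever dependency pulls in Guava in your build; the shading rule assumes you are using the sbt-assembly plugin:

```scala
// build.sbt — hypothetical coordinates, replace with your actual dependency.

// Option 1: exclude Guava from the transitive dependency tree.
libraryDependencies += ("com.example" %% "my-spark-library" % "0.4.3")
  .exclude("com.google.guava", "guava")

// Option 2 (with sbt-assembly): shade Guava into a private package so it
// cannot conflict with the version Spark ships on the cluster.
assembly / assemblyShadeRules := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.com.google.common.@1").inAll
)
```

Option 1 is simpler but only works if the library does not actually need Guava at runtime; option 2 keeps Guava available to the library while isolating it from Spark's own copy.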
To be clear, it's the Spark server distribution without Hadoop that does not include it.
Please remove this; it's now blocking us from fixing CVEs in our Spark base image. Thanks.
I've pulled out Guava. See commit 7e126169332c8b4da666b4bd55dbe36763fe6d4f. It will take a few more days until we release, so we have some time to test.
@cosmincatalin You are the man!!! That is awesome.
This is taking longer than expected. We have some problems publishing for Scala 2.12.
Hi @cosmincatalin, can you publish for Scala 2.11 first, until you resolve the problems with 2.12? And thanks a lot.
I have just released 0.4.4. Let me know if it is everything you wanted 😄 and reopen as necessary.
Unfortunately, there has been a mistake: the Guava patch did not get merged. I will try again this week and make a new release. Sorry for keeping all of you waiting for so long.
Or update to a newer version; there are too many dependency issues.