pomadchin closed this 10 months ago
@WhiteSte could you check this PR against your input data? I tried it and it seems 👍
Sure, how can I do that? Should I check out your commit, build the natives, and test with the new ones?
@WhiteSte just pull down the PR, build it locally like before and try the same pipeline that's been failing, if that's possible!
Sure, I can test that with my local files in a few minutes. I had a different bug coming from inside a Docker image, but that image is built directly from the Maven release, so it's much harder for me to test there. I will check it in the next few days.
Is it possible that streaming the log from PDAL in real time wasn't possible before, and now it is?
Some time ago I built a Runnable LogCollector that called pipeline.getLog() every second and printed the result to the console, because as far as I know it was impossible to stream the PDAL output from Java. But now, even without this collector, Java streams the PDAL output. This is great; it makes the code even simpler.
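For reference, a minimal sketch of what such a polling collector might have looked like against the old API (the class and field names are hypothetical; only pipeline.getLog() comes from the old bindings, and it is removed by this PR):

```scala
import io.pdal.Pipeline

// Hypothetical reconstruction of the polling collector described above.
// It relies on the old Pipeline.getLog(), which this PR removes.
class LogCollector(pipeline: Pipeline) extends Runnable {
  @volatile private var running = true

  def stop(): Unit = running = false

  def run(): Unit =
    while (running) {
      println(pipeline.getLog()) // poll the accumulated PDAL log
      Thread.sleep(1000)         // once per second, as described above
    }
}
```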
Btw, bad news: pipeline.getLog() crashes now, but the logLevel > 3 problem seems to be completely fixed.
@WhiteSte there is no getLog method anymore!
Oh sorry, I had updated just the native libs, not the core jar! So it is fixed! Do you confirm that the log is now streamed automatically? Then I can remove my collector.
Do you confirm that the log is now streamed automatically?
Yes, it is streamed into stderr; it looks something like:
[info] running Main
(javapipeline readers.las Debug) Ignoring setSpatialReference attempt: an override was set
(javapipeline Debug) Executing pipeline in standard mode.
(javapipeline filters.relaxationdartthrowing Debug) Input point cloud has fewer points 1769 than requested output size of 100000. Terminating.
... // something else
Exactly, that's my log now:
(javapipeline readers.las Debug) Ignoring setSpatialReference attempt: an override was set
(javapipeline Debug) Executing pipeline in standard mode.
(javapipeline filters.relaxationdartthrowing Debug) Retaining 25000 of 30038 points (83.2279%)
(javapipeline filters.litree Debug) Classifying points for tree 1 (|Ui| = 25000)
...
@WhiteSte there is no getLog method anymore!
Wouldn't it be wiser to keep getLog returning a placeholder string like "This method is deprecated and will be removed in the next version"? That way people using the new version won't get unexpected crashes.
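A sketch of that suggestion, assuming the method were kept as a deprecated stub rather than removed (this is not what the PR actually does):

```scala
class Pipeline /* ... existing fields and methods ... */ {
  // Sketch of the suggested stub: keep the method, but make it inert.
  @deprecated("PDAL logs are now streamed to stderr; getLog is non-functional", "2.6.x")
  def getLog(): String =
    "This method is deprecated and will be removed in the next version"
}
```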
I plan to cut a release with a note that it's a breaking change and that these methods were not functioning; that fits our release scheme (a patch version can be breaking).
There are not that many users of these JNI bindings, and I'd prefer not to keep broken things in the codebase.
Unexpected crashes are possible only if users build the libraries themselves and/or experiment with mismatched native/jar versions. Normally it should be a compile-time error.
I can include the patch version in the native binaries build so it's stricter about which native libraries it can pick up, but that may be too aggressive.
I can include the patch version in the native binaries build so it's stricter about which native libraries it can pick up, but that may be too aggressive.
Yeah, maybe it is.
@WhiteSte published as 2.6.0+9-37b3e4ef-SNAPSHOT for now!
Where? I don't see it on Maven Central.
@WhiteSte the link is clickable; it's on Maven Central snapshots (aka OSS Sonatype snapshots): https://oss.sonatype.org/content/repositories/snapshots/
The badges in the README.md are not fake; they're informative, pointing at the published locations and versions, and all of them are clickable.
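For anyone who wants to try the snapshot, a minimal sbt sketch; the io.pdal %% pdal and pdal-native coordinates are an assumption based on the project's published artifacts:

```scala
// build.sbt: pull the snapshot mentioned above from the Sonatype snapshots repo.
resolvers += "sonatype-snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"

libraryDependencies ++= Seq(
  "io.pdal" %% "pdal"        % "2.6.0+9-37b3e4ef-SNAPSHOT", // core bindings
  "io.pdal" %  "pdal-native" % "2.6.0+9-37b3e4ef-SNAPSHOT"  // packaged native libs
)
```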
Got it, thanks.
This PR removes the log collection stream and forwards all PDAL logs to stderr (via stdlog). This is much more useful in the context of the pipeline and unhandled exceptions, where previously the log stream could be collected only after the entire pipeline execution (which is not guaranteed to happen). The Pipeline constructor signature is also changed: it no longer accepts LogLevels as integers.
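A sketch of the resulting usage, assuming a typed LogLevel enum replaces the old integer parameter (the exact member names and the pipeline path are illustrative):

```scala
import io.pdal.{LogLevel, Pipeline}

object Main extends App {
  // "pipeline.json" is a placeholder path; LogLevel.Debug is an assumed
  // member name illustrating the typed parameter that replaces the integer.
  val json = scala.io.Source.fromFile("pipeline.json").mkString

  // Old (removed): Pipeline(json, 3) with the log level as a raw integer.
  val pipeline = Pipeline(json, LogLevel.Debug)
  pipeline.execute() // PDAL debug output now streams to stderr
}
```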
About the nature of the bug: it looks like the log stream was collected (freed) / reinitialized, and why this happens is still unknown. I spent a couple of evenings on it and could not find a workaround or a viable theory. It may depend on how a particular pipeline step is executed, because not all pipelines cause the log failure. However, I feel the current approach is much more informative and useful.
Connects #79