On startup the worker downloads all UDF dylibs needed for the pipeline and writes them to a shared directory (/tmp/arroyo/localudfs/{name}{hash}.so) from which they can be loaded. When using the process scheduler, this means that if two pipelines use the same UDF, the second one to start up will overwrite the dylib that's already in use by the first.
For reasons that are not clear to me, it appears that certain UDFs can segfault when their shared library file is replaced while loaded. This PR addresses that issue by skipping the write when the dylib already exists on disk (note that any change to the UDF's code produces a different hash, so updated UDFs will still be written).
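A minimal sketch of the write-if-absent behavior, for illustration only — the function name, signature, and error handling here are assumptions, not the actual Arroyo code:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Hypothetical helper: write a UDF dylib to the shared directory,
/// but only if a file with the same name+hash isn't already there.
fn write_udf_dylib(name: &str, hash: &str, bytes: &[u8]) -> io::Result<PathBuf> {
    let dir = Path::new("/tmp/arroyo/localudfs");
    fs::create_dir_all(dir)?;
    let path = dir.join(format!("{name}{hash}.so"));

    // If a dylib with this name and hash already exists, another pipeline
    // may have dlopen'd it; skip the write rather than replacing the file
    // out from under a running process. Because the hash covers the UDF's
    // contents, an existing file is guaranteed to hold the same bytes.
    if !path.exists() {
        fs::write(&path, bytes)?;
    }
    Ok(path)
}
```

One possible refinement (not implied by this PR): write to a temporary file and atomically rename it into place, so a concurrently starting pipeline can never observe a half-written dylib.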