Closed: unre4l closed this issue 3 years ago
Hi,
Indeed, unfortunately we were unable to release the EGL implementation as part of the research framework (and this is unrelated to the `RetrospectiveLearner`).
Of course, anyone who wishes is welcome to contribute to this or other parts of the repo.
Thank you for the quick reply. This is indeed unfortunate, as it narrows the comparability and reproducibility of your paper's results. Could you share more insight into why it was not released along with the framework? I would be happy to know.
We had an early implementation of EGL, but we did not reimplement it for the open-source release because a) it was not better than other AL strategies despite being very inefficient, and b) returning per-example gradients was not native to TensorFlow and hence would have required a lot of effort to reimplement for a non-experimental version.
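For readers unfamiliar with the strategy: EGL scores each unlabeled example by the expected L2 norm of the loss gradient, where the expectation is taken over the model's own predictive label distribution. The sketch below is purely illustrative and is not the paper's implementation; it shows the idea for plain binary logistic regression, where the per-example gradient has the closed form (p - y) * x and no framework machinery is needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def egl_score(w, x):
    """Expected Gradient Length for binary logistic regression.

    For each candidate label y in {0, 1}, the gradient of the
    cross-entropy loss w.r.t. w is (p - y) * x. The EGL score is the
    expected norm of that gradient under the model's predicted
    label distribution: p * ||(p - 1) x|| + (1 - p) * ||p x||.
    """
    p = sigmoid(w @ x)
    grad_norm_y1 = np.linalg.norm((p - 1.0) * x)  # gradient norm if y = 1
    grad_norm_y0 = np.linalg.norm(p * x)          # gradient norm if y = 0
    return p * grad_norm_y1 + (1.0 - p) * grad_norm_y0

# Rank an (illustrative) unlabeled pool: highest-scoring examples
# would be queried for labels first.
w = np.array([1.0, -2.0])
pool = [np.array([0.1, 0.05]),   # near the decision boundary
        np.array([3.0, -3.0])]   # far from it, confidently classified
scores = [egl_score(w, x) for x in pool]
```

The inefficiency mentioned above is visible even here: every candidate requires one gradient computation per possible label, which for deep models and many classes multiplies the cost of a scoring pass over the pool.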
Hi, great framework. I've got a question though.
One AL strategy in your paper is Expected Gradient Length (EGL), but I can't find its implementation in the repo. I notice that the `orchestrator_api.infer()` method returns a `gradients` key, but as far as I can see it is not utilised anywhere. Do you use the `RetrospectiveLearner` as an equivalent to EGL? What is the theoretical foundation?