robertfeldt / BlackBoxOptim.jl

Black-box optimization for Julia

Provide initial solution #74

Closed ngiann closed 2 years ago

ngiann commented 6 years ago

Is there a way I could start the optimisation search from an existing solution? I read the documentation but I can't seem to find the relevant information. Many thanks.

ngiann commented 6 years ago

I have been looking into the code, but unfortunately I can't find an option that would allow me to do this...

robertfeldt commented 6 years ago

Sorry for the slow response; have a lot of work currently...

We had a way to do this in earlier versions but we might have lost that as the package evolved. Please provide ideas for a good interface for this (just another parameter to the main func?) and we can add it.

ngiann commented 6 years ago

No worries.

Having been a user of other Julia optimisation packages, such as Optim and NLopt, I would suggest that the initial solution be passed as the second argument to the function, e.g.:

res = bboptimize(my_objective, my_initial_solution; Method = :generating_set_search,
                 NumDimensions = length(my_initial_solution),
                 MaxFuncEvals = 100)

This would be consistent with both Optim and NLopt which pass the initial solution point simply as the second argument.

In Optim you have:

res = optimize(my_objective, my_initial_solution, NelderMead(),
                Optim.Options(iterations=maxIter))

In NLopt you have (slightly more complex setup):

opt = Opt(:LN_NELDERMEAD, length(my_initial_solution))
maxeval!(opt, maxIter)
min_objective!(opt, my_objective)
(_, res, _) = NLopt.optimize(opt, my_initial_solution)

I think that this would make it easier for users that are familiar with either Optim or NLopt.

Many thanks.

ps. The generating_set_search algorithm really rocks on non-differentiable objectives.

ps2. I accidentally closed this issue!

robertfeldt commented 6 years ago

Ok, sounds good. Any input on this @alyst ?

@ngiann is the GSS more effective than the Adaptive DE for your problems?

Be aware that, as a problem gets more complex, my experience is that the adaptive DEs often become relatively better than GSS, even though GSS can be quicker to find good solutions for simpler problems. Of course, this is based only on my gut feeling from long use of both, not on any systematic evaluation. :)

ngiann commented 6 years ago

On my particular problem (a minimal autoencoder with a non-differentiable cost function) GSS seems to work sufficiently well without any parameter tuning. For some reason Adaptive DE performs worse but, of course, as you say, it is all problem dependent...

robertfeldt commented 6 years ago

Yes, just be prepared to also try one of the adaptive_de methods when you switch problems. The default method is generally good so start from that one if GSS doesn't give you what you need.

ngiann commented 6 years ago

Ok, many thanks for the advice!

alyst commented 6 years ago

Currently, for population-based methods there is a Population= option that you can use to provide the initial population as a matrix. AFAIR the automatic "expansion" of a single initial point into a population is not implemented, but this could be done from user code with the help of BlackBoxOptim.latin_hypercube_sampling().
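
For reference, a minimal sketch of that workaround (the exact latin_hypercube_sampling signature and the one-column-per-individual orientation of the Population matrix are assumptions here, so check against the source):

using BlackBoxOptim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

x0 = [0.5, 0.5]                        # the point we want in the initial population
mins, maxs = [-5.0, -5.0], [5.0, 5.0]  # search-space bounds
pop = BlackBoxOptim.latin_hypercube_sampling(mins, maxs, 49)  # 2x49 matrix of samples
pop = hcat(x0, pop)                    # x0 becomes one of 50 individuals

res = bboptimize(rosenbrock; SearchRange = (-5.0, 5.0), NumDimensions = 2,
                 Population = pop, MaxFuncEvals = 2000)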

For GSS, specifying the initial point is not implemented; it is initialized as x = rand_individual(ss) in the constructor. I think your suggestion of adding the 2-arg version of bboptimize/bbsetup() is the proper way to go.

ngiann commented 6 years ago

Hello, everyone. Is there perhaps some kind of workaround that allows me to provide an initial solution to generating_set_search for now? Cheers.

millorito commented 6 years ago

Hi all,

Has there been any progress on this? I really like the bboptimize package and being able to provide an initial solution would be extremely helpful for my problem. Thanks!

alyst commented 6 years ago

@millorito AFAIK nothing was done in this direction so far. It looks like a straightforward feature, though. You can help us implement it by providing a simple use case, i.e. an optimization problem that benefits from the initial solution in a measurable way (significantly reducing the number of iterations needed to reach the optimum, or converging to the global minimum instead of a local one). That would make testing the feature much easier.

robertfeldt commented 6 years ago

Yeah, there was a student who "took this on", so I was waiting for his fix, and then he got sidetracked. I agree with @alyst that it should be quite easy, but it would be great if you can help on the test side.

ngiann commented 6 years ago

I know I am being quite vague, but typical cases would involve coordinate ascent algorithms, where one set of parameters is kept fixed while another is optimized. Expectation maximisation is such an instance.

robertfeldt commented 6 years ago

Hmm, but keeping some params fixed is different from providing an initial/starting solution. Keeping params fixed can be done by wrapping the fitness function; I'm not sure that should be supported inside the package itself.
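
A minimal sketch of that wrapping approach, assuming a toy two-block objective f(x, y) (the objective and all option values here are illustrative, not from the package):

using BlackBoxOptim

f(x, y) = sum(abs2, x) + sum(abs2, y .- x)  # hypothetical two-block objective

xfixed = [1.0, 2.0]                         # the block we keep fixed
res = bboptimize(y -> f(xfixed, y);         # the closure captures xfixed
                 SearchRange = (-5.0, 5.0), NumDimensions = 2,
                 MaxFuncEvals = 1000)
best_candidate(res)                         # should approach xfixed for this toy objective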

millorito commented 6 years ago

Indeed I don't want to keep anything fixed. I just want to provide a starting point.

An optimization problem I have in mind where this could be useful is when one is estimating parameters on bootstrapped data. From one bootstrap to the next the parameters shouldn't change wildly so by giving one solution as an initial point this would speed up the procedure.

I could create example data and problem if that helps...

millorito commented 6 years ago

I tried to create an example. You will notice I am not a computer scientist.

The idea is that I am trying to guess some parameters of a linear model using maximum likelihood estimation on 100 randomly drawn samples. As you can see, the resulting distribution of estimated parameters is normal. So if I do just one optimization round, then starting the next optimization round from that first parameter vector can speed things up, as the next solution(s) will be somewhat close.

Now you might argue that I could simply limit the bounds, or that the presented example is fast anyhow. However, I am trying to do this exercise with many more parameters and a much more complex model, where just one optimization round takes hours. Moreover, the optimization often doesn't converge and doesn't manage to reduce the error at all if it starts off with some strange combination of parameters.

Please let me know if this is not clear.

Prepare

using Random, LinearAlgebra
using Distributions
using StatsBase      # provides sample()
using DataFrames
using BlackBoxOptim
using PyPlot

Random.seed!(2)      # srand(2) in pre-1.0 Julia

N=1000
K=3

Generate data with noise

genX = MvNormal(zeros(K), Matrix{Float64}(I, K, K))  # standard normal in K dims (eye(K) in pre-1.0 Julia)
X = rand(genX,N)
X = X'
X_noconstant = X
constant = ones(N)
X = [constant X]
genEpsilon = Normal(0, 1)
epsilon = rand(genEpsilon,N)
trueParams = [0.01,0.05,0.05,0.07]
Y = X*trueParams + epsilon

log likelihood

function loglike(rho, a, b)
    beta = rho[1:4]                       # regression coefficients
    sigma2 = exp(rho[5]) + eps(Float64)   # keep the variance strictly positive
    residual = b - a*beta
    dist = Normal(0, sqrt(sigma2))
    contributions = logpdf.(dist, residual)
    loglikelihood = sum(contributions)
    return -loglikelihood                 # negated for minimization
end

bootstrap 100 times

many = 100
all = zeros(5, many)
for j = 1:many
    theIndex = sample(1:N, N)    # resample rows with replacement
    x = X[theIndex, :]
    y = Y[theIndex]
    res = bboptimize(params -> loglike(params, x, y);
                     Method = :adaptive_de_rand_1_bin_radiuslimited,
                     TraceMode = :silent, TraceInterval = 10.0, MaxTime = 50.0,
                     SearchRange = [(-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0)],
                     NumDimensions = 5)
    println(best_candidate(res))
    all[:, j] = best_candidate(res)
end

plot resulting parameter distribution

num_rows, num_cols = 2, 2
fig, axes = subplots(num_rows, num_cols, figsize = (16, 6))

for i in 1:num_rows
    for j in 1:num_cols
        ax = axes[i, j]
        subplot_num = (i - 1) * num_cols + j   # linear index of this panel

        ax.hist(all[subplot_num, :], alpha = 0.6, bins = 20)
        ax.set_title("histogram $subplot_num")
        ax.set_xticks([-0.3, 0, 0.3])
        ax.set_yticks([])
    end
end

[Attached figure: histograms of the four estimated parameter distributions]

bootstrap_example.ipynb.zip

Most of this code comes from: https://juliaeconomics.com/tag/bootstrap/
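
Looking ahead: with the two-argument bboptimize that was eventually added in this thread, the bootstrap loop above could warm-start each round from the previous estimate. A hypothetical sketch, reusing the names from the example:

prev = zeros(5)                  # estimate carried over from the previous round
for j = 1:many
    theIndex = sample(1:N, N)
    x = X[theIndex, :]
    y = Y[theIndex]
    res = bboptimize(params -> loglike(params, x, y), prev;
                     Method = :adaptive_de_rand_1_bin_radiuslimited,
                     TraceMode = :silent, MaxTime = 50.0,
                     SearchRange = [(-1.0, 1.0) for _ in 1:5], NumDimensions = 5)
    prev .= best_candidate(res)  # seed the next round in place
    all[:, j] = prev
end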

ngiann commented 6 years ago

A common example is initialising a neural network with small weights before optimising it.

ngiann commented 6 years ago

As I pointed out in my previous (frankly quite vague) post, another common example is coordinate ascent algorithms.

Such algorithms optimise one set of variables at a time. As a simple example, consider a function f(x,y) to be optimised, where x is one set of variables and y is another. There may be algorithmic reasons that make coordinate ascent compelling. In this case, we would like to fix x and optimise with respect to y only. Once that optimisation is done, we would like to fix y and optimise with respect to x, and then keep alternating between these two steps.

Starting from some initial x1, y1 we would then iteratively alternate between optimising x and y. The first few iterations look like:

y2 = argmax_y f(x1, y)
x2 = argmax_x f(x, y2)
y3 = argmax_y f(x2, y)
...

We then keep alternating between optimising x and y, but pick up each optimisation from the last optimum in order to save computation.

This is the typical case with Expectation Maximisation (EM) algorithms. There, typically, one set of variables (the so-called hidden variables) is estimated during the E-step and then another set (typically the parameters of the model) is optimised during the M-step. The EM algorithm iteratively alternates between the E-step and the M-step. A common example of such an algorithm is fitting a mixture of Gaussians with EM (though in that example closed-form updates exist). However, one can easily come up with other, more complex scenarios where the M-step does not admit a closed-form solution.
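
To make the alternating scheme concrete, here is a hypothetical coordinate-ascent sketch using the proposed two-argument form (the objective f and all option values are made up for illustration):

using BlackBoxOptim

f(x, y) = sum(abs2, x .- 1.0) + sum(abs2, y .+ 1.0) + sum(abs2, x .- y)

function coordinate_ascent(; iters = 5)
    x, y = zeros(2), zeros(2)  # initial x1, y1
    for _ in 1:iters
        # Optimise x with y held fixed, starting from the current x.
        resx = bboptimize(xc -> f(xc, y), x; SearchRange = (-5.0, 5.0),
                          NumDimensions = 2, MaxFuncEvals = 500, TraceMode = :silent)
        x = best_candidate(resx)
        # Optimise y with x held fixed, starting from the current y.
        resy = bboptimize(yc -> f(x, yc), y; SearchRange = (-5.0, 5.0),
                          NumDimensions = 2, MaxFuncEvals = 500, TraceMode = :silent)
        y = best_candidate(resy)
    end
    return x, y
end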

ngiann commented 6 years ago

Another example where starting with an initial solution is useful is when we switch between optimisation algorithms. One may perhaps start optimising a neural network with an evolutionary algorithm and then switch to a gradient descent algorithm to perform some local search. After the local search, one may wish to restart the evolutionary search using the best solution found by the local search.

robertfeldt commented 6 years ago

Ok, yes, I see what you meant now. Thanks for the examples and code.

robertfeldt commented 6 years ago

I'm planning to call the parameter InitialCandidate, but also to provide ways to pass it directly to bbsetup and bboptimize. Comments welcome.

robertfeldt commented 6 years ago

Here is an analysis of how we could implement this:

https://github.com/robertfeldt/BlackBoxOptim.jl/blob/robertfeldt_set_initial_candidate/design/set_initial_solution_candidate.md

Doesn't look too bad, but I'm not sure if it's enough to have a parameter for this that is only used at setup, or if we truly want to be able to "insert" new candidates in each repeated run of one and the same optimizer (controller).

ngiann commented 6 years ago

I would go for the 3rd presented option, that is, setting the initial solution as part of the bboptimize call. This is also how the Optim package does it. It seems general enough, and the guys over in Optim have not encountered a problem with this strategy, so I guess this means it is a good way of doing it ;)

ngiann commented 6 years ago

If I understand correctly, in population algorithms the provided initial solution would be just an individual in the population, right? (I am not sure what you mean by memory in the case of SeparableNESOpt and co)

robertfeldt commented 6 years ago

Yes, for setting an initial solution when the population is created it is clear: it will be one "member" of the population. The "memory" is about what happens if one is allowed to set a new solution later, after having run the optimization for some time and then restarting it. That makes little to no sense for some of the optimizers. For now, I plan to not allow it.

As for the other design options, both 3 and 1 will be supported.

ngiann commented 5 years ago

Ok, I think I understand. But just to be absolutely clear, you write (@robertfeldt):

The "memory" is about what happens if one is allowed to set a new solution later, after having run the optimization for some time and then restarting it.

I suppose that "restarting" means actually resuming (sorry to be so terribly pedantic).

So, if I understand correctly, there are two separate things: (a) Providing an initial solution when the population is initialised, and (b) injecting a solution in a population and then resuming optimisation.

If this is the case indeed, I wonder: do we really need (b)? I would have thought that (a) is all we need.

isentropic commented 4 years ago

@robertfeldt do you have any estimate of when this will be implemented? Oftentimes I end up with a worse solution than the initial guess I have. This is especially important for unconstrained problems.

robertfeldt commented 4 years ago

Yeah, sorry I got sidetracked on this since a student of mine said they would fix this as part of their thesis but then never got to it. Thanks for reminding me. I'll try to address this within 2 weeks; unfortunately will not get to it before then since I'm traveling. @isentropic

kyriienko commented 4 years ago

I have a problem with a good educated guess available, so providing an initial vector could greatly reduce the number of evaluations. @robertfeldt Is it currently possible to add InitialCandidate? Thanks a lot for the excellent package!

jibaneza commented 4 years ago

Was this implemented? It would be very useful for the kind of problems I solve! :)

clintonTE commented 4 years ago

This would be useful for a problem I am working on where only a small fraction of the parameter space returns a finite value.

floswald commented 3 years ago

Any progress on this? I'm confused because @ChrisRackauckas seems to be able to use the starting value x0 here: https://galacticoptim.sciml.ai/dev/tutorials/intro/ ?

ChrisRackauckas commented 3 years ago

That initial solution doesn't end up affecting BlackBoxOptim, just other libraries.

robertfeldt commented 3 years ago

The latest master branch now has a simple way to give an initial starting point for the search. Feel free to try it out (if BlackBoxOptim is still interesting to you). Sorry for the very long delays on this, but I'm getting back to the package after a lot of other tasks in recent years.
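
A minimal usage example, following the README's pattern (the objective is the usual 2-D Rosenbrock; the guess x0 is arbitrary):

using BlackBoxOptim

rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

x0 = [0.8, 0.9]  # educated guess to start the search from
res = bboptimize(rosenbrock2d, x0; SearchRange = (-5.0, 5.0),
                 NumDimensions = 2, MaxTime = 0.1)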

robertfeldt commented 3 years ago

If any of you (@floswald @ngiann @clintonTE @jibaneza @kyriienko @isentropic @millorito) are still using or interested in BlackBoxOptim, it would be great if you could try the way to provide an initial solution on the latest master branch and see if it helps your use case(s). I rarely use this feature myself, so some "real-world" feedback would be good... :) Only very basic testing of it has been added to the test suite so far.

Note that for population-based optimizers you must run for some time before a seeded solution can have a larger effect on the population. This is because, with a population size of say 100, only one of the initial individuals will be your seeded one. So if you only run a few iterations/steps, there is no guarantee your initial solution has even been selected and evaluated yet. If you run on the order of 5-10 times the population size in steps, though, it is very likely your initial solution has been selected and evaluated by then. In practice this shouldn't be much of a problem (in the test suite I run for only 0.1 seconds and the initial solution has always been evaluated within that timeframe). Anyway, feedback welcome.
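
As a rough illustration of that rule of thumb (the sphere objective and x0 are placeholders; the evaluation budget is sized relative to an explicit PopulationSize):

using BlackBoxOptim

sphere(x) = sum(abs2, x)   # placeholder objective
x0 = [1.0, -1.0]
res = bboptimize(sphere, x0; SearchRange = (-5.0, 5.0), NumDimensions = 2,
                 PopulationSize = 50, MaxFuncEvals = 500)  # roughly 10x the population size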

floswald commented 3 years ago

Yes still using it, thanks for this update! Will try out asap

scepeda78 commented 3 years ago

Has it been implemented for the borg_moea method?

robertfeldt commented 3 years ago

Sorry, no; I'm testing with the single-objective ones for now. The ones used in the tests should all work, see:

https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/test/test_set_candidate.jl

Will check BORG next.

robertfeldt commented 3 years ago

Actually, this was trivial to add for BorgMOEA, so it should work on latest master. Please try:

https://github.com/robertfeldt/BlackBoxOptim.jl/commit/b35fe8a6d7af27705ddc4cd26a3473799343c145

Example that works on my side:

using BlackBoxOptim
# Best aggregated fitness should be for [0.5, 0.5, 0.5]:
fitness_2obj(x) = (sum(abs2, x), sum(abs2, x .- 1.0))
x0 = 0.5 * ones(3)
res = bboptimize(fitness_2obj, x0; Method=:borg_moea,
            FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true),
            MaxFuncEvals = 10,
            SearchRange=(-10.0, 10.0), NumDimensions=3, ϵ=0.05);
@assert best_fitness(res) == fitness_2obj(x0)

ngiann commented 3 years ago

I wrote a script to test the new feature as follows:

A single-hidden-layer neural network is trained to solve a simple regression task. Training is performed for a few seconds only before being stopped. The best solution is retrieved and the predictions of the partially trained neural network are plotted. Then training is resumed using the best parameters obtained from the previous optimisation run. Again, training runs for a few seconds before being stopped to plot the new predictions. This is repeated for a few iterations. At each iteration we also plot the evolution of the fitness function.

The script requires the package PyPlot.

To run the test for yourselves, simply do:

include("mlp.jl")
x, y = toydata()
test_nn(x', y'; H = 10) # H is the number of hidden units in the single hidden layer neural network.

I have put the code in a gist here, I hope it works without problems as it does for me.
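
Condensed, the resume loop of the script looks roughly like this (loss and nparams stand in for the gist's network objective and parameter count; option values are illustrative):

using BlackBoxOptim

function resume_rounds(loss, nparams; rounds = 5)
    best = 0.1 .* randn(nparams)    # start from small random weights
    for r in 1:rounds
        res = bboptimize(loss, best; SearchRange = (-5.0, 5.0),
                         NumDimensions = nparams, MaxTime = 3.0,
                         TraceMode = :silent)
        best = best_candidate(res)  # seed the next round with the incumbent
        println("round $r: fitness = ", best_fitness(res))
    end
    return best
end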

This is obviously not a very stringent test, but it seems that the new feature works absolutely fine. I tried running it multiple times with different training data and numbers of hidden neurons, and the objective plotted in the figure appears to always decrease, as it should, since we resume the optimisation from the best candidate obtained in the previous run.

I will be using the feature in the near future for more serious purposes.

Thanks for implementing this feature!

robertfeldt commented 3 years ago

Thanks a lot @ngiann. It seems to work as expected on my end. (I get conda problems when trying to load PyPlot after the first install and run, which was successful.)

Ps. Not the focus here, but have you tried one of the adaptive DEs instead? Might be useful on these types of tasks where you'd expect non-linearities, many interacting parameters, etc.

ngiann commented 3 years ago

generating_set_search is my favourite optimiser and I always try it out first!

I just tried adaptive_de_rand_1_bin_radiuslimited and this also looks fine, as the objective monotonically decreases in multiple runs of the script.

I tried xnes, but in this case the fitness does not monotonically decrease. I guess this relates to what you pointed out in your comment above about population-based optimisers. After noticing this, I ran the script again, this time allowing xnes to run for longer (10 secs), to see if the fitness would monotonically decrease, but it didn't help. Maybe this needs to be tested more thoroughly, or?

I also tried simultaneous_perturbation_stochastic_approximation (I have hardly ever used this algorithm in the past). Here, however, an error is thrown:

ERROR: AssertionError: target[17]=-5.245273066878641 is out of [-5.0, 5.0]

Let me know if I should try out more or perhaps all of the optimisers. I would be happy to do more testing.

Kaltxi commented 2 years ago

Hello, @robertfeldt! The initial guess feature fails for me at the moment, both with my own case and with the test written by @ngiann. It throws the following error:

LoadError: MethodError: no method matching bboptimize(::var"#obj#7"{Vector{Float64}, Vector{Float64}}, ::Vector{Float64}; Method=:generating_set_search, SearchRange=(-5.0, 5.0), NumDimensions=16, MaxTime=2.0)
Closest candidates are:
  bboptimize(::Any) at C:\Users\Eugene\.julia\packages\BlackBoxOptim\iWqGG\src\bboptimize.jl:70 got unsupported keyword arguments "Method", "SearchRange", "NumDimensions", "MaxTime"
  bboptimize(::Any, ::AbstractDict{Symbol, Any}; kwargs...) at C:\Users\Eugene\.julia\packages\BlackBoxOptim\iWqGG\src\bboptimize.jl:70
Stacktrace:
 [1] test_nn(x::Vector{Float64}, y::Vector{Float64}; H::Int64)
   @ Main d:\Dev\test.jl:103
 [2] test_nn(x::Vector{Float64}, y::Vector{Float64})
   @ Main d:\Dev\test.jl:38
 [3] top-level scope
   @ d:\Dev\test.jl:152
in expression starting at d:\Dev\test.jl:152

Any idea why that is?

disadone commented 2 years ago

@robertfeldt Same problem here. I just ran the example in the README.

res = bboptimize(rosenbrock2d, x0; SearchRange = (-5.0, 5.0), NumDimensions = 2, MaxTime = 0.1)
ERROR: MethodError: no method matching bboptimize(::typeof(rosenbrock2d), ::Vector{Float64}; SearchRange=(-5.0, 5.0), NumDimensions=2, MaxTime=0.1)
Closest candidates are:
  bboptimize(::Any) at /home/flumer/.julia/packages/BlackBoxOptim/iWqGG/src/bboptimize.jl:70 got unsupported keyword arguments "SearchRange", "NumDimensions", "MaxTime"

robertfeldt commented 2 years ago

Note that this is only available on the master branch so far. Please try again and ensure you are running on the master branch and not on the latest tagged version. I plan to tag and release very soon though, so this may soon be moot.

robertfeldt commented 2 years ago

We now have a new tag/version, 0.6.1. I tried it and it works for me.

ngiann commented 2 years ago

Thank you so much for the update. I just reran my script from above.

I tried out the optimisers generating_set_search and adaptive_de_rand_1_bin_radiuslimited. Both seem to work correctly, in the sense that if I resume the optimisation with the best solution obtained from the previous run, the objective decreases monotonically.

However, this is not the case for xnes, but it may be (as I commented above) that, as a population-based optimiser, xnes "loses" the initial solution it has been handed due to random selection of individuals in the population (this is just an assumption; I don't know how xnes works exactly).

I also tried the optimiser simultaneous_perturbation_stochastic_approximation and just like before, I got an error stating:

ERROR: AssertionError: target[3]=-5.166238350948682 is out of [-5.0, 5.0]
Stacktrace:
  [1] apply!(eo::RandomBound{ContinuousRectSearchSpace}, target::Vector{Float64}, ref::Vector{Float64})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/genetic_operators/embedding/random_bound.jl:34
  [2] tell!(spsa::BlackBoxOptim.SimultaneousPerturbationSA2{RandomBound{ContinuousRectSearchSpace}}, rankedCandidates::Vector{BlackBoxOptim.Candidate{Float64}})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/simultaneous_perturbation_stochastic_approximation.jl:69
  [3] step!(ctrl::BlackBoxOptim.OptRunController{BlackBoxOptim.SimultaneousPerturbationSA2{RandomBound{ContinuousRectSearchSpace}}, BlackBoxOptim.ProblemEvaluator{Float64, Float64, TopListArchive{Float64, ScalarFitnessScheme{true}}, FunctionBasedProblem{var"#obj#33"{LinearAlgebra.Adjoint{Float64, Vector{Float64}}, LinearAlgebra.Adjoint{Float64, Vector{Float64}}}, ScalarFitnessScheme{true}, ContinuousRectSearchSpace, Nothing}}})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/opt_controller.jl:275
  [4] run!(ctrl::BlackBoxOptim.OptRunController{BlackBoxOptim.SimultaneousPerturbationSA2{RandomBound{ContinuousRectSearchSpace}}, BlackBoxOptim.ProblemEvaluator{Float64, Float64, TopListArchive{Float64, ScalarFitnessScheme{true}}, FunctionBasedProblem{var"#obj#33"{LinearAlgebra.Adjoint{Float64, Vector{Float64}}, LinearAlgebra.Adjoint{Float64, Vector{Float64}}}, ScalarFitnessScheme{true}, ContinuousRectSearchSpace, Nothing}}})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/opt_controller.jl:321
  [5] run!(oc::BlackBoxOptim.OptController{BlackBoxOptim.SimultaneousPerturbationSA2{RandomBound{ContinuousRectSearchSpace}}, FunctionBasedProblem{var"#obj#33"{LinearAlgebra.Adjoint{Float64, Vector{Float64}}, LinearAlgebra.Adjoint{Float64, Vector{Float64}}}, ScalarFitnessScheme{true}, ContinuousRectSearchSpace, Nothing}})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/opt_controller.jl:470
  [6] #bboptimize#122
    @ ~/.julia/packages/BlackBoxOptim/dvDGl/src/bboptimize.jl:71 [inlined]
  [7] bboptimize(optctrl::BlackBoxOptim.OptController{BlackBoxOptim.SimultaneousPerturbationSA2{RandomBound{ContinuousRectSearchSpace}}, FunctionBasedProblem{var"#obj#33"{LinearAlgebra.Adjoint{Float64, Vector{Float64}}, LinearAlgebra.Adjoint{Float64, Vector{Float64}}}, ScalarFitnessScheme{true}, ContinuousRectSearchSpace, Nothing}}, x0::Vector{Float64})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/bboptimize.jl:65
  [8] bboptimize(functionOrProblem::Function, x0::Vector{Float64}, parameters::Dict{Symbol, Any}; kwargs::Base.Iterators.Pairs{Symbol, Any, NTuple{4, Symbol}, NamedTuple{(:Method, :SearchRange, :NumDimensions, :MaxTime), Tuple{Symbol, Tuple{Float64, Float64}, Int64, Float64}}})
    @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/dvDGl/src/bboptimize.jl:76
  [9] test_nn(x::LinearAlgebra.Adjoint{Float64, Vector{Float64}}, y::LinearAlgebra.Adjoint{Float64, Vector{Float64}}; H::Int64)
    @ Main ~/tmp/mlp.jl:103
 [10] top-level scope
    @ REPL[16]:1

robertfeldt commented 2 years ago

Thanks @ngiann for checking this. Yes, xnes doesn't guarantee "elitism" and might "lose" good points due to random variation, so I think this behavior can happen for this "family" of algorithms.

The simultaneous_perturbation_stochastic_approximation method has never really given me any good results, so I think there can be many "bugs" in it; frankly, it hasn't seen much use. I don't recommend using it, so fixing this problem is not high on the prio list. But I'll note it and get to it in time. Sorry.

ngiann commented 2 years ago

Thanks @robertfeldt for confirming my suspicion about the workings of xnes. Concerning simultaneous_perturbation_stochastic_approximation, I had no particular reason for trying it out, other than the fact that I had done so previously. Maybe, as you say, it is a good idea to put this method "on ice" for now and revisit it when priorities permit.

In the next few days, I will extend the script I posted above to test all available optimisers on this particular task, so that I can empirically verify that the fitness does indeed decrease monotonically between successive optimisation runs, with each run initialised with the best solution of the previous one.

robertfeldt commented 2 years ago

Thanks, sounds great @ngiann. I'll close this issue for now. If you finish the script and find issues, please open a new issue so we can weed out any problems. The scripts might also be added to the test suite at some point.