DeNeutoy closed this 7 years ago
This looks great, thanks! Yes, I agree with adding two more functions, like you said. Also, can you add a test for this?
@matt-gardner I've added the ability for a user to specify a new `DeepQaModel` subclass when calling train/evaluate/load, to get around the problem of having to add the model to `concrete_models`. I think this is a good solution - if a user has built their own model, we should probably trust that they have done it correctly. What do you reckon?
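To make the idea concrete, here is a minimal sketch of how an optional `model_class` argument can bypass a fixed registry of built-in models. All names below (`train_model`, `concrete_models`, the model classes) are hypothetical stand-ins, not the actual deep_qa API:

```python
# Hypothetical sketch: letting a user-supplied DeepQaModel subclass
# take precedence over the concrete_models registry lookup.

class DeepQaModel:  # stand-in for the library's base model class
    def __init__(self, params):
        self.params = params

class BidirectionalAttentionModel(DeepQaModel):  # hypothetical built-in model
    pass

# Registry of built-in models, keyed by the model name in the param file.
concrete_models = {"bidaf": BidirectionalAttentionModel}

def train_model(params, model_class=None):
    """Instantiate and return a model; a user-supplied subclass of
    DeepQaModel takes precedence over the concrete_models lookup."""
    if model_class is None:
        model_class = concrete_models[params["model"]]
    if not issubclass(model_class, DeepQaModel):
        raise TypeError("model_class must be a subclass of DeepQaModel")
    return model_class(params)

# A user-defined model that was never registered in concrete_models:
class MyCustomModel(DeepQaModel):
    pass

model = train_model({"model": "custom"}, model_class=MyCustomModel)
print(type(model).__name__)  # -> MyCustomModel
```

The `issubclass` check gives a clear error early, which is about as much validation as we can do if we're trusting users to implement their subclass correctly.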
Also, you can see the generated docs here: https://750-47793051-gh.circle-artifacts.com/0/home/ubuntu/deep_qa/doc/_build/html/training/misc.html. It looks like `deep_qa/run.py` doesn't show up anywhere yet. We should add it somewhere in there. Where do you think it should go?
Just shifted the `run_model` function into the module, and you can now set random seeds in the parameter files. One thing that caught my eye - at the moment, if we aren't training a model, we just load it instead but don't actually do anything with it. Perhaps we should split this into 3 functions: `train`, which does training; `evaluate`, which loads a model from the file path and runs evaluation on a dataset; and `load`, which just loads and returns a model to the user. Thoughts?
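For discussion, the proposed three-way split might look roughly like this. This is only a sketch of the shape of the interface; the `Model` class and all function bodies here are hypothetical placeholders, not deep_qa code:

```python
# Hypothetical sketch of splitting run_model into train / evaluate / load.

class Model:  # placeholder for a trained-or-loadable model
    def __init__(self, path):
        self.path = path
        self.trained = False

def load(path):
    """Load a saved model and return it, so the caller decides what to do with it."""
    return Model(path)

def train(path):
    """Build a model from the parameter file at `path` and run training."""
    model = Model(path)
    model.trained = True  # placeholder for the actual training loop
    return model

def evaluate(path, dataset):
    """Load a saved model from `path` and run evaluation on `dataset`."""
    model = load(path)
    # Placeholder metric: fraction of "correct" examples in the dataset.
    correct = sum(1 for example in dataset if example)
    return correct / len(dataset)
```

The key point of the split is that `evaluate` composes `load` internally, while `load` alone gives users a model object without forcing either training or evaluation on them.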