Closed · damianoporta closed this issue 4 years ago
Should I set `conv_depth` differently?
I am facing the same issue.
Hi @damianoporta, apologies for the late follow-up. We recently figured out what was happening, cf. https://github.com/explosion/spaCy/issues/4934#issuecomment-593389846. In short, it looks like the `conv_depth` setting was ignored and did not actually change the model, but the value was still stored in the output config file, which then caused an incompatibility on IO, and a crash. PR #5078 should at least prevent the crash.
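To make that failure mode concrete, here is a minimal, self-contained mock of "value stored in cfg but never applied to the model" (this is not spaCy code; all names, defaults, and file layouts here are invented purely for illustration):

```python
import json
import os
import tempfile

DEFAULT_CONV_DEPTH = 4  # invented default, standing in for the library's internal default


def build_model(conv_depth):
    # Stand-in for model construction: the "model" is just a list of layer names.
    return [f"conv{i}" for i in range(conv_depth)]


def train_and_save(path, requested_conv_depth):
    # The bug being described: the requested value is written to the cfg file ...
    cfg = {"conv_depth": requested_conv_depth}
    # ... but ignored when the model is actually built.
    model = build_model(DEFAULT_CONV_DEPTH)
    with open(os.path.join(path, "cfg"), "w") as f:
        json.dump(cfg, f)
    with open(os.path.join(path, "model"), "w") as f:
        json.dump(model, f)


def load(path):
    with open(os.path.join(path, "cfg")) as f:
        cfg = json.load(f)
    with open(os.path.join(path, "model")) as f:
        weights = json.load(f)
    # On load, the stored cfg drives reconstruction, so the shapes no longer match.
    expected = build_model(cfg["conv_depth"])
    if len(expected) != len(weights):
        raise ValueError(
            f"cfg says conv_depth={cfg['conv_depth']} "
            f"but the saved model has {len(weights)} conv layers"
        )


with tempfile.TemporaryDirectory() as d:
    train_and_save(d, requested_conv_depth=6)
    try:
        load(d)
    except ValueError as e:
        print("crash on load:", e)
```

The point of the sketch: saving succeeds, and the crash only appears later, at deserialization time, exactly as described above.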
We are currently working on spaCy 3.0, where we will have a much cleaner way to define these kinds of hyperparameters for the ML models.
Also have a look at this advice by Matt:
By the way, adding CNN layers is unlikely to be the most effective option. You'll be better off increasing `token_vector_width`, and possibly installing PyTorch and increasing `bilstm_depth`.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Hi, I am training a new NER model and I found a problem with `conv_depth`. I recently opened another, similar issue (https://github.com/explosion/spaCy/issues/4058), which has since been closed.
Basically, I have only added this:
as @ines said. When I try to load the model again, I get the following error:
In this case, the trick with `os.environ['conv_depth'] = '6'` does not solve the problem. I have also checked `ner/cfg`, and this is its content:
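For context on why setting an environment variable could override a hyperparameter at all: spaCy 2.x reads some of these settings through an `env_opt`-style helper. The following is my own simplified stdlib reconstruction of that mechanism, not spaCy's actual code, to show the pattern:

```python
import os


def env_opt(name, default=None):
    # Simplified reconstruction: if the variable is set, coerce the string
    # value to the type of the default; otherwise fall back to the default.
    if name not in os.environ:
        return default
    raw = os.environ[name]
    if isinstance(default, bool):  # check bool before int (bool is an int subclass)
        return raw.lower() in ("1", "true", "yes")
    if isinstance(default, int):
        return int(raw)
    if isinstance(default, float):
        return float(raw)
    return raw


# The workaround sets the variable before the model is built:
os.environ["conv_depth"] = "6"
print(env_opt("conv_depth", 4))  # reads 6 from the environment
```

Because the lookup happens at model-construction time, the variable must be set before the model is created; it cannot repair a model that was already saved with a mismatched cfg, which is why the trick does not help here.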
The model has been trained with the same code:
I did not pass the config to `begin_training()` with `component_cfg={"ner": {"conv_depth": 6}}`. I thought it was unnecessary because of the `ner` initialization:
ner = nlp.create_pipe("ner", config={"conv_depth": 6})
Do I also need it? Is there any workaround? Thanks
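For what it's worth, my understanding (an assumption about spaCy 2.x behaviour, not verified against its source) is that `create_pipe(config=...)` seeds the component's cfg and `begin_training(component_cfg=...)` merges per-component overrides on top of it. The merge pattern, with invented names and defaults, would look roughly like:

```python
def resolve_component_cfg(pipe_cfg, component_cfg, name):
    # Later sources win: defaults < create_pipe config < begin_training overrides.
    defaults = {"conv_depth": 4, "token_vector_width": 96}  # invented defaults
    merged = dict(defaults)
    merged.update(pipe_cfg)
    merged.update(component_cfg.get(name, {}))
    return merged


cfg = resolve_component_cfg(
    pipe_cfg={"conv_depth": 6},                # from create_pipe("ner", config=...)
    component_cfg={"ner": {"conv_depth": 6}},  # from begin_training(component_cfg=...)
    name="ner",
)
print(cfg["conv_depth"])  # 6
```

Under that reading, passing the same value in both places is redundant but harmless; the real problem described above is that the resolved value never reached the model at all.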
Your Environment