Training the MLLM backend can be slow: most of the time is spent generating candidates from the training documents. This step could likely be sped up with parallel processing; it's noted as a TODO item in the code: https://github.com/NatLibFi/Annif/blob/master/annif/backend/mllm.py#L23
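As a rough illustration of the idea, candidate generation is an embarrassingly parallel loop over documents, so it could be fanned out across worker processes with Python's standard `concurrent.futures`. This is only a sketch: `generate_candidates` below is a hypothetical stand-in, not Annif's actual candidate-generation function.

```python
from concurrent.futures import ProcessPoolExecutor

def generate_candidates(doc_text):
    # Hypothetical stand-in for per-document candidate generation;
    # the real MLLM step matches vocabulary terms against the text.
    return sorted(set(doc_text.lower().split()))

def generate_all_candidates(documents, max_workers=2):
    # Each document is independent, so a process pool can generate
    # candidates for several documents at once. A process pool (rather
    # than threads) matters here because the work is CPU-bound.
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(generate_candidates, documents))

if __name__ == "__main__":
    docs = ["Cats and dogs", "Dogs chase cats"]
    print(generate_all_candidates(docs))
```

The order of results from `executor.map` matches the input order, so downstream training code that pairs candidates with their source documents would not need to change.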