Without a way to declare dependencies between components or to manage their lifecycle/status, there are race conditions that can leave the assistant in a broken state.
Example: speaking before the mind has finished loading crashes the speech_recognition thread, because that thread inspects the current mind to create its decoders, and the mind has not been set yet.
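One way to remove this particular race is to gate the speech_recognition thread on an explicit readiness signal, so it blocks until the mind is set instead of inspecting it prematurely. The sketch below uses a threading.Event for that gate; the names (load_mind, current_mind, decoders) are illustrative placeholders, not the assistant's actual API.

```python
import threading

mind_ready = threading.Event()  # set once the mind is safe to inspect
current_mind = None
result = {}

def load_mind():
    # Placeholder for the real (slow) mind-loading step.
    global current_mind
    current_mind = {"decoders": ["keyword", "dictation"]}
    mind_ready.set()  # publish: the mind is now set

def speech_recognition_thread():
    # Block until the mind has been set, rather than crashing on None.
    mind_ready.wait()
    result["decoders"] = current_mind["decoders"]

listener = threading.Thread(target=speech_recognition_thread)
loader = threading.Thread(target=load_mind)
listener.start()   # starts first, but waits on the event
loader.start()
loader.join()
listener.join()
```

A fuller fix would generalize this into per-component status (e.g. LOADING/READY/FAILED) so any dependent thread can wait on, or react to, the components it needs.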