Closed: dominikheinz closed this issue 5 years ago.
Hi!
You can select Mimic2 on home.mycroft.ai by clicking on your name in the top right corner, selecting the "Settings" option from the drop-down, and on the Settings page selecting "American Male", then finally saving the changes.
Mycroft will sync within a minute and the Mimic2 voice should then be used.
Also, if your internet connection is a bit patchy, Mycroft will fall back to Mimic1 on the device.
Hi @dominikheinz, were these answers to your satisfaction? Can we close the issue, or are there still things that aren't quite clear?
Closing this now. If you still have issues setting it up feel free to reopen this issue.
I can find no documentation anywhere on how to enable Mimic2 as the TTS for Mycroft. The suggested method of going to home.mycroft.ai and selecting "Settings" doesn't even exist anymore; there is no "Settings" entry in that drop-down. Where can I find instructions for configuring Mimic2 as my TTS?
Hi @tyleha, we did a big update to Home earlier in the year. You can now set the voice to be used for each device: https://home.mycroft.ai/devices Select "American Male" for Mimic2.
Thanks for the response @krisgesling! I did find those per-device configurations, but there's no indication of what is Mimic2 and what isn't. So a few clarification questions, starting with: can this be set via mycroft.conf? I can find none of this info in the documentation, and once provided with the answers I would be more than happy to submit a PR to add to the docs. As a new user, it's unclear from the documentation whether each voice has both a local "robotic" variant and a superior cloud-based one, or whether instead certain voices are just robotic (because they are Mimic1) and certain voices are smoother and thus use Mimic2 via the cloud. Sounds like the latter? But I'm only piecing this together from GitHub tickets and community forum complaints.
Hi there,
No, the documentation on this isn't great. You can also set the voice in mycroft.conf, though since editing that file accounts for a very high share of our support requests, purely because it's so easy to have syntax errors in JSON, we generally point people to Home instead. Is there a benefit to configuring it locally?

Would love any suggestions you have to improve the documentation. I'm actually working on a major overhaul of the docs right now. Super top secret: http://mycroft-ai.gitbook.io :smile:
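Since broken JSON is the failure mode being described, a quick local sanity check on mycroft.conf before restarting Mycroft can catch it early. A minimal sketch in plain Python (the `check_conf` helper is hypothetical, not part of Mycroft):

```python
import json

def check_conf(text):
    """Return None if the config text parses as JSON, otherwise the error message."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as err:
        return str(err)

# A trailing comma is an easy mistake that makes the whole file unparseable:
print(check_conf('{"tts": {"module": "mimic2"},}'))
```

Run this against the contents of your mycroft.conf; a `None` result means the file is at least syntactically valid JSON.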
I appreciate the response. Let me try and figure out where this could live in the docs and push a PR. I'm loving Mycroft, but I'm having to kinda piece together much of the lower-level inner workings as I go. Quite a bit of trial and error in config. The gitbook looks like an awesome next step!
Is there a benefit to configuring it locally?
Since I'm a developer by training, I greatly appreciate the UNIX-y standard of having all your configuration in a flat, human-readable file somewhere. Whether it's ini, json or whatever, it makes troubleshooting and bootstrapping so much easier, as I can wrap the flat file with a useful script or service manager, or just simply get a quick and dirty look at all my options from the command line. While a web gui is certainly slicker, as a dev I find the many layers of abstraction between the web client and Mycroft's actual local config to be totally opaque and hard to manage/enforce/automate. Case in point: this Mycroft community issue I'm tracking about the web gui not actually reflecting locally installed skills.
I appreciate that Mycroft is trying to reach a wider and less technical audience with the Mark I/II, thus the web gui, but having a single, local, editable .conf file as the source of truth is just so powerful and extensible that it's what I have come to expect from open source projects. The web interface could just write to that file. Actually...I think it does? It's not clear from the docs, and multiple times now I've had my ~/.mycroft/mycroft.conf file overwritten in mysterious ways, again making automation/extensibility much harder.
There's a config tool coming soon that may help.
You could run Mimic2 offline if you build your own voice. I'd recommend you play with Tacotron 1, and you can use the mimic2 settings to connect to the demo server. There's a pre-built voice based on the LJ Speech dataset you can use as well. Without a GPU it will have higher latency than Mimic (1), but the voice quality is much better.
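For anyone following along, the "mimic2 settings" referred to here live in the tts section of mycroft.conf. A minimal sketch of a user override, assuming the `tts.module` key layout used by mycroft-core at the time (verify the exact keys against your installed version):

```json
{
  "tts": {
    "module": "mimic2"
  }
}
```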
@tyleha, to explain some more about the config system, we use a "stack" of config files (the ones further down in the list override the ones above):
The only case I know of where the user config is actually overwritten is if it's invalid JSON and gets parsed as an empty config.
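That layering can be pictured as a recursive dictionary merge in which each layer overrides the one beneath it without discarding unrelated keys. A toy illustration (not Mycroft's actual implementation; the layer contents here are hypothetical):

```python
def merge(base, override):
    """Recursively merge override into base; keys from override win."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical layers: packaged defaults, then the user's ~/.mycroft/mycroft.conf
default_conf = {"tts": {"module": "mimic", "mimic": {"voice": "ap"}}}
user_conf = {"tts": {"module": "mimic2"}}

config = merge(default_conf, user_conf)
# The user layer switches the TTS module, while defaults it didn't touch survive.
```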
I've just added some docs for the new config manager, which will be available as soon as your device updates to 19.08 (that just got released). This includes a brief outline of the config stack that forslund just mentioned, and as it validates everything on save it should prevent the parsing errors that cause the file to get overwritten.
Hello,
I am looking for an explanation of how one can enable/integrate Mimic2 into Mycroft. The latest Mycroft version seems to use Mimic1 by default, which sounds pretty awkward. The sound samples I've heard from Mimic2 sounded promising. How do I properly integrate/enable Mimic2 in Mycroft?