Closed rinigus closed 6 years ago
As you warned me before, I don't expect fast review.
Sorry about the delay, just an update: I'm still (slowly) working on the tileserver and now that we have a release date for Sailfish X I want to focus on that. So, it's likely to be at least a couple weeks before I can review this. But, at least there should be no issue of keeping your branch in sync as I won't be working on the codebase.
We all have our priorities, so it's understandable. Would you mind if I put out an RPM built from this branch on OBS and link it in the TMO post? That should not interfere with normal packaging, but it would allow others to use and test it. Or would it be too confusing?
Would you mind if I put out an RPM built from this branch on OBS and link it in the TMO post?
That's fine, go ahead. Please phrase it as "early-feedback preview" or something to make clear that it's not a permanent fork and that the work will be merged in time.
The only possible issue I'm aware of: if you use OBS, check whether it adds a vendor field to the RPM and whether that's a problem. I don't really know how that works myself.
Thank you, I'll set the vendor to no vendor, to keep upgrades simple. Thanks for the reminder.
I added some review comments. I didn't get to the big stuff yet, so I'll return to that. Overall it looks good in the sense that I don't see any missing features, but I'll want to do some polish and simplification for better maintainability.
@otsaloma, I haven't received any comments, unfortunately. Maybe they were not "submitted" (if that's needed at GitHub; I don't remember the specifics).
I am expecting that you will change/simplify/adjust the code. You did that with the rerouting code and it ended up better.
I haven't received any comments, unfortunately. Maybe they were not "submitted" (if that's needed at GitHub; I don't remember the specifics).
Yes, sorry, I didn't submit them. I'm not really familiar with the review functionality, as I've only dealt with smaller PRs before. And now it seems maybe I should have submitted the comments individually?
I am expecting that you will change/simplify/adjust the code. You did that with the rerouting code and it ended up better.
Yes, I will do a large part of the adjusting, but this is such a big PR that I want to lighten the load by asking you to do a first pass on things that are simple and clear enough for me to briefly explain.
Excellent! I have the review now and it's actually good to get it all in one email. I'll work on your comments and will do my best to get through them tomorrow. It's a large PR, indeed, as I had to adjust many parts to fit everything in.
I have looked through and responded to the review. Three suggestions (i18n.py and util.py) are left open, waiting for a reply.
I looked through narrative.py and voice.py now as well. I might have some questions later, but no requests on those. I'll start making commits once these smaller things are out of the way.
By my count, I should have replied to all the requests (some of them you decided to take over). Please let me know if something is still missing.
Thank you very much!
After testing this pull request a lot, I discovered that the voice volume is very low. Is this by design or a bug? I hope this gets fixed before merging, because without a fix it's nearly unusable while driving: my engine is louder than the speech :( My setup: Xperia X, language: German
Hi,
the speech volume depends on the voice engine and your audio setup. As such, it's not the 'fault' of Poor Maps. German requires the use of picoTTS, which may have quieter voices than mimic, for example. In my car, playing via Bluetooth was significantly quieter than the radio. For that to work, I had to increase the volume on the phone and in the audio system.
When testing, would you mind trying
did it help?
@otsaloma, speaking of this PR: I hope that you are not waiting for anything on my side. As far as I understood, all changes have been reviewed and approved for the time being. I wonder whether you have any idea when you may return to this PR?
cheers!
I've just had some other stuff to do. I have probably already said a couple of times that I'd get to this next weekend, so maybe it's better I don't give any estimates, but this is still at the front of the queue of stuff to do in Poor Maps.
Thank you very much for reply :) . Looking forward to it.
According to m4rtink, SFOS for the Sony X has in general "Low speakerphone volume & bad echo cancellation on Sailfish X". This topic will be discussed today at the SailfishOS open source collaboration meeting. So, the low volume is probably hardware adaptation related, not specific to this PR.
It seems that the issue with X was not regarding general audio, just phone calls: https://together.jolla.com/question/169291/xperia-x-speakerphone-volume-too-low/
So, please test with the other voice engine
I have finally started integrating this PR. First question: is there some relevant reason for removing older generated WAV files (the obscurely named VoiceDirection.set_time etc. in voice.py)? Those WAV files are tiny, aren't they? Isn't it enough to just remove them in one go when quitting the app or changing the TTS engine?
Great news - thank you!
The voice files are stored in a temporary directory under /tmp. For the SFOS default setup, that means in RAM via tmpfs. Writing WAV files into /tmp was done to avoid writing to storage many times and stressing it needlessly, and for faster read/write. Depending on the selected voice, the size can vary. Running it just now, I could see the largest file was 420 KB in size; I guess they can get even larger.
By pruning them as we go, I avoid putting too much stress on RAM, and avoid the lack of maintenance turning into one long cleanup operation later. I presume it may make a difference for very long roads with lots of maneuvers.
PS: The name VoiceDirection.set_time comes from the consideration that it's only VoiceDirection's business what it does when the time along the road moves forward. The fact that it's used for cached file maintenance shouldn't interest the caller. However, maybe this time it's overdoing it, since we are calling it during idle periods anyway.
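For illustration, a minimal sketch of this kind of cache maintenance, assuming WAV files kept in a temporary directory (which is tmpfs-backed on the SFOS default setup); the function and parameter names here are made up, not the PR's actual API:

```python
import os
import tempfile

def prune_wav_cache(directory, keep=10):
    """Remove all but the `keep` most recently modified WAV files,
    so the tmpfs-backed cache never grows unbounded."""
    wavs = [os.path.join(directory, f)
            for f in os.listdir(directory)
            if f.endswith(".wav")]
    # Newest first by modification time; drop the tail.
    wavs.sort(key=os.path.getmtime, reverse=True)
    for path in wavs[keep:]:
        os.remove(path)

# The cache lives in a per-session temporary directory.
cache_dir = tempfile.mkdtemp(prefix="poor-maps-voice-")
```

Pruning incrementally like this spreads the cleanup cost over idle periods instead of one long operation at quit.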
Didn't know /tmp is in RAM; makes sense now.
Could you explain the reasoning behind the pre-maneuver prompt times? Why do they vary? Why is the prompt given twice if there's plenty of time – is that common practice in navigators? (I'm not really familiar with navigators or voice navigation.)
Now that I have thought a bit more, I understand the double prompt after a long drive, but still: why 20 vs. 30 vs. 35 seconds? And another question: the minimum leg duration of 60 seconds, below which prompts are ignored, seems a bit high. Is that to leave room for a previous post-maneuver prompt?
In general, I am sure that the timings can be tuned further. Sometimes they seem to be on the short-notice side. It's all about the psychology of when to notify, when to assume that the driver needs a double notification (as when we have been driving a longer stretch without maneuvers), and when to keep it from being too frequent. But I am sure that if we manage to collect feedback from others, it's possible to tune it decently well.
20/30/35 was adjusted as a compromise. However, it's possible that I overdid it, since I tuned the timings at least twice: once when tuning by distance and later (the current implementation) by time. 20 seems to be on the short side, but at least it gives a prompt. The 60-second cutoff was there to not be too intrusive, but maybe it should be reduced to 45 seconds or so. You need the space for the post-maneuver prompt as well, indeed.
One remaining issue is, for example, when you are on a roundabout: it keeps giving post-maneuver prompts that are interrupted by the next maneuver. But I actually consider that an issue with Valhalla; it's a case where maybe it shouldn't give a post-maneuver prompt at all. I would suggest addressing it a bit later.
That overlap handling somewhat bothers me. Not just roundabouts; I also expect one-block legs in a city. I find the approach of maneuver-relative times and separate attributes for the different prompts difficult in that sense. I'm tempted to rewrite the different prompts of all maneuvers into a single list, something like
[VoicePrompt(dist=100, time=60, text="In 200 meters, turn left",
             priority=2, voice_generated=False, passed=False),
 VoicePrompt(dist=350, time=70, text="Turn left",
             priority=3, voice_generated=False, passed=False),
 VoicePrompt(dist=450, time=80, text="Continue for 1 kilometer",
             priority=1, voice_generated=False, passed=False),
 ...]
After initial population, overlaps would be removed by comparing times and keeping the prompt with the highest priority. Generation and playback would then simply be based on the distance to the destination (or from the origin), which is already calculated by the existing functions.
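A rough sketch of that overlap removal, assuming a minimal gap in seconds between prompts (the class layout, field set, and threshold here are illustrative, not a final implementation):

```python
from dataclasses import dataclass

@dataclass
class VoicePrompt:
    dist: float          # meters from origin
    time: float          # seconds from origin
    text: str
    priority: int        # higher priority wins on overlap
    voice_generated: bool = False
    passed: bool = False

def remove_overlaps(prompts, min_gap=5.0):
    """Drop lower-priority prompts that fall within `min_gap`
    seconds of an already-kept prompt; return the rest in time order."""
    result = []
    for p in sorted(prompts, key=lambda p: (p.time, -p.priority)):
        if result and p.time - result[-1].time < min_gap:
            # Overlap: keep whichever of the two has higher priority.
            if p.priority > result[-1].priority:
                result[-1] = p
            continue
        result.append(p)
    return result
```

So a "Turn left" prompt (priority 3) landing within the gap of an alert (priority 2) would displace the alert rather than interrupt it mid-speech.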
Do you see any obvious problem here? If not, I'll do it now rather than later.
As for progress, the rest of the package code is done; the routers and QML are left, but they look fine, there's not much to do there. I haven't pushed any commits yet, since some partial changes leave things broken; I'll push the code before I begin testing.
It's OK that it bothers you – that keeps Poor Maps' standards high :)
Just a comment regarding the overlap effect before the other points. As a result of an overlap, at present, the voice prompt is stopped in the middle and a new one is voiced. The two prompts never speak at the same time; one is just interrupted. Now, back to the main topic.
Indeed, having a global list would allow getting rid of the overlap. However, there are points to think about:

- I would suggest targeting playback to the maneuver by time, not distance. It's a much better experience when the prompts come as determined by time (global time on the route?). Technically it can still be done via global distance, since we know the average speed of the leg, but see below.
- Ideally, we should take into account the current speed and not the average leg speed. This would require additional data to be provided to the calling routine, but maybe it's worth it.
- We should avoid synthesizing all voice prompts in advance. Long roads could have plenty of them, and there is a bigger chance of rerouting. Synthesis would take significant CPU resources, which could be wasted by rerouting anyway. As a result, I would suggest keeping a global "window" of synthesized prompts, similar to what's done now, except that the list would be global.
- To properly check for overlaps, we would need to know the duration of the prompt. An alternative is a solution that works most of the time: just demand a minimal amount of time between prompts. That alternative would allow generating the list for the whole road and dropping prompts by priority, as you suggested.

In sum, I think it's the right direction and will work. Just a few points to consider.
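A sketch of the global "window" of synthesized prompts suggested above, keeping synthesis just ahead of the driver instead of covering the whole route up front (the names and the window length are assumptions for illustration):

```python
from collections import namedtuple

# Minimal stand-in for a prompt entry in the global list.
Prompt = namedtuple("Prompt", "time text voice_generated passed")

def prompts_to_synthesize(prompts, position_time, window=90.0):
    """Return the not-yet-generated, not-yet-passed prompts that fall
    within the next `window` seconds of route time; only these need
    to be fed to the TTS engine now."""
    return [p for p in prompts
            if not p.voice_generated
            and not p.passed
            and position_time <= p.time <= position_time + window]
```

On rerouting, the global list is rebuilt and only the new window is synthesized, so little CPU work is wasted.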
Ideally, we should take into account the current speed and not the average leg speed.
I have thought about this too, but I figured that the current or past speed is not necessarily a good predictor of the speed on the rest of the leg, especially in a city with traffic lights, varying traffic conditions, etc. So, I think I'll skip that.
To properly check for overlaps, we would need to know the duration of the prompt.
I expect that using an average characters-per-second value with a bit of extra added would work well enough, but we'll see.
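A minimal sketch of such an estimate; the rate constant and padding here are guesses that would need tuning against real TTS output:

```python
def estimate_duration(text, chars_per_second=14.0, padding=0.5):
    """Rough spoken duration of a prompt in seconds: an assumed
    average speaking rate plus a little slack for pauses."""
    return len(text) / chars_per_second + padding
```

The result only needs to be good enough to decide whether two adjacent prompts would overlap.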
Re speed: yes, let's keep it as is and see if we need something more precise later.
Re chars: it could work quite well as a first approximation. Let's see it in action. Also, maybe just requiring a minimal time between prompts would work almost as well.
What about the case where only prompts with the same priority are left (in the case of frequent maneuvers)? Maybe, if the priority is high, we should make an exception, keep them all, and hope for the best. This would keep the driver alert, instead of the navigation assistant staying silent and hoping that you can constantly glance at the screen.
Yes, I agree, the highest-priority prompts can all be kept.
Took a while, but I've now done the adaptation – mainly revised verbal prompts and overlap removal, using standard locales and fallbacks without hard-coded mappings, and also small fixes and improvements, such as saving maneuvers between sessions, rerouting voice prompts, etc. Hope I didn't break anything. The commits are not very atomic, sorry about that; it's a result of file-by-file work and fixing all errors found in testing in one go.
I've done some desktop testing with the fake position source, and it seems to work OK. I'll do more testing, but since I don't have a car, it'll be just desktop testing, so actual field testing would be appreciated to see if the parameters need tweaking. The most relevant parameters are:
And, @rinigus, do you actually use the voice navigation? When testing with Mimic, I found the pronunciation of Finnish road names to be horribly disturbing; at times it even seems to fall back on just enumerating the letters. It can't be much better in Estonian, or is it?
Thank you very much! It's a huge change and I will read (and learn from) the new code with pleasure. I hope you were not too frustrated with the original PR ...
As for use: yes, I use it every time I navigate. A bit more in the summer than now, but still every time I need to go outside the routine places. When you drive, it's of great help, even with these street-name issues. I think picoTTS was doing a better job with the streets, but mimic has much better voice quality, so I made it the default. In addition, mimic/flite are developed in proper open source fashion, while picoTTS is a no-longer-developed source code dump from Android.
The state of TTS is far from ideal on Linux (and SFOS, as a result). However, by making apps use it, we'll at least expose the problems, and maybe someone (or we) will get interested in fixing them.
The original PR was developed to have the ability to plug in online TTS engines, if we wish. I'll have to see if this is still the case (whether we can pre-synthesize well in advance).
I am sad to see English Pirate go. I think it was very appropriate for Sailfish OS with all its "sailors". It's quite well developed by Valhalla and adds some personality to the navigation software. I wonder if it's possible to restore it in Poor Maps?
I'll test the new version today and will report back.
From testing: I think it works well. I tested it a few times today (had to drive around a bit) and was prompted to make maneuvers in time. When I could, I checked the distance displayed in Poor Maps and compared it with the prompt text.
A few times, it seemed that the prompt was given 100 meters early; for example, at 400 meters I was told that there were 300 to go. I'll read the code and look at the parameters you listed earlier.
A couple of times, the post-maneuver text was interrupted by the next one. However, I didn't notice it on roundabouts, which was an issue earlier. More testing will be needed on that, though. (Please see the note below.)
One question, before I forget: do we still have the ability to have multiple prompts before a maneuver, if the leg is really long?
Note that while I drive, I don't need a perfect solution, just one that helps. These prompts are very handy, and I don't mind if one is interrupted by the next, more relevant prompt. It's all about helping the driver keep their eyes on the road, not the map, when possible. Right now, it already works well with prompts and occasional glancing at the map. :)
Off to read the new code...
Do you think it would make sense to add transformations for non-English letters? Using an English TTS voice, the result for Finnish street names sounds much better, at least with Mimic and Flite, if I replace "ä" with "ae" etc. – try e.g. "Jämeräntaival" vs. "Jaemeraentaival".
I am sad to see English Pirate go.
I understand, but I don't want to include something that hardly anyone uses more than once, if it takes up space on a small screen and needs special casing in code.
One question, before I forget: do we still have the ability to have multiple prompts before a maneuver, if the leg is really long?
Yes: https://github.com/rinigus/poor-maps/blob/voice/poor/narrative.py#L533
Do you think it would make sense to add transformations for non-English letters? Using an English TTS voice, the result for Finnish street names sounds much better, at least with Mimic and Flite, if I replace "ä" with "ae" etc. – try e.g. "Jämeräntaival" vs. "Jaemeraentaival".
I would prefer to avoid it. In reality, it's a TTS issue that has to be solved properly in the mimic–Valhalla interaction. Just imagine transliterating languages we are not aware of; I have no clue how it would work with Russian or Chinese, for example. I would prefer to feed it what we get from Valhalla and hope for the best.
Proper solutions on Linux are tricky: we either need someone to sell us a good TTS, use some online service, or maybe get the Android TTS layer exposed in SFOS. I had written in a thread at TJC regarding the state of TTS, but not much happened there. Maybe when navigation with spoken instructions shows up, things will become more interesting for everyone.
I understand, but I don't want to include something that hardly anyone uses more than once, if it takes up space on a small screen and needs special casing in code.
I am surely biased, but I actually used it more than regular English. It's somewhat amusing and surely brings you out of the routine. No idea how much others were using it, though. [And here I went against my own policy of not fixing TTS by spelling out a few words. But that was because pirate-spoken TTS is way too far from SFOS.] I do understand your reasoning as well.
Have to stop here and get some sleep. I'll continue tomorrow.
I would prefer to avoid it. In reality, it's a TTS issue that has to be solved properly in the mimic–Valhalla interaction. Just imagine transliterating languages we are not aware of; I have no clue how it would work with Russian or Chinese, for example.
Yes, I figured there might be generality issues, and Wikipedia seems to agree, though Russian and Chinese surely don't use "ä"!
I'm still tempted to special-case Finnish, as there are a lot of users here, no TTS engine, and those characters seem to trigger letter enumeration, which is quite bad, e.g. the "taival" part of "Jämeräntaival". I'll give it some more thought.
locale = poor.util.get_default_locale()
if locale.startswith("fi") and self.language.startswith("en"):
    self.text = self.text.replace("ä", "ae")
    self.text = self.text.replace("ö", "oe")
    self.text = self.text.replace("å", "aa")
Edit 1: Default locale does not of course necessarily match the language of the street names, so it's far from perfect.
Edit 2: Or maybe I should just file a bug report against Mimic on this?
This is what I mean:
If we think it through, it quickly becomes unrealistic, and maybe some hacky way is OK. Ideally, Valhalla should convey the language info in the prompt, something like <en>Turn right into</en><fi>Jämeräntaival</fi><en>street</en>. This is probably doable if we file an issue with Valhalla and work on/test it. The street name language should be known to Valhalla, and it should be possible to insert it into the resulting string, if requested. There is a standard that allows doing this properly, see https://www.w3.org/TR/speech-synthesis11/#AppB
Our multilingual TTS engine should be able to parse such a string and generate the WAV file. And here it quickly becomes a pipe dream. We have only one TTS language of decent quality (maybe a few more if we count picoTTS), far from covering the possible requirements. I think that unless we get an engine able to process multilingual input, there is no big point in bugging Valhalla. However, it looks like mimic is expanding to support more languages; see the repos available under https://github.com/MycroftAI?utf8=%E2%9C%93&q=mimic&type=&language= . I will have to look into it at some point and see whether I can release an updated version.
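The SSML standard linked above supports marking the language of individual fragments. A rough Python sketch of building such a string (the `<lang>` element and `xml:lang` attribute come from the SSML 1.1 specification; the helper name and input format are made up):

```python
from xml.sax.saxutils import escape

def ssml_prompt(fragments, default_lang="en"):
    """Build an SSML string from (text, language) fragments, e.g.
    [("Turn right into", "en"), ("Jämeräntaival", "fi")], so a
    multilingual TTS engine could switch voices per fragment."""
    parts = []
    for text, lang in fragments:
        text = escape(text)
        if lang == default_lang:
            parts.append(text)
        else:
            parts.append('<lang xml:lang="{}">{}</lang>'.format(lang, text))
    return '<speak version="1.1" xml:lang="{}">{}</speak>'.format(
        default_lang, " ".join(parts))
```

Of course, this only pays off once both Valhalla provides the per-fragment language and a TTS engine can consume it.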
Maybe, for now, we can just replace ä, ö, and a few other characters for any language (Estonian would benefit from it too) and see later how to progress from there.
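A minimal sketch of such a language-independent replacement table, using plain string translation (the character set here is a guess and would need extending per language):

```python
# Hypothetical replacement table for characters that trip up
# English-only TTS voices; extend per language as needed.
TRANSLITERATIONS = str.maketrans({
    "ä": "ae", "Ä": "Ae",
    "ö": "oe", "Ö": "Oe",
    "å": "aa", "Å": "Aa",
    "ü": "ue", "Ü": "Ue",
})

def transliterate(text):
    """Replace non-English letters before feeding text to the TTS."""
    return text.translate(TRANSLITERATIONS)
```

Applied unconditionally, this avoids guessing the street-name language from the default locale.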
From reading the code: it looks nice and tidy, as expected, thank you very much! The reorganization and rewriting surely helped. I enjoyed reading it and I hope I didn't miss anything important.
Maybe we should cut the distance at which we stop giving voice directions a bit. Right now it's 200 meters; maybe 100 meters is OK, as for rerouting. Right now, when I change the route, I sometimes get an instruction from the old route. This would reduce the probability of such an event.
On my part, it's ready :)
Maybe, for now, we can just replace ä, ö and few other chars for any language (Estonian would benefit from it too) and see later how to progress from there.
OK, I'm really not looking for more than a small hack to work around the issue.
Maybe we should cut the distance at which we stop giving voice directions a bit. Right now it's 200 meters; maybe 100 meters is OK, as for rerouting.
Sure. Thanks for testing, I'll make these remaining changes soon, then merge and do a couple other fixes before making a release.
This PR adds voice navigation using an installed external TTS program. At present, mimic, flite, picotts, and espeak are supported. A description of this PR has been given in https://github.com/otsaloma/poor-maps/issues/30. However, for completeness, I am repeating it below. Compared to the earlier description, support for OSRM and MapQuest Open was added.
Background
With voice commands, we have several restrictions. First, voice commands are not available in all languages, and the user will probably often want to specify the voice command language. Second, voice synthesis availability is also rather poor; there are several options, more about that below. Third, when using better-quality voices, synthesis may take some time. The developed code handles these limitations as well as I could manage.
Text to speech
After some searching, I think we have three packages providing TTS on SFOS: mimic (based on flite), picotts, and espeak. Of these, mimic supports English only, picotts has a few other languages (de, es, fr, and it), and espeak has more. The quality of espeak, though, is rather poor. All packages are available on OpenRepos (mimic and picotts I have uploaded myself from OBS).
To handle these options, I made a class VoiceEngineBase that is used as a base class by the voice engines: VoiceEngineMimic, VoiceEngineFlite (supports flite if you prefer it), VoiceEnginePicoTTS, and VoiceEngineEspeak. These voice engines are used by VoiceCommand, which handles the voice prompts.
Voice prompts
Voice prompts are given through Narrative and its Maneuver. Audio is handled through QML Audio by giving it the audio file. Compared to the earlier Maneuver, we now have to store and play verbal instructions. In Valhalla, these are the alert (e.g. 200-300 meters before), the pre-maneuver prompt ("turn left now"), and the post-maneuver prompt ("continue for 1 km"). So, the routing engine (Valhalla and others) has to provide these data. If only a narrative is provided, it is used for both the alert and the pre-maneuver prompt.
Compared to the earlier Narrative, the proposed version has to define current_maneuver (to know when to play the post-maneuver prompt), interface with VoiceCommand, and remember which prompts have already been voiced (at present, the corresponding prompt is simply deleted from a copy of the current maneuver after it has been voiced). To keep track of the maneuvers, Narrative has begin and end methods, called at the start and end of navigation by the QML Map.
To support slower TTS (like mimic's ap voice), I made the voices be synthesized in advance, at present 3 maneuvers ahead. This should also allow us to extend to online synthesis, if we wish. To ensure that we don't miss a maneuver, I reduced the period of the narration timer to one second. At present, the preferred voice gender (when possible; picotts has only female voices) is specified in Preferences. Maybe we can move it to the new Navigation page.
The overall preference on whether to enable voice commands is stored in config.py as voice_commands, but it doesn't have a GUI, as you suggested.
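As a rough illustration of the engine layout described above, a base class plus per-engine subclasses that know how to call the external TTS program (the command-line flags below are assumptions based on common espeak/flite usage and may not match the actual implementation):

```python
import subprocess

class VoiceEngineBase:
    """Base class: subclasses only define how to build the command
    line for their external TTS program."""

    def make_wav(self, text, fname):
        # Run the external TTS program to synthesize `text` into
        # the WAV file `fname`.
        subprocess.run(self.command(text, fname), check=True)

    def command(self, text, fname):
        raise NotImplementedError

class VoiceEngineEspeak(VoiceEngineBase):
    def command(self, text, fname):
        # espeak writes speech to a WAV file with -w.
        return ["espeak", "-w", fname, text]

class VoiceEngineFlite(VoiceEngineBase):
    def command(self, text, fname):
        # flite takes the text with -t and the output file with -o.
        return ["flite", "-t", text, "-o", fname]
```

Keeping the engines down to a command-line template makes adding a new TTS program (or an online service writing to the same WAV file) straightforward.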