Closed NocturnalNick closed 2 years ago
You could try to get to dev mode, and then follow these directions on the Arch Wiki:
https://wiki.archlinux.org/index.php/PulseAudio/Examples#Set_default_input_sources
I think it will work :) Let me know if you have any difficulties!
Thanks for the quick response. I tried that and got the following:
pi@crankshaft:~ $ pacmd list-sources | grep -e device.string -e 'name:'
No PulseAudio daemon running, or not running as session daemon.
Which is interesting, it must start when the GUI starts? I tried starting it but get:
pi@crankshaft:~ $ pulseaudio
E: [pulseaudio] core-util.c: Failed to create secure directory (/home/pi/.config/pulse): No such file or directory
Ah sorry, I didn't tell you one thing I did to make Pulseaudio work.
Try this first:
mkdir /tmp/.config
pulseaudio -D
Then do as the wiki says.
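Putting the workaround and the wiki steps together, the dev-mode session might look like the sketch below. The source name passed to `set-default-source` is hypothetical; use whatever name `pacmd list-sources` actually prints on your system.

```shell
# Workaround for PulseAudio under Crankshaft dev mode (paths taken from the
# comments above). /home/pi/.config is a symlink into /tmp, which dev mode
# does not create, so create it first:
mkdir -p /tmp/.config

# Start the PulseAudio daemon in the background:
pulseaudio -D

# List available input sources, then pick one as the default.
# The source name below is a placeholder, not a real device on your Pi:
pacmd list-sources | grep -e device.string -e 'name:'
pacmd set-default-source "alsa_input.usb-XXXX.analog-mono"
```

These are one-off system-configuration commands against a running daemon, so run them interactively rather than from a script.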
There were some additional steps involved to get PulseAudio working because dev mode doesn't set up the temporary path that the normal startup does. /home/pi/.config is just a symlink to a .config directory inside /tmp that has to be created as needed. All that voodoo exists to keep the filesystem read-only, so the SD card doesn't get corrupted when the Pi is shut down suddenly.
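For anyone checking their own setup, the symlink arrangement described above can be inspected like this. The paths are assumed from the comment, not verified against a Crankshaft image.

```shell
# On a read-only Crankshaft rootfs, the home config dir should point into tmpfs:
ls -l /home/pi/.config
# expected (assumed): /home/pi/.config -> /tmp/.config
```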
Interesting. That all worked, by the way, though I've got an interesting aside: "Hey Google" detection is done by the phone, but everything after that is handled by the RPi microphone (or so it seems, from testing by holding various things over the ports and speaking quietly).
Any idea how (if at all) I can get the original detection to happen through the mic?
Edit: Calls use the phone's mic too, but the speaker works fine...
I have the exact same behavior: the phone detects the initial wake word, but subsequent voice is recorded on the mic.
@nickorooster
The Pi hardware doesn't understand the wake word unless detection is programmed into OpenAuto. That is theoretically possible, but it will take some effort to implement in OpenAuto.
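To make that concrete, on-device detection would mean OpenAuto reading audio frames from the Pi's mic and running a hotword engine over them before handing off to Android Auto. The toy sketch below only shows the loop structure: the energy-threshold "detector" is a placeholder for a real trained model, and none of these function names come from OpenAuto.

```python
# Toy sketch of an on-device wake-word loop. A real implementation would
# feed mic frames into a proper hotword engine; the energy threshold here
# is just a stand-in so the structure is runnable.

def frame_energy(frame):
    """Mean absolute amplitude of one audio frame (a list of samples)."""
    return sum(abs(s) for s in frame) / len(frame)

def detect_wake(frames, threshold=0.5, run_length=3):
    """Trigger when `run_length` consecutive frames exceed `threshold`.

    Returns the index of the frame that triggered, or None."""
    streak = 0
    for i, frame in enumerate(frames):
        streak = streak + 1 if frame_energy(frame) > threshold else 0
        if streak >= run_length:
            return i
    return None

if __name__ == "__main__":
    quiet = [[0.01, -0.02, 0.01]] * 5
    loud = [[0.9, -0.8, 0.7]] * 3
    print(detect_wake(quiet + loud + quiet))  # triggers at index 7
```

In a real port, the trigger would then tell the Android Auto session to start streaming the Pi mic, which is the part that currently only happens after the phone hears the wake word.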
This is one reference point to begin working on it.
Not sure if this is of any help, but I had an issue with my mic jack and the port on the USB sound card. With the jack inserted fully (you would assume correctly), the connection wasn't being made. I had to pull the jack out slightly, by a millimetre or two. This made the correct connection and the mic worked correctly. I'm putting it down to a cheap sound card.
I think I've isolated the problem a bit. On my phone, I have my TTS output set at 2x speed and I've noticed that the output is only faster when it is being generated on the phone i.e. offline. Similarly, I have noticed that I only get really bad static in the voice output when it is being rendered on the phone i.e. offline. Unfortunately, I don't know where to go from here to be of more help but maybe this is a starting place.
As far as I know, this is the default behaviour: the phone detects the wake word and the head unit takes over. We should close the issue for now.
This issue is stale because it has been open 120 days with no activity. Remove stale label or comment or this will be closed in 60 days.
This issue was closed because it has been stalled for 60 days with no activity.
What phone do you have? What OS version?: SM-G930F (Galaxy S7 with Android 7.0)
Did you try to enable and run autoapp under X11 dev mode? Follow this: https://github.com/htruong/crankshaft/wiki/Crankshaft-dev-mode Not yet, but will do soon (same issue occurs when I build openauto myself)
Please provide any further information that you might find helpful. Is there something I'm missing? Should I just build a dev-mode version, skip the standard one, and look in alsamixer?