thewierdnut opened 5 months ago
So we're using MFi actually? xD
When I first contacted Starkey asking for details on the protocol, their response was "Go buy an iMac, it works fine". I thought it was a stupid, arrogant answer, but rooting through Apple's MFi NDAs, it kind of makes sense now. Apple doesn't allow anybody to disclose any of that because they reeeaaally like their walled gardens.
Now I wonder if MFi has its own audio standard that is higher quality than ASHA's.
> Now I wonder if MFi has its own audio standard that is higher quality than ASHA's.
Supposedly it does, and it is AAC-based, but don't quote me on that; I might have misremembered.
> Now I wonder if MFi has its own audio standard that is higher quality than ASHA's.
I would guess they are using AAC of some sort. Maybe we could do a test with different profiles (AAC-LC, AAC-HE etc.) and see what plays?
Maybe this one:
AAC-ELD has been evaluated in several independent tests comparing it to codecs like G.718, G.719, G.722.1 Annex C (Siren 14), G.722.2 (AMR-WB), G.722, Silk, Speex and CELT. In these tests, AAC-ELD was found to deliver excellent quality at lower bit rates than any of its competitors.
https://www.iis.fraunhofer.de/en/ff/amm/communication/aaceld.html
AAC support would be fantastic, but it would take a lot of work to reverse-engineer the protocol. I don't have access to an iPhone, and I don't think they make it easy to capture Bluetooth traffic.
I've no idea how this works, but I was just thinking: maybe it's possible to just switch out the G.722 codec for AAC-ELD and see if it plays?
My device only advertises G.722 support. Dumping anything else into the stream just sounds like really loud static.
> My device only advertises G.722 support. Dumping anything else into the stream just sounds like really loud static.
The only things that support MFi connections for hearing devices are Apple's ARM SoCs, which is why it also now works with their newer M2+ chip laptops. So part of it might be implemented at the hardware level. I have no idea what I'm talking about, though.
> My device only advertises G.722 support. Dumping anything else into the stream just sounds like really loud static.
Is AAC-ELD readily available under Linux, though? If I remember correctly, you have to patch ffmpeg yourself or use a custom build to get it there; no idea about GStreamer.
If you can get a sample file, the asha_stream_test app just dumps whatever raw data you give it into the socket.
Part of the problem is that when you start audio streaming, you have to specify which protocol you are streaming with. ASHA only defines one value. I haven't tried giving it other values, but I suspect my hearing aids will laugh at me.
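For reference, this is roughly what that start packet looks like, going by my reading of the published ASHA spec (a sketch from memory rather than the project's actual code, so double-check the field order and values before trusting it):

```cpp
// Sketch of the ASHA AudioControlPoint "Start" packet, based on my reading of the
// published spec (not this project's code). Field order and values are from memory.
#include <array>
#include <cstdint>
#include <cstdio>

std::array<uint8_t, 5> MakeAshaStart(uint8_t codec, uint8_t audiotype,
                                     int8_t volume, uint8_t otherstate)
{
    return {
        0x01,                          // opcode: 1 = Start
        codec,                         // codec ID: 1 = G.722 @ 16 kHz (the only value in the spec)
        audiotype,                     // 0 = unknown, 1 = ringtone, 2 = phone call, 3 = media
        static_cast<uint8_t>(volume),  // signed volume, -128..0
        otherstate                     // whether the other side of the pair is connected
    };
}

int main()
{
    auto pkt = MakeAshaStart(/*codec=*/1, /*audiotype=*/3, /*volume=*/-64, /*otherstate=*/0);
    for (uint8_t b : pkt) printf("%02x ", static_cast<unsigned>(b));
    printf("\n");
    return 0;
}
```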
Additionally, Apple has 3 variants of ELD: https://developer.apple.com/documentation/coreaudiotypes/1572096-audio_format_identifiers
hm... any clue what those numeric values are? I can see that they are typedef'd to a 32 bit integer.
> hm... any clue what those numeric values are? I can see that they are typedef'd to a 32 bit integer.
What should I be looking for?
> What should I be looking for?
The 7d74f4bd-c74a-4431-862c-cce884371592 service. It should have at least 15 characteristics or so.
> What should I be looking for?
> The 7d74f4bd-c74a-4431-862c-cce884371592 service. It should have at least 15 characteristics or so.
That information you just posted in the screenshot is the same as the stuff dumped by the gatt_dump tool though, here:
It has 18 characteristics. No easy way to export the values from this stupid app.
> It has 18 characteristics. No easy way to export the values from this stupid app.
Yeah, that tool just appears to be exploring the exported gatt attributes of your hearing device. It matches this stuff you already posted earlier from gatt_dump:
7d74f4bd-c74a-4431-862c-cce884371592
61d1b37d-94d5-4281-a88f-9b67f8f96314 [notify, read] [subscribed] 01 00 00 00 "\001\0\0\0"
8e750bb1-40c1-48df-b450-97f245c57e0c [read] 00 "\0"
98924a39-6559-40a8-b302-3c8e40dbf834 [read] 00 "\0"
76b3db1f-44c4-46cc-a7b5-e9ce7dfbef50 [read] 00 "\0"
4656d3ac-c2df-4096-96e7-713580b69ccd [notify, read, write] [subscribed] 00 "\0"
16438c66-e95a-4c6f-8117-a6b745bd86fc [read] 01 00 00 00 "\001\0\0\0"
9c12a3db-9ce8-4865-a217-d394b3bc9311 [read, write] ff "\377"
7be94a55-8d91-4592-bc0f-ea3664ccd3a9 [read, write] 52 65 6d 6f 74 65 20 4d 69 63 20 31 "Remote Mic 1"
a28b6be1-2fa4-42f8-aeb2-b15a1dbd837a [read, write] 05 "\005"
adc3023d-bfd2-43fd-86f6-7ae05a619092 [notify, read] [subscribed] bf 03 00 00 "\277\003\0\0"
a391c6f1-20bb-495a-abbf-2017098fbc61 [notify, read, write] [subscribed] 00 "\0"
21ff4275-c41d-4486-a0e3-dc11138bcde6 [read] 31 "1"
6ac46200-24ea-46d8-a136-81133c65840a [notify, read, write] [subscribed] 00 "\0"
f3f594f9-e210-48f3-85e2-4b0cf235a9d3 [notify, read, write] [subscribed] a2 "\242"
497eeb9e-b194-4f35-bc82-36fd300482a6 [read] 51 41 52 34 2d 27 "QAR4-'"
c97d21d3-d79d-4df8-9230-bb33fa805f4e [read] 51 41 52 60 3c 37 "QAR`<7"
8d17ac2f-1d54-4742-a49a-ef4b20784eb3 [read] 00 "\0"
24e1dff3-ae90-41bf-bfbd-2cf8df42bf87 [notify, read] [subscribed] 08 "\b"
To reverse engineer the MFi hearing aid stuff, you would need a packet trace from a procedure like this one, which I think requires you to have a Mac (and if you have a Mac, you probably don't need the code I'm writing).
I'm also not sure where the legal line is when trying to reverse-engineer a proprietary protocol. Android's version, at least, is open source, and we already have it working.
Yeah, I was more thinking about whether we could use the MFi codec with the ASHA protocol. No Mac here, so the only way forward, I think, is maybe to try different AAC codecs.
> Yeah, I was more thinking about whether we could use the MFi codec with the ASHA protocol. No Mac here, so the only way forward, I think, is maybe to try different AAC codecs.
The ASHA protocol specifies a bitmask of supported codecs that can be read from the ReadOnlyProperties UUID, and a single byte in the ACP start command to indicate which codec to load. There is only one value defined in the spec, but two values defined in the Android source code (G.722 @ 16 kHz and G.722 @ 24 kHz). I've never seen the G.722 @ 24 kHz variant marked as supported in any of the log dumps you guys have posted. It may be possible to just randomly spam other codec values and see if the hearing aid accepts them, but I don't really want to try it.
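For anyone poking at their own devices, here is a rough sketch of pulling that codec bitmask out of a ReadOnlyProperties read. The offsets come from my reading of the spec, and the example bytes are made up, so treat it as illustrative only:

```cpp
// Rough sketch (not the project's code) of extracting the SupportedCodecs bitmask
// from the ASHA ReadOnlyProperties value. Offsets are my reading of the published
// spec; double-check them against your own gatt_dump output.
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    // Example: a 17-byte ReadOnlyProperties value as it might come back from a GATT read.
    std::vector<uint8_t> props = {
        0x01,                 // version
        0x00,                 // device capabilities
        0,0,0,0,0,0,0,0,      // HiSyncId
        0x01,                 // feature map
        0x00, 0x00,           // render delay (uint16, little-endian)
        0x00, 0x00,           // reserved
        0x02, 0x00            // supported codecs (uint16, little-endian bitmask)
    };

    if (props.size() < 17) { fprintf(stderr, "short read\n"); return 1; }

    uint16_t codecs = props[15] | (props[16] << 8);
    printf("G.722 @ 16 kHz: %s\n", (codecs & (1u << 1)) ? "yes" : "no");
    // Bit 2 is what the old Android sources used for G.722 @ 24 kHz; I've never
    // seen a device actually advertise it.
    printf("G.722 @ 24 kHz: %s\n", (codecs & (1u << 2)) ? "yes" : "no");
    return 0;
}
```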
I'll admit that when my hearing aids were gathering dust because I hardly ever used them, I was more willing to risk poking around in them. Now that they are much more useful to me, I'm less keen on the idea of accidentally bricking them, considering how expensive they are.
To that end, I've kind of avoided doing anything to them that I haven't already observed in an android trace. I thought I would be safe just doing read-only operations on them, but I found several UUIDs that if I read from my device, will crash its bluetooth stack, and it won't come back without a reboot.
> I'll admit that when my hearing aids were gathering dust because I hardly ever used them, I was more willing to risk poking around in them. Now that they are much more useful to me, I'm less keen on the idea of accidentally bricking them, considering how expensive they are.
That's perfectly understandable.
Btw, it was already possible to connect the hearing devices to PCs via your phone, using PipeWire's low-latency streaming protocols like ROC or the simple protocol. Over Wi-Fi it is something like 200-300 ms of delay, which can be offset in some video players. Not very suitable for video conferencing, of course. If I remember correctly, it was possible to lower the latency by having the phone tethered to the computer.
I hopefully have buttons on my Bernafon Alpha 3 ITC, and on my side f3f594f9-e210-48f3-85e2-4b0cf235a9d3 is responsible for volume: it changes between 00 (muted) and ff (max volume). The default value is 94.
Starkey has generated a custom service in the Bluetooth LE realm called Starkey Hearing Instrument Profile (SHIP)
Name | GUID String Format |
---|---|
SHIP Primary Service | 896c9518-d4ea-11e1-af45-58b035fea743 |
Audio Config | 896c9748-d4ea-11e1-af48-58b035fea743 |
Byte | Description |
---|---|
0 | Mute Status. 0: Mute Disabled, 1: Mute Enabled |
1 | Microphone Volume. Range: 1-255 |
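Assuming that layout is accurate, decoding the Audio Config value is just two bytes. A quick sketch (the struct and field names are my own, not from Starkey's docs):

```cpp
// Sketch only: decode the two-byte Audio Config value described in the table above
// (byte 0 = mute status, byte 1 = microphone volume). Names are mine, not Starkey's.
#include <cstdint>
#include <cstdio>
#include <vector>

struct ShipAudioConfig {
    bool mute_enabled;   // byte 0: 0 = mute disabled, 1 = mute enabled
    uint8_t mic_volume;  // byte 1: 1-255
};

ShipAudioConfig ParseShipAudioConfig(const std::vector<uint8_t>& v)
{
    ShipAudioConfig cfg{};
    if (v.size() >= 2) {
        cfg.mute_enabled = (v[0] != 0);
        cfg.mic_volume = v[1];
    }
    return cfg;
}

int main()
{
    // Example value as it might come back from a GATT read.
    ShipAudioConfig cfg = ParseShipAudioConfig({0x00, 0x94});
    printf("mute=%d volume=%u\n", cfg.mute_enabled, cfg.mic_volume);
    return 0;
}
```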
> Starkey has generated a custom service in the Bluetooth LE realm called Starkey Hearing Instrument Profile (SHIP)
Interesting. I've seen that service before (see my notes in the wiki), but I don't see that particular characteristic on my device. Did you see that new characteristic using the gatt_dump tool?
I would like to be able to add a device-specific volume control, independent of the ASHA audio volume. My worry is that different hearing device manufacturers will all use different mechanisms to control this volume.
If anybody can reverse engineer their hearing devices to determine what the device volume control is, please let me know and I will add it to the below table.
To find the volume control on my hearing aids, I used bluetoothctl to explore the available characteristics. This can be tedious, as there were about 90 characteristics to check, and reading some of them caused my hearing aids to stop responding until I restarted them.
First, attach your hearing device via Bluetooth, then run `bluetoothctl`. At this point, bluetoothctl will dump lots of output to the screen; you will want to copy and paste the output for reference. The attributes we are looking for are `Characteristic` entries marked `Vendor specific`.

To check one of these characteristics, select it with `select-attribute <path>` and run `attribute-info` to check its value and see whether it has the notification flag (in newer bluetoothctl versions these commands live under `menu gatt`). If it says `Flags: notify`, then you can optionally enable notifications with `notify on`.

Now, change the volume of your device. If you have notifications enabled, and it is the right characteristic, you should see a value-change message in the bluetoothctl output. If you don't have notifications, then just run the `attribute-info` command again and see if the `Value:` has changed.
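Once you have identified the right characteristic, you could also write it from code instead of bluetoothctl. Below is a rough, untested sketch using BlueZ's D-Bus GATT API; the object path is a placeholder you would need to replace with your device's actual path, and the volume byte follows the 00 (muted) to ff (max) behavior reported above for the Bernafon Alpha 3.

```cpp
// Rough sketch, not tested on real hardware: write a raw volume byte
// (0x00 = muted .. 0xff = max, as described above) to a vendor-specific
// characteristic through BlueZ's D-Bus GATT API (org.bluez.GattCharacteristic1).
// The object path below is a placeholder; find the real one with bluetoothctl or
// D-Bus introspection. Build with: g++ volume.cpp $(pkg-config --cflags --libs gio-2.0)
#include <gio/gio.h>
#include <cstdint>
#include <cstdio>

int main()
{
    GError* err = nullptr;
    GDBusConnection* bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, nullptr, &err);
    if (!bus) { fprintf(stderr, "bus: %s\n", err->message); return 1; }

    // Placeholder: the D-Bus path of the characteristic you identified
    // (e.g. the one with UUID f3f594f9-e210-48f3-85e2-4b0cf235a9d3 on the Alpha 3).
    const char* char_path =
        "/org/bluez/hci0/dev_XX_XX_XX_XX_XX_XX/serviceXXXX/charXXXX";

    const uint8_t volume = 0x94;  // the default value reported above

    GVariantBuilder value;
    g_variant_builder_init(&value, G_VARIANT_TYPE("ay"));
    g_variant_builder_add(&value, "y", volume);

    GVariantBuilder options;
    g_variant_builder_init(&options, G_VARIANT_TYPE("a{sv}"));

    GVariant* ret = g_dbus_connection_call_sync(
        bus, "org.bluez", char_path, "org.bluez.GattCharacteristic1", "WriteValue",
        g_variant_new("(aya{sv})", &value, &options),
        nullptr, G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &err);
    if (!ret) { fprintf(stderr, "WriteValue failed: %s\n", err->message); return 1; }

    g_variant_unref(ret);
    g_object_unref(bus);
    return 0;
}
```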