Open eisneim opened 1 year ago
Second this.
Are you also seeing 'ModuleNotFoundError: No module named 'ds_ctcdecoder''? Is this because of the M1?
Project is not maintained anymore. The reason there was no M1 support was the lack of M1 hardware available on GitHub Actions to run CI.
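Regarding the ds_ctcdecoder error above: that module is a training-only dependency and, as far as I can tell, never had macOS arm64 wheels, so on an M1 pip simply has nothing to install. A minimal sketch to confirm that reading (the missing-wheel interpretation is my assumption):

```python
import importlib.util
import platform

# ds_ctcdecoder is only needed for training; inference via the `deepspeech`
# package does not import it. If the spec below is None, no wheel was
# installed for this platform (e.g. macOS arm64 on an M1), which would
# match the ModuleNotFoundError described above.
if importlib.util.find_spec("ds_ctcdecoder") is None:
    print(f"ds_ctcdecoder not installed for {platform.system()}/{platform.machine()}")
else:
    print("ds_ctcdecoder is importable; the M1 is probably not the problem here")
```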
> Project is not maintained anymore.

It's a damned shame that the developers have vanished, as with the STT part of the Coqui project at https://github.com/coqui-ai/STT, where the source has been left in a broken, non-compilable state. Laws concerning accessibility have been introduced in many countries, so the market and demand for STT is starting to mature.

> Project is not maintained anymore.

> It's a damned shame that the developers have vanished, as with the STT part of the Coqui project at https://github.com/coqui-ai/STT, where the source has been left in a broken, non-compilable state. Laws concerning accessibility have been introduced in many countries, so the market and demand for STT is starting to mature.
Please direct your anger against the right people.
Also, for the record, we completed work to uncouple the project from TaskCluster and make it dependent only on GitHub Actions, so that people could fork it and continue. Unfortunately, nobody invested time in this.
> Please direct your anger against the right people.

It's not anger, it's sorrow if anything. Also, since DeepSpeech isn't maintained anymore, some of that also relates to this project.
> Please direct your anger against the right people.

> It's not anger, it's sorrow if anything. Also, since DeepSpeech isn't maintained anymore, some of that also relates to this project.

Taking over someone else's code is hard. Taking over lots of other people's code is next to impossible (or at least very hard). I guess that is the main reason why nothing happened. The number of comments in both forums (DeepSpeech and Coqui STT) shows that there is a market. I would gladly pay a reasonable price for a working product - but a pay-per-usage (cloud/hosted) solution would be a direct competitor to Google/Bing/Amazon, only with higher prices and more bugs, so it won't happen, for both economic and quality reasons.
In my own little project, I just can't make the budget hold if I have to pay per transcription - so local hosting is the only solution. My guess is that a lot of other people's projects share the same challenge.
> Please direct your anger against the right people.

> It's not anger, it's sorrow if anything. Also, since DeepSpeech isn't maintained anymore, some of that also relates to this project.

> Taking over someone else's code is hard. Taking over lots of other people's code is next to impossible (or at least very hard). I guess that is the main reason why nothing happened.
Yes, we know that; this is why we constantly welcomed contributors and did everything we could to make it easier. The Discourse thread on GitHub Actions has had 1 reply for 1,300 reads.
I'm sorry, but at some point, if there's so much interest, it should be possible to get a few people interested enough to start contributing. Nobody is talking about owning the whole codebase from day 1.
> The number of comments in both forums (DeepSpeech and Coqui STT) shows that there is a market. I would gladly pay a reasonable price for a working product - but a pay-per-usage solution would be a direct competitor to Google/Bing/Amazon, only with higher prices and more bugs, so it won't happen, for both economic and quality reasons.

A business model is complicated, and the fact that Coqui pivoted away from STT suggests that maybe the market is not that interesting after all. I can't speak for them, but that's my reading of the events.

> In my own little project, I just can't make the budget hold if I have to pay per transcription - so local hosting is the only solution. My guess is that a lot of other people's projects share the same challenge.

Which was one of the motivations for DeepSpeech in the first place.
Now, the sad truth is: nobody cared enough to spend a little time trying to continue the project. That's sad, but that's it.
Has anyone got anywhere with this, or got any resources to share? I'm considering looking into it, but if work has already been done then I can start from there.
Obviously, training on Apple Silicon hardware is out of the question. But considering that DeepSpeech is still a viable solution for many small, domain-specific projects with limited budgets, adding support for inference on Apple Silicon would be worth working on.
As of February 2023 there are still no bindings for Apple Silicon (M1); I can't use DeepSpeech in Electron on an ARM MacBook.
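For reference, inference itself can still be run on an M1 by using an x86_64 Python under Rosetta 2 - that's the only route I know of, since no native arm64 wheels were ever published. A minimal sketch against the DeepSpeech 0.9.x Python API, with the model and audio paths as placeholders:

```python
import wave

import numpy as np
from deepspeech import Model  # pip install deepspeech (x86_64 wheel only)

# Placeholder paths: the released 0.9.3 acoustic model and external scorer.
model = Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, 16-bit, mono PCM audio.
with wave.open("audio.wav", "rb") as w:
    assert w.getframerate() == 16000 and w.getnchannels() == 1
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(model.stt(audio))
```

The same constraint seems to apply to Electron: as far as I can tell the Node bindings only ship x86_64 prebuilt binaries, so the whole process would have to run under Rosetta as well.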