Closed: SubParLou closed this issue 9 months ago.
@SubParLou do you mind sharing the code snippet that you are using? The tests that use the API key to generate audio do pass (link).
@SubParLou We've merged a fix; if you upgrade to 0.1.5, the environment variable should be respected. If it's still an issue, feel free to reopen this and tag me!
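For reference, a minimal sketch of what the fallback should allow once you're on 0.1.5, assuming the client reads ELEVENLABS_API_KEY when no apiKey is passed explicitly:

```js
// Minimal sketch: relies on the SDK's environment-variable fallback,
// assuming ELEVENLABS_API_KEY is set in the server's environment.
const { ElevenLabsClient } = require('elevenlabs');

const eleven = new ElevenLabsClient(); // no apiKey passed explicitly

async function listVoices() {
  const response = await eleven.voices.getAll();
  console.log(response.voices.map(v => v.name));
}

listVoices().catch(console.error);
```

Passing apiKey explicitly, as in the snippets below, should of course keep working as well.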
@dsinghvi Updated to 0.1.5 and still getting the same issue. It let me make one call, and the next time I tried it failed with a 401 again. The call that did go through is not showing in my account history, which indicates to me it's still not using the API key.
Browser Response:
{"error":"Error synthesizing speech","details":"Status code: 401\nBody: {\n \"_readableState\": {\n \"objectMode\": false,\n \"highWaterMark\": 16384,\n \"buffer\": {\n \"head\": null,\n \"tail\": null,\n \"length\": 0\n },\n \"length\": 0,\n \"pipes\": [],\n \"flowing\": null,\n \"ended\": false,\n \"endEmitted\": false,\n \"reading\": false,\n \"constructed\": true,\n \"sync\": false,\n \"needReadable\": false,\n \"emittedReadable\": false,\n \"readableListening\": false,\n \"resumeScheduled\": false,\n \"errorEmitted\": false,\n \"emitClose\": true,\n \"autoDestroy\": true,\n \"destroyed\": false,\n \"errored\": null,\n \"closed\": false,\n \"closeEmitted\": false,\n \"defaultEncoding\": \"utf8\",\n \"awaitDrainWriters\": null,\n \"multiAwaitDrain\": false,\n \"readingMore\": false,\n \"dataEmitted\": false,\n \"decoder\": null,\n \"encoding\": null\n },\n \"_events\": {\n \"error\": [\n null,\n null\n ]\n },\n \"_eventsCount\": 5,\n \"_writableState\": {\n \"objectMode\": false,\n \"highWaterMark\": 16384,\n \"finalCalled\": false,\n \"needDrain\": false,\n \"ending\": false,\n \"ended\": false,\n \"finished\": false,\n \"destroyed\": false,\n \"decodeStrings\": true,\n \"defaultEncoding\": \"utf8\",\n \"length\": 0,\n \"writing\": false,\n \"corked\": 0,\n \"sync\": true,\n \"bufferProcessing\": false,\n \"writecb\": null,\n \"writelen\": 0,\n \"afterWriteTickInfo\": null,\n \"buffered\": [],\n \"bufferedIndex\": 0,\n \"allBuffers\": true,\n \"allNoop\": true,\n \"pendingcb\": 0,\n \"constructed\": true,\n \"prefinished\": false,\n \"errorEmitted\": false,\n \"emitClose\": true,\n \"autoDestroy\": true,\n \"errored\": null,\n \"closed\": false,\n \"closeEmitted\": false\n },\n \"allowHalfOpen\": true\n}"}
URL:
.../elevenlabs/stream?text=Hello&voice=Patrick
Code:
```js
const express = require('express');
const cors = require('cors');
const { ElevenLabsClient } = require('elevenlabs');

const router = express.Router();

const eleven = new ElevenLabsClient({
  apiKey: process.env.ELEVENLABS_API_KEY,
});

router.get('/elevenlabs/stream', cors(), async (req, res) => {
  try {
    const response = await eleven.voices.getAll();
    const voices = response.voices; // Accessing the array of voices

    let voiceId = "21m00Tcm4TlvDq8ikWAM"; // Default voice ID
    const requestedVoice = req.query.voice ? req.query.voice.toLowerCase() : null;
    if (requestedVoice) {
      const foundVoice = voices.find(v =>
        v.voice_id.toLowerCase() === requestedVoice ||
        v.name.toLowerCase() === requestedVoice
      );
      if (foundVoice) {
        voiceId = foundVoice.voice_id; // Use the found voice ID
      }
    }

    const audioStream = await eleven.textToSpeech.convertAsStream(voiceId, {
      text: req.query.text || "Hello world!",
      model_id: "eleven_multilingual_v2",
    });

    res.set({ 'Content-Type': 'audio/mpeg' });
    audioStream.pipe(res);
  } catch (error) {
    let errorMessage = error.message;
    if (error.body && Buffer.isBuffer(error.body)) {
      const bufferError = error.body;
      const decodedError = bufferError.toString('utf8');
      errorMessage += `\nDecoded body: ${decodedError}`;
    }
    console.error('Error synthesizing speech:', errorMessage);
    res.status(500).json({ error: 'Error synthesizing speech', details: errorMessage });
  }
});

module.exports = router;
```
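For completeness, here is one way to exercise this route outside the browser; the host, port, and mount path are assumptions about the local setup (Node 18+ for the global fetch):

```js
// Quick local check of the route. Hypothetical host/port/mount path; adjust to your setup.
const fs = require('fs');

async function testStream() {
  const res = await fetch('http://localhost:3000/elevenlabs/stream?text=Hello&voice=Patrick');
  console.log('Status:', res.status);
  if (res.ok) {
    // Save the streamed audio to a file.
    fs.writeFileSync('hello.mp3', Buffer.from(await res.arrayBuffer()));
  } else {
    // Surfaces the JSON error body on 401/500.
    console.log(await res.text());
  }
}

testStream().catch(console.error);
```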
@SubParLou I just ran this script 4 times in a row and it worked for me:
```js
// Assumes an ESM context (top-level await) and the play helper exported by the SDK.
import { ElevenLabsClient, play } from "elevenlabs";

const eleven = new ElevenLabsClient({
  apiKey: process.env.ELEVEN_LABS_API_KEY,
});

const voices = await eleven.voices.getAll();
console.log(voices);

const audioStream = await eleven.textToSpeech.convertAsStream(
  voices.voices[0].voice_id,
  {
    text: "Hello world!",
    model_id: "eleven_multilingual_v2",
  }
);

await play(audioStream);
```
Are you sure that process.env.ELEVENLABS_API_KEY is being correctly populated on your server?
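A quick way to sanity-check that at startup, assuming the key comes from a .env file loaded with dotenv (adjust if it's injected some other way):

```js
// Sanity check that the key is actually present before the client is created.
// Assumes a .env file and the dotenv package.
require('dotenv').config();

const key = process.env.ELEVENLABS_API_KEY;
if (!key) {
  console.error('ELEVENLABS_API_KEY is not set');
} else {
  // Print only a prefix so the full key never lands in the logs.
  console.log(`ELEVENLABS_API_KEY loaded: ${key.slice(0, 6)}... (${key.length} chars)`);
}
```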
Modified my code slightly to make sure all variables are correct.
Server Console (truncated to omit the error message, as it will also show in the browser console):
API Key: a21ce75c0785830f797541**********
Voice: ODq5zmih8GrVes37Dizd
Text: Hello
Model: eleven_multilingual_v2
I replaced the last 10 characters of the logged API key with *; I'll end up changing my API key later anyway, but this shows it was pulling from the .env as previously mentioned.
Browser Console Output:
{"error":"Error synthesizing speech","details":"Status code: 401\nBody: {\n \"_readableState\": {\n \"objectMode\": false,\n \"highWaterMark\": 16384,\n \"buffer\": {\n \"head\": null,\n \"tail\": null,\n \"length\": 0\n },\n \"length\": 0,\n \"pipes\": [],\n \"flowing\": null,\n \"ended\": false,\n \"endEmitted\": false,\n \"reading\": false,\n \"constructed\": true,\n \"sync\": false,\n \"needReadable\": false,\n \"emittedReadable\": false,\n \"readableListening\": false,\n \"resumeScheduled\": false,\n \"errorEmitted\": false,\n \"emitClose\": true,\n \"autoDestroy\": true,\n \"destroyed\": false,\n \"errored\": null,\n \"closed\": false,\n \"closeEmitted\": false,\n \"defaultEncoding\": \"utf8\",\n \"awaitDrainWriters\": null,\n \"multiAwaitDrain\": false,\n \"readingMore\": false,\n \"dataEmitted\": false,\n \"decoder\": null,\n \"encoding\": null\n },\n \"_events\": {\n \"error\": [\n null,\n null\n ]\n },\n \"_eventsCount\": 5,\n \"_writableState\": {\n \"objectMode\": false,\n \"highWaterMark\": 16384,\n \"finalCalled\": false,\n \"needDrain\": false,\n \"ending\": false,\n \"ended\": false,\n \"finished\": false,\n \"destroyed\": false,\n \"decodeStrings\": true,\n \"defaultEncoding\": \"utf8\",\n \"length\": 0,\n \"writing\": false,\n \"corked\": 0,\n \"sync\": true,\n \"bufferProcessing\": false,\n \"writecb\": null,\n \"writelen\": 0,\n \"afterWriteTickInfo\": null,\n \"buffered\": [],\n \"bufferedIndex\": 0,\n \"allBuffers\": true,\n \"allNoop\": true,\n \"pendingcb\": 0,\n \"constructed\": true,\n \"prefinished\": false,\n \"errorEmitted\": false,\n \"emitClose\": true,\n \"autoDestroy\": true,\n \"errored\": null,\n \"closed\": false,\n \"closeEmitted\": false\n },\n \"allowHalfOpen\": true\n}"}
Code:
```js
const express = require('express');
const cors = require('cors');
const { ElevenLabsClient } = require('elevenlabs');

const router = express.Router();

let requestedText, voiceId, modelId;
const elevenAPIKey = process.env.ELEVENLABS_API_KEY;

const allowedDomains = ['example1.com', 'example2.com', 'example3.com', 'example4.com'];
const corsOptions = {
  origin: function (origin, callback) {
    if (!origin) return callback(null, true);
    const domain = origin.split('//')[1];
    if (allowedDomains.some(d => domain.endsWith(d))) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  methods: 'GET',
};

const eleven = new ElevenLabsClient({
  apiKey: elevenAPIKey,
});

router.get('/elevenlabs/stream', cors(corsOptions), async (req, res) => {
  try {
    const response = await eleven.voices.getAll();
    const voices = response.voices; // Accessing the array of voices

    voiceId = "21m00Tcm4TlvDq8ikWAM"; // Default voice ID
    const requestedVoice = req.query.voice ? req.query.voice.toLowerCase() : null;
    if (requestedVoice) {
      const foundVoice = voices.find(v =>
        v.voice_id.toLowerCase() === requestedVoice ||
        v.name.toLowerCase() === requestedVoice
      );
      if (foundVoice) {
        voiceId = foundVoice.voice_id; // Use the found voice ID
      }
    }

    requestedText = req.query.text || "Hello world!";
    modelId = "eleven_multilingual_v2";

    const audioStream = await eleven.textToSpeech.convertAsStream(voiceId, {
      text: requestedText,
      model_id: modelId,
    });

    res.set({ 'Content-Type': 'audio/mpeg' });
    audioStream.pipe(res);
  } catch (error) {
    let errorMessage = error.message;
    if (error.body && Buffer.isBuffer(error.body)) {
      const bufferError = error.body;
      const decodedError = bufferError.toString('utf8');
      errorMessage += `\nDecoded body: ${decodedError}`;
    }
    console.error('Error synthesizing speech:', errorMessage);
    console.error(`API Key: ${elevenAPIKey}`);
    console.error(`Voice: ${voiceId}`);
    console.error(`Text: ${requestedText}`);
    console.error(`Model: ${modelId}`);
    res.status(500).json({ error: 'Error synthesizing speech', details: errorMessage });
  }
});
```
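One thing worth noting: the Body in the 401 response above looks like a serialized stream (note the _readableState and _writableState fields) rather than a Buffer, so the Buffer.isBuffer branch in the catch block never runs and the actual API error message is lost. Here is a sketch of a helper that also drains a stream body, under the assumption that error.body can be a Readable:

```js
// Sketch only: assumes error.body may be a Readable stream instead of a Buffer,
// which would explain the serialized _readableState/_writableState in the dump above.
const { Readable } = require('stream');

async function decodeErrorBody(error) {
  if (!error.body) return error.message;
  if (Buffer.isBuffer(error.body)) {
    return `${error.message}\nDecoded body: ${error.body.toString('utf8')}`;
  }
  if (error.body instanceof Readable) {
    // Drain the stream and decode it as UTF-8 text.
    const chunks = [];
    for await (const chunk of error.body) chunks.push(Buffer.from(chunk));
    return `${error.message}\nDecoded body: ${Buffer.concat(chunks).toString('utf8')}`;
  }
  return `${error.message}\nBody: ${JSON.stringify(error.body)}`;
}
```

Logging the result of this helper instead of the raw object might surface the real reason for the 401.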
@dsinghvi forgot to tag you.
It seems to work once every 24 hours for me, and then I get nothing but the 401 error, and the calls don't show in my account history.
@SubParLou do you have any information on what the error prints -- I have a feeling this is related to the API itself, and not the SDK.
@dsinghvi I have posted the error in my messages above. It's broken into the status code (401) and the body of the error. Sometimes the error is even longer, but there's no clear message as to why I'm getting the 401. The only thing I can think of is that it's not seeing my API key.
When you run your tests, are they showing in your account history tied to the API key? From what I was reading on the Discord, if the API returns a result tied to your API key, you should see it in your account history for text to speech.
The only results in my history are the ones I do directly on the ElevenLabs website.
I actually get a very similar error with the elevenlabs JS SDK and Express/Hono. I noticed that if I translate only 3 words, everything works, but if I translate more, I get the following error message. The API key is correct and I have not exceeded any limits so far. So far it has only worked once, with ~50 words.
This is the error:
And this is my code:
I don't think the API key is being passed to the API. I have tried both the .env method and declaring it directly in code. Getting a 401 response.
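One way to isolate whether the key itself is the problem is to hit the REST API directly, bypassing the SDK. A minimal sketch, assuming Node 18+ (global fetch), the standard xi-api-key header, and the /v1/voices endpoint:

```js
// Bypass the SDK to check whether the key is accepted by the API at all.
// Assumes Node 18+ (global fetch) and ELEVENLABS_API_KEY in the environment.
async function checkKey() {
  const res = await fetch('https://api.elevenlabs.io/v1/voices', {
    headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY },
  });
  console.log('Status:', res.status); // 200 means the key is accepted; 401 points at the key itself
  if (!res.ok) console.log(await res.text());
}

checkKey().catch(console.error);
```

A 200 here would suggest the key is fine and the 401 comes from how the SDK sends it; a 401 here points at the key or the account.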