Closed cwule closed 2 months ago
Pretty sure this was removed from the OpenAPI spec, but I'll double-check.
@cwule this is no longer part of the OpenAPI spec:
```json
"Body_Dub_a_video_or_an_audio_file_v1_dubbing_post": {
  "properties": {
    "mode": {
      "type": "string",
      "title": "Mode",
      "description": "automatic or manual. Manual mode is only supported when creating a dubbing studio project"
    },
    "file": {
      "type": "string",
      "format": "binary",
      "title": "File",
      "description": "A list of file paths to audio recordings intended for voice cloning"
    },
    "csv_file": {
      "type": "string",
      "format": "binary",
      "title": "Csv File",
      "description": "CSV file containing transcription/translation metadata"
    },
    "foreground_audio_file": {
      "type": "string",
      "format": "binary",
      "title": "Foreground Audio File",
      "description": "For use only with csv input"
    },
    "background_audio_file": {
      "type": "string",
      "format": "binary",
      "title": "Background Audio File",
      "description": "For use only with csv input"
    },
    "name": {
      "type": "string",
      "title": "Name",
      "description": "Name of the dubbing project."
    },
    "source_url": {
      "type": "string",
      "title": "Source Url",
      "description": "URL of the source video/audio file."
    },
    "source_lang": {
      "type": "string",
      "title": "Source Lang",
      "description": "Source language.",
      "default": "auto"
    },
    "target_lang": {
      "type": "string",
      "title": "Target Lang",
      "description": "The Target language to dub the content into. Can be none if dubbing studio editor is enabled and running manual mode"
    },
    "num_speakers": {
      "type": "integer",
      "title": "Num Speakers",
      "description": "Number of speakers to use for the dubbing. Set to 0 to automatically detect the number of speakers",
      "default": 0
    },
    "watermark": {
      "type": "boolean",
      "title": "Watermark",
      "description": "Whether to apply watermark to the output video.",
      "default": false
    },
    "start_time": {
      "type": "integer",
      "title": "Start Time",
      "description": "Start time of the source video/audio file."
    },
    "end_time": {
      "type": "integer",
      "title": "End Time",
      "description": "End time of the source video/audio file."
    },
    "highest_resolution": {
      "type": "boolean",
      "title": "Highest Resolution",
      "description": "Whether to use the highest resolution available.",
      "default": false
    },
    "dubbing_studio": {
      "type": "boolean",
      "title": "Dubbing Studio",
      "description": "Whether to prepare dub for edits in dubbing studio.",
      "default": false
    }
  },
  "type": "object",
  "title": "Body_Dub_a_video_or_an_audio_file_v1_dubbing_post"
}
```
Hmm, maybe I was too quick on this. The API reference docs themselves still have it. So confusing.
Actually I already have it in the request object.
Use it like so:
```csharp
var request = new DubbingRequest(filePath, "es", "en", 1, dropBackgroundAudio: true);
var metadata = await ElevenLabsClient.DubbingEndpoint.DubAsync(request, progress: new Progress<DubbingProjectMetadata>(metadata =>
{
    switch (metadata.Status)
    {
        case "dubbing":
            Console.WriteLine($"Dubbing for {metadata.DubbingId} in progress... Expected Duration: {metadata.ExpectedDurationSeconds:0.00} seconds");
            break;
        case "dubbed":
            Console.WriteLine($"Dubbing for {metadata.DubbingId} complete in {metadata.TimeCompleted.TotalSeconds:0.00} seconds!");
            break;
        default:
            Console.WriteLine($"Status: {metadata.Status}");
            break;
    }
}));
```
Ah but I didn't actually put it into the payload in the request method 😅
Great, thanks for looking into that!
This property is missing from the payload built for the dubbing request here: https://github.com/RageAgainstThePixel/ElevenLabs-DotNet/blob/e642904400300683c7b2af3a02159b80305b6910/ElevenLabs-DotNet/Dubbing/DubbingEndpoint.cs#L32
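For what it's worth, a minimal sketch of the kind of fix this would need. This assumes `DubbingRequest` exposes the value as a nullable `DropBackgroundAudio` property and that the endpoint builds a `MultipartFormDataContent` payload like the linked method; the property and variable names here are illustrative, not the library's actual internals:

```csharp
// Hypothetical sketch: append drop_background_audio to the multipart payload
// alongside the other form fields in DubAsync.
using var payload = new MultipartFormDataContent();
// ... existing fields (file, target_lang, num_speakers, etc.) ...

if (request.DropBackgroundAudio.HasValue)
{
    // The API expects a lowercase "true"/"false" string in the form field.
    payload.Add(
        new StringContent(request.DropBackgroundAudio.Value.ToString().ToLowerInvariant()),
        "drop_background_audio");
}
```

The key point is just that the field set on the request object has to be copied into the outgoing form data, which the linked method currently doesn't do.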