Closed · GNagisa closed this issue 7 months ago.
Because the server can only return the byte data of the audio, I am curious whether I can use the above method to create a URL to use.
Why not try it? Go ahead and try a blob URL... Let me know if it works. (I did work on supporting blobs, so it should work, but I never tested it.)
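For context, the suggestion amounts to something like this (a minimal sketch: fetchAudioBytes is a hypothetical stand-in for however the server returns the byte data; speak is the lip-sync call documented in this fork's readme):

// Turn the raw audio bytes from the server into a blob URL and feed it to speak():
const bytes = await fetchAudioBytes();                 // e.g. an ArrayBuffer or Uint8Array
const blob = new Blob([bytes], { type: "audio/wav" }); // use the MIME type your server sends
const audioUrl = URL.createObjectURL(blob);
model.speak(audioUrl);
// Later, once playback has finished, release the blob URL:
// URL.revokeObjectURL(audioUrl);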
Thanks for the quick reply. I tested it and found that the blob URL works, but the model's mouth movement is not triggered. So I wanted to find out: are there specific requirements a Live2D model must meet to be able to synchronize its mouth movements?
Are you using the standard naming convention for the model? Also, I think I might've forgotten something in the tool (in that version); I fixed it on the PR branch and will add it ASAP.
_{ "Version": 3, "FileReferences": { "Moc": "nagisa.moc3", "Textures": [ "nagisa.1024/texture00.png" ], "Physics": "nagisa.physics3.json", "DisplayInfo": "nagisa.cdi3.json", "Motions": { "standby": [{ "Name": "standby", "File": "standby.motion3.json", "Sound": "" }], "speak": [{ "Name": "speak", "File": "speak.motion3.json", "Sound": "" }] }, "Expressions": [ { "Name": "expression1", "File": "motions/expression1.exp3.json" }, { "Name": "expression2", "File": "motions/expression2.exp3.json" }, { "Name": "expression3", "File": "motions/expression3.exp3.json" }, { "Name": "expression4", "File": "motions/expression4.exp3.json" } ] }, "Groups": [ { "Target": "Parameter", "Name": "EyeBlink", "Ids": [] } ], "HitAreas": [] }
I'll admit my model is a bit rudimentary, as it was my first model, so I only did a simple body sway, eyes opening and closing, and the mouth opening and closing. So I'm not sure whether you mean the naming of the motions or something else when you say "standard naming convention for the model".
By naming I mean the names of the body parts (the model's parameter IDs)...
Anyway, I'm updating; please wait a bit...
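For reference, models built from the official Cubism templates expose standard parameter IDs; lip sync in particular needs a mouth parameter it can recognize. A non-exhaustive list, shown here as plain data:

// Standard Cubism parameter IDs from the official templates:
const STANDARD_PARAM_IDS = {
  head:  ["ParamAngleX", "ParamAngleY", "ParamAngleZ"],
  eyes:  ["ParamEyeLOpen", "ParamEyeROpen", "ParamEyeBallX", "ParamEyeBallY"],
  mouth: ["ParamMouthOpenY", "ParamMouthForm"], // ParamMouthOpenY is what lip sync drives
  body:  ["ParamBodyAngleX", "ParamBreath"],
};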
Updated... check the updated link in the readme. Let me know if it fixes the issue... Also, if possible, can you share the model file?
Ok, here's my model file: Nagisa.zip (https://github.com/RaSan147/pixi-live2d-display/files/14939231/Nagisa.zip). Thanks.
I'll check and let you know
Loading the model file doesn't work after updating pixi-live2d-display.js:

Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'from')
Check the console log 😥 and let me know. Are you using the npm version? If yes, please remove the import statements and use the CDN script link.
Yes, I'm loading via the CDN link, and the model is loaded using the PIXI.live2d.Live2DModel namespace.
<script src="https://cdn.jsdelivr.net/gh/RaSan147/pixi-live2d-display@v0.4.0-ls-3/dist/index.min.js"></script>
This is the only error related to pixi-live2d-display.js in the console:

Uncaught ReferenceError: process is not defined
    at pixi-live2d-display.js:1:53661
    at pixi-live2d-display.js:1:216
    at pixi-live2d-display.js:1:337
Seems like it's working for me?
https://github.com/RaSan147/pixi-live2d-display/assets/34002411/0c477233-ebc9-4546-b8e7-f5f9d5991984
Well, I've already fixed the node-env issue; this shouldn't happen. (Also, that code is in the Cubism repository, not here, so we can't edit it.)
Ok, I'll give it a try, thank you very much <( *)>
I compared against test.html and ran test.html locally. With identical parameters (matching test.html), test.html can speak and lip-sync normally, but when the same call is made inside my component, the model does not make any mouth movements.
Here's my local code
// Imports assumed by this snippet (not shown in the original): Vue's
// composition API, and the Vuex store the component reads audio data from.
import { ref, watch, onMounted } from "vue";
import { useStore } from "vuex";

export default {
  name: "Live2d",
  setup() {
    const store = useStore();
    const model = ref(null);
    const canvasWidth = ref(0);
    const canvasHeight = ref(0);
    var audio_link = "https://cdn.jsdelivr.net/gh/RaSan147/pixi-live2d-display@v1.0.3/playground/test.mp3"; // [relative or full URL path] [mp3 or wav file]
    var category_name = "Tap"; // name of the motion category
    var animation_index = 1; // index of the animation under that motion category
    var priority_number = 3; // keep the current animation going, or force the new one [0: no priority, 1: idle, 2: normal, 3: forced]
    var volume = 1; // [optional, can be null or empty] [0.0 - 1.0]
    var expression = 4; // [optional, can be null or empty] [index|name of expression]
    var resetExpression = true; // [optional, default: true] [if true, the expression resets to default once the animation is over]
    var cors = "Anonymous"; // [optional, default: "Anonymous"]

    // Load the Live2D model
    async function loadLive2DModel() {
      // const live2dModel = await PIXI.live2d.Live2DModel.from("/Resources/hiyori_pro_zh/runtime/hiyori_pro_t11.model3.json", { idleMotionGroup: "standby" });
      const live2dModel = await PIXI.live2d.Live2DModel.from("/Resources/Nagisa/nagisa.model3.json", { idleMotionGroup: "standby" });
      // Create the PIXI application
      const app = new PIXI.Application({
        view: document.getElementById("canvas"),
        autoStart: true,
        resizeTo: document.getElementById("canvas"),
        transparent: true
      });
      // Add the Live2D model to the PIXI stage
      app.stage.addChild(live2dModel);
      // Keep the loaded model in the ref
      model.value = live2dModel;
      updateCanvasSize(); // defined elsewhere in the component (elided)
    }

    // Play whenever the audio data in the store updates
    watch(() => store.getters.getAuditData, (newValue, oldValue) => {
      randomEvent();
    });

    // Load the Live2D model when the component mounts
    onMounted(() => {
      loadLive2DModel();
    });

    function randomEvent() {
      // Decode the base64 audio bytes returned by the server
      const byteCharacters = atob(store.getters.getAuditData);
      const byteNumbers = new Array(byteCharacters.length);
      for (let i = 0; i < byteCharacters.length; i++) {
        byteNumbers[i] = byteCharacters.charCodeAt(i);
      }
      const byteArray = new Uint8Array(byteNumbers);
      const blob = new Blob([byteArray], { type: "audio/wav" });
      // Create a URL for the Blob
      const audioUrl = URL.createObjectURL(blob);
      // model.value.motion("Idle", animation_index, priority_number, { sound: audioUrl, volume: volume, expression: expression, resetExpression: resetExpression })
      model.value.speak(audioUrl, {
        volume: volume,
        expression: expression,
        resetExpression: resetExpression,
        crossOrigin: cors
      });
      const audio = new Audio(audioUrl);
      audio.addEventListener("ended", () => {
        URL.revokeObjectURL(audioUrl);
      });
      audio.play();
    }
......
In addition, if I change the expression parameter to 3 (the expression where the model's mouth is open), the model opens its mouth when speak runs, but the mouth does not open and close with the audio. I also don't quite understand why lip sync works in test.html with the expression set to 4, which isn't even included in the .model3.json file.
There is a chance the mouth movement is just hardly visible 😬 Guansss told me the mouth-sync parameters were too high... I will revert and try...
Sorry for all the trouble. Please bear with me.
Yes, I noticed. Although the model's mouth looks like it's always open, there are actually slight changes 🧐🧐
Okay... I switched to the free model provided on Live2D's official website to try it, and the model's mouth-movement synchronization seems to work normally, which means my model is too rough and the movement is just not obvious 😥
Take your time, refine your skills. I'd love to see your final work. Keep it up buddy...
Thanks bro. I just compared the models' configuration files:
"Groups": [
{
"Target": "Parameter",
"Name": "LipSync",
"Ids": [
"ParamMouthOpenY"
]
},
...
and I had not configured the "ParamMouthOpenY" parameter there before; after adding it, the mouth shape syncs normally 😂 Anyway, thank you very much 🥰
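For anyone hitting the same symptom: speak() drives whatever parameter IDs the model's LipSync group lists, so an empty or missing group means no mouth movement at all. A quick sanity check that the parameter itself visibly moves the mouth (a sketch for Cubism 4 models, reusing app and model from the component above; Cubism 2 models use coreModel.setParamFloat instead, and note an active motion may overwrite the value each frame):

// Wiggle the mouth manually to confirm ParamMouthOpenY exists and is visible:
app.ticker.add(() => {
  const v = 0.5 + 0.5 * Math.sin(performance.now() / 200); // oscillate 0..1
  model.value.internalModel.coreModel.setParameterValueById("ParamMouthOpenY", v);
});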
Bro, I checked the official manual and the API documentation, trying to find a callback for when a motion completes, and there seems to be nothing available except an isFinished() function that polls the state. That function doesn't seem usable with the speak() method, so I'd like to ask: is there any other simple callback mechanism for handling the end of the model's audio playback?
model.value.speak(audioUrl, {
volume: volume,
expression: expression,
resetExpression: resetExpression,
crossOrigin: cors
})
BTW, to use speak, use model.speak, not model.value.speak (idk where you found that).
I'll look into adding a callback function. (There is already a callback internally, but it's not yet exposed with a function caller.)
Is this OK, or is there any parameter that you need?
Please forgive my limited front-end skills. I'm not quite sure how the callback you describe works; calls like this have no effect.
I haven't implemented it yet (at the API level)... Will do ASAP. BTW, there will be major changes when guansss updates his version. He is doing lots of new stuff (and should be done soon), so my version might become redundant...
Well, I can also clear the audio cache with a timer. But I still have a problem: in the video demonstration in the readme.md, I saw that the 'motion' method was able to trigger the model's movements along with the lip sync. When I actually tried it, the model performed the preset movements, but the audio didn't play and the lips weren't synced.
Care to share the code and model in a zip, if possible?
Could you please clone this repository if you can?😳
https://github.com/GNagisa/aichat.git
Gimme some time... Will test the model 1st...
idk why, but there seems to be some issue with your model. I tried loading it here, but it failed: https://guansss.github.io/live2d-viewer-web/ However, haru and other models worked... It's getting weirder as we speak :(
[edit] As expected... are you using Cubism 5???
Yeah, somehow model.motion is not working 😣 It seems there's some serious issue with the model (probably a higher-than-supported version). haru (v4) and other models (v4 and v2) are working as usual.
Yes, you're right. I saved the model as Cubism 5 😥 because I wondered if it would be backwards compatible. I'll try to downgrade it.
Unfortunately, when I tried it with haru and hiyori, I could only trigger the movement specified by animation_index; the lip sync and audio did not play. After all three models load, the console displays "Live2D Cubism SDK Core Version 5.0.0". Is that the cause of this anomaly? 😵
Check the readme; the sound in motion is used differently: https://github.com/RaSan147/pixi-live2d-display#do-some-motion-manually

model.motion("Idle", 0, 3, {sound: ".....mp3"})

You need to put the sound parameter inside the {} because JS lacks a named-argument feature.
About Cubism 5: everything is backward compatible, not forward; you can never know what's coming ahead. Imagine all browsers support HTML 4 and 5 (imagine), and you give them HTML 6; will they work? Even if HTML 6 is backward compatible, the browser doesn't know the new features of HTML 6.
The previous call without {} was based on the video in the readme.md. But I've tried "model.motion("Idle", 0, 3, {sound: ".....mp3"})" before, including again just now, and it didn't work as expected.
Sadly I don't know how to run Vue (too lazy to try). (Sorry about the video; it was from an old version.) I just checked it myself and, annoyingly, it worked; I couldn't find any issue.
Was there any error? What was in the console output?
Unfortunately, the console doesn't output any errors, and most of the warnings come from other components😥
Weird, why pixi 6.5.2? 🙂🤔 If I remember correctly, we are using pixi 7 now.
Check the new readme and let me know how it works. Do not use the npm version (including the import * from pixi line in JS); don't use that...
Use the CDN link; no need to import anything.
https://cdn.jsdelivr.net/gh/RaSan147/pixi-live2d-display@v0.4.0-ls-3/dist/index.min.js doesn't seem to work properly in Vue; both local and URL references get the error 'Uncaught ReferenceError: process is not defined'. Maybe the CDN link was changed earlier but the project wasn't referencing the new version, so the previous model didn't load properly and the motion method didn't play the audio? That's my guess 😯
Add a global variable:

window.process = { env: { NODE_ENV: null } };

This patch should work. Make sure to add it before adding the scripts (pixi, Cubism, and the lip-sync build).
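In plain HTML the ordering would look roughly like this (a sketch; the Cubism core and pixi URLs are placeholders for whichever builds you already load):

<script>
  // Patch first, so UMD bundles that probe process.env don't throw:
  window.process = { env: { NODE_ENV: null } };
</script>
<script src=".../live2dcubismcore.min.js"></script>
<script src=".../pixi.min.js"></script>
<script src="https://cdn.jsdelivr.net/gh/RaSan147/pixi-live2d-display@v0.4.0-ls-3/dist/index.min.js"></script>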
Lemme check your repo (I have no knowledge of Vue, nor any idea about TS, lol).
Finally I saw the error as well: running the model from node never caused any trouble, but running it from a Python server crashed it.
So I've sent a patch to @guansss with a possible fix and will release a new version addressing the process issue. It originally came from the main Live2D repository, out of our hands, but then I saw guansss is using his own fork.
okay🤤
OK, the main code is working, but the onFinish callback is not working yet.
Hi! Regarding the audio-complete callback, I believe you can take the audio element after awaiting speak(), and then listen for its 'ended' event:

await model.speak(url, {})
const audio = model.internalModel.motionManager.currentAudio
if (audio) {
  audio.addEventListener('ended', () => {
    // the audio has finished
  })
}
// Note that in the new version I'm about to release, the audio should instead be read like this:
const audio = model.internalModel.lipSync.audio
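Based on that, a small helper that resolves once playback finishes could look like this (a sketch relying on the currentAudio property described above; speakAndWait is a hypothetical name, not part of the library):

// Wrap speak() so callers can await the end of audio playback:
async function speakAndWait(model, url, options) {
  await model.speak(url, options);
  const audio = model.internalModel.motionManager.currentAudio;
  if (!audio) return; // no audio started
  await new Promise((resolve) => {
    audio.addEventListener("ended", resolve, { once: true });
  });
}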
(PS: hi @RaSan147 , I wonder if you see my latest comment in your PR? Not trying to rush you, just worrying that it's unnoticed😟)
Wait, there were comments?! That's why I got notifications but didn't see any update on the PR page. Extremely sorry for not noticing...
About the callback... I made the function look like this: ....
AH SHET!!! Now I realize: I added the callback in the motion manager, but didn't re-add it in the index file... That's why no matter what function I set, it sees undefined or the default value... Thanks a lot, will check your comments ASAP. Sorry again.
@GNagisa have fun; check the readme.
I'll try it when I have time, thanks bro.🤗