Closed — qlambert-pro closed this 1 year ago
Hi @qlambert-pro!
Thanks so much for this!!! And sorry it took me so long to handle it; I was in the middle of a big refactoring.
In the end I opted for a simpler solution than passing in a dummy stream: just a comment telling people to comment out the code:
```js
// Shared between initApp and startApp
// (audioContext, loadingDiv and startButton are defined elsewhere in the generated file)
let response, patch, stream, webpdNode

const initApp = async () => {
    // Register the worklet
    await WebPdRuntime.initialize(audioContext)

    // Fetch the patch code
    response = await fetch('${compiledPatchFilename}')
    patch = await ${artefacts.compiledJs ?
        'response.text()' : 'response.arrayBuffer()'}

    // Comment this out if you don't need audio input
    stream = await navigator.mediaDevices.getUserMedia({ audio: true })

    // Hide the loading indicator and show the start button
    loadingDiv.style.display = 'none'
    startButton.style.display = 'block'
}

const startApp = async () => {
    // The AudioContext needs to be resumed after a user gesture (here, a click)
    // to protect users from pages autoplaying audio without their consent.
    // See: https://github.com/WebAudio/web-audio-api/issues/345
    if (audioContext.state === 'suspended') {
        audioContext.resume()
    }

    // Set up the web audio graph
    webpdNode = await WebPdRuntime.run(
        audioContext,
        patch,
        WebPdRuntime.createDefaultSettings('./${compiledPatchFilename}'),
    )
    webpdNode.connect(audioContext.destination)

    // Comment this out if you don't need audio input
    const sourceNode = audioContext.createMediaStreamSource(stream)
    sourceNode.connect(webpdNode)

    // Hide the start button
    startButton.style.display = 'none'
}
```
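For what it's worth, the autoplay-policy check in `startApp` can be exercised in isolation. Here is a minimal sketch with a stubbed `AudioContext` (the stub and the helper name are mine, not part of WebPd) showing that `resume()` is only called when the context is actually suspended:

```javascript
// Minimal stub mimicking the two AudioContext members the check relies on
const makeStubContext = () => ({
    state: 'suspended',
    resume() { this.state = 'running' },
})

// Same guard as in startApp, extracted so it can be tested without a browser
const resumeIfSuspended = (audioContext) => {
    if (audioContext.state === 'suspended') {
        audioContext.resume()
    }
    return audioContext.state
}

const ctx = makeStubContext()
console.log(resumeIfSuspended(ctx)) // prints: running
```

In a real page this guard runs inside the click handler, which is what satisfies the browser's autoplay policy.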
Thanks so much again :wink:
This is an attempt at closing #132