Open konsumer opened 5 years ago
I have been trying to find Kyle Dixon and/or convert that script to Node for this very purpose! I gave up after a couple of days and came here to suggest exactly this.
I've been using music-beat-detector to accomplish this but it's limited to MP3 files and youtube videos.
I think this is his GitHub: https://github.com/crocokyle
I think the above is a good start, I just need to research how to pump ALSA into the analyser. Looks like this can make a pipe of audio-data, so it might just be a matter of gluing them together. It might be better to do the FFT/light updates in the through-pipe, instead of a 1-second timer loop, not sure. In the above I am using FFT windows, which is maybe not as good as music-beat-detector, as I will have to look through all the frequency buckets to detect beats. It looks like it works with pipes, too, so it may also be the ticket. I will have to play around with it. Feel free to beat me to it, though, as I am pretty busy with work & family stuff right now.
I'll continue hacking away at it and ping you if I come up with anything
Another (possibly false) start:
```js
import { MusicBeatDetector } from 'music-beat-detector'
import mic from 'mic-stream'

const musicBeatDetector = new MusicBeatDetector()

mic()
  .pipe(musicBeatDetector.getAnalyzer())
  .on('peak-detected', (pos, bpm) => {
    console.log('PEAK', pos, bpm)
  })
```
In my super-quick testing it only seemed to detect a single peak, and no BPM, but it may be the start of getting it to work. I was really impressed with how little code it took to start getting beat analysis with music-beat-detector, so it'd be rad if this direction panned out.
This is what I crammed together using some of the logic from the disco script and music-beat-detector. It works great! But the hurdle I'm running into is doing this for any audio source, as opposed to just MP3 and YouTube. My original goal for this was to sync the bulbs to Spotify.
```js
const Speaker = require("speaker");
const createMusicStream = require("create-music-stream");
const { MusicBeatDetector, MusicBeatScheduler, MusicGraph } = require("music-beat-detector");
const TPLSmartDevice = require("tplink-lightbulb");

const musicSource = process.argv[2]; // gets the first argument on the CLI

// MusicBeatScheduler syncs any detected peak with the audio being played.
// It's useful to control some bulbs or any other effect.
const musicBeatScheduler = new MusicBeatScheduler(pos => {
  new function () {
    this.lamps = [
      // list all of your lamps
      "192.168.0.10"
    ];
    this.init = function () {
      for (const ip of this.lamps) {
        this.setLightColor(new TPLSmartDevice(ip));
      }
    };
    this.setLightColor = function (light) {
      const randomHue = Math.floor(Math.random() * 360);
      const randomSaturation = Math.floor(Math.random() * 100);
      const randomBrightness = Math.floor(Math.random() * (100 - 50 + 1) + 50); // 50 to 100
      light.power(true, 5, {
        color_temp: 0,
        mode: "normal",
        hue: randomHue, // 0 to 360
        saturation: randomSaturation, // 0 to 100
        brightness: randomBrightness
      });
    };
  }().init();
});

// MusicBeatDetector analyzes the music
const musicBeatDetector = new MusicBeatDetector({
  sensitivity: 0.5,
  scheduler: musicBeatScheduler.getScheduler()
});

// get any raw pcm_s16le stream
createMusicStream(musicSource)
  // pipe to the analyzer
  .pipe(musicBeatDetector.getAnalyzer())
  .on("end", () => {
    console.log("end");
  })
  // pipe to the speaker
  .pipe(new Speaker())
  .on("open", () => musicBeatScheduler.start());
```
Looks good! What if you swapped `createMusicStream(musicSource)` with `mic()` from above? Does that work?
Yeah, I've been messing with that, but I keep getting this error when I use mic-stream:
```
events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: spawn rec ENOENT
    at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
    at onErrorNT (internal/child_process.js:415:16)
    at process._tickCallback (internal/process/next_tick.js:63:19)
    at Function.Module.runMain (internal/modules/cjs/loader.js:832:11)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)
Emitted 'error' event at:
    at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12)
    at onErrorNT (internal/child_process.js:415:16)
    [... lines matching original stack trace ...]
    at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)
```
Okay, I think that was an issue with the dependencies because I didn't have SoX installed (`spawn rec ENOENT` means the `rec` binary, which ships with SoX, wasn't found). Will take another look this evening.
Okay, I got mic-stream to pipe to the beat detector, but the detection on it is bad and there's feedback from the Speaker instance to the mic. I've been looking for something to pipe speaker audio to music-beat-detector so there's not that feedback loop with the mic input, but I've had zero luck finding anything that does this. Everything just pipes audio out to a speaker, which is the reverse of the solution.
Yeah, I think the original didn't pipe to the speakers. I'll be honest, pipes aren't my strong suit; I often get lost in the pipes after a few through-streams. Does it need to pipe to an output?
It doesn't. That's been my hangup with it. Obviously it's already playing on the speakers, so it doesn't need to pipe to Speaker, but I don't know how else to get it into `musicBeatScheduler.start()`.
I'm starting to wonder if music-beat-detector doesn't work how I think it does. I tried out naudiodon, a nice PortAudio wrapper:
```js
import { createWriteStream } from 'fs'
import { AudioIO, SampleFormat16Bit } from 'naudiodon'

// Create an instance of AudioIO with inOptions, which will return a ReadableStream
const ai = new AudioIO({
  inOptions: {
    channelCount: 2,
    sampleFormat: SampleFormat16Bit,
    sampleRate: 44100,
    deviceId: -1 // Use -1 or omit the deviceId to select the default device
  }
})

// Create a write stream to write out to a raw audio file
const ws = createWriteStream('rawAudio.raw')

// Start streaming
ai.pipe(ws)
ai.start()
```
This worked fine and saved `rawAudio.raw`.

Then I tried to do the same with music-beat-detector, with basically the same code:
```js
import { AudioIO, SampleFormat16Bit } from 'naudiodon'
import { MusicBeatDetector } from 'music-beat-detector'
import { createWriteStream } from 'fs'

// You can use this kind of stuff to select another device
// const devices = naudiodon.getDevices()

const ai = new AudioIO({
  inOptions: {
    channelCount: 2,
    sampleFormat: SampleFormat16Bit,
    sampleRate: 44100,
    deviceId: -1 // Use -1 or omit the deviceId to select the default device
  }
})

const musicBeatDetector = new MusicBeatDetector()
const analyzer = musicBeatDetector.getAnalyzer()
const ws = createWriteStream('rawAudio.raw')

ai.pipe(analyzer)
  .on('peak-detected', (pos, bpm) => {
    console.log('PEAK', pos, bpm)
  })
  .pipe(ws)

ai.start()
```
It also writes `rawAudio.raw`. I tested on OSX (Mojave) & Linux (Pop!_OS 19.04), but I don't have a mic on my Linux box. On Mac it exits immediately, and on Linux it seems to keep recording, but I'm really not sure if it's working (no mic). Without the `createWriteStream`, both exit immediately.
I think you need to start the stream and then pipe it, but don't take my word for it. I had to take the stream of the mic-stream instance and pipe that to the analyzer, but not the mic instance itself. I'll share my code when I get home. It should be pretty clear.
The above code is taken directly from the naudiodon docs, and as I said, it does save the rawAudio file correctly in the first example. As far as I can tell, you're meant to set up the stream, then call `start` to get it rolling. Maybe `analyzer` has its own sort of `start`, but I thought `pipe` did that for us (like how `createWriteStream` works).
After piping to `musicBeatDetector.getAnalyzer()`, `musicBeatScheduler.start()` is what needs to be called to actually start working with the data that's being analyzed. It's basically `musicBeatScheduler.start()` > then do something. But chaining the detector to the scheduler without the Speaker instance in the middle is where I'm getting hung up.
Yeah, I went back to the music-beat-detector docs and tried the same idea:
```js
import { AudioIO, SampleFormat16Bit } from 'naudiodon'
import { MusicBeatDetector, MusicBeatScheduler } from 'music-beat-detector'
import { createWriteStream } from 'fs'

// You can use this kind of stuff to select another device
// const devices = naudiodon.getDevices()

const ai = new AudioIO({
  inOptions: {
    channelCount: 2,
    sampleFormat: SampleFormat16Bit,
    sampleRate: 44100,
    deviceId: -1 // Use -1 or omit the deviceId to select the default device
  }
})

const musicBeatScheduler = new MusicBeatScheduler(pos => {
  console.log(`peak at ${pos}ms`) // your music effect goes here
})

const musicBeatDetector = new MusicBeatDetector({
  scheduler: musicBeatScheduler.getScheduler()
})

const ws = createWriteStream('rawAudio.raw')

ai
  .pipe(musicBeatDetector.getAnalyzer())
  .on('peak-detected', (pos, bpm) => {
    console.log('PEAK', pos, bpm)
  })
  .pipe(ws)

ai.start()
musicBeatScheduler.start()
```
Again, on OSX it just exits right away, but not before it does this (with some techno blaring):
```
PortAudio V19.6.0-devel, revision unknown
Input audio options: default device, sample rate 44100, channels 2, bits per sample 16, max queue 2
Input device name is Built-in Microphone
PEAK 73 0
peak at 73ms
AudioIO: portAudio status - input overflow
AudioIO end
```
Not sure what `input overflow` is, but it looks like it might be moving in the right direction. When I tested on Linux (without a mic) it stayed open & wrote to the file, but I'm really not sure if it's working (no mic). I will see if I can find a mic somewhere.
Switching them did the same:

```js
musicBeatScheduler.start()
ai.start()
```
On Mac it called my callbacks:

```
PEAK 16 0
peak at 16ms
```

but then:

```
AudioIO: portAudio status - input overflow
AudioIO end
```

and then quit.
I looked at the raw file on Mac (in Audacity) and it does appear to be grabbing a single beat-slice (sounds sort of like a snare).
Here is another one that appears to record up to a bass-thump:
Is it just exiting at the first beat?
I tried combining the demo-code, and got the same problem:
```js
const fs = require('fs')
const { MusicBeatDetector, MusicBeatScheduler, MusicGraph } = require('music-beat-detector')
const { AudioIO, SampleFormat16Bit, getDevices } = require('naudiodon')

console.log(getDevices())

const musicGraph = new MusicGraph()
const ws = fs.createWriteStream('rawAudio.raw')

const ai = new AudioIO({
  inOptions: {
    channelCount: 2,
    sampleFormat: SampleFormat16Bit,
    sampleRate: 44100,
    deviceId: -1 // Use -1 or omit the deviceId to select the default device
  }
})

const musicBeatScheduler = new MusicBeatScheduler(pos => {
  console.log(`peak at ${pos}ms`) // your music effect goes here
})

const musicBeatDetector = new MusicBeatDetector({
  plotter: musicGraph.getPlotter(),
  scheduler: musicBeatScheduler.getScheduler()
})

ai.start()

ai
  .pipe(musicBeatDetector.getAnalyzer())
  .on('peak-detected', (pos, bpm) => console.log(`peak-detected at ${pos}ms, detected bpm ${bpm}`))
  .on('end', () => {
    fs.writeFileSync('graph.svg', musicGraph.getSVG())
    console.log('end')
  })
  .pipe(ws)
  .on('open', () => musicBeatScheduler.start())
```
This produced a 1-beat SVG before exiting with `AudioIO: portAudio status - input overflow`:
The raw-audio file looks similar:
This works! My only issue now is that the mic input is nowhere near as accurate as piping the audio straight to the analyzer.
```js
mic
  .pipe(musicBeatDetector.getAnalyzer())
  .pipe(fs.createWriteStream("/dev/null"))
  .on("open", () => musicBeatScheduler.start());
```
I'm still thinking about this, but don't have too much time to work on it, lately. Any progress?
Here are some other ways we might get cross-platform audio + beat-onset-detection:
Not much progress after the last "breakthrough", haha. I've since been playing with Philips Hue lights.
I would like to see something like this done with Electron as it would be more portable (I think?) and rely less on system dependent libraries.
I think it's very portable with all 3 methods, but Electron is probably easiest. I played around with DeskGap (a light Electron alternative that runs faster) for a while, but couldn't get audio capture to work. I can revisit later with Electron. Personally, I really don't want a GUI to be required, so C++ or even a python-pipe sounds a bit better.
fun stuff here :)
One idea might be to use my new crossaudio to do some analysis. It currently doesn't have native (non-browser) mic-input set up, but it should be pretty easy to add, and if nothing else it could be used in a browser (e.g. via Electron or similar) to send commands to the lights.
Pretty sure the spectrum/bargraph stuff I am doing could be used to make lights act in a cool way with sound, even if it's not used to detect beats. This could sort of directly translate to "make a bunch of colored lights show something cool to music".
This is a cool use of code to change the colors of lights, but it could be much more efficient if it all ran in Node instead of Python, and it would show off how to use the API better:
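As a sketch of that spectrum-to-lights translation (the 0..1 energy scale, band layout, and function name here are all my own assumptions, not crossaudio's API): pick the hue from the dominant frequency band and the brightness from the overall level, producing the same HSB fields the tplink-lightbulb snippet above uses.

```js
// Map per-band energies (0..1, ordered low → high frequency) onto an HSB
// color: the loudest band picks the hue, the average level sets brightness.
function energiesToColor (bands) {
  let maxIdx = 0
  let sum = 0
  for (let i = 0; i < bands.length; i++) {
    sum += bands[i]
    if (bands[i] > bands[maxIdx]) maxIdx = i
  }
  const avg = bands.length ? sum / bands.length : 0
  return {
    hue: Math.round((maxIdx / Math.max(bands.length - 1, 1)) * 360), // bass → red end
    saturation: 100,
    brightness: Math.max(1, Math.min(100, Math.round(avg * 100)))
  }
}
```

Then something like `light.power(true, 0, { mode: 'normal', color_temp: 0, ...energiesToColor(bands) })` per frame, following the earlier bulb snippet.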
Need to do some research to resolve TODOs. Also, I should figure out Kyle Dixon's GitHub name (got the code from an email they sent).