ValleyBell opened 5 years ago
The Mac version of vgmplay relies on the libao dynamic library, and it says above that:

support for Mac OS X audio APIs (done, supports Core Audio)

Does this mean libao is not required for libvgm and that you're using Core Audio directly instead?
Yes, that's correct. We have a Core Audio driver: https://github.com/ValleyBell/libvgm/blob/c068dffd272bc7ad3a8e96b17426b3f48e7ccf2d/audio/AudDrv_CoreAudio.c

In the vgmplay config, setting AudioDriver = "Core Audio" should do the trick if you have compiled libvgm with Core Audio support (AUDIODRV_APPLE ON in the CMake configuration).
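For reference, a minimal sketch of both steps. The AUDIODRV_APPLE option and the AudioDriver key are taken from the comment above; the exact name and location of the config file depend on your setup.

```
# build libvgm with the Core Audio backend enabled
cmake -DAUDIODRV_APPLE=ON ..
cmake --build .

# then, in your vgmplay configuration file:
# AudioDriver = "Core Audio"
```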
Works like a charm. Can't wait to see what people make with this!
Is this the correct place to add a feature request?
I have a project where I'd like to feed vgmplay from a pipe.
I know .vgm files have a header etc., but once the header has been read (or the clocks have been set correctly), it could be useful to process incoming events in real time, for example to test sound engines or libraries against emulated chips. The 'sleeping' (wait) commands would also have to be bypassed and/or ignored.
What do you think about it?
Real-time input via a pipe is not really feasible anyway, since pipes buffer data at rates you don't really control, similar to how USB devices buffer commands in blocks. You'll likely want to handle timing and delay commands yourself as you stream them to it, even if that only means stepping in small increments between commands and checking for input, assuming you're piping data from a MIDI or HID input.
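As a rough illustration of what "handling the delay commands yourself" could look like when reading a VGM command stream from a pipe: the opcodes below come from the VGM format specification, while chip_write() is a hypothetical placeholder for whatever emulator call you dispatch to.

```c
/* Sketch: read VGM commands from stdin and handle the wait commands yourself.
 * Opcodes are from the VGM format spec; chip_write() is a hypothetical placeholder. */
#include <stdio.h>
#include <stdint.h>

static void chip_write(uint8_t chip, uint8_t reg, uint8_t data)
{
    (void)chip; (void)reg; (void)data;   /* hypothetical: forward to your emulation core */
}

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
    {
        switch (c)
        {
        case 0x50:                       /* SN76489/PSG write: one data byte */
            chip_write(0, 0, (uint8_t)getchar());
            break;
        case 0x52:                       /* YM2612 port 0 write: register, data */
        {
            uint8_t reg = (uint8_t)getchar();
            uint8_t dat = (uint8_t)getchar();
            chip_write(1, reg, dat);
            break;
        }
        case 0x61:                       /* wait nnnn samples: skip the operand, */
            getchar(); getchar();        /* do your own timing instead           */
            break;
        case 0x62:                       /* wait 1/60 s */
        case 0x63:                       /* wait 1/50 s */
            break;                       /* ignored here */
        case 0x66:                       /* end of sound data */
            return 0;
        default:
            break;                       /* real code must know every command's length */
        }
    }
    return 0;
}
```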
Is this the correct place to add a feature request?
No, in general feature requests should be new issues.
I have a project where I'd like to feed vgmplay from a pipe.
I know .vgm files have a header etc., but once the header has been read (or the clocks have been set correctly), it could be useful to process incoming events in real time, for example to test sound engines or libraries against emulated chips.
This sounds interesting, but I won't add parsing custom commands for now.
For the things that you want, you should just use the emu library directly. Or you could make a copy of the VGMPlayer class and modify it to your own needs.
For the things that you want, you should just use the emu library directly.
Okay, I didn't know about this library, but this is exactly what I needed. Thanks a lot.
General plan and current state
The idea behind libvgm is to make a collection of libraries and classes that make it easy to write a VGM player or programs that emulate sound chips in general. It includes sub-libraries for audio output, sound chip emulation and VGM playback. (and maybe VGM parsing as well)
The project is CMake-based and allows you to enable/disable almost all features. So if you include it within your project, you can e.g. compile only the sound chips you really need.
C++ might be used during development where I think things are faster/easier to do with it. Eventually I'd like to have everything as pure C code (mostly C90 with C++ comments).
audio library
The audio sub-library should be a simple abstraction layer in order to allow platform-independent audio output. There is support for multiple (often platform-dependent) "audio drivers".
You can either set a callback that will be called with a buffer to be filled or you can poll the driver and explicitly send data to it.
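For illustration, here is a rough sketch of the two usage modes; every name in it is a hypothetical placeholder rather than libvgm's actual audio API.

```c
/* Sketch of the two output modes described above. All names here are hypothetical
 * placeholders, not libvgm's actual audio API. */
#include <stdint.h>
#include <string.h>

/* stub standing in for a real driver's "push data" entry point */
static uint32_t driver_write(void* drv, uint32_t size, const void* data)
{
    (void)drv; (void)data;
    return size;
}

/* Mode 1 (callback): the driver calls this whenever it needs more samples. */
static uint32_t fill_buffer(void* user, uint32_t bufSize, void* buffer)
{
    (void)user;
    memset(buffer, 0, bufSize);   /* render samples here instead of silence */
    return bufSize;               /* bytes written */
}

/* Mode 2 (polling): render on your own schedule and push blocks yourself. */
static void push_one_block(void* drv)
{
    uint8_t block[1024];
    memset(block, 0, sizeof(block));
    driver_write(drv, sizeof(block), block);
}

int main(void)
{
    uint8_t buf[1024];
    fill_buffer(NULL, sizeof(buf), buf);  /* what the driver would do in callback mode */
    push_one_block(NULL);                 /* what your code would do in polling mode   */
    return 0;
}
```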
Features:
emulation library
The emulation sub-library allows you to emulate sound chips in software, resulting in samples being rendered into a buffer. In order to access sound chips, you need to read and write registers and memory. It focuses on being fast and flexible rather than being very easy to use.
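As a sketch of that write-registers-then-render workflow: chip_t and the chip_* functions below are hypothetical placeholders, not the real emu-library API, and the stub bodies only exist so the example compiles.

```c
/* Hypothetical sketch of driving a chip emulator: write registers, then render. */
#include <stdint.h>

typedef struct { uint8_t regs[256]; } chip_t;

static void chip_write(chip_t* c, uint8_t reg, uint8_t data)
{
    c->regs[reg] = data;                  /* a real core updates its internal state here */
}

static void chip_render(chip_t* c, uint32_t samples, int32_t* left, int32_t* right)
{
    uint32_t i;
    (void)c;
    for (i = 0; i < samples; i++)         /* a real core synthesizes audio here */
        left[i] = right[i] = 0;
}

int main(void)
{
    chip_t chip = {{0}};
    int32_t bufL[512], bufR[512];

    chip_write(&chip, 0x28, 0xF0);        /* poke a register (key-on style write) */
    chip_render(&chip, 512, bufL, bufR);  /* render 512 samples into the buffers */
    return 0;
}
```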
player library
The player sub-library should make it easy to play back VGMs (and possibly other logged formats). It uses the emulation library for sound chip emulation and outputs samples into a buffer.
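Conceptually, a player built on top of these pieces boils down to a pump loop: render a block of samples, hand it to the audio layer, repeat. The sketch below uses hypothetical player_* and audio_write() placeholders, not libvgm's real API.

```c
/* Hypothetical sketch of the glue a VGM player needs: the player object renders
 * sample blocks, and each block is handed to the audio layer. */
#include <stdint.h>

typedef struct { uint32_t pos; } player_t;

static int player_render(player_t* p, uint32_t samples, int32_t* buf)
{
    uint32_t i;
    for (i = 0; i < samples * 2; i++)      /* real code: run VGM commands + emu cores */
        buf[i] = 0;
    p->pos += samples;
    return (p->pos < 44100 * 10);          /* pretend the song is 10 seconds long */
}

static void audio_write(const int32_t* buf, uint32_t samples)
{
    (void)buf; (void)samples;              /* real code: push the block to the audio driver */
}

int main(void)
{
    player_t player = {0};
    int32_t block[512 * 2];                /* 512 stereo sample frames */

    while (player_render(&player, 512, block))   /* render until the song ends */
        audio_write(block, 512);
    return 0;
}
```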
Future plans
audio library
I don't have any additional future plans for this right now. Maybe a dummy device that just redirects all data to a custom callback? (for potential FLAC/MP3 export in host applications)
emulation library
player library
other