projectM-visualizer / projectm

projectM - Cross-platform Music Visualization Library. Open-source and Milkdrop-compatible.
https://discord.gg/mMrxAqaa3W
GNU Lesser General Public License v2.1

Integrating projectM into a web app #812

Open evoyy opened 1 month ago

evoyy commented 1 month ago


Topic

Third-Party Application Interfaces and Remote Control

Your Request

I'm investigating the possibility of replacing Butterchurn with projectM. Butterchurn is no longer maintained, and projectM seems to be the focal point of Milkdrop-related development now.

Butterchurn renders into a canvas in the DOM. This is great because it allows the visualizer window to be controlled and styled with HTML elements, resized, transitioned, full-screened, and even detached from the browser using the Picture-in-Picture API.

As I understand, projectM can be compiled into WebAssembly using Emscripten. I found an example here:

https://github.com/projectM-visualizer/examples-emscripten

My plan is to do something similar, except without using the SDL library; the browser's DOM will be the UI and projectM will be controlled by JavaScript. I wondered whether this is possible, and I found Embind:

Embind is used to bind C++ functions and classes to JavaScript, so that the compiled code can be used in a natural way by “normal” JavaScript. Embind also supports calling JavaScript classes from C++.

https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html

Do you think what I want to do is a good idea, or even possible? I'm not a C programmer and I am new to WebAssembly so it will be a challenge for me.

kblaschke commented 1 month ago

SDL isn't really required, but very convenient as it's integrated into Emscripten, making it basically free to use.

You can use any other means of acquiring a WebGL context, e.g. Emscripten's built-in C++ functions and handle input and audio recording elsewhere. The WebGL context is always bound to a canvas, so it'll automatically receive the rendering output.

projectM itself just needs the active GL context and audio data being passed to it. Anything else is totally up to the integrating app.

You'll at least need some C++ code to glue functionality to the JavaScript side; there are functions in Emscripten's API to do this.

evoyy commented 4 weeks ago

Thanks, good to know it's possible. I will attempt a proof of concept next week, rendering into a WebGL canvas.

kblaschke commented 4 weeks ago

Would be great to hear how it worked out!

revmischa commented 4 weeks ago

Yes, we did get fairly close to getting Emscripten working. It should be quite possible. Emscripten can bundle up the preset files (textures should be added too) into a simulated filesystem it can read the presets/textures from.

Somewhat shockingly, that header says I tried this 10 years ago, in 2014:

We did a lot of work to achieve GLES compatibility for this project.

Come chat with us on Discord if you have any questions.

evoyy commented 3 weeks ago

An update on this.

I have studied WebAssembly/Emscripten and I have an idea of how this can work. It's more complicated than I thought. The main issue is passing audio data from my app (JavaScript) to projectM (WebAssembly). As far as I can tell, it is not possible to access the raw audio data from an AudioContext instance in the browser's main thread. It seems the correct way to process audio data is with an Audio Worklet, which runs in a separate thread. The documentation recommends WebAssembly for this:

It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using WebAssembly, which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.

https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_AudioWorklet

So it seems I need two WebAssembly modules: one for projectM and one for an audio worklet, with audio data shared between the two threads.

At a high level, this is what I believe needs to be done:

Whether or not I will continue with this, I'm not sure. I'm concerned about burdening myself with technical debt, even if I get it working.

kblaschke commented 3 weeks ago

I don't think there's a need to use workers. The only thing you have to do is get the audio data from an AudioBuffer, then create an interleaved array from it (AudioBuffer stores channels separately, e.g. one LLLL, the other RRRR, but projectM requires an array with samples for each channel following each other, e.g. LRLRLRLR...), and that's a very fast operation. The actual audio processing is done by projectM in WASM, which is exactly what the quote above states. projectM doesn't do any complex processing though, just an FFT to get spectrum data and some simple smoothing.

I'm not too familiar with the web audio API, but I guess you can just query the audio buffer for samples each time the rendering function is called. Ideally, there should be around 735 frames of audio available if your context captures with 44.1 kHz and you render at 60 FPS.
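The interleaving step described above can be sketched in a few lines of plain C++. This is a minimal illustration, not projectM code; the function name `interleave_stereo` is my own. Note that at 44.1 kHz and 60 FPS, each call would handle roughly 44100 / 60 ≈ 735 frames per channel.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Convert planar stereo data (one buffer per channel, as returned by
// AudioBuffer.getChannelData for each channel) into the interleaved
// LRLRLR... layout that projectM expects.
std::vector<float> interleave_stereo(const std::vector<float>& left,
                                     const std::vector<float>& right) {
    const std::size_t frames = std::min(left.size(), right.size());
    std::vector<float> out;
    out.reserve(frames * 2);
    for (std::size_t i = 0; i < frames; ++i) {
        out.push_back(left[i]);   // L sample of frame i
        out.push_back(right[i]);  // R sample of frame i
    }
    return out;
}
```

The result could then be handed to projectM's float PCM input in one call per rendered frame.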

evoyy commented 3 weeks ago

Thanks for the AudioBuffer tip! I did look into AudioBuffer but I think I got confused by decodeAudioData (which requires loading an actual audio file). audioCtx.createButter() is exactly what I was looking for.

So I will continue with this, and I will update again soon with my progress!

evoyy commented 3 weeks ago

I have built projectM for Emscripten, as suggested at the link below, but I have run into a problem importing from it.

https://github.com/projectM-visualizer/examples-emscripten#configure-and-compile-projectm

To build for Emscripten:

cd libprojectM-4.1.1
mkdir build
cd build
emcmake cmake .. \
    -D CMAKE_BUILD_TYPE=Release \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D ENABLE_EMSCRIPTEN=1
emmake cmake \
    --build . \
    --target install \
    --config Release

After the build I can see the headers and static lib:

# ls -l /usr/local/include

drwxr-xr-x 2 root root 4096 Jun  9 19:59 projectM-4

# ls -l /usr/local/lib

-rw-r--r-- 1 root root   71986 Jun  9 18:20 libprojectM-4-playlist.a
-rw-r--r-- 1 root root 1931530 Jun  9 18:20 libprojectM-4.a

This is what I am trying to compile with Emscripten:

#include <emscripten/html5.h>
#include <projectM-4/projectM.h>

int main() {
    // initialize WebGL context attributes
    EmscriptenWebGLContextAttributes webgl_attrs;
    emscripten_webgl_init_context_attributes(&webgl_attrs);

    EMSCRIPTEN_WEBGL_CONTEXT_HANDLE gl_ctx = emscripten_webgl_create_context("#my-canvas", &webgl_attrs);
    emscripten_webgl_make_context_current(gl_ctx);

    // enable floating-point texture support for motion vector grid
    // https://github.com/projectM-visualizer/projectm/blob/master/docs/emscripten.rst#initializing-emscriptens-opengl-context
    // https://emscripten.org/docs/api_reference/html5.h.html#c.emscripten_webgl_enable_extension
    emscripten_webgl_enable_extension(gl_ctx, "OES_texture_float");

    projectm_handle projectMHandle = projectm_create();

    return 0;
}

But it results in what appears to be a linking error:

emcc \
    -v \
    -I /usr/local/include \
    -o build/output.js \
    -s MIN_WEBGL_VERSION=2 -s MAX_WEBGL_VERSION=2 \
    -s FULL_ES2=1 -s FULL_ES3=1 \
    -s ALLOW_MEMORY_GROWTH=1 \
    demo.cpp

 "/emsdk/upstream/bin/clang++" -target wasm32-unknown-emscripten -fignore-exceptions -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr --sysroot=/emsdk/upstream/emscripten/cache/sysroot -DEMSCRIPTEN -Xclang -iwithsysroot/include/fakesdl -Xclang -iwithsysroot/include/compat -v -I/usr/local/include demo.cpp -c -o /tmp/emscripten_temp_df7zcv60/demo_0.o
clang version 19.0.0git (https:/github.com/llvm/llvm-project 7cfffe74eeb68fbb3fb9706ac7071f8caeeb6520)
Target: wasm32-unknown-emscripten
Thread model: posix
InstalledDir: /emsdk/upstream/bin
 (in-process)
 "/emsdk/upstream/bin/clang-19" -cc1 -triple wasm32-unknown-emscripten -emit-obj -disable-free -clear-ast-before-backend -disable-llvm-verifier -discard-value-names -main-file-name demo.cpp -mrelocation-model static -mframe-pointer=none -ffp-contract=on -fno-rounding-math -mconstructor-aliases -target-cpu generic -fvisibility=hidden -debugger-tuning=gdb -fdebug-compilation-dir=/src -v -fcoverage-compilation-dir=/src -resource-dir /emsdk/upstream/lib/clang/19 -D EMSCRIPTEN -I /usr/local/include -isysroot /emsdk/upstream/emscripten/cache/sysroot -internal-isystem /emsdk/upstream/emscripten/cache/sysroot/include/wasm32-emscripten/c++/v1 -internal-isystem /emsdk/upstream/emscripten/cache/sysroot/include/c++/v1 -internal-isystem /emsdk/upstream/lib/clang/19/include -internal-isystem /emsdk/upstream/emscripten/cache/sysroot/include/wasm32-emscripten -internal-isystem /emsdk/upstream/emscripten/cache/sysroot/include -fdeprecated-macro -ferror-limit 19 -fgnuc-version=4.2.1 -fskip-odr-check-in-gmf -fcxx-exceptions -fignore-exceptions -fexceptions -fcolor-diagnostics -iwithsysroot/include/fakesdl -iwithsysroot/include/compat -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr -o /tmp/emscripten_temp_df7zcv60/demo_0.o -x c++ demo.cpp
clang -cc1 version 19.0.0git based upon LLVM 19.0.0git default target x86_64-unknown-linux-gnu
ignoring nonexistent directory "/emsdk/upstream/emscripten/cache/sysroot/include/wasm32-emscripten/c++/v1"
ignoring nonexistent directory "/emsdk/upstream/emscripten/cache/sysroot/include/wasm32-emscripten"
#include "..." search starts here:
#include <...> search starts here:
 /usr/local/include
 /emsdk/upstream/emscripten/cache/sysroot/include/fakesdl
 /emsdk/upstream/emscripten/cache/sysroot/include/compat
 /emsdk/upstream/emscripten/cache/sysroot/include/c++/v1
 /emsdk/upstream/lib/clang/19/include
 /emsdk/upstream/emscripten/cache/sysroot/include
End of search list.
 /emsdk/upstream/bin/clang --version
cache:INFO: generating system asset: symbol_lists/3255bda3b3dd995124fdd53295fa4ff1dbe7b258.json... (this will be cached in "/emsdk/upstream/emscripten/cache/symbol_lists/3255bda3b3dd995124fdd53295fa4ff1dbe7b258.json" for subsequent builds)
 /emsdk/node/18.20.3_64bit/bin/node /emsdk/upstream/emscripten/src/compiler.mjs /tmp/tmpbywb42nf.json --symbols-only
cache:INFO:  - ok
 /emsdk/upstream/bin/wasm-ld -o build/output.wasm -lembind-rtti -L/emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten /tmp/emscripten_temp_df7zcv60/demo_0.o -lGL-webgl2-full_es3-getprocaddr -lal -lhtml5 -lstubs-debug -lnoexit -lc-debug -ldlmalloc -lcompiler_rt -lc++-noexcept -lc++abi-debug-noexcept -lsockets -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr /tmp/tmpmz14bl5ylibemscripten_js_symbols.so --strip-debug --export=emscripten_stack_get_end --export=emscripten_stack_get_free --export=emscripten_stack_get_base --export=emscripten_stack_get_current --export=emscripten_stack_init --export=_emscripten_stack_alloc --export=__getTypeName --export=__get_temp_ret --export=__set_temp_ret --export=__wasm_call_ctors --export=_emscripten_stack_restore --export-if-defined=__start_em_asm --export-if-defined=__stop_em_asm --export-if-defined=__start_em_lib_deps --export-if-defined=__stop_em_lib_deps --export-if-defined=__start_em_js --export-if-defined=__stop_em_js --export-if-defined=main --export-if-defined=__main_argc_argv --export-if-defined=fflush --export-table -z stack-size=65536 --max-memory=2147483648 --initial-heap=16777216 --no-entry --stack-first --table-base=1
wasm-ld: error: /tmp/emscripten_temp_df7zcv60/demo_0.o: undefined symbol: projectm_create
em++: error: '/emsdk/upstream/bin/wasm-ld -o build/output.wasm -lembind-rtti -L/emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten /tmp/emscripten_temp_df7zcv60/demo_0.o -lGL-webgl2-full_es3-getprocaddr -lal -lhtml5 -lstubs-debug -lnoexit -lc-debug -ldlmalloc -lcompiler_rt -lc++-noexcept -lc++abi-debug-noexcept -lsockets -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr /tmp/tmpmz14bl5ylibemscripten_js_symbols.so --strip-debug --export=emscripten_stack_get_end --export=emscripten_stack_get_free --export=emscripten_stack_get_base --export=emscripten_stack_get_current --export=emscripten_stack_init --export=_emscripten_stack_alloc --export=__getTypeName --export=__get_temp_ret --export=__set_temp_ret --export=__wasm_call_ctors --export=_emscripten_stack_restore --export-if-defined=__start_em_asm --export-if-defined=__stop_em_asm --export-if-defined=__start_em_lib_deps --export-if-defined=__stop_em_lib_deps --export-if-defined=__start_em_js --export-if-defined=__stop_em_js --export-if-defined=main --export-if-defined=__main_argc_argv --export-if-defined=fflush --export-table -z stack-size=65536 --max-memory=2147483648 --initial-heap=16777216 --no-entry --stack-first --table-base=1' failed (returned 1)

Adding these flags:

-L /usr/local/lib \
-l libprojectM-4 \

Results in:

wasm-ld: error: unable to find library -llibprojectM-4

If I comment out the following line it does compile:

projectm_handle projectMHandle = projectm_create();

I am completely stuck. Do you have any idea what could be wrong?

evoyy commented 3 weeks ago

For reference, I have encapsulated my projectM-emscripten build in this Dockerfile:

# Build:
#     docker build --tag projectm-emscripten-builder .
#
# Run:
#     docker run --rm -t -u $(id -u):$(id -g) -v $(pwd):/src projectm-emscripten-builder emcc ...

FROM emscripten/emsdk:3.1.61

ARG PROJECTM_VERSION=4.1.1

RUN apt-get update && apt-get install -y --no-install-recommends \
        # libprojectM build tools and dependencies
        # https://github.com/projectM-visualizer/projectm/wiki/Building-libprojectM#install-the-build-tools-and-dependencies
        libgl1-mesa-dev \
        libglm-dev \
        mesa-common-dev \
    && rm -rf /var/lib/apt/lists/* \
    # download projectM
    && wget https://github.com/projectM-visualizer/projectm/releases/download/v$PROJECTM_VERSION/libprojectM-$PROJECTM_VERSION.tar.gz \
    && tar xzf libprojectM-*.tar.gz \
    && rm libprojectM-*.tar.gz \
    && cd libprojectM-* \
    # build projectM
    # https://github.com/projectM-visualizer/projectm/blob/master/BUILDING-cmake.md
    && mkdir build \
    && cd build \
    && emcmake cmake .. \
        -D CMAKE_BUILD_TYPE=Release \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D ENABLE_EMSCRIPTEN=1 \
    && emmake cmake \
        --build . \
        --target install \
        --config Release \
    # allow container to be run as a non-root user
    && chmod 777 /emsdk/upstream/emscripten/cache/symbol_lists*

evoyy commented 3 weeks ago

Solved! (https://github.com/projectM-visualizer/projectm/issues/812#issuecomment-2156769860)

It turns out I need to link the library and code together, like this:

emcc \
    -I /usr/local/include \
    -o build/output.js \
    -s MIN_WEBGL_VERSION=2 -s MAX_WEBGL_VERSION=2 \
    -s FULL_ES2=1 -s FULL_ES3=1 \
    -s ALLOW_MEMORY_GROWTH=1 \
    demo.cpp /usr/local/lib/libprojectM-4.a

There goes my entire Sunday. Sorry for the noise!

evoyy commented 2 weeks ago

I don't think there's a need to use workers. The only thing you have to do is get the audio data from an AudioBuffer, then create an interleaved array from it (AudioBuffer stores channels separately, e.g. one LLLL, the other RRRR, but projectM requires an array with samples for each channel following each other, e.g. LRLRLRLR...), and that's a very fast operation. The actual audio processing is done by projectM in WASM, which is exactly what the quote above states. projectM doesn't do any complex processing though, just an FFT to get spectrum data and some simple smoothing.

I'm not too familiar with the web audio API, but I guess you can just query the audio buffer for samples each time the rendering function is called. Ideally, there should be around 735 frames of audio available if your context captures with 44.1 kHz and you render at 60 FPS.

I managed to get projectM working in an HTML canvas in my browser, but only with the default idle preset. As I attempted to pass the audio data I realised that zero-filled arrays of sample data were always returned by audioBuffer.getChannelData(). Then I came across this:

Note: createBuffer() used to be able to take compressed data and give back decoded samples, but this ability was removed from the specification, because all the decoding was done on the main thread, so createBuffer() was blocking other code execution. The asynchronous method decodeAudioData() does the same thing — takes compressed audio, such as an MP3 file, and directly gives you back an AudioBuffer that you can then play via an AudioBufferSourceNode. For simple use cases like playing an MP3, decodeAudioData() is what you should be using.

https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/createBuffer

This is the preferred method of creating an audio source for Web Audio API from an audio track. This method only works on complete file data, not fragments of audio file data.

https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/decodeAudioData

This is a shame. It seems I was kind of right the first time.

My web app streams audio from files hosted on AWS S3. It does not have access to the complete file data immediately. Many of the audio files are recordings that last for hours. So it seems that this is not going to work for me after all.

I guess now I will have to stop work on this. I will checkpoint my work in case some new feature comes along that makes it possible to decode fragments of audio file data. The basic demo I created works very well and it is simple to bind functions to control projectM entirely with JavaScript.

I'm not sure whether to close this issue or not. Please feel free to close it if you wish.

evoyy commented 2 weeks ago

I had a thought. The Web Audio API provides an AnalyserNode.

The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.

The getByteFrequencyData() method of the AnalyserNode interface copies the current frequency data into a Uint8Array (unsigned byte array) passed into it.

The getFloatFrequencyData() method of the AnalyserNode Interface copies the current frequency data into a Float32Array array passed into it. Each array value is a sample, the magnitude of the signal at a particular time.

This node was specifically designed to facilitate audio visualizers. It is what typical web-based audio visualizers use as input data. I'm fairly sure it's what Butterchurn uses. I wonder if I can use or adapt this data to work with projectM.

So I'm not ready to give up just yet. I will see if I can get the data from the analyser node to work with projectM...

revmischa commented 2 weeks ago

Oh, that's interesting. It would be super neat if we could bypass the addPCM() call and directly send the output from the AnalyserNode to projectM, via some sort of addFFT() call instead of addPCM().

evoyy commented 2 weeks ago

That would be ideal. For now I will simply process the data into the form projectm_pcm_add_float() needs.

After that I need to test packaging and loading presets, and then finally optimize the build. Then I will publish my code and host a demo online.

evoyy commented 2 weeks ago

Having thought about it: since a Fourier transform (or other signal-processing magic) has already been applied to the raw signal to produce an interpolated frequency array representing a moment in time, I'm not sure I can reverse that back into raw PCM. At least I have some kind of data to give projectM, but it won't be PCM data.

kblaschke commented 2 weeks ago

All things considered, it looks more like an issue with handling the stream data between the server and the browser APIs properly. Most streaming websites either use an audio streaming service like Icecast, which provides the proper MP3 header after connecting and then just streams the data, or use HLS, which splits your audio data into segments (3-5 s each) and uses a playlist (M3U8) to locate each chunk; the player regularly refreshes the playlist from the server to retrieve new chunks. Thus, HLS doesn't require a specialized audio server; it's often based on simple physical files hosted on the web server.

projectM requires the actual waveform data to render the visuals properly, so you should use AnalyserNode.getFloatTimeDomainData() or the originally decoded audio and pass the result to projectM's add_pcm_float method.

projectM has its own FFT implementation internally. It's a specially adapted algorithm taken directly from Milkdrop, which does a bit more than just running a discrete FFT on the waveform. It applies an envelope filter and does additional noise filtering. This FFT also returns a very specific value range required for preset spectrum and beat detection data. If these values are off, presets will render erratically; this was one of the main issues in earlier projectM versions, which used an off-the-shelf FFT implementation.

The actual FFT algorithm is very fast (just a bunch of additions and multiplications running at near-native speeds as it's compiled in WASM), so there won't be any (measurable) performance difference in comparison to the AnalyserNode.
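To illustrate the kind of per-frame smoothing mentioned above, here is a generic exponential-smoothing sketch over spectrum bins. This is not projectM's actual envelope/noise filter (that algorithm comes from Milkdrop, as stated above); the function name `smooth_spectrum` and the parameter `alpha` are my own illustrative choices.

```cpp
#include <cstddef>
#include <vector>

// Generic illustration: blend each new spectrum frame into a running
// state with factor alpha (0..1; higher = faster response). projectM's
// real filtering is more involved, per the discussion above.
void smooth_spectrum(std::vector<float>& state,
                     const std::vector<float>& current,
                     float alpha) {
    if (state.size() != current.size())
        state.assign(current.size(), 0.0f);  // reset on bin-count change
    for (std::size_t i = 0; i < current.size(); ++i)
        state[i] = alpha * current[i] + (1.0f - alpha) * state[i];
}
```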

evoyy commented 2 weeks ago

All things considered, it looks more like an issue with handling the stream data between the server and the browser APIs properly.

It would be nice to be able to use projectM with generated and live audio streams as well, though.

projectM requires the actual waveform data to render the visuals properly, so you should use AnalyserNode.getFloatTimeDomainData() or the originally decoded audio and pass the result to projectM's add_pcm_float method.

I will try this, using the maximum fftSize of 32768 to give projectM as much data as possible to work with.

evoyy commented 2 weeks ago

Good news, it seems to work! The default preset is reacting to my audio and seems to be synchronized with the beat.

There is a drawback with Emscripten: when calling compiled C functions from JavaScript, only byte arrays can be passed to them (without manually allocating memory and writing to it, something I'd rather not do). Fortunately, projectM provides the function projectm_pcm_add_uint8(), and I am using this along with getByteTimeDomainData() on the JavaScript side.

I am aware that projectM will have to interpret my audio data as mono, as I don't have access to the separate channels. In the future maybe I could make the effort to create a worklet to decode the audio, instead of using the AnalyserNode, and then I'd be able to get the raw PCM data in stereo. But I wonder if it would be worth it; do presets generally look better when projectM is supplied with stereo data?
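For reference, the unsigned bytes from getByteTimeDomainData() (where 128 represents silence) can also be converted to floats in roughly [-1, 1] on the C++ side, so projectm_pcm_add_float() could be fed instead. A minimal sketch; the function name `bytes_to_float` is my own:

```cpp
#include <cstdint>
#include <vector>

// Map unsigned 8-bit time-domain samples (128 = zero amplitude, as
// produced by AnalyserNode.getByteTimeDomainData) to floats in [-1, 1).
std::vector<float> bytes_to_float(const std::vector<std::uint8_t>& in) {
    std::vector<float> out;
    out.reserve(in.size());
    for (std::uint8_t s : in)
        out.push_back((static_cast<float>(s) - 128.0f) / 128.0f);
    return out;
}
```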

kblaschke commented 2 weeks ago

There is a drawback with Emscripten that when calling compiled C functions from JavaScript, only byte arrays can be passed to them (without manually allocating memory and writing to it; something I'd rather not do).

You can use Embind to create a JavaScript binding for any C/C++ function, and even use C++ classes from JavaScript. There are many examples in the docs. This allows you to pass a float array to projectM. You can even expose the whole projectM API to JS using this technique and implement all the control/setup code in JS.

evoyy commented 2 weeks ago

I'm using Embind. These are my bindings, which are simple wrappers encapsulating the projectm_handle:

EMSCRIPTEN_BINDINGS(projectm_bindings) {
    function("destruct", &destruct);
    function("init", &init);
    function("renderFrame", &render_frame);
    function("setWindowSize", &set_window_size);
}

I tried passing a float array using Embind but could not get it to work, so I am exporting a wrapper using Emscripten's ccall feature. This allows passing byte arrays but not float arrays.

extern "C" {
    void add_audio_data(uint8_t* data, int len) {
        if (!pm) return;
        projectm_pcm_add_uint8(pm, data, len, PROJECTM_MONO);
    }
}
evoyy commented 2 weeks ago

When I call projectm_pcm_get_max_samples() it returns 480. Does that mean that the length of the samples array I pass to projectm_pcm_add_uint8() should not exceed 480?

Because I have set the fftSize of my Web Audio analyser to 32768, and so my samples array has length 32768.
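Assuming the 480-sample limit means only the newest samples matter per call, one simple approach is to trim the oversized analyser buffer to its most recent max_samples before passing it on. This is a sketch under that assumption, not confirmed projectM behavior; the function name `tail_samples` is my own.

```cpp
#include <cstddef>
#include <vector>

// Keep only the most recent max_samples entries of a larger capture
// buffer (e.g. 32768 analyser samples vs. projectm_pcm_get_max_samples()
// returning 480, per the discussion above).
std::vector<float> tail_samples(const std::vector<float>& buf,
                                std::size_t max_samples) {
    if (buf.size() <= max_samples)
        return buf;  // already small enough, pass through unchanged
    return std::vector<float>(
        buf.end() - static_cast<std::ptrdiff_t>(max_samples), buf.end());
}
```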

revmischa commented 2 weeks ago

I believe so; you don't need a huge number of bins. Most presets just work off bass/mid/treble anyway.

evoyy commented 2 weeks ago

I'm not having much luck with presets. They blend in correctly, but after about 2 seconds they crash and just freeze/flicker.

I wonder if WebGL is not configured correctly. I'm using the default attributes.

Any idea what could be causing this?

Edit: Since the default idle preset runs perfectly, I don't think it is a problem with WebGL. It might be a memory problem, when loading presets. Though I am using ALLOW_MEMORY_GROWTH=1, maybe there is something else I need to do.

evoyy commented 2 weeks ago

I have published my work here:

https://github.com/evoyy/projectm-webgl-demo

I would be very grateful if somebody could take a look. If you don't want to build the docker image, and run the demo, no problem. At least you can see what I'm trying to do.

kblaschke commented 2 weeks ago

projectM uses half-float textures for the motion vector grid to store the displacement of the previous frame's warp mesh. WebGL 2.0 sadly doesn't support this texture format by default (while OpenGL ES 3 does), so you'll have to at least enable the following WebGL extensions after context creation and before initializing projectM:

    OES_texture_float
    OES_texture_half_float
    OES_texture_half_float_linear

Otherwise, the textures will be missing or incomplete, which will cause presets using motion vectors to break, and can cause other issues as the rendering framebuffers may also be marked as incomplete.

evoyy commented 2 weeks ago

Unfortunately that didn't work. I am enabling these extensions before initializing projectM:

emscripten_webgl_enable_extension(gl_ctx, "OES_texture_float");
emscripten_webgl_enable_extension(gl_ctx, "OES_texture_half_float");
emscripten_webgl_enable_extension(gl_ctx, "OES_texture_half_float_linear");

When I load a preset, it takes about 7 seconds before the transition starts. The transition goes perfectly, and the new preset runs for about 2 or 3 seconds before freezing.

I still have a feeling it might be a memory problem.

evoyy commented 2 weeks ago

I discovered that if I call projectm_load_preset_file() with the smooth_transition flag set to false, the loaded preset does not freeze; it continues to run indefinitely. However, it still takes about 5-7 seconds for a new preset to load.

It seems that projectM's smooth_transition feature is not compatible with WebGL. Do you have any idea why?

Any idea why loading a preset is delayed for several seconds, instead of taking effect immediately?