flucoma / flucoma-sc

Fluid Corpus Manipulation plugins for SuperCollider
BSD 3-Clause "New" or "Revised" License

plotter-4.scd not working on Arch Linux #135

Open tedmoore opened 2 years ago

tedmoore commented 2 years ago

This appeared over on the SuperCollider forum.

https://scsynth.org/t/tutorial-coding-a-2d-corpus-explorer/6357/2

It seems that this file (see below) is getting stuck on Arch Linux.

// define this big function, then way down below, execute it
(
~twoD_instrument = {
    arg folder, sliceThresh = 0.05;
    fork{
        var loader = FluidLoadFolder(folder).play(s,{"done".postln;});
        var src, play_slice, analyses, normed, tree;
        var indices = Buffer(s);

        s.sync;

        if(loader.buffer.numChannels > 1){
            src = Buffer(s);
            FluidBufCompose.processBlocking(s,loader.buffer,startChan:0,numChans:1,destination:src,destStartChan:0,gain:-6.dbamp);
            FluidBufCompose.processBlocking(s,loader.buffer,startChan:1,numChans:1,destination:src,destStartChan:0,gain:-6.dbamp,destGain:1);
        }{
            src = loader.buffer
        };

        FluidBufOnsetSlice.processBlocking(s,src,metric:9,threshold:sliceThresh,indices:indices,action:{
            "done".postln;
            "average seconds per slice: %".format(src.duration / indices.numFrames).postln;
        });

        play_slice = {
            arg index;
            {
                var startsamp = Index.kr(indices,index);
                var stopsamp = Index.kr(indices,index+1);
                var phs = Phasor.ar(0,BufRateScale.ir(src),startsamp,stopsamp);
                var sig = BufRd.ar(1,src,phs);
                var dursecs = (stopsamp - startsamp) / BufSampleRate.ir(src);
                var env;

                dursecs = min(dursecs,1);

                env = EnvGen.kr(Env([0,1,1,0],[0.03,dursecs-0.06,0.03]),doneAction:2);
                sig.dup * env;
            }.play;
        };

        // analysis
        analyses = FluidDataSet(s);
        indices.loadToFloatArray(action:{
            arg fa;
            fork{
                var spec = Buffer(s);
                var stats = Buffer(s);
                var stats2 = Buffer(s);
                var loudness = Buffer(s);
                var point = Buffer(s);

                fa.doAdjacentPairs{
                    arg start, end, i;
                    var num = end - start;

                    FluidBufSpectralShape.processBlocking(s,src,start,num,features:spec,select:[\centroid]);
                    FluidBufStats.processBlocking(s,spec,stats:stats,select:[\mean]);

                    FluidBufLoudness.processBlocking(s,src,start,num,features:loudness,select:[\loudness]);
                    FluidBufStats.processBlocking(s,loudness,stats:stats2,select:[\mean]);

                    FluidBufCompose.processBlocking(s,stats,destination:point,destStartFrame:0);
                    FluidBufCompose.processBlocking(s,stats2,destination:point,destStartFrame:1);

                    analyses.addPoint(i,point);

                    "slice % / %".format(i,fa.size).postln;

                    if((i%100) == 99){s.sync};
                };

                s.sync;

                analyses.print;
                normed = FluidDataSet(s);
                FluidNormalize(s).fitTransform(analyses,normed);

                normed.print;

                tree = FluidKDTree(s);
                tree.fit(normed);

                // plot
                normed.dump({
                    arg dict;
                    var point = Buffer.alloc(s,2);
                    var previous = nil;
                    dict.postln;
                    defer{
                        FluidPlotter(dict:dict,mouseMoveAction:{
                            arg view, x, y;
                            [x,y].postln;
                            point.setn(0,[x,y]);
                            tree.kNearest(point,1,{
                                arg nearest;
                                if(nearest != previous){
                                    nearest.postln;
                                    view.highlight_(nearest);
                                    play_slice.(nearest.asInteger);
                                    previous = nearest;
                                }
                            });
                        });
                    }
                });

            }
        });
    }
};
)

~twoD_instrument.(FluidFilesPath());
suspiria commented 2 years ago

Thanks for filing the issue, I'm the OP of that thread. Some more info:

Platform: latest x86_64 Arch Linux
SC version: 3.12.2 (built from source using a modified ABS PKGBUILD with -DNATIVE=ON, -DSC_ABLETON_LINK=OFF, -DCMAKE_BUILD_TYPE=Release)
FluCoMa version: 1.0.2+sha.2ca6e58.core.sha.804a3b39

Here's an isolated version of the code I'm trying to run (taken from the 2D Corpus Explorer tutorial, part 5 - plotter-5-starter.scd as linked in the description).

// the folder containing the corpus
~folder = FluidFilesPath();

// load into a buffer
~loader = FluidLoadFolder(~folder).play(s,{"done loading folder".postln;});

// sum to mono (if not mono)
(
if(~loader.buffer.numChannels > 1){
    ~src = Buffer(s);
    ~loader.buffer.numChannels.do{
        arg chan_i;
        FluidBufCompose.processBlocking(s,
            ~loader.buffer,
            startChan:chan_i,
            numChans:1,
            gain:~loader.buffer.numChannels.reciprocal,
            destination:~src,
            destGain:1,
            action:{"copied channel: %".format(chan_i).postln}
        );
    };
}{
    "loader buffer is already mono".postln;
    ~src = ~loader.buffer;
};
)

// slice the buffer in non real-time
(
~indices = Buffer(s);
FluidBufOnsetSlice.processBlocking(s,~src,metric:9,threshold:0.05,indices:~indices,action:{
    "found % slice points".format(~indices.numFrames).postln;
    "average duration per slice: %".format(~src.duration / (~indices.numFrames+1)).postln;
});
)

// analysis
(
~analyses = FluidDataSet(s);
~indices.loadToFloatArray(action:{
    arg fa;
    var spec = Buffer(s);
    var stats = Buffer(s);
    var stats2 = Buffer(s);
    var loudness = Buffer(s);
    var point = Buffer(s);

    fa.doAdjacentPairs{
        arg start, end, i;
        var num = end - start;

        FluidBufSpectralShape.processBlocking(s,~src,start,num,features:spec,select:[\centroid]);
        FluidBufStats.processBlocking(s,spec,stats:stats,select:[\mean]);

        FluidBufLoudness.processBlocking(s,~src,start,num,features:loudness,select:[\loudness]);
        FluidBufStats.processBlocking(s,loudness,stats:stats2,select:[\mean]);

        FluidBufCompose.processBlocking(s,stats,destination:point,destStartFrame:0);
        FluidBufCompose.processBlocking(s,stats2,destination:point,destStartFrame:1);

        ~analyses.addPoint(i,point);

        "analyzing slice % / %".format(i+1,fa.size-1).postln;

        if((i%100) == 99){s.sync;}
    };

    s.sync;

    ~analyses.print;
});
)

Evaluating each region in order, everything works up until the "analysis" part. When running that last region, I get the following output:

-> Buffer(2, 1474, 1, 48000.0, nil)
analyzing slice 1 / 1473
analyzing slice 2 / 1473
analyzing slice 3 / 1473
...
analyzing slice 98 / 1473
analyzing slice 99 / 1473
analyzing slice 100 / 1473

At this point, the analysis gets stuck without producing any errors. Occasionally, upon rebooting the interpreter/server and trying again, it reaches slice 200 before stopping.

Changing if((i%100)==99){s.sync;} to a bare s.sync; makes the analysis run smoothly without any errors, albeit very slowly compared to the original version due to the constant syncing. I tried varying how often the sync happens, and once every 14-15 iterations seems to be the point where the analysis starts breaking, but it's inconsistent:

var every = 14; // analysis stops working when (every >= 15)
...
if((i%every) == (every-1)) { s.sync };
elgiano commented 2 years ago

I'm also on Arch and I see the same problem: something is wrong in the sync mechanism, so the analysis gets stuck. An important note: with server.options.protocol = \tcp I can run it many times in a row without problems. So it looks like some messages get lost over UDP?
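A minimal sketch of that TCP workaround, evaluated before booting (assuming the default server):

(
// switch the client-server protocol from the default UDP to TCP;
// the change takes effect on the next (re)boot
Server.default.options.protocol = \tcp;
Server.default.reboot;
)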

In my own SuperCollider practice I've noticed the same problem when loading a large number of buffers in parallel: apparently some b_query replies get lost, and so s.sync stops working (over UDP, not over TCP). Since every processBlocking issues a b_query, I suspect it's the same problem.

My workaround in these situations is not to rely on sync for completion, but on FluCoMa's own \done mechanism instead. In my experience it works better, and it is fast.

However, using process requires cleaner callback handling, otherwise the code gets far too nested. I made a few functions for this purpose: FluidHelper.await and FluidHelper.bufProcessChain. I apologize if the code is not so clear; it's still a work in progress.

There is also FluidHelper.analSlices, which illustrates the Semaphore approach.
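The main idea, sketched here with a plain Condition rather than FluidHelper's actual API (the buffers are the same ones as in the reproduction code above):

(
~indices.loadToFloatArray(action: { |fa|
    fork {
        var spec = Buffer(s);
        var done = Condition(false);
        fa.doAdjacentPairs { |start, end, i|
            done.test = false;
            // non-blocking process; FluCoMa fires the action when the job is done
            FluidBufSpectralShape.process(s, ~src, start, end - start,
                features: spec, select: [\centroid],
                action: { done.test = true; done.signal });
            done.wait; // resume only when this job reports completion
        };
        "all slices analysed".postln;
    };
});
)

Because each step waits for the server's own completion message rather than a b_query reply, a lost UDP packet stalls only one job instead of silently breaking every later sync.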


Another solution I found to work is to make a bundle for, say, 100 slices, and sync on that:

var slices = Array.newFrom(fa).slide(2).clump(2);
slices.clump(100).do { |sliceClump|
    var bundle = s.makeBundle(false) {
        sliceClump.do { |slice| analFunc.(slice) };
    };
    s.sync(bundles: bundle)
}

If the bundles are too big, SC prints a scary-looking buffer overflow error. However, the error is harmless: SC handles it automatically by splitting the big bundle into smaller ones.

weefuzzy commented 2 years ago

Thanks everyone. I have an Arch VM, so I'll try to reproduce and diagnose when I can, but UDP packet loss does seem like a possible cause. In general, having to rely on robust client-server conversations to the extent that we do for batch buffer processing makes me sad, but I've not yet hit on an alternative.

@elgiano thanks for the encapsulations of Nice Things. I'll have a look...

elgiano commented 1 year ago

News: I can confirm that there's a problem with UDP: sclang drops some messages if they arrive too fast or too many at once. I opened an issue at https://github.com/supercollider/supercollider/issues/5870

tedmoore commented 1 year ago

Hmm. Interesting. Thanks @elgiano for your investigating and reporting!

weefuzzy commented 1 year ago

@elgiano I just had a quick skim of the discussion on that SC issue. Those responses pointing out that this is an inherent feature of UDP are, unfortunately, right: packet loss is just a risk under heavy traffic.

Dealing with this robustly is an interesting problem for us. Clearly we can't just force people to use TCP, yet we have quite a few points where we'd like robust communication between client and server, especially when doing a whole queue of buffer processes. One possibility might be devising some sort of timeout / back-off scheme language side, so that jobs don't simply stall waiting for replies that may never arrive. Even better (but much more work) would be a way of specifying a whole pipeline of work to the server (which would reduce network traffic, and synchronisation overhead).
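Purely as a hypothetical illustration (none of this exists in FluCoMa), such a language-side scheme might wrap each completion-callback job in a timed retry loop:

(
// hypothetical: re-send a job if no completion arrives within a deadline
~awaitWithRetry = { |job, timeout = 2, maxRetries = 3|
    var done = false;
    block { |break|
        maxRetries.do {
            job.value({ done = true }); // job takes a completion callback
            timeout.wait;               // must run inside a Routine/fork
            if(done) { break.value };
        };
    };
    done;
};
)

As written, each attempt waits out the full timeout before checking; a real version would wake early on completion (e.g. with CondVar:waitFor in SC 3.13+).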

I'll have a think, but there's definitely a fundamental brittleness here in SC batch buffer processing that I'd like to be able to address in the medium term.