Neutone / neutone_sdk


Low-pass effect / sample rate conversion issue #55

Closed: renared closed this issue 3 months ago

renared commented 11 months ago

Hi, I'm a PhD student working on deep learning for audio effects, and I'm using Neutone to test my PyTorch models in a DAW. Thank you for making such a great tool! I noticed that, depending on the sample rate set in the DAW, the output produced by Neutone differs from what the model should produce.

Below are live spectra comparing the model's expected output with what Neutone actually produces.

My model only supports a sample rate of 44100 Hz. When the sample rate is set to 48000 Hz in Windows and Reaper, I observe this:

[image: live spectra at 48 kHz; the two outputs differ, with high-frequency content missing from Neutone's output]

And when the sample rate is set to 44100 Hz (the same as the model), both outputs are identical, as expected:

[image: live spectra at 44.1 kHz; the two outputs match]

So my guess is that the low-pass filter Neutone uses for sample rate conversion has too narrow a passband, cutting off well below Nyquist. Could this be improved?
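As a rough sanity check of that guess, entirely outside Neutone, resampling a full-band sweep with deliberately cheap versus generous low-pass settings reproduces the same kind of high-end loss. This sketch uses torchaudio's resampler, which is unrelated to whatever the plugin does internally; the parameter names below are torchaudio's, everything else is illustrative:

import torch
import torchaudio.functional as F

sr_in, sr_out = 48_000, 44_100
t = torch.arange(sr_in) / sr_in
# 1 s linear chirp from 20 Hz to 20 kHz
sweep = torch.sin(2 * torch.pi * (20 + (20_000 - 20) * t / 2) * t)[None]

# A short anti-aliasing filter starts attenuating well below Nyquist...
cheap = F.resample(sweep, sr_in, sr_out, lowpass_filter_width=6, rolloff=0.85)
# ...while a longer one keeps the passband flat almost to the top.
good = F.resample(sweep, sr_in, sr_out, lowpass_filter_width=64, rolloff=0.99)

def hf_share(x: torch.Tensor, sr: int, f_lo: float = 15_000.0) -> float:
    # Fraction of total spectral magnitude above f_lo
    spec = torch.fft.rfft(x[0]).abs()
    freqs = torch.fft.rfftfreq(x.shape[-1], 1 / sr)
    return (spec[freqs > f_lo].sum() / spec.sum()).item()

print("HF share, short filter:", hf_share(cheap, sr_out))
print("HF share, long filter: ", hf_share(good, sr_out))

The short-filter output loses a visible chunk of the top octave, which is exactly what my spectra at 48 kHz look like.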

OS: Windows 11
DAW: Reaper v6.80
neutone-sdk==1.3.0
Neutone VST v1.4.1
No dedicated sound card

christhetree commented 11 months ago

Hi Yann, thanks so much for raising this issue. Any chance you can share the code for wrapping your model so that we can play with it? Also, you're specifying the native sample rate in the get_native_sample_rates method of the wrapper, right?

Regardless, you're right that our current resampling method is not ideal, and I've actually been working on an improvement that has not been released yet. I'll prioritize it for the next release and will let you know when you can try it out.
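To give a rough idea of the direction, here is the general shape of the approach; this is not the actual SDK code, just an offline illustration using torchaudio's precomputed windowed-sinc kernels:

import torch
import torchaudio.transforms as T

class ResampledWrapper(torch.nn.Module):
    # Illustrative only: run a fixed-rate model at another host rate by
    # converting each buffer to the model's native rate and back.
    def __init__(self, model: torch.nn.Module, daw_sr: int, native_sr: int) -> None:
        super().__init__()
        self.model = model
        # lowpass_filter_width controls how flat the passband stays near Nyquist
        self.down = T.Resample(daw_sr, native_sr, lowpass_filter_width=64)
        self.up = T.Resample(native_sr, daw_sr, lowpass_filter_width=64)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.model(self.down(x)))

In a real-time context the resampler also has to carry filter state across buffer boundaries so consecutive blocks line up without clicks, which is part of why this is more involved than tweaking a filter parameter.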

renared commented 11 months ago

Thank you! This is how the model is wrapped:

from pathlib import Path
from typing import Dict, List

import torch
from torch import Tensor

from neutone_sdk import WaveformToWaveformBase, NeutoneParameter
from neutone_sdk.utils import save_neutone_model
class ModelWrapper(WaveformToWaveformBase):
    def get_model_name(self) -> str:
        return "..."

    def get_model_authors(self) -> List[str]:
        return ["..."]

    def get_model_short_description(self) -> str:
        return "..."

    def get_model_long_description(self) -> str:
        return "..."

    def get_technical_description(self) -> str:
        return "..."

    def get_technical_links(self) -> Dict[str, str]:
        return {}

    def get_tags(self) -> List[str]:
        return ["dynamics"]

    def get_model_version(self) -> str:
        return "0.0.1"

    def is_experimental(self) -> bool:
        return False

    def get_neutone_parameters(self) -> List[NeutoneParameter]:
        return [
            NeutoneParameter("param1", "param1", default_value=0.1),
            NeutoneParameter("param2", "param2", default_value=0.1),
        ]

    @torch.jit.export
    def is_input_mono(self) -> bool:
        return True

    @torch.jit.export
    def is_output_mono(self) -> bool:
        return True

    @torch.jit.export
    def get_native_sample_rates(self) -> List[int]:
        return [44100]

    @torch.jit.export
    def get_native_buffer_sizes(self) -> List[int]:
        return [1024]

    def get_look_behind_samples(self) -> int:
        return 0

    def aggregate_params(self, params: Tensor) -> Tensor:
        return params  # We want sample-level control, so no aggregation

    def do_forward_pass(self, x: Tensor, params: Dict[str, Tensor]) -> Tensor:
        # x has shape (in_n_ch, look_behind_samples + buffer_size)
        # Each value in params has shape (buffer_size,); only the first sample
        # of each parameter is used per buffer here.
        params_tor = torch.stack([params[k][0] for k in ["param1", "param2"]])[None].to("cpu")
        x_tor = x[None].to("cpu")  # add a batch dimension
        x = self.model.forward(x_tor, params_tor)[0, :, self.get_look_behind_samples():]
        return x

class Model(torch.nn.Module):
    # Placeholder: the actual model definition is omitted.
    pass

model = Model()

wrapper = ModelWrapper(model.eval())
metadata = wrapper.to_metadata()
save_neutone_model(wrapper, Path("neutone_model"), submission=True)
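For completeness, here's the quick offline check I use to make sure the exported file behaves at the native rate, taking the plugin's resampler out of the equation. This assumes the .nm file written by save_neutone_model is a plain TorchScript archive and that the exported wrapper's forward accepts a (channels, samples) tensor with parameters left at their defaults; adjust if your SDK version differs:

import torch

script = torch.jit.load("neutone_model/model.nm")
x = torch.rand(1, 1024) * 2 - 1  # one mono buffer at the declared buffer size
y = script.forward(x)            # default parameter values
print(y.shape)
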
christhetree commented 11 months ago

Great, thanks for the code! Also unrelated, but the .eval() and .to("cpu") calls should be unnecessary (if you are running PyTorch without GPU acceleration, which should be the case for Neutone).
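For reference, the forward pass above would then reduce to something like this (same shapes and parameter handling, just without the device moves):

    def do_forward_pass(self, x: Tensor, params: Dict[str, Tensor]) -> Tensor:
        # x has shape (in_n_ch, look_behind_samples + buffer_size)
        params_tor = torch.stack([params[k][0] for k in ["param1", "param2"]])[None]
        x = self.model.forward(x[None], params_tor)[0, :, self.get_look_behind_samples():]
        return x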

christhetree commented 10 months ago

@renared we addressed your issue in the latest pull request for the SDK: https://github.com/QosmoInc/neutone_sdk/pull/56. You should see the improvement in the next release of Neutone; I'll ping you here when it's available.

bogdanteleaga commented 3 months ago

This should be fixed in the 1.5 Neutone FX release.