sdatkinson / NeuralAmpModelerPlugin

Plugin for Neural Amp Modeler
MIT License

[BUG] Plugin Version 0.7.10 - Audio is choppy/glitchy in Cubase #507

Closed. DomMcsweeney closed this issue 3 weeks ago.

DomMcsweeney commented 1 month ago

Describe the bug: The audio from input and from playback is choppy/glitchy. See video attached.

To Reproduce

  1. Load the plugin up in Cubase.
  2. Arm a track for recording and monitor it, or record audio and play it back.
  3. The audio is glitchy/choppy.

Expected behavior: The plugin should be processing the audio correctly, but that isn't happening with v0.7.10.

Screenshots: Here's a video: https://github.com/user-attachments/assets/dc3c752c-b77e-4dd7-a98d-8fb9d78d5682

StratGuitar commented 1 month ago

Same issue with Cubase 12 Elements - Windows 11 - Focusrite 2i4 and Realtek audio. Cubase is unusable with NAM 0.7.10.

Petervos2018 commented 1 month ago

Was 0.7.9 working the way it should?

DomMcsweeney commented 1 month ago

@Petervos2018 Yes, it's only happening with version 0.7.10, so I recommend that anyone using Cubase revert to 0.7.9 until it's fixed.

Also, I've checked other DAWs/hosts, and it only occurs in Cubase. For me at least it works fine in several other DAWs (Reaper, Ableton Live, Studio One, Bitwig, Cakewalk) and in other hosts such as Element, DDMF Meta/Super, Nembrini Nexus, etc.

felipevsw commented 1 month ago

I have Cubase 13 Elements, and the problem occurs there too.

sdatkinson commented 1 month ago

@felipevsw @DomMcsweeney @StratGuitar can any of you check whether the behavior happens using Steinberg's VST3 Plugin Test Host? (You can download it as part of the VST3 SDK, which is available here.)

Unfortunately, I don't have a copy of Cubase, and I've been unsuccessful at getting one for plugin development purposes, so all I have to troubleshoot Cubase with is their test host.

I tried to reproduce the issue there, but I was unsuccessful: it runs smoothly for me.

So there are two possibilities:

  1. You see the problem in the test host as well, in which case there's still something I need to do to reproduce the problem correctly.
  2. You don't see the problem in the test host, in which case there's (probably) something strange going on with Cubase.

Let me know what you find.

StratGuitar commented 1 month ago

Dear Steven, the problem is about the sample rate:

  1. NAM with no profile loaded: works.
  2. NAM with a profile loaded: the Cubase issue appears.
  3. NAM with a profile loaded, then changing the Cubase project's sample rate (Menu > Project > Project Setup > Record File Format > Sample Rate): the issue is gone until NAM's profile is changed again.

Hope it gives a clue. Best regards.

Also, if you deactivate and reactivate NAM, it works until you change the profile again. (Screenshot attached.)

sdatkinson commented 3 weeks ago

OK, I grabbed Cubase and I can replicate the issue.

The problem was introduced in commit 83fa931; reverting to 220c8b6, things seem fine again. The changes (possible culprits) are listed below. Interestingly, it happens even when the models are 48k and the host is too (i.e. no resampler, no latency). My wild guess is that the problem is calling SetLatency() outside of OnReset(). I'm going to test this.

One workaround I can see is to "set" the maximum latency once and have the plugin manage the other cases internally. That works fine for playback, where the latency can be compensated for. However, it's not ideal for real-time playing, because latency is needlessly introduced.
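
One way to picture that workaround (a sketch under assumptions, not the plugin's actual code): report a fixed worst-case latency to the host once, and internally pad the output with a delay whenever the loaded model's real latency is smaller, so the effective latency always matches the reported value. kMaxModelLatencyFrames, _PadToReportedLatency(), and mExtraDelay below are hypothetical names; only iPlug2's OnReset()/SetLatency() and the existing mModel member come from the code in this thread.

// Sketch only: "report the maximum latency once" workaround.
static constexpr int kMaxModelLatencyFrames = 256; // assumed worst case

void NeuralAmpModeler::OnReset()
{
  // Report the worst case once, so the reported value never has to change
  // while audio is running (e.g. when a model is loaded or removed).
  SetLatency(kMaxModelLatencyFrames);
}

void NeuralAmpModeler::_PadToReportedLatency(iplug::sample** outputs, int nFrames)
{
  // If the real latency is smaller than what was reported, delay the output
  // by the difference so the total always equals kMaxModelLatencyFrames.
  const int modelLatency = (mModel != nullptr) ? mModel->GetLatency() : 0;
  mExtraDelay.SetDelayFrames(kMaxModelLatencyFrames - modelLatency); // hypothetical delay helper
  mExtraDelay.Process(outputs, nFrames);
}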

So I'm going to start from 220c8b6, add the changes back one by one, and check. Should be able to figure it out soon 🙂

Possible culprits:

NeuralAmpModeler/NeuralAmpModeler.cpp

1.

@@ -366,6 +366,7 @@ void NeuralAmpModeler::OnReset()
   // If there is a model or IR loaded, they need to be checked for resampling.
   _ResetModelAndIR(sampleRate, GetBlockSize());
   mToneStack->Reset(sampleRate, maxBlockSize);
+  _UpdateLatency();
 }

2.

@@ -533,7 +534,7 @@ void NeuralAmpModeler::_ApplyDSPStaging()
     mModel = nullptr;
     mNAMPath.Set("");
     mShouldRemoveModel = false;
-    SetLatency(0);
+    _UpdateLatency();
   }
   if (mShouldRemoveIR)
   {

3.

@@ -548,7 +549,7 @@ void NeuralAmpModeler::_ApplyDSPStaging()
     mModel = std::move(mStagedModel);
     mStagedModel = nullptr;
     mNewModelLoadedInDSP = true;
-    SetLatency(mModel->GetLatency());
+    _UpdateLatency();
   }
   if (mStagedIR != nullptr)
   {

4.

@@ -881,6 +882,17 @@ int NeuralAmpModeler::_UnserializeStateLegacy_0_7_9(const IByteChunk& chunk, int
   return pos;
 }

+void NeuralAmpModeler::_UpdateLatency()
+{
+  int latency = 0;
+  if (mModel)
+  {
+    latency += mModel->GetLatency();
+  }
+  // Other things that add latency here...
+  SetLatency(latency);
+}
+

NeuralAmpModeler/NeuralAmpModeler.h

5.

@@ -126,7 +126,7 @@ public:
       // We can afford to be careful
       throw std::runtime_error("More frames were provided than the max expected!");

-    if (GetExpectedSampleRate() == GetEncapsulatedSampleRate())
+    if (!NeedToResample())
     {
       mEncapsulated->process(input, output, num_frames);
       mEncapsulated->finalize_(num_frames);

6.

@@ -156,7 +156,7 @@ public:
     mFinalized = true;
   };

-  int GetLatency() const { return mResampler.GetLatency(); };
+  int GetLatency() const { return NeedToResample() ? mResampler.GetLatency() : 0; };

   void Reset(const double sampleRate, const int maxBlockSize)
   {

7.

@@ -182,6 +182,7 @@ public:
   double GetEncapsulatedSampleRate() const { return GetNAMSampleRate(mEncapsulated); };

 private:
+  bool NeedToResample() const { return GetExpectedSampleRate() != GetEncapsulatedSampleRate(); };
   // The encapsulated NAM
   std::unique_ptr<nam::DSP> mEncapsulated;
   // The processing for NAM is a little weird--there's a call to .finalize_() that's expected.

8.

@@ -271,6 +272,9 @@ private:
   int _UnserializeStateLegacy_0_7_9(const iplug::IByteChunk& chunk, int startPos);
   // And other legacy unsrializations if/as needed...

+  // Make sure that the latency is reported correctly.
+  void _UpdateLatency();
+
   // Update level meters
   // Called within ProcessBlock().
   // Assume _ProcessInput() and _ProcessOutput() were run immediately before.

sdatkinson commented 3 weeks ago

Putting the changes back one by one...

The culprit appears to be change 1 above, the _UpdateLatency() call added to OnReset(), so let's remove it from the current code.

That worked.

I don't know why OnReset() is being called when I'm changing models (at least), but the _UpdateLatency() call it makes seems to be sufficient to cause the glitches.

Fix: only call _UpdateLatency() when the sample rate changes.
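
For illustration, a minimal sketch of what that fix could look like, assuming a new member such as mLastSampleRate to remember the previous rate (the member name is an assumption, not necessarily what the actual fix uses):

// Sketch only: refresh the reported latency in OnReset() only when the
// sample rate actually changed. mLastSampleRate is an assumed member,
// e.g. double mLastSampleRate = -1.0; in NeuralAmpModeler.h.
void NeuralAmpModeler::OnReset()
{
  const double sampleRate = GetSampleRate();
  const int maxBlockSize = GetBlockSize();
  // ... existing reset work ...
  _ResetModelAndIR(sampleRate, GetBlockSize());
  mToneStack->Reset(sampleRate, maxBlockSize);
  if (sampleRate != mLastSampleRate)
  {
    _UpdateLatency(); // only re-report latency to the host on a rate change
    mLastSampleRate = sampleRate;
  }
}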