ffont / shepherd

GNU General Public License v3.0

Need some guidance #2

Open juanmartin opened 1 year ago

juanmartin commented 1 year ago

Hi @ffont Thank you for your great work. Found this project when I had the idea of doing something similar but using a Teensy for a more bare metal feeling, closer to a hardware device.

I was able to compile it both on a Raspberry Pi 4 and on my MacBook Pro running macOS 12.6.1. I connected my Push 2 and a Steinberg audio interface that provides MIDI I/O; on both the rpi and the mac it worked, sending a sequence (of unknown origin, lol) to a hardware synth.

The problem I'm having is that every time I hit play, the only notes that play come from a sequence saved somewhere I can't locate. I cannot play MIDI notes from Push out through my audio interface's MIDI out port.

These are my configs: hardwareDevices.json

[
  {
    "type": "output",
    "name": "placa",
    "shortName": "placa",
    "midiOutputDeviceName": "Steinberg UR44 Port1"
  },
  {
    "type": "input",
    "name": "Push",
    "shortName": "Push",
    "midiInputDeviceName": "Push2Simulator",
    "controlChangeMessagesAreRelative": false
  },
// or
  {
    "type": "input",
    "name": "Push",
    "shortName": "Push",
    "midiInputDeviceName": "Ableton Push 2 MIDI 1",
    "controlChangeMessagesAreRelative": false
  }
]

backendSettings.json

{
    "metronomeMidiDevice": "Steinberg UR44 Port1",
    "metronomeMidiChannel": 16,
    "midiDevicesToSendClockTo": ["Steinberg UR44 Port1"],
    "pushClockDeviceName": "Ableton Push 2 MIDI 1"
}

I can also use the simulator (only on macOS; I think I could not access the one on the rpi because of network problems).

Also, after trying with a hardware synth, I looped the MIDI out cable from my audio interface back into its MIDI in and monitored whether the messages the simulator sends were received back on my audio interface, but they were not.

[screenshot: MIDI monitor output]

As you can see, some messages do come back, but only while the sequencer is playing, and as I said before, I cannot seem to find where that sequence is recorded or how to modify it. I cannot play individual notes and get them sent as MIDI to my devices or back to my audio interface.

So anyway, I think I might be quite close to getting it to work fully, but I think the usage instructions could be improved. I'd be happy to help by submitting documentation PRs once I understand everything and get it working properly.

Let me know if something wasn't clear enough, and again, thank you!

ffont commented 1 year ago

Hi @juanmartin, thanks for your message and for trying to get Shepherd running! Yeah, it looks like you are close to having it working, although I might need some more information to help you. But let me try. Here are some comments:

{
    "type": "input",
    "name": "Push",
    "shortName": "Push",
    "midiInputDeviceName": "YOUR_PUSH_MIDI_PORT_NAME",
    "controlChangeMessagesAreRelative": true,
    "notesMapping": "-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1",
    "controlChangeMapping": "-1,1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,64,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1"
}

(If this works, I'll document it better.)
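For reference, the notesMapping and controlChangeMapping values in the config above are comma-separated lists of integers, where -1 means "ignore". A small Python helper like this can catch a malformed string before launching the backend (the expected length of 128, one entry per MIDI note/CC number 0–127, is my assumption, not something confirmed by the Shepherd docs):

```python
def parse_mapping(s, expected_len=128):
    """Parse a Shepherd-style mapping string into a list of ints.

    -1 means "ignore this note/CC number"; anything else is the number to
    substitute. expected_len=128 (one entry per MIDI number 0-127) is an
    assumption, not confirmed by the Shepherd docs.
    """
    values = [int(v) for v in s.split(",")]
    if len(values) != expected_len:
        raise ValueError(f"expected {expected_len} entries, got {len(values)}")
    if any(v < -1 or v > 127 for v in values):
        raise ValueError("entries must be -1 (ignore) or a MIDI number 0-127")
    return values

# A mapping that ignores everything except note 60, which plays as note 48:
entries = ["-1"] * 128
entries[60] = "48"
mapping = parse_mapping(",".join(entries))
print(mapping[60])  # 48
```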

Let me know how it goes and thanks a lot!

juanmartin commented 1 year ago

Hey @ffont, thank you for your answer! I came back from holidays and tested with a clean setup. These are now my working configs:

backendSettings.json

{
    "metronomeMidiDevice": "Steinberg UR44 Port1",
    "metronomeMidiChannel": 16,
    "midiDevicesToSendClockTo": [],
    "pushClockDeviceName": "Ableton Push 2 Live Port"
}

hardwareDevices.json

[
  {
    "type": "input",
    "name": "Push In",
    "shortName": "Push IN",
    "midiInputDeviceName": "Ableton Push 2 Live Port",
    "controlChangeMessagesAreRelative": true
  },
  {
    "type": "output",
    "name": "placa out",
    "shortName": "placa out",
    "midiChannel": 1,
    "midiOutputDeviceName": "Steinberg UR44 Port1",
    "controlChangeMessagesAreRelative": true
  }
]
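As a side note, trailing commas (and // comments) are not valid JSON, so a quick stdlib-only Python check like this can flag a broken config before launching the backend. The required-key list below is my guess from the examples in this thread, not Shepherd's actual schema:

```python
import json

# Keys taken from the example configs in this thread; Shepherd may accept
# more (this is NOT the authoritative schema, just a sanity check).
REQUIRED_DEVICE_KEYS = {"type", "name", "shortName"}

def check_hardware_devices(text):
    """Load hardwareDevices.json content and check each device entry."""
    devices = json.loads(text)  # raises on trailing commas / invalid JSON
    for i, dev in enumerate(devices):
        missing = REQUIRED_DEVICE_KEYS - dev.keys()
        if missing:
            raise ValueError(f"device {i} is missing keys: {sorted(missing)}")
        if dev["type"] not in ("input", "output"):
            raise ValueError(f"device {i} has unknown type {dev['type']!r}")
    return devices

devices = check_hardware_devices("""
[
  {"type": "input", "name": "Push", "shortName": "Push",
   "midiInputDeviceName": "Ableton Push 2 Live Port",
   "controlChangeMessagesAreRelative": true}
]
""")
print(len(devices))  # 1
```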

Using macOS Monterey for now, I'll test my rpi setup later.

Still getting to know the workflow, so a couple of new questions maybe you can help me with:

So, in conclusion, the only thing I'm missing is how to filter out unwanted notes.

Again, thank you for your great work and availability :D 🚀

ffont commented 1 year ago

Hi @juanmartin, sorry for the late reply. The issue you're having with the notes mapping makes sense. The way Shepherd's communication with Push 2 works is through MIDI. The Python app talks to Push via MIDI, and this is how it receives information about which pads and encoders are touched/rotated, etc. However, to use Push to play notes, it is better to also send the MIDI messages from Push directly to the Shepherd backend so we don't add any extra latency.

The problem is that Shepherd needs to know how to interpret these MIDI messages and which notes they should correspond to. This is what notesMapping and controlChangeMapping are for (notesMapping for the pads, controlChangeMapping for the encoders). You have to set these parameters as instructed in hardwareDevices.json. By default these mappings are set to -1, which means that all notes coming from Push will be ignored. However, when you go into "note mode", the Python app should tell Shepherd to use a specific mapping (depending on the octave you have selected) and then you should be able to hear notes.

The problem you're having is that this communication is not happening properly, and therefore Shepherd does not know what to do with the incoming MIDI from Push. Also, if you remove these parameters, no mapping is applied and all MIDI input is sent to the selected track output. That is why, in this case, you hear notes when performing actions other than pressing pads.
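The behaviour described above (a -1 entry drops the note, any other value remaps it, and having no mapping at all passes MIDI straight through) could be sketched like this. This is purely an illustration of the idea, not Shepherd's actual C++ implementation:

```python
def apply_notes_mapping(note, mapping=None):
    """Return the note to play, or None if the incoming note is ignored.

    mapping: a list of 128 ints where mapping[n] == -1 drops incoming note n
    and any other value is the note to substitute. mapping=None means no
    mapping is configured and everything passes straight through (the
    "phantom notes" situation described above). Purely illustrative.
    """
    if mapping is None:
        return note                     # no mapping: raw MIDI pass-through
    mapped = mapping[note]
    return None if mapped == -1 else mapped

all_ignored = [-1] * 128                # the default: every note dropped
print(apply_notes_mapping(60, all_ignored))   # None

shifted = [n - 12 if n >= 12 else -1 for n in range(128)]  # one octave down
print(apply_notes_mapping(60, shifted))       # 48
```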

Looking at your config file, I think the problem is that the Push device should be named Push and not Push IN (you can use the same value for name and shortName). The Python script has the Push device name hardcoded somewhere, and it expects it to be Push (see here: https://github.com/ffont/shepherd/blob/3428ed79f2a64fce23b213ab1bbf96fe56c02d3e/Push2Controller/modes/melodic_mode.py#L172).

Let me know if that change fixes it :)

juanmartin commented 1 year ago

OK man, got it working now!! Thanks a lot 🚀 Following your comments, I set Push as the name, put the mapping settings on that object, and removed them from the output device, and voilà: no more phantom notes coming from touching the knobs 🔥 I see a lot of potential here :D

Just a few more questions:

Are there plans for:

As soon as I have capacity I'm willing to take a more in-depth look at the project and see if I can collaborate with you on this. That is, if you are interested in maintaining the project, of course.

ffont commented 1 year ago

That is amazing :)

The flickering of the buttons is because of a bug that I have not addressed. Basically, there are several parts of the UI script trying to update that button, and an "if/else" is missing somewhere.

If you have looked a bit into the code, you might have noticed that Shepherd is divided into a backend part (the JUCE C++ code) and a frontend part (the Python script). They communicate with each other via WebSockets, and this is how the Python script maintains "a copy" of the current state of the backend so it can show information to the user. All the detailed communication with Push happens in the Python script; the backend does not know anything specific about Push, but it does also receive Push's MIDI, and the frontend tells the backend how to treat it (this is the notesMapping and controlChangeMapping thing).
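The "frontend keeps a copy of the backend state" pattern described here might look roughly like this minimal sketch. The message types and field names ("full_state", "update", "key", "value") are invented for illustration; Shepherd's real WebSocket protocol differs:

```python
import json

class StateMirror:
    """Minimal sketch of a frontend-side state mirror.

    The backend would push JSON messages over a WebSocket; here we just feed
    in the raw message strings. Message shapes are hypothetical, not
    Shepherd's actual protocol.
    """
    def __init__(self):
        self.state = {}

    def handle_message(self, raw):
        msg = json.loads(raw)
        if msg["type"] == "full_state":
            self.state = msg["state"]              # initial sync: replace copy
        elif msg["type"] == "update":
            self.state[msg["key"]] = msg["value"]  # incremental change

mirror = StateMirror()
mirror.handle_message('{"type": "full_state", "state": {"bpm": 120}}')
mirror.handle_message('{"type": "update", "key": "bpm", "value": 124}')
print(mirror.state["bpm"])  # 124
```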

To implement new scales, that should happen in the Python script and should not be complicated. Basically, the script needs to know which scale is active in order to colour the pads accordingly, and it has to send a new notesMapping to the backend so that it interprets the MIDI notes correctly.
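Generating such a notesMapping for a scale would boil down to something like this sketch. The layout assumptions (64 pads sending consecutive MIDI note numbers starting at 36) are mine for illustration, not Push's documented behaviour:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def scale_notes_mapping(root=36, intervals=MAJOR, first_pad_note=36, pads=64):
    """Build a 128-entry notesMapping where each pad plays the next scale note.

    Pads are assumed to send consecutive MIDI note numbers starting at
    first_pad_note (an assumption about the Push "note mode" layout); every
    other entry stays -1 (ignored).
    """
    mapping = [-1] * 128
    for i in range(pads):
        octave, degree = divmod(i, len(intervals))
        note = root + 12 * octave + intervals[degree]
        if note <= 127:
            mapping[first_pad_note + i] = note
    return mapping

m = scale_notes_mapping()
print(m[36], m[37], m[38])  # 36 38 40  (root, major 2nd, major 3rd)
```

Sending ",".join(str(v) for v in m) to the backend would then produce a mapping string in the same format as the ones shown earlier in this thread.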

For the arpeggiator, it should be implemented as a "MIDI effect" in the backend. I have thought about it a few times, but so far nothing is implemented for MIDI effects. We would need to define a new MIDIEffect class and add support for attaching such effects to Track objects (and possibly to Clip objects as well). These objects need to be implemented in a particular way so that their state is shared with the frontend, and we would need to implement some methods to update them over WebSockets. Once this work is done, adding other MIDI effects should be easy.
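A MIDIEffect abstraction like the one described could be prototyped along these lines. This is a pure illustration in Python (class and method names are invented); the real thing would live in the JUCE C++ backend:

```python
class MIDIEffect:
    """Base-class sketch: turns held notes into a sequence of step events."""
    def process(self, held_notes, num_steps):
        raise NotImplementedError

class UpArpeggiator(MIDIEffect):
    """Cycles through the held notes from lowest to highest, one per step."""
    def process(self, held_notes, num_steps):
        ordered = sorted(held_notes)
        return [ordered[step % len(ordered)] for step in range(num_steps)]

# Arpeggiate a held C major chord (C4, E4, G4) over six steps:
arp = UpArpeggiator()
print(arp.process({60, 64, 67}, 6))  # [60, 64, 67, 60, 64, 67]
```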

I don't have any specific roadmap for the project, but I'd be happy to continue working on it if other people want to contribute as well. In fact, lately I've been merging this project with another project I worked on, a sampler called Source (see https://github.com/ffont/source). This adds capabilities for loading sounds into the different tracks so that Shepherd does not need external gear to make music.

Nevertheless, the integration is very raw so far. It compiles the backend part of the sequencer from this repository, but the frontend code is duplicated because I made changes and added some things only relevant to the sampler part. I don't like this; I might actually move the whole Python script folder to a new repository that can be shared between both projects. That way, all improvements would be shared between Shepherd alone and Shepherd+Source.

Anyway, sorry for the long explanations. If you want to get involved let me know and let's talk about it.

juanmartin commented 1 year ago

I followed your explanations, thanks for that, it gives context :) I'll give the code a closer look now and see what I can make of it. I've just seen the demo of Source and it looks really fun and quick to get something going 😃 although I don't have anything other than an RPi, so I'll try the Desktop version. It would be amazing to use it with Push 2 🔥 I also saw your research at UPF and it caught my eye, so I guess we could talk.

juanmartin commented 1 year ago

Hey Frederic

I'll post this here since it's related to Shepherd's CC output, but using Source as an instrument receiving its MIDI.

So now I was testing Source on my mac; I just downloaded the release you have in the repo. As I wanted to play it using MIDI (and the only handy thing I have here is the Push 2), I launched Shepherd in parallel to send MIDI to Source. Notes are working fine (I guess, as I can hear the samples playing; one sample per octave?). Then I wanted to map some CC to modulate a filter on one sample, but CC doesn't seem to be sent anywhere.

I was monitoring the messages with MIDI Monitor:

[screenshot: MIDI Monitor output]

As you can see, ShepherdBackendNotesMonitoring and Ableton Live Push 2 Live Port do get notes. But CC is only captured from the raw Push input, not from Shepherd. Is this intended? This is the first time I'm trying to send CC with Shepherd, as the hardware synth I tested with before does not support CC input.

In Source, I have tried selecting either ShepherdBackendNotesMonitoring or Ableton Live Push 2 Live Port, and even both simultaneously 😆, but I could not get the CC shown on Push's screen to be mapped to anything in Source.

Thanks again for your great work. I have started looking at some JUCE tutorials, reminds me a bit of openFrameworks but just for audio.

ffont commented 1 year ago

Hi @juanmartin, sorry for the late reply; my notification for this message was buried. I think you are configuring the MIDI side in the right way; however, the main issue is that the mapping of MIDI channels/notes for every Source sound must be set up in a very specific way. Also, you can assign CC messages to parameters in Source in the bottom section of every sound.

There is a way to compile Source that already includes Shepherd in it; you then run a Python script very similar to the one you are already running for Shepherd. You should follow the steps to compile Source (they are similar to Shepherd's, so it should not be difficult), but run fab compile_with_sequencer instead of fab compile. Then run the controller Python script here (instead of the one you're running now): https://github.com/ffont/source/tree/master/SourceShepherdPush2Controller. I think this should pre-configure Source and Shepherd to run nicely together.

That said, there are many details that need to be understood for this to work in a consistent way, and the integration is not very advanced, so it is not easy. If you want, we can have a video call so I can tell you more.