jamtools / midithru


Implement chord families #8

Open · mickmister opened this issue 1 month ago

mickmister commented 1 month ago

When you turn on chord family mode, you should be able to choose which MIDI input and octave act as the trigger. You click the MIDI input device you want to use as the controller, then hit a note in the octave you want, and the program is now configured to use that octave's notes as the trigger notes for the chords.

Similarly, you choose the chord family and output during this configuration phase.

When there is no chord assigned to a given note, have the program default to major and infer the chord shape by translating an existing chord shape in this family to that root. But if "no mistake" mode is on, it won't play anything in that case. The "local only" toggle needs to be implemented for that to work.
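
Roughly, a sketch of that fallback logic (the `ChordFamily` shape and `chordForTrigger` name are made up for illustration):

```typescript
// Hypothetical shapes: a chord is a list of MIDI note numbers, lowest note = root.
type Chord = number[];

interface ChordFamily {
  // map from trigger note (in the configured octave) to an assigned chord
  chords: Partial<Record<number, Chord>>;
}

function chordForTrigger(
  family: ChordFamily,
  triggerNote: number,
  noMistakeMode: boolean,
): Chord | null {
  const assigned = family.chords[triggerNote];
  if (assigned) return assigned;
  if (noMistakeMode) return null; // play nothing rather than guess

  // Fallback: borrow an existing shape in the family and translate it
  // so its root lands on the trigger note.
  const existing = Object.values(family.chords).find((c) => c !== undefined);
  if (!existing) return [triggerNote, triggerNote + 4, triggerNote + 7]; // plain major triad
  const offset = triggerNote - existing[0];
  return existing.map((n) => n + offset);
}
```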

mickmister commented 1 month ago

To avoid analysis paralysis on how the data model should work, start with the user. Imagine you're sitting at a drumset with the keytar in your lap, and you want to do some quick data entry for some chords. Maybe a new chord family, maybe an existing one.

You play the chord and hold it until you've got what you want, then hit a foot pedal to lock in the chord.

Here's the point: how does the program know to listen for that foot pedal as the "lock in" trigger? Let the user configure that. It doesn't matter how it's stored. Just make it work. This is way better than manually editing code to configure it.
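
Something like a one-shot learn mode, as a sketch (`MidiEvent` and `onceNextMidiEvent` are stand-ins for however the app surfaces raw input events; storage is sketched below):

```typescript
// Hypothetical raw event shape
interface MidiEvent {
  deviceId: string;
  status: number; // note-on, CC, etc.
  data1: number;  // note or controller number
}

// Whatever the user hits next (the foot pedal) becomes the trigger
function learnTrigger(
  onceNextMidiEvent: (handler: (e: MidiEvent) => void) => void,
  save: (trigger: MidiEvent) => void,
): void {
  onceNextMidiEvent((event) => save(event));
}
```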

Make a big map in localStorage for MIDI mappings. Anything can store its own things using a unique key for its use case, like "chord-family-confirm-trigger", which would probably be an array. Don't over-engineer what it means or how it works. Let each piece of code that needs to get/set configuration handle its own data structures. Let it scale by being decoupled.
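
As a sketch, assuming a single `midi-mappings` blob (the helper names are made up):

```typescript
const MAPPINGS_KEY = 'midi-mappings';

function getMapping<T>(useCaseKey: string): T | undefined {
  const all = JSON.parse(localStorage.getItem(MAPPINGS_KEY) ?? '{}');
  return all[useCaseKey];
}

function setMapping(useCaseKey: string, value: unknown): void {
  const all = JSON.parse(localStorage.getItem(MAPPINGS_KEY) ?? '{}');
  all[useCaseKey] = value;
  localStorage.setItem(MAPPINGS_KEY, JSON.stringify(all));
}

// Each caller owns its key and decides what the value means:
setMapping('chord-family-confirm-trigger', [{ deviceId: 'pedal', note: 64 }]);
```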

Make the whole localStorage store an importable and exportable entity through JSON files.
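
A minimal sketch of that, using the standard localStorage API:

```typescript
// Serialize every key in localStorage to a single JSON document
function exportStore(): string {
  const out: Record<string, string> = {};
  for (let i = 0; i < localStorage.length; i++) {
    const key = localStorage.key(i)!;
    out[key] = localStorage.getItem(key)!;
  }
  return JSON.stringify(out, null, 2);
}

// Replace the current store with the contents of an exported file
function importStore(json: string): void {
  const data: Record<string, string> = JSON.parse(json);
  localStorage.clear();
  for (const [key, value] of Object.entries(data)) {
    localStorage.setItem(key, value);
  }
}
```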

mickmister commented 1 month ago

You can use React hooks to switch between contexts for whichever mode is currently active.

Use websockets to simulate plugged-in music instruments on the Pi, so it's easier to scale with the React side. Just implement the IInput and IOutput interfaces by passing the raw event data over the socket. A React context keeps the React components none the wiser about whether these are real MIDI instruments. All local/remote choices will be made through React context.
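
A sketch of what that could look like. These IInput/IOutput shapes are guesses for illustration, not the repo's actual definitions:

```typescript
// Assumed interface shapes: raw MIDI bytes in and out
interface IInput {
  onMessage(handler: (data: number[]) => void): void;
}

interface IOutput {
  send(data: number[]): void;
}

class WebsocketMidiOutput implements IOutput {
  constructor(private ws: WebSocket, private deviceId: string) {}

  send(data: number[]): void {
    // Forward the raw event data; the host replays it on the real instrument
    this.ws.send(JSON.stringify({ type: 'midi', deviceId: this.deviceId, data }));
  }
}

class WebsocketMidiInput implements IInput {
  private handlers: Array<(data: number[]) => void> = [];

  constructor(ws: WebSocket, deviceId: string) {
    // Assumes the host sends JSON text frames
    ws.addEventListener('message', (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'midi' && msg.deviceId === deviceId) {
        this.handlers.forEach((h) => h(msg.data));
      }
    });
  }

  onMessage(handler: (data: number[]) => void): void {
    this.handlers.push(handler);
  }
}
```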

Basically, try to naturally mirror the UI's state with what's going on on the remote MIDI host. Think "one user" for now. No multiplayer in this app yet. One UI, and one host.

A React hook will prime an App system (local or remote) with actions to set certain application modes, based on changes in UI state. Use tRPC to bridge the gap here for shared functionality and function calls. Whichever side is the MIDI host will always be configured to match the UI.
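
A sketch of the priming hook, assuming a hypothetical tRPC router that exposes a `setMode` mutation:

```typescript
import { useEffect } from 'react';
import { trpc } from './trpc'; // hypothetical typed tRPC React client for this app

export function useSyncModeToHost(mode: string) {
  const { mutate: setMode } = trpc.setMode.useMutation(); // assumed procedure name

  useEffect(() => {
    // Whenever the UI's mode changes, tell the host to match it
    setMode({ mode });
  }, [mode, setMode]);
}
```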

Especially since the versions of the two programs should match, given the UI is hopefully being served by the device itself. If not, you'll need to proxy the websocket through a remote server. Maybe TanStack, to make configurable states navigable. Maybe that will play a role in multiplayer: serializable, redirectable state.

mickmister commented 1 month ago

The React hooks will be responsible for knowing the context of the current mode. This keeps the UI coupled to that code as tightly as possible.

The server's data store should work similarly to localStorage: a key-value store at its core, shared with the UI as if it came from localStorage.
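
A sketch of one shared interface over both, with made-up names and a hypothetical `/api/kv` endpoint:

```typescript
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Browser-side store
const localKV: KVStore = {
  get: async (key) => localStorage.getItem(key),
  set: async (key, value) => localStorage.setItem(key, value),
};

// Server-backed store that the UI consumes the same way
const remoteKV: KVStore = {
  get: (key) =>
    fetch(`/api/kv/${encodeURIComponent(key)}`).then((r) => (r.ok ? r.text() : null)),
  set: async (key, value) => {
    await fetch(`/api/kv/${encodeURIComponent(key)}`, { method: 'PUT', body: value });
  },
};
```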

mickmister commented 1 month ago

This way there is no "core engine" to maintain. Everything has its own slice of the pie to store data about anything it wants. Every mode is then a plugin, which makes it horizontally scalable.
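
Structurally that could be as small as this sketch (names made up):

```typescript
// A plugin is just an id plus whatever it does with raw MIDI bytes
interface Plugin {
  id: string; // e.g. 'chord-family'
  onMidiEvent(data: number[]): void;
}

// Each plugin's slice of the pie: a namespaced view of the key-value store
function storageFor(pluginId: string) {
  return {
    get: (key: string) => localStorage.getItem(`${pluginId}:${key}`),
    set: (key: string, value: string) => localStorage.setItem(`${pluginId}:${key}`, value),
  };
}
```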

mickmister commented 1 month ago

We may need to implement an "effect rack"-like UI for each MIDI input. I feel that's the best way to see what's going on and to toggle things. Then you have a bus of MIDI events going through each plugin in use, just like Max for Live devices, but more composable and portable in the browser, etc.

A given rack can have certain inputs assigned to it, so not just one given instrument: a combination of MIDI input and octave, etc.

Then jam tools can have its own square/plugin to help process MIDI-triggered human actions. Each mode or plugin has its own configuration. Some plugins are composite modes, like ad hoc/playback progression mode. The more racks the better, I think.
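
A sketch of the rack as data, with made-up names: incoming MIDI events flow down a bus through each enabled plugin block.

```typescript
interface RackPlugin {
  id: string;
  enabled: boolean;
  process(data: number[]): void; // "does things" with the event
}

interface Rack {
  inputIds: string[]; // which MIDI inputs feed this rack
  plugins: RackPlugin[];
}

function dispatchToRack(rack: Rack, data: number[]): void {
  for (const plugin of rack.plugins) {
    if (plugin.enabled) plugin.process(data);
  }
}
```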

You could then have modular input things. Let's make a name for that: "input role". The keytar could be serving two (or technically more) input roles at a given time. One says "keytar, just this first octave"; the other says "keytar, just these other two octaves (or just exclude the previous octave), with octave up/down controlled by two MIDI buttons (which is necessary since we're assigning roles to specific octaves already)".
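
As a data sketch (field names made up), the keytar example could look like:

```typescript
// An input role = a device plus the slice of it the role claims
interface InputRole {
  deviceId: string;
  octaves: number[];        // which octaves this role covers
  octaveUpNote?: number;    // optional MIDI buttons for octave shifting
  octaveDownNote?: number;
}

// The keytar serving two roles at once:
const triggerRole: InputRole = { deviceId: 'keytar', octaves: [3] };
const melodyRole: InputRole = {
  deviceId: 'keytar',
  octaves: [4, 5],
  octaveUpNote: 112,
  octaveDownNote: 113,
};
```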

This allows each component to grow organically and sustain itself over time.

You could have an input role powered by scribbletune as well. That'd be cool

mickmister commented 1 month ago

Things like what key you're playing in are global. Chord families can be swapped out easily for each output, and can be applied to multiple outputs at the same time. Make it easy to reuse and cycle through them.
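
A sketch of that split, with made-up names: one global key, chord family assignments per output.

```typescript
interface GlobalState {
  key: string; // e.g. 'C' — global for the whole session
  chordFamilyByOutput: Record<string, string>; // outputId -> familyId
}

// Cycle an output to the next family in the list
function cycleFamily(state: GlobalState, outputId: string, familyIds: string[]): void {
  const current = state.chordFamilyByOutput[outputId];
  const next = (familyIds.indexOf(current) + 1) % familyIds.length;
  state.chordFamilyByOutput[outputId] = familyIds[next];
}
```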

mickmister commented 1 month ago

A given plugin's action could work with multiple input role types, so we should be able to assign multiple types and have the plugin accept only those specific kinds. These input role types can be specified in the plugin's manifest.

For instance, a "chord playback" plugin should be able to accept a 12-note octave input role, and a 6-note scale degree input role.
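
As a manifest sketch (the shape and role type names are made up):

```typescript
interface PluginManifest {
  id: string;
  acceptedInputRoleTypes: string[]; // only these role kinds can attach
}

const chordPlaybackManifest: PluginManifest = {
  id: 'chord-playback',
  acceptedInputRoleTypes: ['octave-12-note', 'scale-degree-6-note'],
};
```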

I'm thinking every chain/rack will really just have one plugin block. We'll see, but I don't see the blocks having a "stdout". They just "do things" with MIDI instruments and other arbitrary things. They all support attaching multiple input roles, though.

Random thought, but most WLED code should probably live in its own package and be imported.