JacquesLucke / animation_nodes

Node based visual scripting system designed for motion graphics in Blender.

Proposal for a new node: MIDI #1195

Closed: Patochun closed this issue 4 years ago

Patochun commented 5 years ago

Hello, I have started writing code to use MIDI events to drive animations in Blender.

Like this : https://www.youtube.com/watch?v=9N9v5mHrADw

Now I think it's time to release my work for everyone. At first I thought of creating a standalone Blender add-on, but Animation Nodes seems to be the best choice.

I am working on a proof of concept for now: animating a list of integers, where each entry represents a note and its velocity value. So for this POC I limit myself to a single MIDI track.

I have a few questions to start:

Animation Nodes is a big, beautiful piece of code. I'm a level-4 Python dev while AN requires level 15... it's a journey for me.

Clockmender commented 5 years ago

I did all this MIDI stuff over a year ago, including multi-track, etc. See this:

https://www.youtube.com/watch?v=QM0b1PHrigc

There are more like this on my YouTube channel. I never got any interest or uptake from the AN devs... Talk to me if you want to know more.

Patochun commented 5 years ago

Hello Clockmender, what a great job! It's not clear to me how you passed the MIDI events into AN. Did you code a new AN node?

Clockmender commented 5 years ago

I coded many new nodes to read MIDI files converted to CSV format. One "bakes" the MIDI file to a series of empties; then I use lots of other standard and custom AN nodes to build animations. In 2.79 I baked the MIDI events to F-Curves in the blend file, but this did not work in 2.8 and I got no help here on how to fix it, so for 2.8 I took that option out.

Take a look at this: https://clockmender.uk/blender/animation-nodes/midi-controls/ This will explain it all....

OmarEmaraDev commented 5 years ago

I am not sure what you mean by your second question, but ideally, the node should take a frame number as an input, much like the Sound node.

Patochun commented 5 years ago

Absolutely. The goal is to take the frame number and act on the MIDI events. I am trying to create a list output animated by these events, just to drive a POC for now.

Clockmender commented 5 years ago

Do you have anything to show us yet? You will need to convert MIDI "pulse" timings into seconds, then into frame timings... And what does POC mean?
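
For reference, the conversion chain described here (pulses to seconds to frames) could look roughly like the sketch below. It assumes a constant tempo (real files need a tempo map, as discussed later in this thread), and the function names are illustrative, not from AN or MIDO.

```python
# Sketch of the pulse -> seconds -> frame conversion, assuming constant tempo.

def ticks_to_seconds(ticks, ticks_per_beat, tempo):
    # tempo is the MIDI set_tempo value: microseconds per quarter note.
    return ticks * tempo / (ticks_per_beat * 1_000_000)

def seconds_to_frame(seconds, fps=24):
    # Blender frame number (as a float) at the scene's frame rate.
    return seconds * fps

# 480 ticks at 120 BPM (tempo = 500000) with 480 ticks per beat
# is one quarter note = 0.5 s = frame 12 at 24 fps.
assert ticks_to_seconds(480, 480, 500000) == 0.5
assert seconds_to_frame(0.5) == 12.0
```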

OmarEmaraDev commented 5 years ago

@Patochun I think discussing more details about what you are doing is essential here. Otherwise your efforts may not align with the goals of the project. So make sure to share the elementary design and workflow of the system first.

Patochun commented 5 years ago

The design is not yet fixed. POC = Proof Of Concept. And yes, I convert the MIDI time notation to seconds. I will take the time to write a more detailed project description and post it here.

OmarEmaraDev commented 5 years ago

@Patochun I see. Take your time then. Let me know if you need anything.

Patochun commented 5 years ago

To introduce the discussion I wrote this "MIDI node Design". @OmarEmaraDev, tell me what you think of it: what should I change or dig into more? I would also need information on how to create a new socket.

@Clockmender, have you published your code? Personally, I use the MIDO module for managing MIDI data.

It is a succinct document, just to know if it's going in the right direction: AN MIDI Design.pdf

Clockmender commented 5 years ago

Reading the MIDI file and writing the animations to something, be that empties or F-Curves, should be a one-time operation, not something done "on the fly" at each execution; doing it that way will kill your processor, IMHO.

You can use a Generic socket for now, until you know what your intended socket is supposed to do.

Yes, my code is published, see my website...

OmarEmaraDev commented 5 years ago

@Patochun I just read your design document, I will think about it and get back to you with feedback.

Clockmender commented 5 years ago

@Patochun Something else to read:

https://blenderartists.org/t/were-in-the-midi-audio-daw-real-time/1142160

OmarEmaraDev commented 5 years ago

@Patochun @Clockmender Here are my initial thoughts.


Animation Nodes is data driven by design, not event driven. So we should avoid giving users events and instead give them intuitive structured data. The structure I have in mind is as follows:

Midi File: a list of Midi Tracks.

Midi Track: a list of Midi Notes, plus track-level info.

Midi Note: its channel, note number, velocities, and absolute on/off times.

I intentionally left out all other events like Polyphonic Pressure and track events because they would complicate the design at this point. But we can add them later, bearing in mind that they will not be exposed as events either. For instance, Polyphonic Pressure can be represented as an interpolation or a spline.

In terms of nodes, we can provide Midi Tracks From File, Midi Track Info, and Midi Note Info nodes to access the data of Midi files. Additionally, we can provide utility nodes to filter notes by channel or note number, as well as nodes to evaluate the sound of a group of notes using ADSR envelopes.
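
In Python terms, that structure might look like the sketch below. The exact field names are guesses based on the rest of this thread (absolute times in seconds, on/off velocities, channel and note number), not a final design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MidiNote:
    channel: int        # 0-15
    number: int         # 0-127, e.g. E4 = 64
    time_on: float      # absolute time in seconds, tempo already resolved
    time_off: float     # absolute time in seconds
    velocity: int       # note-on velocity, 0-127
    off_velocity: int   # note-off velocity, 0-127

@dataclass
class MidiTrack:
    name: str
    notes: List[MidiNote] = field(default_factory=list)

@dataclass
class MidiFileData:
    tracks: List[MidiTrack] = field(default_factory=list)
```

Under this sketch, Midi Tracks From File would output a list of MidiTrack, and the info nodes would simply unpack these objects.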

Patochun commented 5 years ago

Hi,

@OmarEmaraDev I need some time to read and rethink based on your proposal. I fully understand the need to maintain a data-driven approach. The data still has to be animated a little, though.

I just pushed my POC here: https://github.com/Patochun/animation_nodes/tree/MIDI-node/animation_nodes/nodes/sound

I am an old-school developer, and besides Python (1 year of experience) I am also learning dev environments like GitHub, VS Code, etc. By the way, it's very difficult to debug with just print statements. So I first needed to check that I could integrate my code in a new context. It's just a test, but it works a little. There are still some bugs, and it follows my initial design.

I am very happy to be able to participate in something that will be useful to many people. At my modest level.

Patrick

Patochun commented 5 years ago

MTB.py is my initial code, heavily reworked to fit our goals.

Clockmender commented 5 years ago

@patochun @OmarEmaraDev you both seem to want to re-invent a wheel I solved a long time ago, so I am removing myself from this project of yours. I don't expect, nor want, to see any of my code, systems, functions, or nodes in your project.

Perhaps, Omar, you might explain to me why you never took up my work, given all I know about MIDI and all I have done for AN. I am very disappointed in this situation, I will not be offering any more help here.

OmarEmaraDev commented 5 years ago

@Clockmender I am sorry you feel this way. I have always shown interest in your work, and Jacques and I always tried to help you with your interesting projects. However, if I am being honest, I never really considered your approach an optimal one, and the code quality was questionable. Your code wasn't even working correctly and you never submitted your code for review, so I didn't handle it at full capacity. Yet, I never gave up on your work, the project's thread is still open for discussion. Feel free to present your project again there, and I will be there discussing it with you again just like I am discussing this project.

You are not the first to try to implement MIDI in Animation Nodes; Leon, Steven, Jacques, and I all tried different approaches at some point, and none of our trials made it into Animation Nodes. Which is fine; we will just try again. Nobody is reinventing the wheel, we are just trying different approaches.

@Patochun is enthusiastic about the project and wanted to give it a go. He has some very good proofs of concept, and I think the project has the potential to finally add MIDI support to Animation Nodes. I will make sure Patochun has all the support he needs to complete the project.

In conclusion, if you think your project is better, feel free to discuss it with us. Don't undermine other projects.

Clockmender commented 5 years ago

Don't undermine other projects.

Oh well, I must say that I have not undermined this one, merely left it.

However, your publicly ridiculing my approach and code might be perceived as you not obeying your own dictates. Let's examine that:

My approach of baking the MIDI file to F-Curves was agreed with Jacques, and he provided the framework for me to work with, so your criticism here should be aimed at him, not me. This, I believe, is in line with the way sound is baked for animation; so that approach is optimal for sound, but the same for MIDI apparently is not.

The only part of my code that does not work in 2.8 is the part provided by Jacques (reading the baked F-Curves), which worked perfectly well in 2.79. Despite repeated requests here for a resolution, none was forthcoming, only dismissal.

At least 9 months ago you suggested that you and I have a conversation on MIDI, to which I agreed enthusiastically; I never heard from you again on the matter.

In conclusion - Live by the rules you dictate to others.

OmarEmaraDev commented 5 years ago

@Clockmender You stated that we are "reinventing a wheel you solved a long time ago", implying that this project should not go on and that your project should be adopted instead. To me, this sounds like you are undermining the project. But I am sure you didn't mean to, so let's just put this aside.

I never ridiculed your approach; I simply don't consider it an optimal one. That is just my personal opinion. I apologize if I sounded like I was ridiculing it.

It is fine if Jacques agrees with your approach, but that is beside the point. You are the one working on this project and my criticism will be directed to you.

Sound baking used to work that way in Animation Nodes. I considered the approach sub-optimal and rewrote the whole sound system to be more optimal. This is exactly what I am trying to do here, except I want to do it right from the start. So your argument is counterintuitive.

It doesn't really matter which part is not working; what matters is that your code is not working, regardless of the reason. There was no "dismissal"; there was me repeatedly trying to help you resolve the issues you were having. It is true that I couldn't pinpoint the issue in your code, but I really tried; I don't know what is expected of me here. Also, this is not the first time you have brought this up, so please reconsider your position.

I don't recall ignoring a conversation with you about MIDI, if I did, please forgive me, but I really don't remember this, maybe I just missed your message. I want everybody to be happy here, and I will make amends if possible. What can I do to fix this situation?

Patochun commented 5 years ago

Hello all, I feel rather uncomfortable given the situation. I apologize for not having foreseen what was going to happen. @Clockmender, I understand that you think we are reinventing the wheel here, considering your already well-advanced work. On the other hand, I did not know of your work before I started, and I had also built something that uses MIDI to animate. I'm not super comfortable with English, as you can imagine (I'm French).

So I had a little trouble understanding the concept you went with. Maybe we can put together something that marries the best of all worlds. What I have in mind:

- 1 node to generate F-Curves, one per note per track?
- 1 node that provides more raw data, like a list of the used note names and another list of their velocity values as a function of time.
- 1 node that will later take care of the track-level data itself.

It seems to me more complicated for users to have yet another level of data, down to the notes themselves; they would then need a very large number of nodes to gather the lifetime of the notes in order to animate a track.

With a list or an F-Curve you do not lose access to the note. For example, we can locate the note corresponding to the bass drum in the rhythm track to create a pulsed movement.

I keep in mind that the goal is to let people animate everything they want from MIDI data. What do you think: would this be a good way to move forward together without losing anyone?

OmarEmaraDev commented 5 years ago

@Patochun So you are proposing we skip the structure for MIDI Notes and instead provide lists for all the MIDI Note attributes? While this might be advantageous because it presents data in a contiguous form, it is disadvantageous because we lose structure and end up with events, bringing us back to the conversation about "event driven" vs "data driven" designs. What do you think?

I fail to see how using FCurves makes sense here. The only way to use FCurves in Animation Nodes is to evaluate them, so why not just take a shortcut and evaluate MIDI notes directly? Unless you are talking about writing FCurves back to objects' animation channels, which is arguably not how we do things in Animation Nodes.

@Patochun Please don't feel discouraged or pressured in any way. My disagreement with Clockmender is not related to your project and should not affect it in any way.

Patochun commented 5 years ago

Only a "structured data" view with note-level granularity, I have trouble figuring out what all the nodes needed to animate a 15-track music with 20 notes per track would look like. Can you enlighten me on it?

OmarEmaraDev commented 5 years ago

This is what I have in mind:

MIDI Tracks From File -> Loop Over Tracks -> Track Info -> A List Of MIDI Notes.
MIDI Notes -> Evaluate MIDI Notes -> Floats.
MIDI Notes -> Loop Over MIDI Notes -> Get start and end times.

Patochun commented 5 years ago

OK, I see the idea better now. No work is done upstream; we provide raw data, and all the intelligence of the processing is realized by the structure of the node tree, in particular by the "specialized nodes" for processing: loops, and eventually scripts.

Processing consists, for example, of damping the attack and release of notes in order to avoid a "square" signal and awkward animations.

With unprocessed raw data, will this operation have to be done via other nodes, like the script node?

Moreover, even before damping anything, the notes must already be positioned in time. In a MIDI file this is quite complex: it must take into account the tempo (which varies over time) and the time quantization given by several pieces of data. I imagine this part will have to be handled, at a minimum, by the node that opens the MIDI file and creates the track data?

As for the raw data, it is time to settle on the structure of the file, track, and note data. You already proposed something above; I think we now need to describe the objects and their fields.
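
As a starting point for that discussion, here is a sketch of how the file node could resolve delta ticks into absolute seconds using MIDO (as suggested above), including tempo changes. It pairs note-ons with note-offs per channel, and it also normalizes the note_on-with-velocity-0 convention discussed later in this thread. Handling tempo inline like this is exact for single-track (type 0) files; type 1 files would additionally need the tempo map from the first track applied to all tracks.

```python
import mido

def track_notes_in_seconds(track, ticks_per_beat):
    """Yield (channel, note, time_on, time_off, velocity), times in seconds."""
    tempo = 500000                 # MIDI default: 120 BPM (us per quarter note)
    now = 0.0                      # absolute time in seconds
    pending = {}                   # (channel, note) -> (time_on, velocity)
    for msg in track:
        now += mido.tick2second(msg.time, ticks_per_beat, tempo)
        if msg.type == 'set_tempo':
            tempo = msg.tempo      # tempo change: affects later delta times
        elif msg.type == 'note_on' and msg.velocity > 0:
            pending[(msg.channel, msg.note)] = (now, msg.velocity)
        elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
            start = pending.pop((msg.channel, msg.note), None)
            if start is not None:
                yield (msg.channel, msg.note, start[0], now, start[1])

midi = mido.MidiFile("song.mid")   # hypothetical path
for track in midi.tracks:
    notes = list(track_notes_in_seconds(track, midi.ticks_per_beat))
```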

Clockmender commented 5 years ago

What can I do to fix this situation?

@OmarEmaraDev Well, we have probably reached the low point and can only go up from here. You could show a little more respect for my age, experience, and knowledge, and I could refrain from churlish remarks, like explaining that the deficiencies in my code were due to two factors: 1) I was learning to write nodes from a knowledge base of zero, and 2) the AN API documentation had not progressed far beyond "inadequate". So let's put that aside and move on.

I have to say that I think you are hopelessly overworked on this development and could benefit from more assistance. I tried to help, but felt my ideas were largely ignored.

As far as development is concerned, there seems to be confusion over MIDI data. A MIDI file contains a chronological list of time-stamped events, like "time, note on, etc."; it does not represent a continuous stream of data that can be accessed on a time basis, like sound files. By that I mean that you cannot, for example, ask the file which notes are on, and at what velocity, at a point in time without back-tracking the file for the previous event for each note, which might be as far back as the beginning of the file; so reading the MIDI file is inherently inefficient.

The basis of my research here, and hence my comment about reinventing the wheel, was not to obviate further work, but to say that I had already concluded that the data must be converted, as a one-time operation, into "continuous" rather than "event" data. I sought to do this by either writing F-Curves to the blend file or writing the same to control empties. I am not so conceited as to suggest that my methods or code should be adopted.

It should be noted that MIDI data is good at driving sound generation; it is not so good at driving animations. Take an example: at, say, time = 2.2 seconds, note E4 is activated for some time, then immediately re-activated. If the notes are perfectly quantised, the time lapse between the first note-off and the second note-on is 0; a great problem for the animator.

The minimum note length a human can achieve is widely considered to be 1/64th of a second, so frame rate is also important and ideally should be based on the BPM and the shortest note length. This causes more problems in keeping the blend file consistent for variable-BPM files.

Another problem is that the animator does not want a piano key mesh, for example, to "snap down" immediately, which is caused by a lack of damping (as Patrick calls it) or easing (as I called it), and also wants a definite separation between consecutive activations of the same note; hence my "note separation" value. This gives us the chance to animate a keyboard realistically from perfectly quantised MIDI data.

No human could play perfectly quantised sequential notes, but there is nothing to stop someone editing a MIDI file in a DAW to achieve just that, or indeed note lengths shorter than 1/64th of a second. Often drum tracks consist of very short "trigger" events, rather than note lengths reflective of the sound produced by the drum sample.

In terms of efficiency, I have conducted experiments on a MIDI file (Scott Joplin's The Entertainer), which as I remember consists of 55 individual unique notes played a total of 7,500-odd times. It took my Mac about 1.5 seconds to make the F-Curves, or controls, from this data; I cannot imagine this working "on the fly" between two frames, even at 1/24th of a second per frame. I have also looked at various classical pieces where the note duration and frequency would mean we have to work in 1/64ths of a second, or multiples thereof, to get a realistic animation.

The system has to be able to cope with multi-track MIDI files; one of my test MIDI files has 23 tracks and lasts 11 minutes, which is impossible to interpret "on the fly" between frames. Then there is the problem of the user scrubbing the timeline to, say, frame 1100 and then expecting the system to show which notes are on at that point; from a MIDI file this can only be done by back-tracking. Pink Floyd's "Shine On You Crazy Diamond" holds the first chord for 30 seconds (from memory?); that is an awfully long way to back-track to find the first note-on event.

So, I would welcome any thoughts on these points. I could not, and never did, find a way to read event data without first converting it to continuous data, or to avoid the need to clean, ease, and separate notes in order to get a satisfactory animation. There are quite a few examples on my YouTube channel, including a drum animation, that reflect the level of realism I was able to achieve.

Cheers, Clock.

Patochun commented 5 years ago

If your data structure representing a track holds all the timestamped note-on and note-off events measured in seconds (floats), I do not think it is impossible to find the state of the notes at a time T. The data is necessarily sorted along the timebase, so we can use a dichotomous (binary search) algorithm to quickly find the last state change that occurred before time T, and then walk back as you indicated. There remains the problem of a note held since time 1 while we are looking for what is active at frame 1100. It's rare, but still possible.

I am thinking out loud, as they say.

Maybe the structure of a track should be built a little like the way video codecs optimize for changes (I mean compressed video codecs). We limit the back-tracking by recording, at regular intervals (*), the state of all the keys. This trades processing time for the amount of data to be stored (which is almost free).

(*) calculated from the time granularity of the music, for example.

What do you think? Patrick.
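
A sketch of that lookup, assuming note records sorted by start time: the scan over earlier entries is the back-tracking discussed above, and the periodic "keyframe" snapshots proposed here would bound it. The record layout is illustrative.

```python
from bisect import bisect_right

def notes_active_at(notes, t):
    """notes: list of (time_on, time_off, number, velocity), sorted by time_on.

    Binary search finds the last note starting at or before time t; the scan
    over the earlier entries then keeps only the notes still held at t.
    """
    i = bisect_right(notes, (t, float('inf')))
    return [n for n in notes[:i] if n[1] > t]   # still held: note-off after t

notes = [(4.0, 4.8, 64, 67), (4.5, 4.6, 65, 90), (5.0, 5.2, 64, 70)]
print(notes_active_at(notes, 4.55))  # the first two notes are sounding
```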

Clockmender commented 5 years ago

I'll think about what you said. I personally never found a more efficient way to handle the data than a curve, which will always return the state of any note at any time very quickly, but let me know how this progresses. I am concerned about the concept of back-tracking the file, for what could be a long way, in between frames, where you may only have, say, 1/64th of a second to do so, along with all the other processing required for the animation.

One thing I forgot to say: whilst you might not want to use a MIDI.csv file, they are very useful for debugging and data confirmation, given that they are human readable.

I agree with you that time granularity is very important, but presents issues when the BPM changes...

Clock.

Patochun commented 5 years ago

To be more precise, I'm talking about the resulting temporal granularity: understanding the positioning in pulses per quarter note, modulo the tempo, which can change often (in one of my example files it changes at every bar).

Then have nodes for notes individually, each with its own timeline, and use loops or list nodes evolving with the frame? I think we should test both to check feasibility in terms of milliseconds.

Clockmender commented 5 years ago

Then have nodes for notes individually

Do you mean one node for each note? I have a file with 23 tracks and an average of 35 unique notes per track; that means a lot of nodes (805), unless I have misunderstood you...

Patochun commented 5 years ago

Yes, 805 potential nodes; that's what I understood from AN's data-centric design as described by Omar. And this is where loop management comes in. But it also leaves open other possibilities that we cannot imagine today... choosing note by note where it makes sense, doing mathematical operations that make sense for a given animation.

I looked at your site in depth and understand that you are looking to recreate existing instruments in 3D (beautiful work, by the way), but there are other ways to imagine music-based animation: why not evolving smoke, falling drops of water, things that vibrate, not to mention lights.

The data-centric versus event-driven vision leaves all doors open to the imagination.

Clockmender commented 5 years ago

Yes, I imagined this also. The original 2.79 method stored an F-Curve for each note in the "Bake" node. Each Bake node held all the notes for one track and output the values of the evaluated F-Curves on each frame as floats, so a list of floats was output. Example: Scott Joplin's The Entertainer; the output was a list of 55 floats, one for each unique note played, so one Bake node per track. This was fed into a loop that also had an input Object List of objects to be animated; these could be anything at all, and they then responded to the floats.

In some animations I drove Shape Keys, in some I drove Armature Bones, in some I drove Material values, etc.; it was completely adaptable to any situation. In the video I posted much earlier here, lights are pulsing to the drum beat, for example.

However, in AN 2.1/Blender 2.8 that stopped working, presumably due to design changes in AN, Blender, or both. So I went back to empties, one per note per track, and used the animation F-Curves of the Z axis to drive anything: the empties moved in Z only, from 0 (velocity 0, or note off) to 1 (velocity 127), and this movement could then be used to drive anything through AN nodes.

The advantage this has over baked F-Curves is that you can go in and physically change the F-Curves (move keyframes) if you spot problems in the animation, say some notes too close together: just move the keyframes in the Graph Editor. Whereas the only way to change the baked curves was to edit the MIDI.csv file, then re-run the bake.

Personally, I would be happy with a system that stored all the notes for one track in one node. I am not convinced by one node per note; I think the node tree would be too big, unworkable, and confusing.

Take this example, I know the resolution is not good:

[image: nodes-11]

Here, 8 tracks are driving many animations, and a "sound bake" is driving others as well. There is no reason why one "bake" or "Track" node cannot drive many separate animations; just use more output loops and object feeds.

EDIT:

I am not proposing Empties as the solution, just that we keep one "feeder" node per track.

OmarEmaraDev commented 5 years ago

Please note that the "perceived continuity" of F-Curves is only provided by the F-Curve evaluation function; structurally, an F-Curve is not continuous, it is just a list of spline anchors. Similarly, our MIDI Notes list will not be continuous; the continuity will be provided by an evaluation function, which we will encapsulate in an Evaluate MIDI Notes node as I described above. Using our own evaluation function is, of course, very advantageous. In particular, it is not destructive, so no baking will ever have to be done. It is also very flexible: we can use ADSR envelopes while evaluating the notes, and a high release time can be used to visualize drums even if the note is infinitesimally short.

In addition, a data-driven approach does not mean we don't have access to events. In fact, they are readily available inside MIDI Notes for low-level access. The same cannot be said of the event-driven approach.

I see that you are worried about performance. While I don't have numbers for you, I highly doubt it will be expensive. We will never have to back-track through events, because delta times will be resolved to absolute float times of great precision at parse time. Moreover, the MIDI notes come out of parsing already sorted, so we can use things like binary search, an O(log n) algorithm where n is the number of notes. So we shouldn't worry about performance for now.
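
To make the Evaluate MIDI Notes idea concrete, here is a sketch of evaluating a single note with a linear ADSR envelope. The parameters and the 0..1 output are illustrative assumptions, not the final node's interface.

```python
def evaluate_note(time_on, time_off, t,
                  attack=0.05, decay=0.05, sustain=0.8, release=0.2):
    """Envelope value in 0..1 for one note at time t (seconds).

    Attack/decay take precedence over note-off in this sketch, so even an
    infinitesimally short drum hit produces a visible pulse, and a long
    release then fades it out, as described above.
    """
    if t < time_on:
        return 0.0
    if t < time_on + attack:                    # ramp up from 0 to 1
        return (t - time_on) / attack
    if t < time_on + attack + decay:            # fall from 1 to the sustain level
        return 1.0 - (1.0 - sustain) * (t - time_on - attack) / decay
    if t < time_off:                            # hold while the note is on
        return sustain
    if t < time_off + release:                  # ramp down after note-off
        return sustain * (1.0 - (t - time_off) / release)
    return 0.0
```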

Patochun commented 5 years ago

OK Omar, I'm on board with your proposal. The FCurve insight is very instructive. Do you have an idea for the data structure and its storage (and caching)?

Can we use the MIDO Python module, or do we have to recode the reading and decoding of the MIDI file? Either way, it will be good practice to update my document and redesign the solution.

OmarEmaraDev commented 5 years ago

@Patochun I presented one such structure above; what do you think about it? The MIDI tracks will be cached using an LRU cache upon parsing the file, just like we do with sounds.

We will probably write our own MIDI parser, but I think using MIDO at this stage is ok, in fact, I encourage it.
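
The caching could be as simple as the following sketch; the real AN cache may differ, and parse_midi_tracks is a placeholder name for a MIDO-based parser like the one sketched earlier.

```python
from functools import lru_cache
import mido

@lru_cache(maxsize=8)
def parse_midi_tracks(path):
    # The expensive parse runs once per file path; repeated node executions
    # (every frame) get the cached track list back immediately.
    midi = mido.MidiFile(path)
    return tuple(midi.tracks)
```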

Patochun commented 5 years ago

We have to design the structure that the MIDI file node outputs, and obviously the time notation for the events inside it. Currently I use the MIDI time code and the tempo changes to place events in seconds (floats). I think that's the best approach.

OmarEmaraDev commented 5 years ago

I think the node should return a list of MIDI Tracks. And yes, I agree with your approach: delta times should be resolved to absolute float times.

Clockmender commented 5 years ago

So, if I understand you correctly, you are going to parse the MIDI file and cache a set of absolute on/off times for each note? I know that is simplistic, but bear with me. Four questions, please:

1. How are you going to cater for note velocity?
2. How are you going to "ease", or dampen, the movements of meshes, like piano keys? They should not go from off to on in 0 time.
3. How are you going to cater for consecutive quantised-perfect notes? Users will need to see that the note was played more than once, not just held on for the duration of consecutive on-offs.
4. How will users be able to "tweak" the animation (moving notes a little, changing velocity, etc.) without editing the MIDI file in a DAW?

I think you should consider these points now, rather than later... But I like the approach.

EDIT:

Time signature will allow users to set music bars; tempo as an output is more important, I think.

OmarEmaraDev commented 5 years ago

@Clockmender

  1. Each MIDI Note will store the on and off velocities. A processor node, such as an Evaluate MIDI Notes node, will then use those velocities for evaluation, for instance to determine attack times.
  2. Easing can be done when evaluating MIDI Notes, or possibly at a lower level by the user. High-level nodes can provide parameters to evaluate the notes with an ADSR envelope. I mentioned this three comments above.
  3. This is up to the user; low attack and release times should handle it, and the evaluation algorithm should take it into consideration.
  4. The user can tweak the animation by looping over the notes and editing them procedurally. Animation Nodes is a procedural system; manual editing has to be done on the data itself using a DAW, like you said. So I personally don't consider this a limitation.

Clockmender commented 5 years ago

Thank you for your answers, you are clearly on top of this and I can't think of any way I can add to this project at the moment.

In my DAW nodes, I used the Aud ADSR function to affect output sounds from a generator node, so I am familiar with ADSR; I just missed your comment...

So, for this, can I assume that if a note has this (written out):

Note E4, note-on = 10.5, note-off = 10.7, velocity = 101

Gets adjusted so that the ramp-up to velocity 101 and the ramp-down to 0 each take place over, say, 0.05 seconds (figures are only for illustration), with the duration user-set as a node input?

By editing procedurally, do I understand that you will be able, with a "Separation" node, to increase the start time and/or decrease the end time of a note procedurally? Would it be possible to have a node that does this wherever it sees two consecutive note activations with no time gap, or would the user have to do this on a case-by-case basis? A node that could automatically look for consecutive activations spaced by 0, or by less than a set value, would be nice, if you can see a way to do it.

Example (written out):

Note E4, note-on = 12.4, note-off = 12.6, velocity = 89
Note E4, note-on = 12.6, note-off = 12.8, velocity = 93

These two events have no separation. So what is in the cache (however you plan to store that) gets changed procedurally, in the node's output, to say this:

Note E4, note-on = 12.4, note-off = 12.56, velocity = 89
Note E4, note-on = 12.6, note-off = 12.8, velocity = 93

Or

Note E4, note-on = 12.4, note-off = 12.58, velocity = 89
Note E4, note-on = 12.62, note-off = 12.8, velocity = 93

And a user input for the node (Min Separation), say 0.04, to be applied automatically whenever the separation is less than this figure.

I am just trying to contribute my experience of the problems I found when I did this before.
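
For what it's worth, the automatic version of that "Min Separation" idea could be a single procedural pass over the sorted records. A sketch, using the (note, time_on, time_off, velocity) layout of the examples above; names are illustrative.

```python
def apply_min_separation(records, min_sep=0.04):
    """Shorten a note's note-off whenever the same note is re-activated
    less than min_sep seconds later. records must be sorted by time_on."""
    out = [list(r) for r in records]      # mutable copies of the records
    last = {}                             # note -> index of its previous record
    for i, (note, on, off, vel) in enumerate(records):
        j = last.get(note)
        if j is not None and on - out[j][2] < min_sep:
            out[j][2] = max(out[j][1], on - min_sep)   # pull the note-off back
        last[note] = i
    return [tuple(r) for r in out]

# Clockmender's example: the first E4 now ends at 12.56, giving a 0.04 s gap.
print(apply_min_separation([('E4', 12.4, 12.6, 89), ('E4', 12.6, 12.8, 93)]))
```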

Patochun commented 5 years ago

I'm working tonight on this first node and the design document, and I will make a proposal at the end.

One setting that must be requested on the node, in addition to the path of the MIDI file: whether to use channels, yes/no.

In MIDI, tracks and channels are two different notions, but they ultimately address the same subject.

MIDI files can relate 1 to n tracks to 1 to n channels (all combinations are possible):

1 track to n channels => channels are used to separate the instruments.

n tracks to n channels => same as before.

n tracks to 1 channel => a situation created by some DAWs (Reason, for example). The channel numbers are all 1 and do not represent the different instruments.

It is therefore necessary to add a property indicating whether to separate the instruments by channel number (the most common case) or by track, ignoring the channel numbers (the rarer case).

Ideas for the name of the property? It would be a boolean.

Patochun commented 5 years ago

In terms of storing notes, it will be more like this, I think:

Note E4, note-on = 12.4, velocity = 89
Note E4, note-off = 12.6, velocity = 0

Should we take this opportunity to normalize on import: a note_on with velocity 0 means note_off, so turn it into a real note_off event instead?

Patochun commented 5 years ago

@clockmender your experience, like ours, keeps us from falling into traps and makes us think hard enough about all the important aspects.

Clockmender commented 5 years ago

The reason I thought of having both the note-on and note-off values in the record is that it may be easier to see whether the current time is contained within the event (I know @OmarEmaraDev doesn't like this term, but I can think of no other name for it just now).

So, if on frame 100 (current time = 4.166 seconds at 24 fps), it may be easier, or more efficient, for the system to know that this event is currently on at velocity 67:

Note E4, note-on = 4.0, note-off = 4.8, velocity = 67

This could even be written as (times are absolute again):

E4, 4.0, 4.8, 67

The one thing we do know is that note events in the MIDI file will be ordered note-on, note-off, note-on, note-off, etc., always in pairs.

What do you think? I am just thinking about combining note-on and note-off into one record, to make things easier, rather than having to look at two records to see the status of an event. I don't know if this is practical or not, but it might help if the user "scrubs" the timeline, or jumps to, say, frame 102...

With your separate records for note-on and note-off, you could write it like this:

E4, 4.0, 67
E4, 4.8, 0

That would work also I think...

You know the note is on because the current time is > the first record's time and < the second's.

Patochun commented 5 years ago

It may be better not to presume the future: later we can also have events other than note_on and note_off. For example, polyphonic aftertouch or pitch wheel for notes; modulation controller, sustain pedal, expression pedal, pan, volume, etc. for tracks.

And internally something like this:

Note E4, note-on = 12.4, velocity = 89
Note E4, aftertouch = 12.42, velocity = 92
Note E4, aftertouch = 12.45, velocity = 95
Note E4, aftertouch = 12.48, velocity = 104
Note E4, aftertouch = 12.52, velocity = 115
Note E4, aftertouch = 12.56, velocity = 103
Note E4, aftertouch = 12.58, velocity = 95
Note E4, note-off = 12.6, velocity = 0

Clockmender commented 5 years ago

Good point, so that could be:

E4, 12.4, 89
E4, 12.42, 92
E4, 12.45, 95
E4, 12.48, 104
E4, 12.52, 115
E4, 12.56, 103
E4, 12.58, 95
E4, 12.6, 0

Or would you want to know that there was an aftertouch event?

Patochun commented 5 years ago

Almost. :) Think about the other MIDI messages, and track messages?

Clockmender commented 5 years ago

Have a 3 character code for the event type?

NOT, E4, 12.4, 89
AFT, E4, 12.6, 92
TEM, 13.6, 88
SUS, 13.9, 127
SUS, 14.2, 0
MOD, 15.6, 45
MOD, 15.8, 0
NOT, E4, 16.0, 0
TEM, 16.0, 64

Am I getting close? NOTe, AFTertouch, TEMpo, SUStain, MODulate? Tempo is in beats per minute.

Clockmender commented 5 years ago

You could also include track number so it becomes:

NOT, 1, E4, 12.4, 89

So the format is: Message Type, Track Number, Note, Time, Velocity?

That way we could handle the entire MIDI file in one operation and filter by track & type? Maybe have a subsequent "Track" node to separate out each track for routing to an animation loop?

Patochun commented 5 years ago

Almost, almost! I think TEMPO is useless because all the event messages are now in real time, so tempo changes are already included.

I vote for your proposal, and I would add something useful in the first position: the channel number.

1, NOT, E4, 12.4, 89
2, NOT, E4, 12.4, 89
1, SUS, 13.9, 127
2, AFT, E4, 13.95, 98
2, AFT, E4, 14.05, 91
1, SUS, 14.2, 0
1, MOD, 15.6, 45
1, MOD, 15.8, 0
2, NOT, E4, 16.0, 0
1, NOT, E4, 16.0, 0

What about the order of the fields: channel, message type, time, and...? What about a fixed number of fields? I don't know.
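
One possible way to pin both questions down: a fixed set of fields where the note field is simply empty for track-level messages. A sketch only, with field order and names as suggestions, not a decision.

```python
from typing import NamedTuple, Optional

class MidiEvent(NamedTuple):
    channel: int         # 1-16, in first position as proposed above
    kind: str            # 'NOT', 'AFT', 'SUS', 'MOD', ...
    note: Optional[str]  # e.g. 'E4'; None for track-level events like SUS
    time: float          # absolute time in seconds
    value: int           # velocity or controller value, 0-127

events = [
    MidiEvent(1, 'NOT', 'E4', 12.4, 89),
    MidiEvent(1, 'SUS', None, 13.9, 127),
]
```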