hii @TubularCorpration ! and thanks for the detailed description / idea - i agree this would be a super useful way to enhance the "performance" aspect of this instrument (which is something i think about often)
when i first implemented midi i did try to also read the midi clock. one problem i found was that the clock is pretty fast and the python thread reading it is not super optimised, so it was flooding the message queue a bit and dropping some actual messages. (that's not to say it can't be done, just that i'd need to rethink this part a bit / play around to get it working)
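(for what it's worth, a generic way around the flooding would be to count the clock ticks cheaply and only act every beat or bar rather than queueing every tick - this is just a mido sketch to illustrate the idea, not how recur actually reads midi, and handle_message() below is a made-up placeholder:)

```python
# generic mido sketch, not the recur implementation --
# count clock ticks instead of queueing every one
import mido

ticks = 0
with mido.open_input() as port:          # default midi input port
    for msg in port:
        if msg.type == 'clock':          # 24 clock ticks per quarter note
            ticks += 1
            if ticks % 24 == 0:          # only react once per beat
                print('beat', ticks // 24)
        else:
            handle_message(msg)          # made-up placeholder for normal message handling
```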
for the main sampler engine (playing back video files) the shortest loop possible is around 2 seconds, which amounts to a lot of midi ticks (~23 000). in my experiments, since triggering video files works better with longer cycles like bars rather than notes, i found that using a midi sequencer to send a SWITCH (->) command every bar / every four bars was a better way to synchronise
still i agree with you that having this kind of clock control would be rad - it is noted, and will take a look when im working on a new version (not sure when that will be though)
finally, selecting the start / end subloop points with midi-cc is possible (and pretty trivial to implement); the only problem is that setting a new loop point triggers a video reload (actually only a new start point needs this), so it would be better with some kind of scroll-then-accept interface (push encoder style), otherwise it will be constantly reloading the video on every value as you scroll over the timings with cc - ya know?
That makes a lot of sense, and as I use it more the MIDI clock idea seems less useful anyway.
For my use case, the CC to subloop points retriggering the sample wouldn't actually be an issue, but I definitely see how that would make it less useful if you were using the CC to scroll through timings. What I actually had in mind was retriggering the video at different starting points from a step sequencer, essentially being able to do a crude version of sample slicing with a single video file. I can get a bit of the way there by enabling random start in the setup menu but not being repeatable makes it only so useful, and also I've noticed that the random start points get less random as you retrigger a video more rapidly (below around 4 seconds it still randomizes but seems to only randomize between two or three specific points in the video; below about 2 seconds between retriggers it doesn't randomize at all).
If sending a single CC value to set the start point of the subloop also retriggered the current video that would actually make it work BETTER for what I had in mind than how I thought it would work (send a CC value to set the new start point and then a SWITCH command to manually retrigger). Obviously it would still be limited by the python performance you mentioned, but it would still be really useful to be able to do this in my opinion.
Hopefully that makes sense. The simplest version is I'd like to be able to retrigger the current video at repeatable different start points from a step sequencer, and the behavior you described would actually be perfect for this even though it wouldn't work well for manually scrolling through start points. If it's easy for you to implement (or if you can point me in the right direction for doing it myself, with the caveat that I've got minimal coding experience and none in Python) it would be very appreciated!
Great idea - I was intrigued how to pull it off so had a quick look. This is my best take, but I'm not overly familiar with the video stuff so please correct me if I'm wrong, Tim:
Add a method to actions.py, called something like "seek_to_position", that takes a 'position' value 0-1. This method would check the current loop length from VideoPlayer and change the seek position accordingly (position * length), similar to the seek_forward_on_player and seek_back_on_player methods. You should then be able to bind 'seek_to_position' to a MIDI CC to control the playhead like a jog wheel?
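Something roughly like this, maybe (just a sketch - the attribute and method names below are guesses, I haven't checked what VideoPlayer actually exposes):

```python
# sketch for actions.py -- the video-player attribute and method names below are guesses,
# the real VideoPlayer interface will differ
def seek_to_position(self, position):
    """Seek the current player to a fraction (0-1) of the current loop length."""
    position = max(0.0, min(1.0, float(position)))       # clamp out-of-range values
    length = self.video_driver.get_duration()            # hypothetical: current loop length in seconds
    self.video_driver.seek_to_time(position * length)    # hypothetical: same seek call the forward/back methods use
```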
Or you could add a bunch of methods to jump to specific points in the video (or %s) and map those to notes so you can trigger them from a step sequencer.
Some of the features in the dev branch (more advanced MIDI mappings + less typing for methods like the one described above) would make it possible to have specific timestamps (not limited to MIDI 7-bit resolution) set as macros that could be triggered by note on/offs too. I'll try and have a play with this when I get a chance and see how it works.
Presuming that my assumption about how the video seeking works is in the right area of course? :)
That sounds like it would do everything I'm after and more if it works.
The way this fits into my setup right now is that I've got it patched to input A on a switcher, with a 3trinsrgb+1c on input B and a couple of different kinds of feedback loops on the aux inputs. I'm using r_e_c_u_r to trigger clips in time with music from a couple tracks on a MIDIbox SEQ v4, then doing keying, mixing and processing in the switcher and using it as visuals for livestreams I've been doing as half of a duo (although we've taken a couple weeks off of streaming while I reorganize the video setup around r_e_c_u_r instead of a VCR and a tape of David Carradine's Tai Chi Workout, which is what I was using as my main video source before). Since I'm juggling video and music and we build everything up from scratch over about 90 minutes each stream, the more I can control from the sequencer the better. In the long run I'm hoping to incorporate some of the audio from the video clips into the music.
Oops, didn't realise I'd committed it to your branch Tim, I meant to run it past you first; I'd previously been told I didn't have permission when I tried to push -- thanks for giving me access! Hope what I've pushed looks OK?
I've added the function I suggested above to the master branch, and to the dev branch too with some fixes to the ManipulatePlugin to make it easier to use random etc.
If I map a knob to /seek_to_location_on_player/ and turn it then the indicated play position moves and the video starts (after a bit of a delay that I presume is the video player seeking to position?).
On the dev version I've added examples to the json_objects/midi_action_mapping_APC Key 25.json file (right at the bottom) demonstrating how to map buttons to jump to 0, 25%, 50% and to a random position.
If you want to give it a try then you'll need to update to the latest dev image, and then hopefully 'UPDATE CODE' will work - let me know how you get on @TubularCorpration!
Excited to try it! I'll need to wait until the weekend. I got one of these in the mail today and edited my keymapping file to break out some of the function+key commands to dedicated keys now that I have more to spare, but I was working on the LCD and made a typo someplace, because Recur won't run unless I swap it out for the old keymap file I'd customized for the old 18 key keypad I'd been using temporarily. I won't have time to go through and figure it out until Saturday, but I'll definitely update to the dev image since I'm going to be starting all of my custom settings again from scratch anyhow, and hopefully I'll have everything working so I can test out sequencing seek-to-location values and see how it works.
Oh in case it wasn't clear, the basic seek_to_location_on_player is available on the master branch if you update code too, but certainly the dev version allows a bit more flexibility :)
Excited to hear how you get on and if there's any other stuff that'll improve it :)
Thanks, I wasn't totally sure if you'd left it in the master branch or not after the conversation above.
For now it's moot because I just discovered that when I'm running the 2.10 image I'm not able to get a network connection ("no wireless interfaces found" message; no luck with ethernet). For now I'm working on getting the keypad mapping set up, and then I'll look at manually installing the updated code from the USB drive.
I use Linux occasionally but my experience with it is pretty superficial, so I don't want to go poking around in the network config files on a customized image like this without knowing what I'm doing, but if you have any ideas it would definitely make things easier if I could get a network connection. Networking works fine with the standard Raspberry Pi OS images.
EDIT: No luck with the network connection, but I manually copied over the latest code and gave it a try, and it works well. Like you pointed out there's a little bit of delay before the video starts playing, but it's no worse than the half second or so of delay I get when I use play next to retrigger the current clip, and since I almost always add a bit of time blur and strobe in my switcher when I'm using prerecorded clips it's not an issue at all - it just registers as a crossfade rather than a fraction of a second hung on a single frame. If I was trying to do quick cuts or use audio from the video clips it might be an issue, but for what I'm doing now this is exactly what I was after, thanks!
EDIT 2: already figured out what's happening with the keypad mapping, didn't realize I needed to manually edit the default keymap to get access to keys beyond S. Hadn't checked the instructions for creating an image before.
EDIT 3: Manually setting up wifi per the description in the dotfiles readme didn't help, still no network access with ethernet or wifi.
Yes, the wlan thing is slightly different on that version (sorry for the confusion this caused, I will try and fix this for the next version coz I now realize what a pain this is). Hopefully this will help a bit: https://github.com/langolierz/r_e_c_u_r/wiki/recur-v2.1.0-release-notes#note-about-wlan-connection
Not sure why Ethernet wouldn’t work tho
Thanks!
I had to finally learn a little Python, but I got the keypad working the way I need by making a couple small changes to the on_key_press and on_key_release functions in numpad_input.py
```python
# assumes `import string` at the top of numpad_input.py
def on_key_press(self, event):
    # a-z plus 0-9 gives 36 usable keys on the larger keypad
    numpad = list(string.ascii_lowercase + string.digits)
    if event.char == '.':
        self.actions.quit_the_program()
    elif event.char in numpad:
        self.run_action_for_mapped_key(event.char)
    else:
        print('{} is not in keypad map'.format(event.char))

def on_key_release(self, event):
    numpad = list(string.ascii_lowercase + string.digits)
    if event.char in numpad:
        self.check_key_release_settings(event.char)
```
Not sure if there's a more elegant way to achieve the same thing that wouldn't break your original keypad setup like this does, but it works fine for my purposes.
I'm sure the network issue is something obvious I'm missing because of not needing to mess with network configuration or Linux very often.
Hey, I've been getting a lot of use out of this, just reopening to ask how plausible it is to implement support for pitch bend messages for this in addition to generic CC, to allow 14 bit resolution instead of 7 bit. I had a look at the code but it's over my head so far.
I've started working with large collections of footage in single video files and using this to jump to locations, and it would be really nice to have finer resolution for working with 3-4 hour long videos. It's great for shorter 10-20 minute clips and works OK even with really long clips, but being able to have finer control would open up more possibilities for storing a lot of footage as a chain in a single long video file while still being able to get a lot of variation out of a small range within that file.
Sounds like it wouldn't be too much trouble to add this. Good idea! Glad it is working well for you so far!
Thanks! It's working very well so far.
Just to be clear, regular 7 bit CC control is still more generally useful for me if only because the sequencer I'm using has a lot more generative potential with 7 bit CC than it does with pitch bend (I've been experimenting with very low resolution CC LFOs instead of sequenced values today and that works really well, but I'm pretty sure the LFO section in the sequencer only supports 7 bit messages - about to check the manual to make sure). So both 7 bit CC and 14 bit pitch bend control absolutely have their places here, I guess it's more about making pitch bend available as a control source more generally for contexts where finer resolution is useful, without having to resort to something cumbersome like NRPN.
Although thinking about it now, that could be useful too - being able to map two arbitrary 7-bit CC sources as the MSB and LSB of a 14-bit NRPN value internally in Recur has some serious generative potential, since you could do things like using two free-running sequences of different lengths to change the MSB and LSB on the fly and letting them fall out of phase, so the NRPN values would change as the timing of the MSB and LSB messages modulated against each other, if that makes sense.
I guess you would want to use the last received value for the missing byte when only one message was received to make this viable. So let's say the MSB and LSB were being sent to r_e_c_u_r asynchronously as CC numbers 75 (LSB) and 76 (MSB). Before any messages are received it's sitting at MSB=0 LSB=0. The message CC76=50 is sent and the most recent value for CC75 is 0, so the CC message is interpreted as an NRPN with MSB=50 and LSB=0. A few steps later, CC75=64 is sent. Because the most recent value for CC76 was 50, this new message is interpreted as an NRPN with MSB=50 and LSB=64. Then CC75=10 is sent, giving us MSB=50 LSB=10.

Now imagine how much variety you could get if the values of those two CCs were, say, sent from two separate sequences, one 7 steps long with values sent on steps 1, 4 and 6 and the other 32 steps long with values sent on steps 5, 12, 24 and 30 - madness. Or it could be used for something simpler like having the MSB and LSB values sent from two knobs (or the axes of an X/Y pad or joystick) on a MIDI controller. Or the velocity from a drum pad and the position of a fader. Obviously because of the speed of the r_e_c_u_r loop it wouldn't be useful for smooth parameter changes, but for something like seek_to_location_on_player it would be a way to get some pretty complex results from simple inputs.
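In really rough pseudocode (Python-ish, with made-up names, so take it with a grain of salt), the idea is something like:

```python
# rough illustration only -- made-up names, not actual r_e_c_u_r code
last_msb = 0   # most recent value of CC76
last_lsb = 0   # most recent value of CC75

def on_cc(number, value):
    global last_msb, last_lsb
    if number == 76:
        last_msb = value
    elif number == 75:
        last_lsb = value
    else:
        return
    combined = (last_msb << 7) | last_lsb         # 14-bit value, 0..16383
    seek_to_location_on_player(combined / 16383)  # scaled to 0.0-1.0 before it hits the action

# following the example above:
# on_cc(76, 50) -> MSB=50, LSB=0  -> 6400/16383
# on_cc(75, 64) -> MSB=50, LSB=64 -> 6464/16383
# on_cc(75, 10) -> MSB=50, LSB=10 -> 6410/16383
```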
That's just an example use case, but hopefully it shows the potential. That's all a lot more niche though; pitch bend is a way to send a 14-bit value as a single message without having to worry about the mechanics of it.
I should probably start learning more about Python.
Glad its working :D.
You got me thinking about ways to do this too so I rustled this up, using the ManipulatePlugin to do MSB/LSB.
If you can do an 'update code' on the dev version then I've made a change to support calling other methods from inside the 'formula' action in ManipulatePlugin.
Check the example in the APCKey25 config, or add something like these:
"NAV_MANI": ["set_variable_playheadmsb&&f:pc.actions.call_method_name('seek_to_location_on_player',((x*10) + playheadlsb)/11 ):|"],
"NAV_MANI": ["set_variable_playheadlsb&&f:pc.actions.call_method_name('seek_to_location_on_player',((playheadmsb*10) + x)/11 ):|"],
to two control_change sections in your config.
I think that works?! I'm not sure how it'll interact if you've got two knobs being turned/sequenced simultaneously though - worth testing out. Looking forward to seeing how you use this!
Some approaches I tried originally could cause recur to crash when it was told to seek out of range, so we might need to add some range checking if any similar edge cases come up :)
Let me know how you get on!
I admit though I've no idea how pitch bend/NRPN works - that's probably something I should learn how to do too, but I think that'll require a bit more coding on the MIDI input bit of recur in order to support NRPN mappings and messages :)
However, I believe that no matter how a value is received into recur via a mapping (osc, midi cc, hypothetical NRPN), it gets converted into a float before it goes to any of the internal actions, so other than that initial scaling you shouldn't need to worry about converting or scaling the value in the action.
I'll definitely try that out this evening or more likely tomorrow. To be honest, I don't really use or think about NRPN more than absolutely necessary, which isn't that often. NRPN is just two standard 7-bit CC messages concatenated into a 14-bit message. Pitch bend is 14-bit across the board, and would probably be more useful for most people, but for my specific workflow it's actually more useful to use NRPN, so hopefully this will work well. I think the trick is going to be seeing what happens if it only gets half of the NRPN value; if that doesn't produce a useful result then it would be a lot more practical to just use pitch bend for 14-bit messages, since it's overall a lot more straightforward on the end user side. I do think it should be possible to make this work, though; at most I think we'd need to store the last received MSB and LSB values as variables and then create the NRPN from the current values in those variables.
The way I'm thinking about it, two variables would be declared to hold MSB and LSB values. Whenever a CC message corresponding to one of those was received, first the corresponding variable would be set to the received value, and then both variables would be concatenated into a single 14-bit value, and that's what would be passed to the actual function being controlled (seek_to_location_on_player in this case). So there would be three variables in play (MSB, LSB and the combined NRPN), and in really vague pseudocode, when a CC message for, say, MSB was received, this would happen:
VAR_MSB = CC_MSB
VAR_NRPN = VAR_MSB ⧺ VAR_LSB
The idea being that you can send MSB and LSB CC messages completely asynchronously but always end up with a single 14 bit word made up of the two most recent CC values.
I'm not sure, but I think one of the things that should happen with the sort of implementation I'm talking about is that changes to only the MSB would cause large jumps in the playback position and changes to the LSB only would cause small changes RELATIVE TO THE POSITION SET BY THE MSB, so by changing the two values asynchronously you could use the MSB for setting the coarse playback location and the LSB to make smaller jumps relative to the last MSB value.
I don't know enough Python syntax to be able to tell if that's happening in your code, or if you're getting the same result in a more elegant way, but if it works then it works. I'll let you know.
So the bad news is I can't get even an unmodified dev version to load on my Pi (it gets partway through loading services and then just stops), but even if I could, I'm relying on my modified version of numpad_input.py to be able to work with the larger keypad, and the changes made in the dev version conflict with some keys I've been using, so right now it's not really an option for me regardless.
I tried adding /plugins/ManipulatePlugin.py to the r_e_c_u_r directory on the latest 2.10 version I've been using and that loaded fine, but as soon as I added the actual MIDI binding code, r_e_c_u_r stopped loading; it will load some services and then stop at a different point every time.
But the good news is that while I was messing with this I realized I had accidentally switched the video backend to ofomxplayer at some point, instead of ofvideoplayer (which I had been using because it seemed to switch clips and seek with a bit less delay). The performance difference wasn't that big when I was using MJPEG encoded files, but now that I've switched to h.264, ofvideoplayer is a LOT faster than ofomxplayer, to the point where the new seek feature is essentially seamless AND responds quickly enough that instead of needing about 48 ticks between CC messages to work consistently at 120bpm, it only needs 4-6 now, so seamless stuttering and LFO modulation work now. With MJPEG there wasn't any appreciable performance difference.
The kind of on-the-fly slicing via MIDI CC I wanted to do is 100% working with h.264 encoded files using the ofvideoplayer backend. I haven't hooked up audio yet (and ironically I just stripped the audio out of all of my clips yesterday to save storage space), but I expect with the performance I'm getting now I'll have no trouble using the audio from r_e_c_u_r as part of the music.
Tomorrow I'll experiment with mapping notes to locations and see if it responds fast enough to make finger drumming seek locations practical.
Ah, that's a shame that you can't get the dev version working :/. Whereabouts does it hang on boot?
Regarding the 14-bit MSB/LSB thingy, I think the ManipulatePlugin binding as I wrote it above acts how you describe, although I'm not sure if the output is actually 14-bit resolution -- what it's doing is:
For the broad control: "set_variable_playheadmsb&&f:pc.actions.call_method_name('seek_to_location_on_player',((x*10) + playheadlsb)/11 ):|"
(and the fine control does the same, but sets playheadlsb and uses the previously stored playheadmsb)
I can't get my head around how to work out what the actual resolution is haha. I guess it is 14 bit still because you've still got 7 bits representing the broad control and 7 bits representing the fine control.
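From a quick read, I think the standard pitch bend / NRPN way of combining the two bytes is msb * 128 + lsb, which gives 0-16383 (so genuinely 14-bit); I'm not sure whether the formula above keeps all of those steps, since that depends on how x gets scaled on the way in. Just as a rough comparison:

```python
# standard 14-bit combination, for comparison only (not the ManipulatePlugin formula)
def combine(msb, lsb):                    # msb and lsb are each 0-127
    return (msb * 128 + lsb) / 16383.0    # normalised 0.0 .. 1.0 in 16384 steps

# combine(50, 0)  -> ~0.3906
# combine(50, 64) -> ~0.3946
```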
I'll have to load it again to check exactly which service it's hanging on. I didn't write it down because I figured I'd remember and of course I didn't.
That sounds like it should do what I was describing.
Anyway, for now I'm going to stick with the main version since I've already got it working with my keypad the way I want it to, but at some point I'll see if I can get the dev version working on another SD card with a keymap for the little 19 key numpad I've got, so I can at least try it out.
For now, being able to seek instantly with 32nd note resolution since I switched backends and codecs has changed everything, and that alone is going to give me a lot to work with. Controlling the seek location with pitch bend would be a great option but even that is less important now. I did find this good, concise overview of pitch bend (and 14 bit MIDI messages in general) just now, might be of interest.
Hey @TubularCorpration, hope its still working OK for you? :)
> I did find this good, concise overview of pitch bend (and 14 bit MIDI messages in general) just now, might be of interest.
Did you mean to post a link here? :)
Weird, I definitely pasted the URL in there, I'll try to track it down.
Still working great, but I've had a bunch of stuff to deal with plus the guy I've been livestreaming with is on vacation and we've been having a heat wave, so I've been mostly messing with Composers Desktop Project on the laptop and not doing anything that I need to turn on any hardware or the desktop for, just to avoid heat.
Obviously I wasn't able to track it down again, I sort of lost track of this thread after I moved over to scanlines.xyz.
MIDI CC to start point has been working VERY well for me, barring the occasional MIDI overflow if I try to send too much data (and quickly unplugging and replugging the USB cable from my sequencer fixes that 99% of the time). I'm going to try moving to using notes mapped to percentages soon so that I can take advantage of MIDI delay and arpeggiators, but haven't had a chance to set it up yet. Also, the response is fast enough with h264 encoded files that finger drumming should actually be possible.
EDIT: forgot that note-to-location is only available in the dev version, and unfortunately the dev version isn't a good option for me until I adapt the changes I made to on_key_press in the main branch to work with dev. Too bad, but being able to use a larger keypad is more important right now from a performance standpoint. I could always use some kind of outboard MIDI processor to map notes to CC values I guess.
btw if you havnt found it already there is a channel in the scanlines chat where we can discuss this project too https://chat.scanlines.xyz/channel/r_e_c_u_r
Thanks, I hadn't really looked at chat yet and hadn't seen it, that would be more convenient for sure. I'll close this.
I think a really useful feature would be the ability to receive MIDI clock and set the start and end time to a user-definable grid when the corresponding key on the keypad (or equivalent MIDI message) is received. It would make it a lot easier (or even possible) to match loop lengths to music.
It would probably be good to also add a clock reset command to make it easy to fix sync issues that could happen if, say, Recur needed to be reset in a live situation - send the MIDI message bound to clock reset on the downbeat of a bar from your clock source and you'd be back in proper sync.
The basic idea being that you could set an arbitrary number of quarter notes (or ticks), and any loop start or end commands would be quantized to the NEXT CYCLE of that length. So for example, setting a quantize length of 4 quarter notes/96 ticks wouldn't mean that the loop start would be set 96 ticks from the moment the button was pressed (the way some step sequencers handle event quantization). Instead there would be an internal counter that would start when r_e_c_u_r received its first clock message, could be reset to 0 by a MIDI start message or any message bound to clock reset in the midi map file, and would reset every N ticks, where N was the quantize value set by the user - the way quantizing to a grid works in most piano roll sequencers.
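In very rough, hypothetical pseudocode (Python-ish, made-up names, not based on the actual code), what I'm picturing is something like:

```python
# hypothetical sketch of the quantize-to-next-cycle idea -- not based on the actual code
class QuantizedLoopSetter:
    def __init__(self, ticks_per_cycle=96):     # e.g. 4 quarter notes * 24 clock ticks
        self.ticks_per_cycle = ticks_per_cycle
        self.tick = 0
        self.pending = None                     # e.g. a "set loop start" action waiting to fire

    def on_clock_reset(self):                   # MIDI start, or a message bound to clock reset
        self.tick = 0

    def on_clock_tick(self):                    # called for every incoming MIDI clock tick
        self.tick = (self.tick + 1) % self.ticks_per_cycle
        if self.tick == 0 and self.pending:     # boundary of the next cycle
            self.pending()
            self.pending = None

    def request(self, action):                  # keypad press / MIDI message asks for a loop point
        self.pending = action
```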
Also, it would be really useful to be able to use MIDI CC to set the start and end points to arbitrary locations in the current clip. Nothing complex, just (number of frames/128)*CC value, so a CC value of 0 would be the first frame, 64 would be the halfway point, etc.
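(Worked example with made-up numbers: a 30 minute clip at 25 fps is 45,000 frames, so a CC value of 64 would land at frame (45000/128)*64 = 22,500, right at the halfway point.)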
Those two features would make it possible to integrate r_e_c_u_r into live music performance a lot more than you already can, without adding much complexity at all.
Unless it's already possible to do some or all of this and it isn't in the documentation yet. I know it's possible to continuously adjust the start and end points in detour but that's all I could find. One of the reasons it would be great to be able to do this in sampler (even if it's not as seamless as Detour) is that you could quickly select musically meaningful ranges out of a large file, and also because sampler has audio playback so it would be useful for integrating the audio from video files into the music.