As someone who just started VJing a few weeks ago, this is incredible and deeply inspiring. Thank you for showing us a bit of your artistic process
Fantastic. Thanks for the insights! We would love a tutorial, but if it's going to be paid, please consider those who don't make a living in euros or US dollars 😅
Damn, I can finally stop racking my brain trying to figure out your incredible techniques! Thanks for uploading this, brother! I think this is one of the very few videos of yours that gives a behind-the-scenes look at your working style, other than the old VVVV video! Looking forward to more! Keep inspiring!
Been a follower since 76 subscribers! Spectacular. Thank u for sharing. Playback is one thing, creating original content quite another. I’d be fascinated to know more about your approach to this and especially Savej Solstace x TAS.
Sick setup! I've been building Max for Live devices that send custom OSC over wifi to my visuals laptop and lighting console. That allows me to be in Standalone mode on stage and use the Push 3 as a highly customizable controller for visuals, lighting, and lasers at front of house.
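For anyone wondering what that fan-out looks like on the wire, here is a minimal sketch in Python with python-osc rather than an actual Max for Live device; the OSC addresses, IPs, and ports are made up purely for illustration:

```python
# Illustrative only: one controller value fanned out as OSC floats over Wi-Fi.
from pythonosc.udp_client import SimpleUDPClient

VISUALS = SimpleUDPClient("192.168.1.50", 7000)    # visuals laptop (assumed IP/port)
LIGHTING = SimpleUDPClient("192.168.1.51", 8000)   # lighting console (assumed IP/port)

def on_macro_change(value: float) -> None:
    """Send one macro/knob value to visuals and lighting as float OSC arguments."""
    VISUALS.send_message("/layer/1/opacity", value)          # hypothetical address
    LIGHTING.send_message("/fixture/wash/intensity", value)  # hypothetical address

on_macro_change(0.75)
```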
Firstly, you're my favorite VJ and have been for like 10 years. That said, my problem with this setup is the seven-bit resolution of MIDI. With only 128 values, along with the ±10 ms jitter, I ruled out using any of these VJ programs. It's a shitload of work, but if you build just the things you want to do in Jitter (inside Ableton, using Max for Live), that automation can be controlling floating-point values with 1 ms latency, plus those values could be parameters of real-time shaders.
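To make the resolution argument concrete, here is a tiny illustrative snippet (not anyone's actual patch) showing how a smooth 0.0–1.0 automation ramp collapses to 128 steps once it has to pass through a standard 7-bit CC:

```python
# Illustration only: quantize a 0.0-1.0 parameter to a 7-bit MIDI CC and back.
def through_midi_cc(value: float) -> float:
    cc = round(value * 127)   # the only thing a standard CC can carry: 0-127
    return cc / 127           # what the receiving end can reconstruct

ramp = [i / 1000 for i in range(1001)]           # a fine 1000-step automation ramp
print(len({through_midi_cc(v) for v in ramp}))   # 128 distinct values survive
```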
10 ms would be hard to notice. If you build just the things you want to do in GLSL and control floats directly in a GPU buffer you can go sub-ms, but why?
@vacuumhack Because it's tighter. Depending on what you're doing, if there's a detectable visual transient, 10 ms of drift in either direction feels squishy. I've even seen 15 ms with the MIDI loopback. Additionally, if that's a continuous value change and it updates at that interval for a cluster of messages, it's going to look steppy for color or brightness changes. Sure, you can interpolate with a smoothing factor (or can you in Resolume?), but then you smear across an even longer lag time. For me, I'm also interested in brainwave entrainment and I need that hard sync; those neurons gotta fire at the same time even if you can't consciously detect it.

The bigger issue across the board, though, is really the seven-bit resolution. It's very useful to be able to use three parameters at eight-bit resolution for RGB, and you decidedly need 256 values for each to do that well.

And finally, it's Ableton: keeping it all in Ableton using Max for Live is less unwieldy. And finally finally, I invented the APC40 a spell ago, and I leaked the sysex message all the VJ peeps are using to unlock it (it's what tells Ableton it's connected). I want it all in Ableton. Suffice it to say, I'm kind of attached to Ableton for everything; I just think it's beautiful software.
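The RGB point is easy to check with a couple of lines (illustrative Python, assuming the usual scaling of a CC to a 0–255 channel): a 7-bit CC can only land on about half of the 8-bit levels, while a 14-bit MSB/LSB CC pair covers them all:

```python
# Which 8-bit (0-255) levels can a controller value actually reach?
reachable_7bit = {round(cc / 127 * 255) for cc in range(128)}       # single 7-bit CC
reachable_14bit = {round(v / 16383 * 255) for v in range(16384)}    # 14-bit CC pair (MSB+LSB)

print(len(reachable_7bit))    # 128 -> roughly every other brightness step is skipped
print(len(reachable_14bit))   # 256 -> full 8-bit coverage
```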
Hey wow, this is amazing; how do you use Arrangement View to do this? All the M4L plugins for Resolume are just doing Session View clip launch. Please advise what's the best way to use Arrangement View, thanks :)