We are trying to find someone to help us with a current project: combining 12 projectors to display one large image. We are currently using Datapath FX4 devices to drive them, and we need assistance edge-blending the 12 projectors to create one clean image. We've heard there is software that can help, but we don't have any experience in this arena. The projectors we are using are the ViewSonic LS550WH. Do you have any experience by chance, or know of someone who may be able to help? We would be happy to pay you, of course. Thank you so much!
I've just found your channel/videos and am enjoying them one by one. I like your low-tech, prepared-paper-slides diagram style! For this video, I wish I could learn more about how an electron filling a "hole" in the P-material releases a photon. LEDs are magical, but they shouldn't be... :)
This video is very useful, thanks a lot. My question: can we export the video at its regular size and later crop it in QLab to send to the two screens, or should we export it split into two files?
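For what it's worth, a common approach is to export one full-width file and let the playback software crop a region per screen; the crop geometry is just arithmetic. A minimal sketch (the helper function is purely illustrative, not a QLab API) of the regions you would type into per-screen geometry:

```python
def crop_regions(total_w, total_h, n_screens):
    """Split one wide frame into equal side-by-side crop regions.

    Returns (x, y, w, h) tuples, one per screen, that you can use as
    crop geometry in playback software or an ffmpeg-style crop filter.
    """
    w = total_w // n_screens
    return [(i * w, 0, w, total_h) for i in range(n_screens)]

# A single 3840x1080 export feeding two 1920x1080 screens:
print(crop_regions(3840, 1080, 2))
# [(0, 0, 1920, 1080), (1920, 0, 1920, 1080)]
```

Exporting one file and cropping at playback keeps the two halves in perfect sync; exporting two separate files works too, but then the software has to start both cues at exactly the same time.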
Hey, do you have any idea how to run a video signal from an iPhone into QLab and then project it onto a video surface? Can you do that? Or do you know someone who does?
3:48 NO! It's the opposite; you want to disable that. Here, from the QLab website: "Disable 'Displays have separate Spaces'. Spaces is Apple's name for virtual desktops. If you don't know what this means, don't worry about it; the main purpose of Spaces is irrelevant to QLab, but it has a side effect that is important for video users: if your displays are set to have separate Spaces, the menu bar appears on all displays, which means it will be visible to your audience when no cues are playing through QLab. To fix this, open System Preferences → Mission Control and uncheck the box labeled 'Displays have separate Spaces'." Also, in QLab preferences, select 'Disable disruptive OS features in Show Mode'. 'Blackout desktop backgrounds' doesn't seem to work in Sonoma, so I had to do it manually on my new MBP, but it used to work fine and I guess they'll fix the bug.
This video claimed I would know everything about the OSC after watching it, but it never touched on any show, character, or even the community’s history. I’m dissatisfied and I’m sending you to the TLC *bfdi.screaming.noise.sfx*
Thank you. I'd love to see how Syphon works by taking video output from TouchDesigner into Resolume Arena on macOS. There seem to be no videos covering this. Thanks again!
Hey my friend, great videos. I love this kind of content; hopefully you're still making it. I have a project at the moment I'm interestedted in, and I wondered if you might have any advice. I'm trying to make a screen that uses a camera to track head position and shift the rendered view with the head movement, so the image on screen matches the viewer's perspective. I'm thinking of using Unity and setting up OpenCV to monitor the head movement... and that's as far as I've got in concept :) I think the correct term for it is 'perspective-corrected rendering'. Have you done any? Thanks for the projection mapping videos; I'm totally getting into this, combined with some photogrammetry (which I've been doing for so long!)
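In case it helps anyone attempting this: "perspective-corrected rendering" usually comes down to building an asymmetric (off-axis) view frustum from the tracked head position relative to the physical screen. A minimal sketch, assuming a screen rectangle centred at the origin in the z = 0 plane and metre units; the function name and parameters are my own illustration, not from Unity or any particular engine:

```python
def off_axis_frustum(head, screen_w, screen_h, near=0.1):
    """Asymmetric frustum bounds for a head-tracked screen.

    `head` is the viewer's eye position (x, y, z) in the screen's
    coordinate space, z > 0 in front of the screen. Returns
    (left, right, bottom, top) at the near plane, the four values a
    glFrustum-style projection matrix takes.
    """
    hx, hy, hz = head
    scale = near / hz  # project the screen's edges onto the near plane
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return left, right, bottom, top

# Head centred 1 m from a 0.6 m x 0.4 m screen gives a symmetric
# frustum; move the head and the frustum skews, shifting the view.
print(off_axis_frustum((0.0, 0.0, 1.0), 0.6, 0.4))
```

The tracking side (OpenCV face detection feeding `head`) plugs in separately; the key idea is that the projection matrix, not the camera rotation, is what changes each frame.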
I don't think a terminator at the end is "important". I've almost never used one and it worked just fine, even on 120 m runs. I know that technically the reflections should cause trouble, but when I had DMX problems a terminator was never the solution. Second thing: RDM, where the fixture can talk back to the console, also only uses pins 2 and 3. Pins 4 and 5 are not really specified in DMX512-A and are very, very rarely used.
We had our windows tinted when we moved in, and when I tried rear projection you could hardly see anything. Can I project onto the front of the window? How would that work?
Not sure if you have already done a video that touches on this idea, but I have been wanting to take my projection mapping skills to the next level with a piece that is truly interactive, similar to the examples I've seen of people dancing or waving their arms while the projected image moves with them. I believe this is accomplished with a 3D/depth camera that can detect movement and the distance or depth of the object in motion. I haven't been able to find something that explains it well enough for me to feel confident I'm investing in the correct camera or sensor. Is there any way you could shed some light on this? Even a simple example would be great, as I'm eventually trying to do a larger installation for a light festival later next year. Either way, thanks for posting these types of videos. There have definitely been a few things you've gone over that helped with some projects of mine.
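From what I've seen, pieces like the ones described above usually use a depth camera (a Kinect-style sensor or an Intel RealSense): each frame is a grid of per-pixel distances, and "detecting a person" is often just thresholding that grid to a depth band and tracking the resulting blob. A rough sketch with a fake frame standing in for the camera, since the actual frame-grabbing call varies by device SDK:

```python
import numpy as np

def people_mask(depth_mm, near_mm=500, far_mm=2500):
    """Boolean mask of pixels whose depth falls in the interactive zone.

    `depth_mm` is an (H, W) array of millimetre distances, the format
    most depth cameras provide; 0 typically means "no reading".
    """
    return (depth_mm > near_mm) & (depth_mm < far_mm)

def centroid(mask):
    """Mean (row, col) of active pixels: a crude 'where is the person'."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Fake 480x640 frame: background wall at 4 m, a "person" patch at 1.5 m.
frame = np.full((480, 640), 4000, dtype=np.uint16)
frame[100:300, 200:400] = 1500

print(centroid(people_mask(frame)))  # (199.5, 299.5), centre of the patch
```

In a real installation that centroid (or the full mask) would be sent each frame to the projection software, e.g. over OSC, to drive the reactive content.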
I had no idea what you were talking about for the first half of the video because you kept mentioning QLab. I didn't know what you were saying, but eventually I caught on to the name and googled it. As a Linux user, QLab is not in my lexicon.
ETHERNET! ETHERNET ports and cables! That's what "NDI" really is. A device that has an ETHERNET port (camera, switcher, etc.) can just be plugged directly into a network switch such as Cisco, Netgear, or TP-Link. Please show me how "NDI" works differently.
Uhhh, that's kind of like saying marinara is a type of pasta. Ethernet is one of several media you can use to transmit data (copper, fiber, wireless, etc.), but NDI is the actual format in which the data is organized in order to be transmitted across the chosen medium.
It's the EMF around the wire that transfers the energy. Electrons rock forward and backward; that is why it converts to heat at a resistance. I'm Ezekial Davis, the only person to ever solve the Unified Field in physics. I have working math solutions to solve for energy and time intensity in the full spectrum of EMF. I can explain anything in physics down to the movement: black holes, teleportation, dark matter, anything. God's will, how I'm the first human to understand the unified field but not the first. The Bible is written in a way that says "Not written by man."