
6DoF Video Point Clouds - Combining Multiple Cameras 

Josh Gladstone
1.1K subscribers
6K views

The most common question I get about 6DoF video is about combining multiple camera angles, either to limit occlusion, or to expand a project into multiple "rooms". I had already experimented with it in one of my first 6DoF videos (see the end of this one: • Real-Time 6DoF Volumet... ), but it wasn't very usable. In this video, I go back and take another look.
Subscribe if you're interested in 6DoF and VR filmmaking, leave a comment if you have something to say, and head over to /6dof for more information about 6DoF video and the Pseudoscience 6DoF Viewer.

Film

Published: May 6, 2018

Comments: 26
@HarryUnderwoodMedia · 5 years ago
I remember asking this question a while back in the comments. I didn't know about the extraneous point clouds getting in the way of each other when overlapping, so I'm glad you answered this question.
@JoshGladstone · 5 years ago
I could see this being a thing in the future, but unfortunately, without much more accurate depth maps, it's not really going to be practical for most projects. I can think of a few specific applications for it as-is, though.
@ScottLynchFilm · 6 years ago
Awesome! Josh, thank you for working on this! This technique is what I see as the future of 6DoF capture. It seems like it should also be possible to shoot photogrammetry of the area and incorporate that data into this workflow for static areas like floors or other non-moving things that may need more detail.
@nielsmiller5893 · 4 years ago
This is mind blowing.
@BenEncounters · 4 years ago
Hi Josh, as many people have said, your work is absolutely great. I was wondering if you are still working on 6DoF and if you have finished any narratives based on that technology. I also wonder what the ideal technologies are today beyond the Kandao Obsidian and Google Jump, and what has evolved since 2018. I do feel that a mixture of static and dynamic data, as well as integration of Lidar and light field technology, could soon bring us to satisfying quality for creating stories!
@davidcameron2600 · 6 years ago
Thanks for the incredible insight! Do you see a big difference in point clouds between using Kandao video from 2D Depth mode and Google Jump footage with a depth map? I've been trying to find the best way to bring Obsidian footage into Unity for 6DoF video. I'm guessing you're using native camera depth maps, vs. top/bottom depth maps generated from Stereo2Depth?
@JoshGladstone · 6 years ago
These videos were from Google Jump. I'm not sure if there's a big difference between Kandao and Google Jump. I've seen some very nice images from Kandao, and also some not great stuff. Google just improved their depth maps, though, and that seems better. I guess I'd give the edge to Jump right now, but neither are perfect. I definitely wouldn't put Stereo2Depth in the same class as either of them. Stereo2Depth uses a much less sophisticated algorithm.
@kellekafali · 6 years ago
Hi, nice work! I'm trying to find out if I can use the Kandao Obsidian R camera for Lidar-like scenarios. I'm wondering if we can extract combined point clouds for 3D modeling reference. Can we extract point clouds using your 6DoF player?
@JoshGladstone · 5 years ago
Hi, sorry. Just saw this. No, while the player does display a point cloud, it does not export anything. There are other solutions out there for converting depth maps to point clouds. Just off the top of my head, I believe OpenCV does it.
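As a rough illustration of the depth-map-to-point-cloud conversion mentioned here, the following is a minimal NumPy-only sketch (not the viewer's actual code) for equirectangular 360° depth maps like the ones discussed in this thread. It casts one ray per pixel; depth in meters and full-sphere coverage are assumptions:

```python
import numpy as np

def equirect_depth_to_points(depth, max_depth=10.0):
    """Convert an equirectangular depth map (H x W, meters) to an
    N x 3 point cloud by casting one ray per pixel."""
    h, w = depth.shape
    # Longitude sweeps -pi..pi across columns; latitude pi/2..-pi/2 down rows
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.arange(h) / h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    d = np.clip(depth, 0.0, max_depth)
    # Spherical-to-Cartesian: each pixel becomes a point along its ray
    x = d * np.cos(lat) * np.sin(lon)
    y = d * np.sin(lat)
    z = d * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a constant 2 m depth map becomes points on a 2 m sphere
pts = equirect_depth_to_points(np.full((64, 128), 2.0))
```

From there, the array can be written out in a standard format (e.g. PLY) for use as a modeling reference.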
@brettcameratraveler · 6 years ago
Thanks for explaining this, Josh. At this point, what do you estimate each camera's resolution would eventually need to be to get clean 6DoF video? With that in mind, do you think the better move is to instead add Lidar to existing Jump cameras in order to get the final clean results you are hoping to achieve?
@JoshGladstone · 6 years ago
Jump is already capable of 8k video, and the videos I'm using here are well below that. Higher resolutions just sort of make things crisper and clearer, but they don't really help with accuracy. More accurate depth maps along with multiple cameras could achieve cleaner results, so it's possible lidar could help. Although, I'm not familiar with any lidar solutions that can capture the full 360º at video speeds. And beyond that, even if you had absolutely perfect depth maps, you're still going to have issues with transparency and reflections and things that keep it from being totally photorealistic. The real question for me is how far does it need to advance before it's a viable option for storytelling with enough immersion and realism that the viewer is able to get into the story? So I'm trying to put a small short film together to see if we're there yet.
@brettcameratraveler · 6 years ago
Oculus has some solutions for those reflections. Check it out. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-RJxoDggqLoo.html
@JoshGladstone · 6 years ago
Lightfields also solves the issue to an extent, but that's a whole different thing.
@brettcameratraveler · 6 years ago
Specular and other secondary qualities of light are naturally captured by light fields, but a light field camera that can capture a scene from all angles is obviously far from practical. Perhaps, since we have the depth data, these other secondary light qualities will be simulated and then added on top of the real 360 video as a 3D-generated blended layer. These secondary light qualities seem pretty subtle in most cases, though. The focus should be on high-quality depth maps, no?
@brettcameratraveler · 6 years ago
Yes, only up to essentially the total diameter of all of the 360 camera's lenses. If Lidar isn't fast enough and, on the other hand, a single 360 camera isn't wide enough to give very detailed depth data, then it's perhaps necessary to have depth data from at least TWO cameras spaced much wider apart for EACH new/unseen direction the viewer will be looking in, rather than only looking for visual RGB coverage from multiple cameras.
@frenchtouchfactory · 6 years ago
Hello, which depth cam are you using?
@JoshGladstone · 6 years ago
These videos were shot with a GoPro Odyssey, which is a Google Jump camera. Google Jump provides 360 video stitching, as well as 360 depth maps. Other 360 camera solutions such as Kandao Obsidian, and Nokia Ozo also provide depth maps out-of-the-box. I have also released an open source python script that can be used to generate depth maps from stereo 360 footage. For more info, check out this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BHGvcEBRA98.html, and head over to reddit.com/r/6DoF.
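The Stereo2Depth script itself isn't shown here, but the basic idea behind simple stereo-to-depth algorithms of the kind mentioned in this thread can be sketched with naive SAD block matching. This is a deliberately unsophisticated illustration under assumed conventions (rectified grayscale pair, disparity searched leftward), not the actual script:

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, block=5):
    """Naive block matching on a rectified grayscale stereo pair:
    for each left-image pixel, find the horizontal shift d minimizing
    sum-of-absolute-differences (depth is then focal * baseline / d)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

# Synthetic check: the right view is the left view shifted 5 px
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (20, 40), dtype=np.uint8)
right = np.roll(left, -5, axis=1)
disp = block_match_disparity(left, right, max_disp=10, block=5)
```

Production stereo matchers (e.g. OpenCV's StereoSGBM) add smoothness costs, sub-pixel refinement, and occlusion handling on top of this basic matching step.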
@T1mothyTee · 2 years ago
Would this be possible to do by rendering a pre-rendered 360 video from Blender or Unreal? How is this processed? Thanks! Very cool.
@JoshGladstone · 2 years ago
Not the combining multiple cameras part necessarily, but you can absolutely export a depth map from Blender, and then combine it with the color picture. Should end up looking something like this: www.dropbox.com/s/ff5026xg0bylbwq/Morgan_lives_in_a_rocket_house_6DoF.mp4?dl=0
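Combining a rendered depth map with its color picture, as described here, often means packing them into one top/bottom frame (the layout mentioned earlier in the thread). A hypothetical packing sketch in NumPy follows; the exact layout and depth normalization any given viewer expects are assumptions:

```python
import numpy as np

def pack_color_over_depth(color, depth, max_depth=10.0):
    """Stack an RGB frame (H x W x 3, uint8) over its depth map
    (H x W, meters, normalized to 8-bit grayscale) in a top/bottom
    layout, a common container for color+depth video."""
    d8 = np.clip(depth / max_depth, 0.0, 1.0) * 255.0
    # Replicate the single depth channel to RGB so both halves match
    depth_rgb = np.repeat(d8.astype(np.uint8)[..., None], 3, axis=-1)
    return np.vstack([color, depth_rgb])

# Example: black 4x8 color frame over a constant 5 m depth map
frame = pack_color_over_depth(np.zeros((4, 8, 3), np.uint8),
                              np.full((4, 8), 5.0))
```

Each packed frame can then be encoded as ordinary video and unpacked at playback time.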
@T1mothyTee · 2 years ago
@JoshGladstone Very cool. What software do you use to make the 6DoF model?
@sumo242424 · 5 years ago
Wondering if you've made this project available online? I've been following you for some time now, and I must say you've done very interesting work.
@JoshGladstone · 5 years ago
The whole project or the app? I did release a 6DoF viewer app on all VR platforms as well as desktop mac and pc, links can be found in the sidebar at reddit.com/r/6DoF.
@sumo242424 · 5 years ago
@JoshGladstone The Unity project; I'd love to create something based on it.
@JoshGladstone · 5 years ago
@sumo242424 Sorry, no, I don't have any plans to release the Unity project.
@BeingOfLight-gq4fm · 5 years ago
What software are you using for the point clouds?
@JoshGladstone · 5 years ago
Everything is in Unity, and the point clouds are from a custom shader and script that I wrote. There's a bit more about it here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-xB-OnSYiPfo.html