ugocapeto3d
Comments
@user-gx5rn9dm6u 16 hours ago
Hello, my name is Mario, I live in Brazil, and this is exactly what I was looking for. Thank you.
@allourep 18 days ago
Is there a way to take a 2D video and make it into 3D?
@ugocapeto3d 17 days ago
Yes, I believe so, though you may have to google that one. Note that it will most likely be done frame by frame. Maybe Hugging Face has something, or somebody might have to put together a Python notebook on Google Colab to do just that. I know some people do video conversions, like Philip Heggie (he's on YouTube, so you can ask him about his process).
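To make the frame-by-frame idea concrete, here is a minimal sketch, assuming MiDaS is used for the per-frame depth maps via its torch.hub entry point (the video file name, model type, and output naming are placeholders, not anything from the video):

```python
# Frame-by-frame depth maps from a 2D video (sketch; assumes MiDaS via torch.hub
# as documented in the isl-org/MiDaS readme; "input_video.mp4" is a placeholder).
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

cap = cv2.VideoCapture("input_video.mp4")
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(rgb).to(device))
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    # Scale the depth map to 8 bits and save one image per frame; a 2D-to-3D
    # tool can then turn each (frame, depth) pair into a stereo frame.
    depth = cv2.normalize(pred.cpu().numpy(), None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite(f"depth_{idx:05d}.png", depth.astype("uint8"))
    idx += 1
cap.release()
```

One caveat with this per-frame approach: each depth map is normalized on its own, so the depth scale can flicker from frame to frame; a real video pipeline would want a consistent scale across the whole clip.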
@abhinavmohan2657 1 month ago
So, I have a 94 LPI lenticular sheet and a 600 dpi Brother 2320D printer. Can I expect to make and print something at home? This is just for testing purposes; if it works, I can always get it printed by a professional high-quality printer. I am looking to have 10-15 frames of animation. What is the best combination for this use case?
@ugocapeto3d 1 month ago
94 lpi??? Are you sure??? You can do good stuff with an inkjet if you are under 50 lpi. For animation, it's difficult if the images vary rapidly, like, for instance, a batter swinging a bat. You ain't gonna see the bat moving, basically. If it's slow moving, it should be OK. You can definitely try stuff at home; just use good-quality paper and make sure you do a pitch test first. I am really not an expert at doing lenticulars.
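A rough sanity check on those numbers (just arithmetic, not from the thread): at 600 dpi on a 94 lpi sheet, each lenticule only gets about 600 / 94 ≈ 6.4 printer dots, so there isn't even room for 7 clean strips per lens, let alone 10-15 animation frames; 10 frames under a 94 lpi lens would need roughly 10 × 94 = 940 dpi of real optical resolution. That is why lower-lpi sheets (under about 50 lpi) are so much friendlier to home inkjet printing.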
@thehulk0111 1 month ago
Could you try CREStereo and put it on Colab? 😁
@alexmejias6158 2 months ago
Hello! First of all, thank you for sharing this amazing project... I have a question about how I can change the number of brush strokes for large images, because in my projects the image does not have as much detail as I expected.
@ugocapeto3d 2 months ago
Thanks. In this day and age, it's probably easier to use AI to generate painterly images from photos. But to try to answer your question: if you look at the input file github.com/ugocapeto/thepainter/blob/main/main/test/waterlilykiwi/thepainter_input.txt, you can see that the number of brushes used is 5, starting with a radius of 128 pixels, then 64, then 32, then 16, then 8. The lower the brush radius, the more detail you are gonna get. Also, the 4th number, here 50.0, indicates how close the painting should be to the original photo, so if you put 20.0 instead of 50.0, you are gonna get more brushstrokes. What I used to do is create several output paintings at different resolutions and then use Gimp to combine them. Hope that makes sense. Note that the code is available on GitHub at github.com/ugocapeto/thepainter. Also, note that I haven't touched this stuff for a few years, so I don't remember all the details, but I do try my best to explain what I recall :)
@rafatsheikh8442 4 months ago
What is this app's name?
@rafatsheikh8442 4 months ago
How do I download this app?
@Axis23 4 months ago
👍 Thanks for the information! 👌
@viktorreznov1687 4 months ago
Hi, thank you for the video. Even though I uploaded 2 stereo images and followed the same steps you did, it gives an output with effects instead of a depth map. No matter what I did, I couldn't fix it. Can you help me?
@produccionessobrinas7594 4 months ago
Man, I really appreciate your video, but sadly I have to say this is not working for me. It seems Leonardo AI has changed its model, and now your tutorial, at least for me, does not work. Please, is there some other way to get this oil painting effect now?
@ugocapeto3d 4 months ago
Things change very rapidly. You can try Hugging Face Spaces that relate to Stable Diffusion: huggingface.co/spaces?sort=trending&search=stable+diffusion. You're gonna need an image-to-image one, and in the prompt, put "oil painting" or something like that.
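If the Spaces are flaky, the same image-to-image idea can also be run directly with the diffusers library. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU (any Stable Diffusion img2img checkpoint behaves similarly; the file names are placeholders):

```python
# Image-to-image "oil painting" sketch with diffusers (checkpoint and file
# names are assumptions, not anything from the original tutorial).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.jpg").convert("RGB")
result = pipe(
    prompt="oil painting, thick brush strokes",
    image=init_image,
    strength=0.55,       # lower values stay closer to the original photo
    guidance_scale=7.5,
).images[0]
result.save("oil_painting.png")
```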
@MyHe-art 5 months ago
Please help. When I try to open DPT Large as in the tutorial, I get a runtime error. What should I do about it?
@ugocapeto3d 5 months ago
Yeah, I don't know what's going on with those runtime errors. All the MiDaS stuff on Hugging Face has runtime errors. Anybody know why???
@1qzurliu23 5 months ago
Hi! I've triple-checked my pitch test, etc. Printing at 600 dpi with 75 lpi lenticulars (tried 2/4/8-image flips), it just doesn't flip nicely; I can always see part of the other images in my view. Any ideas what part went wrong?
@ugocapeto3d 5 months ago
A 2-image flip should work as long as the lpi is not too high. Higher lpi (like 75) is difficult to deal with. If you have more images to flip, you're likely to get ghosting. But a 4-image flip should be OK, I think. 8 is pushing it, especially if the images are completely different. With 60 lpi or lower, you should be able to do animations with 8 images, but that assumes the images are not too different from each other. If you do an animation of a bat swing in baseball, the bat movement will not be sharp. Note that I am not an expert in making flips, but I have tried to do animations at 60 lpi, and it's quite difficult, if not impossible, with an inkjet printer.
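One back-of-the-envelope way to see why 8 flips at 75 lpi is pushing it (my arithmetic, not from the reply above): 600 dpi / 75 lpi = 8 printer dots per lenticule, so an 8-image flip leaves exactly one dot per image under each lens. Any pitch or registration error then shows slivers of the neighbouring images, which is exactly the bleed-through described in the question. A 2- or 4-image flip leaves 4 or 2 dots per image and is far more forgiving.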
@1qzurliu23 5 months ago
Hi~ I see Grape and SuperFlip only do vertical lenticulars. How can I use them horizontally? Rotate my original images 90 degrees?
@ugocapeto3d 5 months ago
Yes, correct. The only rule to follow, if you use an inkjet printer, is that you need to print so that the "stripes" are at 90 degrees with respect to the print head carrier. In other words, you don't want the stripes going in the same direction as the print head carrier. This gives you a better print for lenticulars. At least, that's what I have always been told.
@1qzurliu23 5 months ago
@ugocapeto3d Got it, many thanks.
@jordanlotus188 5 months ago
nice
@leoncioresende6955 6 months ago
How do I download this article as a PDF? Where can I find it? I searched on Google, went into that blog, and didn't find the article there.
@ugocapeto3d 5 months ago
Check www.dropbox.com/s/wsuelhwgxnj8a6n/ugosoft3d-11-x64.rar?dl=0. This archive contains SfM10 and MVS10 and includes the manuals in PDF form. See also 3dstereophoto.blogspot.com/2016/04/structure-from-motion-10-sfm10.html and 3dstereophoto.blogspot.com/2016/04/multi-view-stereo-10-mvs10.html
@yuvrajsinghrajpurohit3341 6 months ago
Thanks, it helped me a lot 😁
@ugocapeto3d 5 months ago
Glad it could be of use. I had fun making the video as I love making "paintings" from photos.
@fellowkrieger457 6 months ago
Weird subject to choose :S, nice reconstruction though.
@jordanlotus188 6 months ago
nice
@teresa6775 7 months ago
Words would be nice.
@derekhaller1835 8 months ago
Get this every time I use the server - TypeError: expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]
@redman458 8 months ago
Looks like Colab broke the notebook with its update. I fixed the issue by reverting to a previous version of Torch.
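For what it's worth, the error message also points at a one-line workaround: torch's interpolate wants plain Python ints for size, so casting the numpy values before the call avoids the TypeError. A sketch under the assumption that the failing call is the depth-map resize (variable names are hypothetical):

```python
# Hypothetical fix: cast numpy.int64 dimensions to plain ints before calling
# torch.nn.functional.interpolate, which rejects numpy integer types.
import numpy as np
import torch
import torch.nn.functional as F

prediction = torch.rand(1, 1, 384, 384)      # stand-in for the model output
h, w = np.int64(1080), np.int64(1920)        # sizes as numpy.int64 (the failing case)
resized = F.interpolate(
    prediction,
    size=(int(h), int(w)),                   # cast to int to satisfy torch
    mode="bicubic",
    align_corners=False,
)
```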
@kinnguyen635 9 months ago
Thank you for your video! Please let me know if there is an update so that I do not need to adjust the image size to 512x512.
@jordanlotus188 9 months ago
Very nice, thanks!!
@faqu2gamer566 10 months ago
I just stumbled onto this YouTube channel and am very interested in the potential here to create bas reliefs. The test will be how something like ZBrush or a CNC carving program like Aspire handles the greyscale height map once it is imported. So far, most such programs fail, as there are too many artefacts that have to be attended to, which makes it pretty much impossible to machine with a CNC. Hope this works; thank you for the tutorial.
@ugocapeto3d 10 months ago
Hi, don't get your hopes up too high. The depth maps obtained will, in most cases, require hand-tuning. But MiDaS v3.1 is the most advanced AI tool for getting depth maps from single images. As it gets updated, it gets better and better, but the updates don't come out too often.
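Since hand-tuning keeps coming up: as an illustration only (not the workflow from the video; file names and filter settings are assumptions), a light automated clean-up pass plus a 16-bit export can already make an AI depth map friendlier to carving software:

```python
# Illustrative clean-up of an AI depth map before sending it to ZBrush/Aspire;
# the file names and filter settings are placeholders, not tuned values.
import cv2
import numpy as np

depth = cv2.imread("midas_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
if depth.ndim == 3:
    depth = cv2.cvtColor(depth, cv2.COLOR_BGR2GRAY)
depth = (depth - depth.min()) / (depth.max() - depth.min())   # normalize to 0..1

# Edge-preserving smoothing to knock down small speckle-type artifacts.
smoothed = cv2.bilateralFilter(depth, d=9, sigmaColor=0.1, sigmaSpace=9)

# Save as 16-bit so the height map keeps fine gradations for CNC toolpaths.
cv2.imwrite("heightmap_16bit.png", (np.clip(smoothed, 0, 1) * 65535).astype(np.uint16))
```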
@faqu2gamer566 10 months ago
@ugocapeto3d I suspected as much; over the many past years, things have gotten close but not close enough. Hand-tuning was a must with everything else I have seen previously (non-AI). ZBrush does a wonderful job of creating an almost perfect bas relief from 3D set-up scenes. I have seen Midjourney images of depth models/bas reliefs that look amazing, but I suspect only for viewing and not for real-world application. Ty for your comment.
@yonahbs 11 months ago
Great tutorial! 👏🏾👏🏾👏🏾
@ugocapeto3d 11 months ago
Thanks a lot!
@MundusVR 11 months ago
Hello Ugo. It's only relatively recently that I've been researching how to make lenticular images. I am trying to get a sequence of photos from a 2D image and a depth map image. From what I saw on your blog, you also use a program called Frame Sequence Generator 6 (FSG6), but I found it a bit complicated, with many steps. I've been playing around with StereoPhoto Maker 6.25a, and I think there is an easier way to make a sequence of photos from a 2D image + the depth map. Have you used it? Oh! And thank you very much for your generosity in sharing all your knowledge and material online!
@milchoiliev4824 1 year ago
The idea is very good, but the video is useless. Blurry screen. You can't see what he is doing, and he does not speak; there is no sound. The video will probably help people who are good with Photoshop or GIMP, but it will be a waste of time for beginners.
@ugocapeto3d 1 year ago
Sorry, this was done at a time when annotations were still a thing on YouTube. Anyway, this is old tech. I recommend using automatic AI tools now, like MiDaS V3.1. See this vid: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-R7XgmC_XuoA.html. If you want more ease of use, you can use the LeiaPix converter, although it is based on an earlier version of MiDaS: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-jqcc8OaV6Bo.html
@pranav_k__ 1 year ago
Do you by chance have any direction I can take in order to actually learn how these monocular depth map models work? I would appreciate it if you could point me in the right direction if you have any info.
@ugocapeto3d 1 year ago
Check the GitHub repo: github.com/isl-org/MiDaS. At the end, there is a link to their latest paper.
@pranav_k__ 1 year ago
@ugocapeto3d Well, I went through said paper, but I still struggled to actually understand how they got their loss function and such.
@dankdreamz 1 year ago
I appreciate you taking the time to make videos. They have always been interesting.
@ugocapeto3d 1 year ago
Thanks a lot for your comment!
@Der_X_Buddne 1 year ago
Great tech and thanks for sharing! Is there a way to get those colored depth maps too?
@luchoprata 1 year ago
Thanks a lot! I've been searching a long time for a simple explanation!
@acadvideoart 1 year ago
NOTHING WORKS FOR ME, AND I HAVE TRIED IT SEVERAL TIMES ALREADY!!
@jordanlotus188 1 year ago
nice
@thelightsarebroken 1 year ago
Thanks for this, really enjoying playing with this effect this eve!
@thes3Dnetwork 1 year ago
Owl3D does this too. I found that it's good to test both Leia and Owl3D, because one of them might do better than the other at converting.
@Omnifonist 1 year ago
I am not able to get a higher resolution than 1024 px. Do you have any ideas on this? How can I access a higher output resolution?
@tomaskrejzek9122 1 year ago
Can I generate in batch for multiple images?
@AnasQiblawi 1 year ago
I agree 👍💯
@AnasQiblawi 1 year ago
Can you try ffmpeg and review its results?
@nuvotion-live 1 year ago
I recreated this using ffmpeg minterpolate. It's much slower and the results are more datamosh-y. But still cool.
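For anyone who wants to reproduce that experiment, a minimal sketch of driving ffmpeg's minterpolate filter from Python (the file names, target frame rate, and filter settings are assumptions; ffmpeg must already be installed and on PATH):

```python
# Motion-compensated frame interpolation with ffmpeg's minterpolate filter.
# File names and settings are placeholders for illustration.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.mp4",
        "-vf", "minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1",
        "output_60fps.mp4",
    ],
    check=True,
)
```

mi_mode=mci selects the motion-compensated mode; when its motion estimation fails, you get the datamosh-style smearing mentioned above.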
@lucifergaming839 1 year ago
Hello, can you help me? I want to use 3D inpainting on Google Colab, and I have used it quite a lot in the past, but there is some error regarding cynetworkx or other things like torch. Can you try to update them and make a working Colab, please?
@jordanlotus188 1 year ago
very nice!!
@tombradford7035 1 year ago
Your sound is crap.
@vinayaka.b1494 1 year ago
Thank you for the tutorial, it was really helpful.
@ugocapeto3d 1 year ago
You're welcome!
@EditArtDesign 1 year ago
NOTHING WORKS FOR ME, AND I HAVE TRIED IT SEVERAL TIMES ALREADY!!
@BrawlStars-jd7jh 1 year ago
Thanks for sharing this. Do you know if there is any way to download the .obj model with the point cloud render mode?
@ugocapeto3d 1 year ago
You can download an obj with depthplayer.ugocapeto.com, but you will lose all the colors.
@BrawlStars-jd7jh 1 year ago
@ugocapeto3d Yes, I already did that, but when I open the .obj model in Blender, it appears with the "solid" render mode. For the colors, I just have to apply the base image as a material.
@tombradford7035 1 year ago
You're so long-winded...
@vanillagorilla8696 1 year ago
I wish I could use a depth map I made with it.
@ugocapeto3d 1 year ago
If I remember correctly, you can use your own depth maps if you use the implementation that's on Google Colab. I remember making a video about it.
@ugocapeto3d 1 year ago
This one: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-uRtor5E-jng.html at around 18:52, I use my own depth map.
@VirtualTurtle 1 year ago
When I run Create Depth Map, it opens up the GUI for dmag5 and asks me to input images and disparities, seemingly disregarding the entire host program. Any reason why this might happen? Thanks!
@ugocapeto3d 1 year ago
You need to download the nogui archive; you must have downloaded the other one. Follow this tutorial by the SPM creator if you have difficulty (in particular, step 4): www.stereo.jpn.org/eng/stphmkr/makedm/index.html. Note that this is to get a depth map from a stereo pair. If you have a single image, I recommend MiDaS. See this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-WJNOnLdY9Kg.html where I compare SPM dmag5/dmag9b and MiDaS V3.1.
@laurisaarinen6259 1 year ago
THANK YOU!!!
@ugocapeto3d 1 year ago
You're welcome!
@redman458 1 year ago
Update: (NEW) Alternate Option: I added a new option that creates a depth map with higher accuracy than the default setting (improvements may vary on a case-by-case basis; on average you can expect to see subtle improvements in background and foreground details, or sometimes fixes to errors in depth placement). By default, the input is downscaled to a 512 base resolution at the input image's aspect ratio before being sent to the encoder for inference. This option instead downscales the image to a square 512x512 resolution before inference. The tradeoff is more aliasing (jaggies) in the depth map compared to before when using high-resolution input images.
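To make the two options concrete, here is an illustrative sketch of the two resize strategies described above (this is not the notebook's actual code, and whether the 512 base applies to the shorter or the longer side is an assumption):

```python
# Sketch of the two pre-inference resize strategies (illustrative only).
from PIL import Image

def resize_keep_aspect(img: Image.Image, base: int = 512) -> Image.Image:
    # Default option: keep the aspect ratio, scale so the shorter side is `base`.
    w, h = img.size
    scale = base / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

def resize_square(img: Image.Image, base: int = 512) -> Image.Image:
    # Alternate option: force a square base x base input before inference.
    return img.resize((base, base), Image.LANCZOS)
```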
@ugocapeto3d 1 year ago
Great! Thanks.
@vanillagorilla8696 1 year ago
Could I use a transparency and a backlight to throw the light outward?
@ugocapeto3d 1 year ago
I remember seeing lenticulars that were backlit in one of the two lenticular Facebook groups. So it definitely exists, but that's all I know about it.
@vanillagorilla8696 1 year ago
@ugocapeto3d Perhaps I could use a transparency, with milk glass or plexiglass, over the backlight. I used to build 35mm prints, and we used this kind of glass to splice film.