Yes, I believe so. You may have to google that one, though. Note that it will most likely be done frame by frame. Maybe Hugging Face has something, or somebody might have to put together a Python notebook to do just that on Google Colab. I know some people do video conversions, like Philip Heggie (he's on YouTube, so you can ask him about his process).
So, I have a 94 LPI lenticular sheet and a 600 dpi Brother 2320D printer. Can I expect to make and print something at home? This is just for testing purposes; if it works, I can always get it printed by a professional high-quality printer. I am looking to have 10-15 frames of animation. What is the best combination for this use case?
94 lpi??? Are you sure??? You can do good stuff with inkjet if you are under 50 lpi. For animation, it's difficult if the images vary rapidly, like, for instance, a batter swinging a bat. You basically ain't gonna see the bat moving. If it's slow moving, it should be ok. You can definitely try stuff at home; just use good quality paper and make sure you do a pitch test first. I am really not an expert at doing lenticulars, though.
Hello! First of all, thank you for sharing this amazing project... I have a question about how I can change the number of brush strokes for large images, because in my projects the image does not have as much detail as I expected.
Thanks. In this day and age, it's probably easier to use AI to generate painterly images from photos. But, to try to answer your question, if you look at the input file github.com/ugocapeto/thepainter/blob/main/main/test/waterlilykiwi/thepainter_input.txt, you can see that the number of brushes used is 5, starting with a radius of 128 pixels, then 64, then 32, then 16, then 8. The lower the brush radius, the more detail you are gonna get. Also, the 4th number, here 50.0, indicates how close the painting should be to the original photo, so if you put 20.0 instead of 50.0, you are gonna get more brushstrokes. What I used to do is create several output paintings at different levels of resolution and then use Gimp to combine the paintings. Hope that makes sense. Note that the code is available on github at github.com/ugocapeto/thepainter. Also, note that I haven't touched this stuff for a few years, so I don't remember all the details, but I do try my best to explain what I recall :)
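I don't know thepainter's exact algorithm, but the way brush radius and the fidelity number interact can be sketched in a Hertzmann-style multi-pass loop: big brushes block in the image first, and each smaller brush only adds strokes where the canvas still differs from the photo by more than the threshold. Everything here (the function name, the white starting canvas, the center-pixel error test) is illustrative, not the real code:

```python
import numpy as np

def painterly_passes(image, radii=(128, 64, 32, 16, 8), fidelity=50.0):
    """Sketch of a multi-scale painterly render on a grayscale image.
    Lower radii and a lower `fidelity` threshold both mean more strokes,
    which matches the behavior of the 4th number in the input file."""
    canvas = np.full_like(image, 255.0)  # start from a blank white canvas
    strokes_per_pass = []
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for r in radii:
        count = 0
        # visit a grid of candidate stroke centers spaced by the radius
        for cy in range(r // 2, h, r):
            for cx in range(r // 2, w, r):
                # compare at the stroke center only (the real code
                # likely averages the error over the brush footprint)
                err = abs(float(canvas[cy, cx]) - float(image[cy, cx]))
                if err > fidelity:
                    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
                    canvas[mask] = image[cy, cx]  # stamp a flat disc
                    count += 1
        strokes_per_pass.append(count)
    return canvas, strokes_per_pass
```

So dropping 50.0 to 20.0 lowers the threshold, more centers fail the "close enough" test, and more strokes get stamped, especially in the small-radius passes.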
Hi, thank you for the video. Even though I uploaded 2 stereo images and followed the steps you showed, it gives an output with effects instead of a depth map. No matter what I did, I couldn't fix it. Can you help me?
Man, I really appreciate your video, but sadly I have to say this is not working for me. It seems Leonardo AI has changed its model, and now your tutorial, at least for me, does not work. Please, is there some other way to get this oil painting effect now?
Things change very rapidly. You can try Hugging Face spaces that relate to Stable Diffusion: huggingface.co/spaces?sort=trending&search=stable+diffusion. You are gonna need an image-to-image model, and in the prompt, put "oil painting" or something like that.
Hi! I've triple-checked my pitch test etc. Printing at 600 dpi with 75 lpi lenticulars (tried 2/4/8-image flips), it just doesn't flip nicely; I can always see part of the other images in my view. Any ideas what went wrong?
A 2-image flip should work as long as the lpi is not too high. Higher lpi (like 75) is difficult to deal with. If you have more images to flip, you're likely to get ghosting, but a 4-image flip should be ok, I think. 8 is pushing it, especially if the images are completely different. With 60 or lower lpi, you should be able to do animations with 8 images, but that assumes the images are not too different from each other. If you do an animation of a bat swing in baseball, the bat movement will not be sharp. Note that I am not an expert in making flips, but I have tried to do animations with 60 lpi, and it's quite difficult if not impossible with an inkjet printer.
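To put numbers on why higher lpi is hard: the printer only has dpi/lpi dots under each lenticule, and those get split across the frames. A tiny sketch of that arithmetic (the "roughly 2 dots per frame before ghosting gets bad" threshold is my rule-of-thumb assumption, not a hard spec):

```python
def dots_per_frame(printer_dpi, sheet_lpi, num_frames):
    """Printer dots available to each frame under one lenticule.
    Below roughly 2 dots per frame, expect visible ghosting."""
    dots_per_lenticule = printer_dpi / sheet_lpi
    return dots_per_lenticule / num_frames

# 600 dpi printer on a 75 lpi sheet:
#   2-image flip -> 4.0 dots per frame (comfortable)
#   8-image flip -> 1.0 dot per frame (ghosting territory)
# same printer on a 60 lpi sheet with 8 frames -> 1.25 dots per frame
```

That's why a 600 dpi inkjet on 75 lpi struggles past a 4-image flip, and why dropping to 60 lpi or fewer frames helps.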
Yes, correct. The only rule to follow, if you use an inkjet printer, is that you need to print so that the "stripes" are at 90 degrees with respect to the print head carrier. In other words, you don't want the stripes going in the same direction as the print head carrier. This gives you a better print for lenticular. At least, that's what I have always been told.
Check www.dropbox.com/s/wsuelhwgxnj8a6n/ugosoft3d-11-x64.rar?dl=0. This archive contains SfM10 and MVS10, along with the manuals in pdf form. 3dstereophoto.blogspot.com/2016/04/structure-from-motion-10-sfm10.html 3dstereophoto.blogspot.com/2016/04/multi-view-stereo-10-mvs10.html
I get this every time I use the server: TypeError: expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]
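A likely cause (an assumption on my part, since I can't see the server code): somewhere a target size is computed with numpy, which silently yields numpy.int64 values, and PyTorch's resize/interpolate functions only accept plain Python ints. Casting each dimension fixes it:

```python
import numpy as np

# numpy arithmetic yields numpy.int64, which is NOT a Python int,
# so a size tuple built from it trips PyTorch's type check
h, w = np.int64(384), np.int64(512)
assert not isinstance(h, int)

# cast each dimension back to a plain int before using it as a size
size = tuple(int(s) for s in (h, w))
assert all(isinstance(s, int) for s in size)
```

If the error comes from a call like F.interpolate(x, size=size), wrapping the computed dimensions in int() this way usually clears it; where exactly the numpy values sneak in depends on the code in question, which I'm guessing at here.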
I just stumbled onto this YouTube video; I'm very interested in the potential here to create bas reliefs. The test will be how something like ZBrush or a CNC carving program like Aspire handles the greyscale height map when it is imported. So far, most such programs fail, as there are too many artefacts that have to be attended to, which makes it pretty much impossible to machine with a CNC. Hope this works; thank you for the tutorial.
Hi, don't get your hopes up too high. The depth maps obtained will, in most cases, require hand-tuning. But Midas v3.1 is the most advanced AI tool for getting depth maps from single images. As it gets updated, it gets better and better, but the updates don't come out too often.
@@ugocapeto3d I suspected as much; over the many past years, things get close but not close enough. Hand-tuning was a must with everything else I have seen previously (non-AI). ZBrush does a wonderful job with 3D scene setups, creating an almost perfect bas relief. I have seen Midjourney images of depth models/bas reliefs that look amazing, but I suspect only for viewing and not for real-world application. Thank you for your comment.
Hello Ugo. I've only relatively recently been researching how to make lenticular images. I am trying to get a sequence of photos from a 2D image and a depth map image. From what I saw on your blog, you also use a program called Frame Sequence Generator 6 (FSG6), but I found it a bit complicated, with many steps. I've been playing around with StereoPhoto Maker 6.25a, and I think there is an easier way to make a sequence of photos from a 2D image + the depth map. Have you used it? Oh! And thank you very much for your generosity in sharing all your knowledge and material online!
The idea is very good, but the video is useless: the screen is blurry, you can't see what he is doing, and he doesn't speak, so there is no sound. The video will probably help people who are good with Photoshop or Gimp, but it will be a waste of time for newcomers.
Sorry, this was done at a time when annotations were still a thing on YouTube. Anyway, this is old tech. I recommend using automatic AI tools now, like Midas V3.1. See this vid: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-R7XgmC_XuoA.html. If you want more ease of use, you can use LeiaPix Converter, although it is based on an earlier version of Midas: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-jqcc8OaV6Bo.html
Do you by chance have any direction I can take in order to actually learn how these monocular depth map models work? I would appreciate it if you could point me in the right direction if you have any info.
Hello, can you help me? I want to use 3D inpainting on Google Colab, and I have used it quite a lot in the past, but there is some error regarding cynetworkx or other things like torch. Can you try to update them and make a working Colab? Please.
@@ugocapeto3d Yes, I already did that, but when I open the .obj model in Blender, it appears in the "solid" render mode. For the colors, I just have to apply the base image as a material.
When I run Create Depth Map, it opens up the GUI for dmag5 and asks me to input images and disparities, seemingly disregarding the entire host program. Any reason why this might happen? Thanks!
You need to download the nogui archive; you must have downloaded the other one. Follow this tutorial by the SPM creator if you have difficulty (in particular, step 4): www.stereo.jpn.org/eng/stphmkr/makedm/index.html. Note that this is for getting a depth map from a stereo pair. If you have a single image, I recommend Midas. See this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-WJNOnLdY9Kg.html where I compare SPM's dmag5/dmag9b and Midas V3.1.
Update: (NEW) Alternate Option: I added a new option that creates a depth map with higher accuracy than the default setting (improvements may vary on a case-by-case basis; on average, you can expect to see subtle improvements in background and foreground details, or sometimes fixed errors in depth placement). By default, the image is downscaled to a 512 base resolution at the input image's aspect ratio before being sent to the encoder for inference. This option downscales the image to a square 512x512 resolution before inference instead. The tradeoff is more aliasing (jaggies) in the depth map than before when using high-resolution input images.
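The two resize modes described above, sketched below. I'm assuming "512 base resolution" means the shorter side is scaled to 512 while the aspect ratio is kept; that is my reading of the description, and the function names are made up:

```python
def default_size(w, h, base=512):
    """Default: keep the input aspect ratio, scaling so the
    shorter side lands on `base` (my reading of the description)."""
    scale = base / min(w, h)
    return round(w * scale), round(h * scale)

def square_size(base=512):
    """Alternate option: ignore the aspect ratio, force base x base.
    Same pixel budget per side, but high-res inputs get squashed,
    hence the extra aliasing in the resulting depth map."""
    return base, base

# a 1920x1080 input: default keeps aspect (910x512),
# the alternate option squashes it to 512x512
```

So the alternate option trades aspect-ratio fidelity for whatever the encoder gains from seeing a square crop of the scene.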
@@ugocapeto3d Perhaps I could use a transparency, a milk glass, or plexiglass over the backlight. I used to build 35mm prints, and we used this kind of glass to splice film.