
Image to Mesh using ComfyUI + Texture Projector 

Kefu Chai
Subscribers: 496
Views: 12K

In today's video, we will talk about the image-to-mesh workflow, including 3D reconstruction from a single image or from multiple images.
00:00 Introduction
00:46 ComfyUI Layer Diffuse
01:52 3D reconstruction solutions
04:54 CRM introduction
05:52 CRM diagram
07:19 ComfyUI 3D Pack
07:41 CRM Image to Mesh workflow in ComfyUI
12:26 Wonder3D Image to Mesh workflow in ComfyUI
13:24 Import CRM mesh in 3ds Max
14:42 Mesh comparison of CRM, TripoSR, Wonder3D+NeuS
15:38 Mesh optimization
16:56 Comparison of retopology and optimization process
22:02 ZBrush ZRemesher
24:13 UV
24:52 Create outline texture for mesh using Texture Projector in UE
27:22 Texture refinement
29:33 Project textures to mesh using Texture Projector in UE
30:59 Bake texture using Property Baker in UE
32:06 Single image to mesh final results
32:48 Why the reference image must be close to the frontal view
34:43 Use Depth ControlNet to control the view angle
37:56 Multi-view images to mesh workflow in ComfyUI
42:18 Gaussian Splatting + DMTet
43:02 Restriction of Multi-view images to mesh workflow in ComfyUI
44:04 Summary
Music: Sunny Skies (by Suno)
Create various textures using Texture Projector and Stable Diffusion
• Create various texture...
-----------------------------------
MARS Texture Projector:
www.unrealengine.com/marketpl...
MARS Property Baker:
www.unrealengine.com/marketpl...
MARS Master Material:
www.unrealengine.com/marketpl...
-----------------------------------
Houdini Lego Mesh
• Legoize geometry & RBD...
• brickini - procedural ...
• Procedural Lego Bricks...
Gaussian Splatting
• 3D Gaussian Splatting ...
• Photogrammetry / NeRF ...
• What is 3D Gaussian Sp...
• Step-by-Step Unreal En...
• Gaussian Splatting exp...
-----------------------------------
#imagetomesh #3dreconstruction #sv3d #triposr #unreal #textureprojector #stablediffusion #comfyui

Science

Published: 6 Jun 2024

Comments: 69
@michaelmurrillus915, 23 days ago
Methodical presentation. Very well done.
@soma78, a month ago
Impressive. The amount of work you put into this video... well done. Subscribed.
@vivigomez5960, a month ago
Beautiful!! Great video. A lot of work and time went into this great explanation.
@EqualToBen, 2 months ago
Awesome topology comparison! This video is gold
@Meteotrance, 14 days ago
They could use metaballs instead of voxels to recreate the mesh from the point cloud; it's super light and fast for generated volumes, and Blender handles the metaball-to-polygon conversion very well...
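A minimal sketch of the metaball idea above, runnable in Blender's Python console; the three hard-coded points stand in for a real point cloud, and the radius is an arbitrary choice:

```python
import bpy

# Stand-in point cloud; a real one would come from the reconstruction step.
points = [(0.0, 0.0, 0.0), (0.4, 0.0, 0.0), (0.2, 0.3, 0.0)]

# One metaball element per point; the implicit surfaces blend together.
mball = bpy.data.metaballs.new("cloud")
obj = bpy.data.objects.new("cloud", mball)
bpy.context.collection.objects.link(obj)
for p in points:
    elem = mball.elements.new()  # default element type is 'BALL'
    elem.co = p
    elem.radius = 0.35  # arbitrary; controls how strongly the balls merge

# Convert the blended implicit surface to an ordinary polygon mesh.
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.convert(target='MESH')
```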
@brianmcquain3384, 2 months ago
Cool song, totally unexpected out of the blue!
@teambellavsteamalice, 2 months ago
I have a feeling this has way more potential. Is there any way to deconstruct the image into parts, then compare these parts to a set of variants, pick the closest, and construct a composition of these? Like a reference model to help the process?

I imagine you'd need a few basic head shapes, ears, chins, eyebrows, and perhaps even hairdos. Then have sets of images (angles, or a LoRA model?) for archetype heads (or complete bodies): a bland base model, one with extreme elvish ears, one with a pronounced chin, one with exaggerated brows, etc. Then you'd reconstruct the mix you want from these archetype sets. I'm not sure you can use interpolation that easily (IIRC ControlNet had options?), but if you use the same process on each image of these consistent sets, the resulting set should be consistent too, right?

Then, if each archetype has a nicely fixed 3D model, you could also generate one for the mixed composition. Would such a process to create an approximation or base model be doable? Could you use this and the actual image (in iterations?) to create a consistent 3D model without any manual fixes?
@kefuchai5995, 2 months ago
Great idea! That will be the next generation of AI 3D mesh. SD AI should learn this.
@artmosphereID, 2 months ago
Good for hard-surface/static/prop assets. For organic and animated models, a big no-no; it would be a nightmare for animators.
@catparadise950, 2 months ago
Could you share how to install the ComfyUI 3D Pack? I've tried the custom nodes for other models in it, but I'm just not sure how to install this integrated pack. I'm using a Python virtual environment.
@kefuchai5995, 2 months ago
You can start with this; it's the list of caveats I posted earlier: www.bilibili.com/read/cv33521683
@RoN43wwq, 2 months ago
Nice. Thanks.
@masterkarlzon, 2 months ago
Really cool!
@mikerhinos, a month ago
Personally, I'm getting this error when trying to run the CRM-to-multiview-to-CCM example workflow (it happens at the mesh construction node, which is quite frustrating because the intermediate images look good): "RuntimeError: Error building extension 'nvdiffrast_plugin': ninja: error: build.ninja:3: lexing error nvcc = D:\pinokio\bin\miniconda\bin\nvcc.exe". It may be a path problem, I guess, but I can't find how to resolve it yet :(
@kefuchai5995, a month ago
It is an installation problem with VS or CUDA. Maybe it is the CUDA path? github.com/MrForExample/ComfyUI-3D-Pack?tab=readme-ov-file#install
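A hedged diagnostic sketch for the error above: nvdiffrast builds its plugin through PyTorch's cpp_extension machinery, which resolves nvcc via CUDA_HOME/CUDA_PATH or the PATH, so checking what those actually point to is a reasonable first step (the Windows path in the comment is just an example):

```python
import os
import shutil

# Where does the build think the CUDA toolkit lives?
print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))

# Which nvcc would be picked up from the PATH?
print("nvcc on PATH:", shutil.which("nvcc"))

# If these point at a mangled or missing location, set CUDA_PATH to the
# real toolkit install before launching ComfyUI, e.g. (example path only):
# os.environ["CUDA_PATH"] = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1"
```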
@linnkoln11, a month ago
Hey! About the img2img support for layer diffusion: you need to make the background 50% grey. For me that did the trick! By the way, I'm not yet done with the video, but what I've seen so far is awesome!
@kefuchai5995, a month ago
Wow! Great. Thanks for sharing.
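A minimal sketch of that 50%-grey trick with PIL, assuming an RGBA input; the filenames are hypothetical:

```python
from PIL import Image

# Composite the transparent subject onto a 50% grey background before
# feeding it to the layer-diffusion img2img pass.
fg = Image.open("character_rgba.png").convert("RGBA")
grey = Image.new("RGBA", fg.size, (128, 128, 128, 255))  # 50% grey
Image.alpha_composite(grey, fg).convert("RGB").save("character_on_grey.png")
```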
@samwalker4442, 2 months ago
THANK YOU!
@Zamundani, 2 months ago
Basically it's an over-glorified base mesh.
@MrGATOR1980, 2 months ago
TBH, I could sculpt and paint this myself faster than going through all those shenanigans.
@MaxSMoke777, 2 months ago
You could do all of those dozens of steps... OR... just use the front and side images for reference and simply build the model like you would any other. You've put so much work into saving time that you've definitely made it harder.
@kefuchai5995, 2 months ago
You are right. AI has given me a lot of confusing behavior and has complicated simple problems.
@IS0JLantis, 2 months ago
No, it's like spending 5 hours writing a script to automate a task that only takes 30 minutes to do. It is not intended for single use. Once you find a reliable workflow leveraging AI, productivity will skyrocket, and old modelling techniques simply won't be able to keep up. We need tests like these to learn from.
@USBEN., 2 months ago
Now we just have to automate all this.
@Rahviel80, 2 months ago
Baked lighting and no PBR textures are a showstopper for game dev; the results also have that AI look and an unoptimised mesh. That's far from useful.
@kefuchai5995, 2 months ago
Some checkpoints can generate a diffuse (albedo) texture using a light-environment prompt like "soft ambient light", then generate PBR textures based on the diffuse texture.
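A hedged sketch of that albedo trick using the diffusers library; the checkpoint id is an assumption, and the point is simply that the flat, shadow-free look is steered through the prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD checkpoint works the same way; this id is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompt for a diffuse/albedo look: soft ambient light, no baked shadows.
image = pipe(
    "game character texture, albedo, soft ambient light, flat lighting, no shadows",
    negative_prompt="hard shadows, specular highlights, rim light",
).images[0]
image.save("albedo_candidate.png")
```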
@shiccup, 2 months ago
sick
@Keji839, 2 months ago
This would be good for hard-surface objects. Organic is a no-go.
@cj5787, a month ago
"Looks cool and effective" to the untrained eye... In reality this is about 20 times more complicated and time-consuming than a regular 3D workflow, for a result that isn't even usable...
@kefuchai5995, a month ago
It's because we're used to the old workflow.
@arberstudio, 2 months ago
Or you can learn to 3D model lol
@MaxSMoke777, 2 months ago
Yah!
@kefuchai5995, 2 months ago
yeah
@jonmichaelgalindo, 2 months ago
"And then adjusted," AKA modeling, because AI can't do 3D.
@kefuchai5995, 2 months ago
@@jonmichaelgalindo 😂
@generichuman_, 2 months ago
The last words of someone about to lose their job to new tech. Adapt or get left behind...
@piotrek7633, 2 months ago
Even if AI makes you Witcher 3-quality models from thin air in the future, there's no sense of fulfillment in that. AI is taking our jobs and our ways to have fun while it's at it. If AI makes making games ridiculously easy, then game dev will be even more competitive than it already is today.
@kefuchai5995, 2 months ago
I would rather think of AI as an assistant.
@piotrek7633, 2 months ago
@@kefuchai5995 Yeah, but for how long though? Artists are already taking a hit because Midjourney is literally better quality than most of them, and it can pick up the styles of top artists. AI-generated ads are already popping up, so that's less work for employees. So my question is when it will hit 3D modeling and game dev as a whole, since it's clearly going in that direction looking at Meshy, and Altman tweeted 2 days ago that "movies are going to become video games and video games are going to become something unimaginably better". Doesn't this mean trouble for game devs? If we can't do something "unimaginably better" now in big teams like CD Projekt Red or Rockstar, what does he mean? AI generation, of course. If he's not yapping and his team really is cooking something hot for games, then oh lord have mercy; I will have zero fun in life if they take game dev. Although AI-generated movies that let you interact won't affect the video game market, since it will be like traditional art and digital art: people will probably want to consume both.
@kefuchai5995, 2 months ago
@@piotrek7633 For me, I won't think about it for now. I will keep following along and looking forward to the day when everyone can make games. I hope what Altman says is true, not just imagined but experienced.
@vivigomez5960, a month ago
Your comment seems more typical of a person from the 15th century in front of the printing press.
@user-li7ce3fc3z, a month ago
A ton of effort and money spent on software, and the output is complete junk. It's far simpler to create everything from scratch and use the AI images as a reference.
@scrutch666, 2 months ago
Any mediocre artist would have completed that task in half the time, and successfully. This is not even a mesh you could use in game production for a human character. It would cost more time to fix it than to do it yourself by hand. Nobody would hire you with such topology.
@kefuchai5995, 2 months ago
Yes, that's why it still needs retopology and remeshing.
@AIJOBSFORTHEFUTURE, a month ago
@scrutch666 Do not fear the death of an industry; celebrate the birth of a new reality... or cope.
@2slick4u., a month ago
That's where UE5.3 comes in clutch with its new Nanite skeletal mesh. It's the future, and it's gonna hit you hard in the toolbox.
@TheSleepfight, 20 days ago
@@2slick4u. scrutch is correct. I'm a principal character and hard-surface artist with 18 years in the industry and over 15 shipped titles. What do you think Nanite will change?
@2slick4u., 20 days ago
@@TheSleepfight With bandwidth and internal storage constantly increasing, it's becoming realistic to dump in high-poly, unoptimized models. UE5 almost completely removes the relevance of retopo.
@Mr3Dmutt, a month ago
Alright... I watched the whole video and can confidently say this is useless at any level of production, indie or big budget. Also, if you're going to invest that much time into something, why not have fun and sculpt and paint it?
@kefuchai5995, a month ago
That's right; I only use this when sculpting gets boring sometimes.
@DimensionDoorTeam, a month ago
"I only use this when sculpting becomes boring sometimes" 🤞 Technically, that's the best part of creating a character.
@AX-032, 28 days ago
Can you share your JSON file, please?
@kefuchai5995, 27 days ago
The ComfyUI workflow? It is from the 3D Pack with some customization: github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows
@GradeMADE, 2 months ago
Hey bro, do you have the workflow you created for us to use?
@kefuchai5995, 2 months ago
The workflow I used is copied from the examples of ComfyUI 3D Pack: github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows
@samsilva7209, 2 months ago
@@kefuchai5995 When I open the "Multi-View-Images_to_Instant-NGP_to_3DMesh" workflow, for example, even after I install the missing nodes in the Manager panel, there are still many nodes with this message: "When loading the graph, the following node types were not found: [Comfy3D] Preview 3DMesh, [Comfy3D] Gaussian Splatting Orbit Renderer, [Comfy3D] Stack Orbit Camera Poses, [Comfy3D] Switch 3DGS Axis, [Comfy3D] Load 3DGS, [Comfy3D] Save 3D Mesh, [Comfy3D] Instant NGP, [Comfy3D] Fitting Mesh With Multiview Images. Nodes that have failed to load will show as red on the graph." Do you have any idea what I might be doing wrong, or not doing? Thank you in advance.
@GradeMADE, 2 months ago
@@kefuchai5995 Ty Bruv
@user-bl8lb7yy1l, a month ago
@@kefuchai5995 Hey bro. The CRM example workflow does not include the image-upscale part. I tried to load an upscale model, but it seems to hit an error in Python: "Input type (struct c10::Half) and bias type (float) should be the same". Could you help with this? Many thanks.
@kefuchai5995, a month ago
@@user-bl8lb7yy1l ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Y6-JGi_ksos.htmlsi=1qBaFHsGtPETxU87&t=611 Check out the video from this timestamp. The format of CRM-generated images requires manual conversion before they can be used for upscaling.
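For reference, the quoted error is a plain dtype mismatch: the upscale model's weights are float32 while the incoming image tensor is float16 (c10::Half). A minimal sketch of the usual fix, with a random tensor standing in for the CRM output:

```python
import torch

image = torch.rand(1, 3, 512, 512, dtype=torch.float16)  # stand-in for the CRM image
image = image.to(torch.float32)  # match the upscale model's float32 weights
# (Alternatively, cast the model to half precision instead of the image.)
```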