
From sketch to 2D to 3D in real time! - Stable Diffusion Experimental 

Andrea Baioni
9K subscribers
14K views

Published: 31 Oct 2024

Comments: 66
@jaoltr 6 months ago
🔥Great content and video. Thanks for the detailed information!
@shshsh-zy5qq 4 months ago
You are the best. The workflow is so unique and your explanation is so good. I love that each time you explain why we need a node to be connected. By learning it, I could improvise on the workflow, since I know which node is for what. Thank you so much!
@risunobushi_ai 4 months ago
Thank you! I try to explain a lot of stuff so no one gets left behind
@prodmas 6 months ago
You don't need the Lightning LoRA with a Lightning model. It's only used for the non-Lightning models.
@risunobushi_ai 6 months ago
Uh, I was sure I needed it! Well, the more you know - I guess it's a remnant of the SAI workflows with non-fine-tuned models then. I'm pretty sure one of the first workflow tutorials I had seen used it, but they were using a base model IIRC. I'll try without and see if there's any difference.
@Crovea 6 months ago
Very exciting stuff, thanks for sharing!
@shape6093 6 months ago
Great content! Love the way you explain everything.
@risunobushi_ai 6 months ago
Thank you!
@KartikayMathur-y8e 6 months ago
@@risunobushi_ai too slow my guy... Too slow...
@risunobushi_ai 6 months ago
I make a point of not jumping ahead and leaving people behind, while providing timestamps for people who already know how it all works so they can skip ahead. Up till now I've seen a positive reception to this approach, but if enough people would rather have me skip parts to get to the bottom of the workflows, I'll do that.
@KartikayMathur-y8e 6 months ago
@@risunobushi_ai No no, ignore my silly comment, you are doing great! I had to understand that not everyone is at a higher level 😅
@LarryJamesWulfDesign 5 months ago
Awesome stuff! Thanks for the detailed walk through.
@JavierCamacho 6 months ago
Dude!! You are awesome!!! Thanks for the amount of time and effort you put into this. Is it possible to use SD to add more details to the 3D texture after you have a "good" 3D model? For example, rotate the model and then generate some extra texture?
@risunobushi_ai 6 months ago
Thank you for the kind words! Yeah, there’s a couple of workflows that go from mesh to texture, I’ll reply to this comment in a couple of hours with some links.
@risunobushi_ai 6 months ago
You might want to take a look at these links:
- www.reddit.com/r/StableDiffusion/s/n603cJOsgC
- www.reddit.com/r/comfyui/s/AHKvo5UkXD
- www.reddit.com/r/comfyui/s/YEAPX125Db
- www.reddit.com/r/StableDiffusion/s/iZin0p4Fv9
- www.reddit.com/r/StableDiffusion/s/T5sfUsckAs
- and lastly www.reddit.com/r/StableDiffusion/s/gUP5d5pgFF
The last one and the Unity one are what I'd like to cover if this video is well received enough. I originally wanted to release this video as a full pipeline implementing retexturing, but just generating the 2D and 3D assets was already a 25-minute tutorial, so I thought it better to split it into two videos and check if there's interest in the subject.
@MotuDaaduBhai 6 months ago
@@risunobushi_ai I don't know how to thank you, but thank you anyway! :D
@ЕкатеринаЧудинова-э1ш
Hi! Thank you for a great tutorial! Unfortunately, my TripoSR Viewer is stuck instantly on "Loading scene" mode. Do you know how to fix that, please?
@samu7015 8 days ago
Same here
@Gounesh 1 month ago
Is there a way to retexture with higher res images?
@bkdjart 4 months ago
Nice tutorial. Would you get faster inference using an LCM model? Also, the official Tripo website service has a refine feature and it looks great! Do you know if that feature is available in ComfyUI?
@risunobushi_ai 4 months ago
Off the top of my head I can't remember what the speed difference is between LCM and Lightning models, but I guess you could speed things up even more with a Hyper model. It's also been a hot minute since I looked at new TripoSR features, but I'll take a look!
@remaztered 5 months ago
Great video! But I have a problem with the RemBG node - how can I install this node?
@risunobushi_ai 5 months ago
You can either find the repo in the Manager or on GitHub, or just drag and drop the workflow JSON file into a ComfyUI instance and install the missing nodes from the Manager. Let me know if that works for you.
@remaztered 5 months ago
​@@risunobushi_ai Oh yes, of course, working like a charm, thanks!
@placebo_yue 1 month ago
Sadly, TripoSR is broken now. Do you have any solution to that, bro? Nobody can give me answers or help so far :(
@Luca-fb6sq 5 months ago
How do I export the 3D mesh generated by TripoSR and attach textures? Is this possible?
@risunobushi_ai 5 months ago
The meshes are automatically exported in the output folder inside of your comfyUI directory, and the textures are already baked in. If you don't see them when you import the mesh, you might need to alter the material and append a color attribute node in the shader window
@---Nikita-- 1 month ago
more tuts for generating 3d models pls
@kalyanwired 6 months ago
This is awesome. How do you save the model as FBX/OBJ to use in Blender?
@risunobushi_ai 6 months ago
The model is automatically saved in the default output folder inside of your ComfyUI folder; it's not clear in the GitHub docs, unfortunately.
@ValleStutz 6 months ago
Is it possible to set the output directory somehow? And what about the image textures in the OBJ?
@risunobushi_ai 6 months ago
The output directory is the default one, ComfyUI -> output. I realized afterwards that it's not clear in the GitHub docs; I should have specified it in the video. As for the image textures, I'm not sure about other programs, but in Blender they're baked into the OBJ's material. So inside the Shading window, you'd need to add a new material and then hook up a "Color Attribute" node to the Base Color input.
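For reference, those Blender steps can also be scripted. Here's a minimal bpy sketch (assumes Blender 3.x and that the imported mesh is the active object; the vertex-color layer name "Color" is an assumption and may differ on your import):

# Blender 3.x sketch: build a material that reads the OBJ's baked
# vertex colors through a Color Attribute node.
import bpy

obj = bpy.context.active_object               # the imported TripoSR mesh
mat = bpy.data.materials.new("TripoSR_VertexColor")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

bsdf = nodes["Principled BSDF"]               # created by use_nodes=True
color_attr = nodes.new("ShaderNodeVertexColor")  # the "Color Attribute" node
color_attr.layer_name = "Color"               # check your actual layer name

links.new(color_attr.outputs["Color"], bsdf.inputs["Base Color"])
obj.data.materials.append(mat)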
@ValleStutz 6 months ago
@@risunobushi_ai Thanks, saw that. But I'm wondering if it's possible to give it my own filename prefix, since I like my data to be structured.
@risunobushi_ai 6 months ago
I'm not sure you can, I've tried inspecting the viewer and the sampler node's properties and there's no field for filename. I guess you could change the line of code that gives the file its filename, but I wouldn't risk breaking it when it's far easier to rename the files after they're generated.
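If renaming after generation is the route, a tiny script along these lines would do it (a sketch only; the output path and prefix are assumptions, not part of the TripoSR nodes):

# Rename freshly generated meshes in ComfyUI's output folder with a
# custom prefix, ordered by modification time. Adjust paths to taste.
from pathlib import Path

output_dir = Path("ComfyUI/output")   # default output directory
prefix = "sketch3d"                   # your own project prefix

meshes = sorted(output_dir.glob("*.obj"), key=lambda p: p.stat().st_mtime)
for i, mesh in enumerate(meshes):
    mesh.rename(output_dir / f"{prefix}_{i:03d}{mesh.suffix}")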
@voxyloids8723 6 months ago
Always confused by the sketch node, because I use Sketchbook Pro and then Ctrl+V...
@leandrogoethals6599 19 days ago
The requirements for TripoSR are an absolute wall for me, I cannot continue. It's always the damn torch-related things:

Building wheels for collected packages: antlr4-python3-runtime, torchmcubes
Building wheel for antlr4-python3-runtime (setup.py) ... done
Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.9.3-py3-none-any.whl size=144577 sha256=94db4768f9c65c129ffcf3ca5ace44270622aa6fe8001270ac5a05d8106aea22
Stored in directory: c:\users\l\appdata\local\pip\cache\wheels\23\cf\80\f3efa822e6ab23277902ee9165fe772eeb1dfb8014f359020a
Building wheel for torchmcubes (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for torchmcubes (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
2024-10-11 19:56:30,599 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -E capabilities
2024-10-11 19:56:30,616 - scikit_build_core - INFO - CMake version: 3.30.4
*** scikit-build-core 0.10.7 using CMake 3.30.4 (wheel)
2024-10-11 19:56:30,632 - scikit_build_core - INFO - Build directory: build
*** Configuring CMake...
2024-10-11 19:56:30,679 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
2024-10-11 19:56:30,687 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -S. -Bbuild -Cbuild\CMakeInit.txt -DCMAKE_INSTALL_PREFIX=C:\Users\L\AppData\Local\Temp\tmpx1ax6fn6\wheel\platlib
loading initial cache file build\CMakeInit.txt
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.
-- The CXX compiler identification is MSVC 19.39.33523.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- NO CUDA INSTALLATION FOUND, INSTALLING CPU VERSION ONLY!
-- Found Python: C:\Users\L\Documents\Analconda\envs\tripo2\python.exe (found version "3.9.20") found components: Interpreter Development Development.Module Development.Embed
-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- Found pybind11: C:/Users/L/AppData/Local/Temp/pip-build-env-_noi4nnw/overlay/Lib/site-packages/pybind11/include (found version "2.13.6")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
CMake Error at CMakeLists.txt:51 (find_package):
  By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Torch", but
  CMake did not find one. Could not find a package configuration file
  provided by "Torch" with any of the following names:
    TorchConfig.cmake
    torch-config.cmake
  Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set
  "Torch_DIR" to a directory containing one of the above files. If "Torch"
  provides a separate development package or SDK, be sure it has been
  installed.
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for torchmcubes
Successfully built antlr4-python3-runtime
Failed to build torchmcubes
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (torchmcubes)
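For what it's worth, the key line in that log is the CMake error about "FindTorch.cmake": the isolated build environment can't see an installed PyTorch. One commonly suggested workaround (a sketch, not an official fix; the repo URL and pip flags are assumptions based on the usual torchmcubes install route) is to install torch first, then point CMAKE_PREFIX_PATH at torch's bundled CMake config and build without isolation:

# Hypothetical workaround sketch: expose PyTorch's CMake config to the
# torchmcubes build. torch.utils.cmake_prefix_path is a real torch
# attribute; the exact pip invocation is an example, not guaranteed.
import os
import subprocess
import torch

# Directory containing TorchConfig.cmake (e.g. .../torch/share/cmake)
os.environ["CMAKE_PREFIX_PATH"] = torch.utils.cmake_prefix_path

subprocess.run(
    ["pip", "install", "--no-build-isolation",
     "git+https://github.com/tatsy/torchmcubes.git"],
    check=True,
    env=os.environ,
)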
@Darkfredor 6 months ago
Hello, for me on Linux Mint the nodes import failed... :(
@risunobushi_ai 6 months ago
I’m not well versed in Linux, but did they fail to import via the ComfyUI Manager or via git pull?
@mechanicalmonk2020 6 months ago
Have another neural net generate the starting sketch from the 2D image
@risunobushi_ai 6 months ago
Error: stuck in a loop. Please send help.
@SilvioEngelhardt-i7p 6 months ago
Is this also possible in A1111?
@risunobushi_ai 6 months ago
I haven’t followed the latest developments in auto1111 closely since I’ve been focusing on comfyUI, but I’ll check if it’s possible to do it
@miayoung1343 6 months ago
When I select auto queue and hit prompt once, my ComfyUI starts prompting constantly even if I didn't change anything. Why is that?
@risunobushi_ai 6 months ago
That’s because auto queue does precisely that: it keeps generating automatically, as soon as the previous generation ends. We keep the seed fixed in the KSampler to “stop” new generations from continuing if nothing changes, but generations are queued regardless of whether anything changed. If nothing has changed, the generation stops as soon as it starts, and a new one is queued (which will also stop as soon as it starts if nothing changed). The logic scheme is: auto queue > generation starts > the KSampler has the “fixed” seed attribute > has something changed? If yes: finish generating, then start the next generation. If no: stop the generation immediately, then start the next one, and so on.
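In code, that control flow looks roughly like this (purely illustrative Python; the function names are hypothetical and not ComfyUI's actual internals):

# Illustrative sketch of auto-queue behavior with a fixed seed.
# All names here are hypothetical; this is not ComfyUI's real code.
import time

def graph_changed(prev_state, curr_state):
    """A generation only re-runs if any node input differs."""
    return prev_state != curr_state

def auto_queue_loop(get_graph_state, generate):
    prev_state = None
    while True:  # auto queue: re-queue as soon as the previous run ends
        curr_state = get_graph_state()  # includes the fixed seed value
        if graph_changed(prev_state, curr_state):
            generate(curr_state)        # something changed: full generation
        # else: the queued run is a no-op and "stops as soon as it starts"
        prev_state = curr_state
        time.sleep(0.1)                 # avoid a busy loop in this sketch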
@miayoung1343 6 months ago
@@risunobushi_ai I see~ I changed the seed to "fixed" and it worked just like yours. Thanks!
@miayoung1343 6 months ago
@@risunobushi_ai But I couldn't install the dependencies. It says: "The 'pip' item cannot be recognized as the name of a cmdlet, function, script file, or executable program."
@risunobushi_ai 6 months ago
Sounds like you don’t have pip installed! You can look up how to install pip, it’s pretty easy. Nowadays it should come with Python, but if you installed an older version of Python it might not have been included.
@miayoung1343 6 months ago
@@risunobushi_ai You are right! And now I have successfully made it. Thanks~~
@bitreign 6 months ago
Core of the whole video: 19:53 (as expected)
@risunobushi_ai 6 months ago
I build my tutorials so that viewers at any knowledge level can follow along and understand why every node is used. I know it takes a long time to go through the whole workflow-building process, but by giving a disclaimer at the start of the video that one can jump ahead to see how it all works, I hope everyone can start at the point that’s best for them.
@manolomaru 6 months ago
✨👌😎😵😎👍✨
@justanothernobody7142 6 months ago
It's an interesting workflow, but unfortunately the 3D side of things just isn't good enough yet. These assets would need to be completely reworked, at which point you might as well be creating them from scratch anyway. I think at the moment you're far better off just modeling the asset and using SD to help with textures.
@risunobushi_ai 6 months ago
That's a future part 2 of this video. Originally I wanted to go through SD texturing as well, but once I saw that the video would be 25 minutes just for the 2D and 3D generations, I thought it would be better to record a texturing tutorial separately.
@justanothernobody7142 6 months ago
@@risunobushi_ai Ok, I'll keep an eye out for that. It will be interesting seeing another workflow.

The problem with texturing is that most workflows I've seen so far involve generating diffuse textures, which isn't good because you're getting all the light and shadow information baked into the texture. What you really want is albedo textures. My workflow for that is long-winded, but it's the only way I've found to avoid the problem: I usually generate a basic albedo texture in Blender and then use that with img2img. I then also use a combination of 3D renders for adding in more detail and for generating ControlNet guides.

For texturing, what we really need is a model trained only on albedo textures, so it can generate images without shadow and lighting, but nothing like that has been trained as far as I know. There are a few LoRAs for certain types of textures, but they don't work that well.
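For reference, the "basic albedo texture in Blender" step described here maps onto a diffuse bake restricted to the color pass. A minimal bpy sketch (assumptions: Cycles as the render engine, and the active object already has a material with an Image Texture node selected as the bake target):

# Blender sketch: bake only the diffuse COLOR pass (albedo), excluding
# the DIRECT and INDIRECT lighting passes, so no shadows end up baked
# into the texture.
import bpy

bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(
    type='DIFFUSE',
    pass_filter={'COLOR'},   # drop direct and indirect lighting
    margin=4,                # pixel padding around UV islands
)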
@PSA04 6 months ago
@@risunobushi_ai Can't wait! This is a really powerful workflow and puts the control back into the human's hands. 100% for this type of AI creation.
@samu7015 8 days ago
TripoSR is broken atm
@sdafsdf9628 4 months ago
It's nice that the technology works, but the result in 3D is so bad... 2D can still keep up and is nice to look at.
@ManwithNoName-t1o 6 months ago
the final result looks like poop.
@risunobushi_ai 6 months ago
It does a bit, but it also looks like a promising first step that can be polished through a dedicated pipeline.
@PietKargaard 6 months ago
Way too complex for me.
@risunobushi_ai 6 months ago
You can try downloading the JSON file for the workflow and scribble away, you just need to install the missing nodes! No need to follow me through all the steps; I spend time explaining how to build it for people who want to replicate it and understand what every node does, but that's not required.
@dispholidus 5 months ago
I don't think I will ever understand the appeal of the spaghetti graph stuff over clear, reusable code. Comfy UI really is pure crap. Thanks anyways, the process itself is quite interesting.
@risunobushi_ai 5 months ago
The appeal lies in it being nothing else but an interface for node-based coding. I don't know how to code well, but node coding is a lot easier to understand, at least in my experience. Also, in my latest video I go over an implementation of frequency separation based on nodes, which has both a real-life use case and is pretty invaluable for creating an all-in-one, 1-click solution that would not be possible with other UIs (well, except Swarm, which is basically a WebUI frontend for ComfyUI).
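For the curious, frequency separation itself boils down to a blur-based split. A minimal NumPy/SciPy sketch (illustrative only, not the node graph from the video):

# Minimal frequency separation: low frequencies = Gaussian blur,
# high frequencies = original minus blur. Recombining both planes
# reconstructs the image; edit them separately for retouching.
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_separation(img: np.ndarray, sigma: float = 5.0):
    img = img.astype(np.float32)
    low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur per channel
    high = img - low
    return low, high

# Sanity check: low + high reproduces the original HxWxC image.
# low, high = frequency_separation(image_array)
# assert np.allclose(low + high, image_array.astype(np.float32))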
@kenhew4641 4 months ago
I'm the exact opposite: the moment I see nodes and lines, I immediately understand everything; it's just so clear and instinctive. Not to mention that a node-based system is, by its very design, an open and potentially unlimited system, which makes it exponentially more powerful than any other UI system - so much so that I find myself increasingly unable to use any software that doesn't have a node-based UI.
@dispholidus 4 months ago
@@kenhew4641 I'd agree it's a nice UI as long as it remains simple and you don't need too much automation or dynamic stuff. In a way, it is similar to spreadsheet systems. Yes, it is powerful, but as complexity increases, it turns into pure hell compared to a real database associated with dedicated code. The grudge I hold against ComfyUI is the same one I have against Excel in corporate environments: it becomes a standard, cannibalizing and eclipsing other, more flexible and powerful solutions. Just the idea of a 2D representation, by itself, can be a serious limitation if you have to abstract relatively complex data and processes.