You are the best. The workflow is so unique and your explanation is so good. I love how each time you explain why a node needs to be connected. By learning that, I can improvise on the workflow myself, since I know which node is for what. Thank you so much!
Uh, I was sure I needed it! Well, the more you know - I guess it's a remnant of the SAI workflows with non-fine-tuned models then. I'm pretty sure one of the first workflow tutorials I saw used it, but they were using a base model IIRC. I'll try without and see if there's any difference.
I make a point to not jump ahead and leave people behind, while providing timestamps for people who already know how it all works so they can skip forward. Up till now I've seen a positive reception to this approach, but if enough people would rather have me skip parts to get to the bottom of the workflows, I'll do that.
Dude!! You are awesome!!! Thanks for the amount of time and effort you put into this. Is it possible to use SD to add more details to the 3D texture after you have a "good" 3D model? For example, rotate the model and then generate some extra texture?
Thank you for the kind words! Yeah, there’s a couple of workflows that go from mesh to texture, I’ll reply to this comment in a couple of hours with some links.
You might want to take a look at these links:
- www.reddit.com/r/StableDiffusion/s/n603cJOsgC
- www.reddit.com/r/comfyui/s/AHKvo5UkXD
- www.reddit.com/r/comfyui/s/YEAPX125Db
- www.reddit.com/r/StableDiffusion/s/iZin0p4Fv9
- www.reddit.com/r/StableDiffusion/s/T5sfUsckAs
- and lastly www.reddit.com/r/StableDiffusion/s/gUP5d5pgFF
The last one and the Unity one are what I'd like to cover if this video is well received enough. I originally wanted to release this video as a full pipeline including retexturing, but just generating the 2D and 3D assets was already a 25-minute tutorial, so I thought it better to split it into two videos and check whether there's interest in the subject.
Nice tutorial. Would you get faster inference using an LCM model? Also, the official Tripo website service has a refine feature and it looks great! Do you know if that feature is available in ComfyUI?
Off the top of my head I can't remember the speed differences between LCM and Lightning models, but I guess you could speed things up even more with a Hyper model. It's also been a hot minute since I looked at new TripoSR features, but I'll take a look!
You can either find the repo in the Manager or on GitHub, or just drag and drop the workflow JSON file into a ComfyUI instance and install the missing nodes from the Manager. Let me know if that works for you!
The meshes are automatically exported to the output folder inside your ComfyUI directory, and the textures are already baked in. If you don't see them when you import the mesh, you might need to edit the material and append a Color Attribute node in the shader window.
The output directory is the default one, comfyui -> output. I realized afterwards that it's not clear in the GitHub docs, I should have specified it in the video. As for the textures, I'm not sure about other programs, but in Blender the color data is baked into the obj's material. So inside the Shading window, you'd need to add a new material and then hook up a "Color Attribute" node to the Base Color input.
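If you'd rather script that last step than click it together, here's a rough Blender Python sketch of the same idea. It's not from the video, and the node type name and the color layer name are assumptions that can differ between Blender versions, so treat it as a starting point:

# rough bpy sketch - run from Blender's Scripting tab with the imported mesh selected
# the layer name ("Col") and the node identifier may vary by Blender version
import bpy

obj = bpy.context.active_object                     # the imported TripoSR mesh
mat = bpy.data.materials.new(name="BakedColors")
mat.use_nodes = True
obj.data.materials.append(mat)

nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes["Principled BSDF"]                     # created automatically with use_nodes
color_attr = nodes.new("ShaderNodeVertexColor")     # shows up as "Color Attribute" in newer Blender
color_attr.layer_name = "Col"                       # assumption: name of the baked color layer

links.new(color_attr.outputs["Color"], bsdf.inputs["Base Color"])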
I'm not sure you can; I've inspected the viewer and the sampler node's properties and there's no field for the filename. I guess you could change the line of code that gives the file its name, but I wouldn't risk breaking it when it's far easier to rename the files after they're generated.
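For what it's worth, if you end up with a lot of meshes to rename, a tiny script is safer than touching the node's code. This is just a sketch with made-up names; the output path and prefix are assumptions you'd adjust to your own setup:

# quick batch rename of generated meshes - paths and prefix are placeholders
from pathlib import Path

output_dir = Path("ComfyUI/output")      # default ComfyUI output folder
new_prefix = "barrel_prop"               # hypothetical asset name

for i, mesh in enumerate(sorted(output_dir.glob("*.obj"))):
    mesh.rename(mesh.with_name(f"{new_prefix}_{i:03d}{mesh.suffix}"))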
The requirements for Tripo are an absolute wall for me, I cannot continue, it's always the damn torch-related things:

Building wheels for collected packages: antlr4-python3-runtime, torchmcubes
Building wheel for antlr4-python3-runtime (setup.py) ... done
Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.9.3-py3-none-any.whl size=144577 sha256=94db4768f9c65c129ffcf3ca5ace44270622aa6fe8001270ac5a05d8106aea22
Stored in directory: c:\users\l\appdata\local\pip\cache\wheels\23\cf\80\f3efa822e6ab23277902ee9165fe772eeb1dfb8014f359020a
Building wheel for torchmcubes (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for torchmcubes (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
2024-10-11 19:56:30,599 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -E capabilities
2024-10-11 19:56:30,616 - scikit_build_core - INFO - CMake version: 3.30.4
*** scikit-build-core 0.10.7 using CMake 3.30.4 (wheel)
2024-10-11 19:56:30,632 - scikit_build_core - INFO - Build directory: build
*** Configuring CMake...
2024-10-11 19:56:30,679 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
2024-10-11 19:56:30,687 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -S. -Bbuild -Cbuild\CMakeInit.txt -DCMAKE_INSTALL_PREFIX=C:\Users\L\AppData\Local\Temp\tmpx1ax6fn6\wheel\platlib
loading initial cache file build\CMakeInit.txt
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.
-- The CXX compiler identification is MSVC 19.39.33523.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- NO CUDA INSTALLATION FOUND, INSTALLING CPU VERSION ONLY!
-- Found Python: C:\Users\L\Documents\Analconda\envs\tripo2\python.exe (found version "3.9.20") found components: Interpreter Development Development.Module Development.Embed
-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- Found pybind11: C:/Users/L/AppData/Local/Temp/pip-build-env-_noi4nnw/overlay/Lib/site-packages/pybind11/include (found version "2.13.6")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
CMake Error at CMakeLists.txt:51 (find_package):
  By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Torch", but CMake did not find one.
  Could not find a package configuration file provided by "Torch" with any of the following names:
    TorchConfig.cmake
    torch-config.cmake
  Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set "Torch_DIR" to a directory containing one of the above files. If "Torch" provides a separate development package or SDK, be sure it has been installed.
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for torchmcubes
Successfully built antlr4-python3-runtime
Failed to build torchmcubes
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (torchmcubes)
That's because auto queue does precisely that: it keeps generating automatically, as soon as the previous generation ends. We keep the seed fixed in the KSampler to "stop" new generations from doing anything if nothing has changed, but the generations are queued regardless of whether anything changed. If nothing has changed, the generation stops as soon as it starts, and a new one is queued right after (which will also stop as soon as it starts if nothing changed). The logic goes like this: auto queue > generation starts > the KSampler sees its seed is set to "fixed" > has something changed? If yes, it finishes generating, then the next generation starts; if no, the generation stops immediately, then the next generation starts, and so on.
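If it's easier to read as pseudocode, this is roughly the behavior I'm describing. It's not ComfyUI's actual code, just made-up names to show the logic:

# toy sketch of the auto queue + fixed seed behavior, not ComfyUI's real implementation
def auto_queue_loop(workflow):
    last_inputs = None
    while workflow.auto_queue:                       # auto queue keeps queueing generations
        gen = workflow.start_generation()            # a new one starts as soon as the last one ends
        if workflow.seed_mode == "fixed" and workflow.inputs == last_inputs:
            gen.stop()                               # nothing changed -> stops as soon as it starts
        else:
            gen.run()                                # something changed -> full generation
            last_inputs = workflow.inputs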
@risunobushi_ai But I couldn't install the dependencies. It says: "The "pip" item cannot be recognized as the name of a cmdlet, function, script file, or executable program."
Sounds like you don't have pip installed! You can look up how to install pip, it's pretty easy. Nowadays it comes bundled with Python, but if you installed an older version of Python it might not have been included.
I build my tutorials so that viewers can follow along and understand why every node is used, no matter their prior knowledge. I know it takes a long time to go through the whole workflow-building process, but by giving a disclaimer at the start of the video that you can jump ahead to see how it all works, I hope everyone can start from the point that's best for them.
It's an interesting workflow, but unfortunately the 3D side of things just isn't good enough yet. These assets would need to be completely reworked, at which point you might as well be creating them from scratch anyway. I think at the moment you're far better off just modeling the asset and using SD to help with textures.
That's a future part 2 of this video. Originally I wanted to go through SD texturing as well, but once I saw that the video would be 25 minutes just for the 2D and 3D generations, I thought it would be better to record a texturing tutorial separately.
@risunobushi_ai Ok, I'll keep an eye out for that. It will be interesting to see another workflow. The problem with texturing is that so far most workflows I've seen involve generating diffuse textures, which isn't good because you get all the light and shadow information baked into the texture. What you really want is albedo textures. My workflow for that is long-winded, but it's the only way I've found to avoid the problem: I usually generate a basic albedo texture in Blender and then use that with img2img. I also use a combination of 3D renders for adding in more detail and for generating ControlNet guides. For texturing, what we really need is a model trained only on albedo textures, so it can generate images without shadow and lighting, but nothing like that has been trained as far as I know. There are a few LoRAs for certain types of textures but they don't work that well.
You can try downloading the JSON file for the workflow and scribble away, you just need to install the missing nodes! No need to follow me through all the steps; I spend time explaining how to build it for people who want to replicate it and understand what every node does, but it's not required.
I don't think I will ever understand the appeal of the spaghetti graph stuff over clear, reusable code. Comfy UI really is pure crap. Thanks anyways, the process itself is quite interesting.
The appeal lies in it being nothing more than an interface for node-based coding. I don't know how to code well, but node coding is a lot easier to understand, at least in my experience. Also, in my latest video I go over an implementation of frequency separation based on nodes, and that has both a real-life use case and is pretty invaluable for creating an all-in-one, 1-click solution that would not be possible with other UIs (well, except Swarm, which is basically a web UI frontend for ComfyUI).
I'm the exact opposite: the moment I see nodes and lines I immediately understand everything, it's just so clear and instinctive. Not to mention that a node-based system is, by its very design, an open and potentially unlimited system, which makes it exponentially more powerful than any other UI system, so much so that I find myself increasingly unable to use any software that isn't node-based.
@kenhew4641 I'd agree it's a nice UI as long as things remain simple and you don't need too much automation or dynamic behavior. In a way, it's similar to spreadsheet systems: yes, it's powerful, but as complexity increases it turns into pure hell compared to a real database paired with dedicated code. The grudge I hold against ComfyUI is the same one I have against Excel in corporate environments: it becomes a standard, cannibalizing and eclipsing other, more flexible and powerful solutions. Just the idea of a 2D representation can, by itself, be a serious limitation when you have to abstract relatively complex data and processes.