Jarvislabs.ai provides a 1-click GPU cloud platform for your AI and DL training. Spin up modern Nvidia A100, A6000, A5000, Quadro RTX 5000, and Quadro RTX 6000 GPUs, and more, at lower prices in seconds.
Some of the prominent features:
🚀 Get an optimized environment for the popular frameworks (PyTorch, TensorFlow, fastai, or BYOC).
🚀 Access JupyterLab, configured securely.
🚀 SSH/connect to the instance using your favorite IDE.
💰 Pay per minute.
🚀 Scale GPUs and storage, and change GPUs anytime you want.
⏸️ Automatically pause the instance through code.
🚀 Lower per-hour pricing for interruptible workloads, and reduced weekly/monthly pricing options for long-term dedicated use.
💬 Talk anytime with the team who built the product. We would love to help you on your deep learning journey.
🤝 Trusted by 10,000s of AI practitioners worldwide.
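The "pause the instance through code" feature can be triggered from the end of a training script. A minimal sketch of the idea, assuming a REST-style pause endpoint and an API token; the endpoint URL, payload, and environment variable names below are placeholders rather than the documented Jarvislabs API, so check jarvislabs.ai/docs for the real calls:

import os
import requests

# Placeholder names: set these to whatever your account/instance actually uses.
API_TOKEN = os.environ["JARVIS_API_TOKEN"]
INSTANCE_ID = os.environ["JARVIS_INSTANCE_ID"]

# ... training finishes here ...

# Hypothetical pause call: replace the URL and body with the documented endpoint.
resp = requests.post(
    "https://api.jarvislabs.ai/instance/pause",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"instance_id": INSTANCE_ID},
    timeout=30,
)
resp.raise_for_status()
print("Pause request accepted:", resp.status_code)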
Spin up your DL instance today at cloud.jarvislabs.ai. Learn more at jarvislabs.ai/docs.
The topic is interesting, but (in common with most YouTube Comfy experts) the whole presentation is confusing for the 90% of the audience that has just stumbled upon this. I think to be more successful you need to be clearer about what you want to achieve and why it's a good idea. Explain how Jarvislabs fits into this, and make it clear what resources need to be downloaded and exactly how, in the least problematic way. I don't want to appear too negative, as of course you are wanting to be helpful; I'm just trying to give some tips on how to improve your presentation and hopefully, consequently, increase your subscriber numbers.
Go around the world to find good Comfy tutorials and land back in India! GREAT PRESENTATION, GUYS. If you take requests... can you do one on training your own DreamBooth, like creating old Indian comics? Cheers, and subscribed.
@JarvislabsAI I think it's never guaranteed... and they have already trained on Wikipedia and unreliable data like Reddit and some online blogs and websites. It will always be useless unless you know the exact answer already.
I've heard that if you use something other than 1024x1024 for your Empty Latent Image node, it will be less likely to add a watermark. I use 1016x1016.
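For reference, that resolution is set on the width/height inputs of the Empty Latent Image node. A minimal sketch of what the node looks like in ComfyUI's API-format JSON, written as a Python dict; the values are just the ones mentioned above, SDXL resolutions should stay multiples of 8 (which 1016 is), and the watermark effect itself is the commenter's observation rather than anything documented:

# Empty Latent Image node as it appears in an API-format workflow (illustrative values).
empty_latent = {
    "class_type": "EmptyLatentImage",
    "inputs": {"width": 1016, "height": 1016, "batch_size": 1},
}
print(empty_latent)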
After two hours, I still can't get Inference_Core_MiDaS-DepthMapPreprocessor added to ComfyUI to use this workflow. The link associated with the missing node does not work. Where do you get this node?
Hi @DerekShenk, if you tried installing the Inference Core node via ComfyUI Manager, there seems to be a download issue. No worries, you can install it manually using the following terminal commands. Note: make sure to only copy and paste the parts within the quotation marks (" ") into your terminal.
1. Navigate to the custom nodes directory: "cd ComfyUI/custom_nodes"
2. Clone the repo: "git clone github.com/LykosAI/ComfyUI-Inference-Core-Nodes.git"
3. Navigate into the Inference Core node folder: "cd /home/ComfyUI/custom_nodes/ComfyUI-Inference-Core-Nodes/"
4. Install all the requirements: "pip install ."
5. Restart the ComfyUI server.
@JarvislabsAI Thank you, however I had already followed those steps... git clone... and pip install. After trying three more times to install this inference node, I searched for the node and found it still failed to load when launching ComfyUI. The error is:
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes\__init__.py", line 1, in <module>
    from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'
I have updated ComfyUI. Any suggestions?
Hi @DerekShenk, you might be missing some dependencies, but most of them should be installed by running the following commands: 'cd /home/ComfyUI/custom_nodes/ComfyUI-Inference-Core-Nodes/' and 'pip install .' Ensure that you include the dot at the end of the second command so that all the dependencies install correctly. If the issue persists, you can visit our website and contact us through the chat feature.
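One extra check worth making with the Windows portable build seen in the error above: a ModuleNotFoundError like this often means pip installed the package into a different Python than the one ComfyUI actually runs with (the portable build ships its own embedded interpreter). A small sketch to verify, run with the same interpreter that launches ComfyUI:

# Confirm the package is visible to the interpreter that runs ComfyUI.
import sys
import importlib.util

print("interpreter:", sys.executable)
spec = importlib.util.find_spec("inference_core_nodes")
print("inference_core_nodes found at:", spec.origin if spec else "NOT FOUND")
# If it prints NOT FOUND, rerun the `pip install .` step with that same interpreter,
# e.g. `<path-to-ComfyUI's-python> -m pip install .` inside the node's folder.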
Thank you 🙂, great video. I am working on a few updates for this, but meanwhile the VHS combine node is not needed for the workflow; it is just there to display the video, since the video gets written to the output folder anyway. I am developing a new video previewer and quality upscaler. 🎉
You can double-click and search for the Load Checkpoint node. If you want the checkpoint model, you can download it from this link: huggingface.co/RunDiffusion/Juggernaut-XL-v8/tree/main
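If you would rather pull the checkpoint straight onto the machine instead of through the browser, here is a minimal sketch using huggingface_hub; the target folder assumes the usual ComfyUI checkpoints directory, so adjust the path to your install:

from huggingface_hub import snapshot_download

# Grab only the .safetensors weights from the repo linked above and place them
# where ComfyUI's Load Checkpoint node will find them.
snapshot_download(
    repo_id="RunDiffusion/Juggernaut-XL-v8",
    allow_patterns=["*.safetensors"],
    local_dir="ComfyUI/models/checkpoints/Juggernaut-XL-v8",
)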
Brilliant stuff, mate. I'm looking for a zero-shot method for procedural inpainting of jewelry. Right now I'm training SDXL LoRAs, but they are never quite accurate. Can I inpaint a necklace on a generated model with this method?
Thanks, but when I start the workflow, a red warning comes from IF_PromptMkr: "HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061]". How should I fix it? Please help.
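A note on that error: port 11434 is the default port of a local Ollama server, which appears to be what the IF_PromptMkr node is trying to reach, and WinError 10061 simply means nothing is listening there. A minimal connectivity check, assuming Python with requests installed (the URL mirrors the one in the error message):

import requests

try:
    # Ollama answers on its root path when the server is up.
    r = requests.get("http://localhost:11434", timeout=5)
    print("LLM server reachable:", r.status_code, r.text[:80])
except requests.exceptions.ConnectionError:
    print("Nothing is listening on localhost:11434.")
    print("Start the local LLM backend (e.g. `ollama serve`) or point the node at the host/port where it actually runs.")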
You don't need LLM models in the workflow; they just waste the hardware that is really needed to do the actual job, upscaling. Just remove the unneeded nodes.
I cannot get this to run on my laptop; it fails to load into ComfyUI with both the Manager and a manual install. I have fully updated Comfy, and I also ran the txt file to get all the required files, and it still fails. Any idea why? I am running an ASUS ROG Strix 2024 with 64 GB of RAM and a 4090 with 16 GB of VRAM. I have all the requirements needed for AI generation.
These are all open-source software, so there is not much cost associated with them. If you need a GPU and the base software setup, then you would be paying for the compute. Pricing starts at $0.49 an hour, and the actual billing happens per minute: jarvislabs.ai/pricing
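To make the per-minute billing concrete, a quick worked example at the quoted starting rate (the session length is just an illustration):

# Per-minute billing at the $0.49/hour starting rate.
rate_per_hour = 0.49
minutes_used = 95  # e.g. a 1 hour 35 minute session
cost = rate_per_hour / 60 * minutes_used
print(f"Cost for {minutes_used} minutes: ${cost:.2f}")  # -> $0.78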
Hi, very good intro video. Do you have any plans to provide advanced-level ComfyUI workflows, and can the same be done for RAM-intensive workflows and animation workflows? This should help people struggling to run long animations on their home laptops, and it would also help to provide a script with a browser-style UI that links to them, so we can access and update any workflows running on the server. Secondly, what are your plans for updating the ComfyUI backend, since every week there are tons of updates to the nodes, to ComfyUI itself, and to the installation procedures? This is critical, because when one wants to work with products like TripoSR or higher-end SDXL models, it really needs higher-end GPUs, more RAM, and larger allocations.
Hey Vishnu, thanks for the simple explanations. I'm a designer, not a dev, but seeing this I have a doubt: does this Ferret model work the same way as the 'circle something on screen to search' feature in the latest Samsung Galaxy phones? Ultimately, it recognises whatever we pointed at, drew a box around, or sketched, right?
Thanks, great video! Could you do one on text and image input (img2img) workflows? I have a workflow set up, but I am getting bad results, I think because the denoise parameter has to be set and is apparently very sensitive. I watched the one about Stable Cascade doing this, but I'd prefer to use SDXL, and the Stable Cascade (part 3) video didn't go into detail about this.
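On the denoise sensitivity: in a ComfyUI img2img workflow that value lives on the KSampler node, and small changes matter because it controls how much of the input image survives. A minimal sketch of the node as a Python dict in API-format JSON; the node references and numbers are placeholders for illustration, not recommended settings:

# KSampler node in an API-format workflow; ["4", 0] style entries point at other
# (hypothetical) node IDs in the same workflow.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0],
        "positive": ["6", 0],
        "negative": ["7", 0],
        "latent_image": ["10", 0],  # latent encoded from the input image via VAE Encode
        "seed": 42,
        "steps": 25,
        "cfg": 7.0,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 0.45,  # roughly: lower keeps more of the input image, near 1.0 mostly ignores it
    },
}
print(ksampler)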