JarvisLabs AI
Jarvislabs.ai provides a 1-click GPU cloud platform for your AI and DL training.
Spin up modern NVIDIA A100, A6000, A5000, Quadro RTX 5000, and Quadro RTX 6000 GPUs, and more, at lower prices in seconds.

Some of the prominent features:
🚀 Get an optimized environment for popular frameworks (PyTorch, TensorFlow, fastai, or BYOC).
🚀 Access a securely configured JupyterLab.
🚀 SSH/connect to the instance using your favorite IDE.
💰 Pay per minute.
🚀 Scale GPUs and storage, and change GPUs anytime you want.
⏸️ Automatically pause your instance through code.
🚀 We offer lower hourly pricing for interruptible workloads and reduced weekly/monthly pricing options for long-term dedicated use.
💬 Talk anytime with the team who built the product. We would love to help you on your deep learning journey.
🤝 Trusted by tens of thousands of AI practitioners worldwide.

Spin up your DL instance today at cloud.jarvislabs.ai.
Learn more at jarvislabs.ai/docs.


Understanding Fastai Vision Learner
17:42
2 years ago
Understanding Pytorch Module
10:36
2 years ago
Comments
@nayemaffan 8 days ago
Very nice explanation
@JustYourAverageToes 14 days ago
I'm way too stupid for this
@zdz-wr7oe 16 days ago
Hello, do you have those workflow pics?
@JarvislabsAI 16 days ago
Hi, we have linked the workflow in the description.
@RickySupriyadi 16 days ago
Is this what they call a magic prompt, where the Ollama model refines the user prompt?
@chorton53 20 days ago
This is really amazing!! Great work, guys!
@spiffingbooks2903 21 days ago
The topic is interesting, but (in common with most YouTube Comfy experts) the whole presentation is confusing for the 90% of the audience that has just stumbled upon this. I think to be more successful you need to be clearer about what you want to achieve and why it's a good idea. Explain how this JarvisAI fits into it, and make it clear what resources need to be downloaded and exactly how, in the least problematic way. I don't want to appear too negative, as of course you are trying to be helpful; I'm just trying to give some tips on how to improve your presentation and hopefully consequently increase subscriber numbers.
@evolv_85 22 days ago
Awesome. Thanks.
@behrampatel4872 25 days ago
Go around the world to find good Comfy tutorials and land back in India! GREAT PRESENTATION, GUYS. If you take requests... can you do one on training your own DreamBooth, like creating an old Indian comic? Cheers and subscribed. b
@JarvislabsAI 23 days ago
Sure, would love to.
@behrampatel4872 23 days ago
@JarvislabsAI Can't wait. Cheers
@YajuvendraSinghRawat 29 days ago
It's a wonderful video, clearly and concisely explained.
@JarvislabsAI 23 days ago
Glad you liked it.
@Science-vt4vg 1 month ago
Still, it makes mistakes and hallucinates.
@JarvislabsAI 27 days ago
They are going to keep doing that for some more time.
@Science-vt4vg 27 days ago
@JarvislabsAI I think it's never guaranteed... and they have already trained on Wikipedia and unreliable data like Reddit and some online blogs and websites. It will always be useless unless you know the exact answer already.
@JarvislabsAI 23 days ago
That's why some call it a feature...
@Science-vt4vg 23 days ago
@JarvislabsAI Just to fool people and charge money for wrong information using a glorified tool.
@joyceracing99 1 month ago
I've heard that if you use something other than 1024x1024 for your Empty Latent Image node, it will be less likely to put a watermark. I use 1016x1016.
@JarvislabsAI 1 month ago
Hi @joyceracing, we haven't tested that yet. BTW, thanks for the information. We will test it out.
@Kaoru8168 1 month ago
@JarvislabsAI Your ControlNet image had a watermark.
@itycagameplays 1 month ago
Exactly. This is a known issue.
@DerekShenk 1 month ago
After two hours, I still can't get Inference_Core_MiDaS-DepthMapPreprocessor added to ComfyUI to use this workflow. The link associated with the missing node does not work. Where do you get this node?
@JarvislabsAI 1 month ago
Hi @DerekShenk, if you tried installing the inference core node via ComfyUI Manager, there seems to be a download issue. No worries, you can install it manually with the following terminal commands (only copy and paste the parts within the quotation marks into your terminal):
1. Navigate to the custom nodes directory: "cd ComfyUI/custom_nodes"
2. Clone the repo: "git clone github.com/LykosAI/ComfyUI-Inference-Core-Nodes.git"
3. Navigate into the inference core node folder: "cd /home/ComfyUI/custom_nodes/ComfyUI-Inference-Core-Nodes/"
4. Install all the requirements: "pip install ."
5. Restart the ComfyUI server.
@DerekShenk 1 month ago
@JarvislabsAI Thank you, however I had already followed those steps... git clone... and pip install. After trying three more times to install this inference node, I searched for the node and found it still failed to load when launching ComfyUI. The error is:
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes\__init__.py", line 1, in <module>
from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'
I have updated ComfyUI. Any suggestions?
@JarvislabsAI 1 month ago
Hi @DerekShenk, you might be missing some dependencies, but most of them should be downloaded by running the following commands: 'cd /home/ComfyUI/custom_nodes/ComfyUI-Inference-Core-Nodes/' and 'pip install .'. Ensure that you include the dot at the end of the second command to install all the dependencies correctly. If the issue persists, you can visit our website and contact us through the chat feature.
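For anyone on the Windows portable build (as the error path above suggests), the usual catch is that a plain 'pip' installs into a different Python than the one ComfyUI ships with. A minimal sketch using the bundled interpreter; the folder layout below is the default ComfyUI_windows_portable one and is an assumption, so adjust paths to your install:
:: run from a cmd prompt inside the ComfyUI_windows_portable folder
cd ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes
..\..\..\python_embeded\python.exe -m pip install .
:: check that the bundled interpreter can now import the module
..\..\..\python_embeded\python.exe -c "import inference_core_nodes; print('ok')"
:: then restart ComfyUI (e.g. via run_nvidia_gpu.bat)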
@iresolvers 26 days ago
@DerekShenk I have the same error, it's a waste of time!!!
@KINGLIFERISM 25 days ago
You do not need the depth map they used. You can use any depth preprocessor.
@Empaches 1 month ago
Thanks for the tutorial!! Why does the workflow operate mostly on the CPU?
@JarvislabsAI 1 month ago
Hi @Empaches, we have no idea why that's happening. We'll check it out and let you know if we find something to help you.
@impactframes 1 month ago
Thank you 🙂, great video. I am working on a few updates for this, but meanwhile the VHS combine node is not needed for the workflow; it is just there to display the video, since the video gets written to the output folder. I am developing a new video previewer and quality upscaler. 🎉
@JarvislabsAI 1 month ago
Oh, that's great.
@huwhitememes 26 days ago
Awesome
@impactframes 25 days ago
@huwhitememes Thanks :D
@israeldelamoblas5043 1 month ago
Ollama is super slow. I would like a faster version using LM Studio or similar. Thanks.
@JarvislabsAI 1 month ago
Noted!
@RickySupriyadi 16 days ago
Slow or fast, doesn't that depend on which model you are using? phi3 in Ollama is blazing fast.
@HinduNerd 1 month ago
Is it available for Automatic1111??
@JarvislabsAI 1 month ago
Yes, Automatic1111 also has SUPIR.
@DeanCassady 1 month ago
Thank you for the workflow.
@mufasa.alakhras 1 month ago
How do I get the Load Checkpoint node?
@JarvislabsAI 1 month ago
You can double-click and search for the Load Checkpoint node. If you want the checkpoint model, you can download it from this link: huggingface.co/RunDiffusion/Juggernaut-XL-v8/tree/main
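If you would rather pull the checkpoint from a terminal, here is a minimal sketch using the huggingface_hub CLI; the exact .safetensors filename and the ComfyUI path are assumptions, so check the repo's file list and your own install first:
pip install -U "huggingface_hub[cli]"
# filename below is a guess -- verify it on the repo page before running
huggingface-cli download RunDiffusion/Juggernaut-XL-v8 juggernautXL_v8Rundiffusion.safetensors --local-dir ComfyUI/models/checkpoints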
@mufasa.alakhras 1 month ago
@JarvislabsAI Thank you!
@MrDebranjandutta 1 month ago
Brilliant stuff, mate. I'm looking for a zero-shot method for procedural inpainting of jewelry. Right now I'm training SDXL LoRAs, but they are not always accurate. Can I inpaint a necklace on a generated model with this method?
@JarvislabsAI 1 month ago
Yes, you can use inpainting to add jewelry to generated models. We will also drop a video on inpainting soon.
@101worlds 1 month ago
Thanks, but when I start the workflow, a red warning comes from IF_PromptMkr: "HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061]". How should I fix it, please?
@JarvislabsAI 1 month ago
Did you run the Ollama server?
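For anyone hitting the same [WinError 10061] message: it usually just means nothing is listening on localhost:11434, Ollama's default port. A minimal sketch for getting the server up; the model name is only an example, use whatever the IF_AI node is configured for:
# start the Ollama server (listens on localhost:11434 by default)
ollama serve
# in a second terminal, pull and sanity-check the model the node will call
ollama pull llama3
ollama run llama3 "hello"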
@xyzxyz324 1 month ago
You don't need LLM models in the workflow; they just waste the hardware that is needed to do the real job, upscaling. Just remove the unneeded nodes.
@hempsack 1 month ago
I cannot get this to run on my laptop; it fails to load into ComfyUI with both the Manager and a manual install. I have fully updated Comfy, and I also ran the txt file to get all the required files, and it still fails. Any idea why? I am running an Asus ROG Strix 2024 with 64 GB of RAM and a 4090 with 16 GB of VRAM. I have all the requirements needed for AI generation.
@JarvislabsAI 1 month ago
Did you try checking the error log to narrow down the issue?
@TheGalacticIndian 1 month ago
What are the VRAM requirements?
@JarvislabsAI 1 month ago
Hi, a GPU with a minimum of 12GB VRAM is recommended to run this workflow.
@101worlds 1 month ago
Would you please share the workflow? Thank you.
@JarvislabsAI 1 month ago
Hi, here's the workflow: github.com/jarvislabsai/comfyui_workflows/blob/main/SUPIR_Workflow.json. We will make sure to add it to the description.
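To grab it from a terminal instead of the GitHub page, a quick sketch; the raw URL is inferred from the repo path above, so double-check it if the download fails:
# fetch the workflow JSON, then load it in ComfyUI via Load or drag-and-drop
wget https://raw.githubusercontent.com/jarvislabsai/comfyui_workflows/main/SUPIR_Workflow.json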
@andrewerdle 1 month ago
SUPIR!
@johnfedrickenator 1 month ago
I guess I was pronouncing it as SUPER 😀
@impactframes 1 month ago
Hi, thank you very much, great tutorial ❤
@JarvislabsAI 1 month ago
Thanks for creating the node, waiting for your future work 😊
@impactframes 1 month ago
@JarvislabsAI I made a super update, please check it out, along with my other nodes for talking avatars 😉, and thanks again for the tutorial ❤️
@JarvislabsAI 1 month ago
@impactframes Sure, we will look into it 🙌
@impactframes 1 month ago
@JarvislabsAI Thank you :)
@Ai-dl2ut 1 month ago
Hello sir... does this IF_AI node take a lot of time??? For me it's taking like 15 minutes to load for every queue... using an RTX 3060.
@JarvislabsAI 1 month ago
It depends on what model you choose. Also try running Ollama directly and see how fast it is.
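A quick way to time Ollama outside ComfyUI, assuming your Ollama build supports the --verbose flag; phi3 is just an example of a small, fast model:
# --verbose prints load time, eval counts and tokens/second after the response
ollama run phi3 --verbose "Describe a cat in one sentence."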
@Ai-dl2ut 1 month ago
@JarvislabsAI Thanks, let me try that.
@Ai-dl2ut 1 month ago
Very nice explanation :)
@JarvislabsAI 1 month ago
Thanks a lot 😊
@aimademerich 1 month ago
Phenomenal
@JarvislabsAI 1 month ago
Thanks :)
@Necro-wr2tn 1 month ago
Hey, this looks great but I have a question. How much does it cost to generate these images?
@JarvislabsAI 1 month ago
This is all open-source software, so there is not much cost associated with it. If you need a GPU and the base software setup, then you would be paying for the compute. The pricing starts at $0.49 an hour, and the actual billing happens per minute. jarvislabs.ai/pricing
@goodchoice4410 1 month ago
lol
@Necro-wr2tn 1 month ago
@goodchoice4410 Why lol?
@vigneshvicky6720 1 month ago
Very, very nice, man.. tq 💓
@JarvislabsAI 1 month ago
You're welcome Vicky 😃
@user-ef4df8xp8p 2 months ago
Hi, you need to raise the volume of the videos during editing...
@JarvislabsAI 2 months ago
Sure, we missed it. We will improve it in the upcoming videos. Thanks for your suggestion. We hope you are enjoying the video!
@rishabh063 2 months ago
Hi Vishnu, great video
@JarvislabsAI 2 months ago
Thanks, Rishab
@mbcase 2 months ago
Volume is 50% of other YouTube videos.
@JarvislabsAI 2 months ago
Thanks for the valuable feedback. We will try to fix it in our upcoming videos.
@jagsAImagic 2 months ago
Hi, very good intro video. Do you have any plan to provide advanced-level ComfyUI workflows, and can the same be done for RAM-intensive workflows and animation workflows? This would help people struggling to run long animations on their home laptops, and it would also help to provide a script for a browser-style UI that links to them, so we can access and update any workflows running on the server. Secondly, what are your plans for updating the ComfyUI backend, since every week there are tons of updates to the nodes, to ComfyUI itself, and to the installation procedures? This is critical, because working with products like TripoSR or higher-end SDXL models really needs higher GPU, RAM and allocations.
@JarvislabsAI 2 months ago
Yes, we plan to explore more advanced workflows. We will constantly update the ComfyUI installation with the latest features.
@gthin 2 months ago
Hey Vishnu, thanks for the simple explanations. I'm a designer, not a dev, but seeing this I have a doubt. Does this Ferret model work the same way as 'circle something on screen to search' in the latest Samsung Galaxy phones? Ultimately it recognises what we pointed at, drew a box around, or sketched... right?
@JarvislabsAI 2 months ago
Yeah, that's right. I have not used Samsung, but the model works in a similar way.
@mbcase 2 months ago
Thanks, great video! Could you do one on text and image (img2img) inputs? I have a workflow set up, but am getting bad results, I think because the denoise parameter has to be set and is apparently very sensitive. I watched the one about Stable Cascade doing this, but I'd prefer to use SDXL, and the Stable Cascade (part 3) video didn't go into detail about this.
@JarvislabsAI 2 months ago
Sure, we will post a video focusing on image inputs. Thanks for commenting :)
@goelnikhils 2 months ago
Amazing explanation, Vishnu
@JarvislabsAI 2 months ago
Thank you.
@edumaba 2 months ago
I have data on my Google Drive and I want to use that instead of downloading; how do I go about that?
@JarvislabsAI 2 months ago
I would recommend downloading the data and using it, as localised data can help increase performance.
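If you do want to pull data straight from Google Drive onto the instance, a minimal sketch using the gdown tool; FILE_ID and FOLDER_URL are placeholders for your own share links (which must be set to 'anyone with the link'):
pip install gdown
# single file, using the id from the Drive share link
gdown FILE_ID -O data.zip
# or an entire shared folder
gdown --folder "FOLDER_URL"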
@Saran_2018 3 months ago
Great explanation 💐👏
@aravindram922 3 months ago
Nice explanation, superb, keep going 🎉🎉🎉
@HappyHappy-tl8qm 3 months ago
🔥
@user-rr8qz3yo7v 3 months ago
👌 super
@saran_freaky 3 months ago
It's super clear 🙌❤️ and easy to understand..
@dennyfranklin9568 3 months ago
Useful
@barath.m2685 3 months ago
Clear explanations, keep doing more 🎉
@skedison5406 3 months ago
Superb, keep it up ❤
@techthief3278 3 months ago
Awesome ❤ keep doing more!!
@user-tg5kq5ek7z 3 months ago
It is very useful, thanks for sharing this 😊
@steffyz9154 3 months ago
Superb!! 🎉