
Easy Text-to-Video in Python | Python Tutorial with Damo-vilab Model 

AssemblyAI
Subscribe · 138K subscribers
26K views
In this engaging, hands-on tutorial, we unlock the power of Python to breathe life into text by turning it into dynamic videos! We demystify the intricacies of the state-of-the-art Damo-vilab text-to-video model, breaking it down into easily digestible parts for beginners and experts alike. From delving into the captivating world of diffusion models, extracting key features from text, to transforming abstract spaces into watchable videos, we make this cutting-edge AI technology accessible to all. This Python tutorial not only explores the theoretical workings of the model but dives straight into practical application with real code snippets and clear, step-by-step explanations. If you've been on the hunt for a comprehensive guide to text-to-video generation using Python, your search ends here. So hit that play button, and let's start generating videos from text! Remember to like, share, and subscribe for more content like this.
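A minimal sketch of the kind of pipeline covered in the video, assuming the Hugging Face diffusers library and the damo-vilab/text-to-video-ms-1.7b checkpoint; the function name `generate`, the prompt, and the parameter choices are illustrative, not the tutorial's exact code.

```python
# Sketch: text-to-video generation with the damo-vilab model via diffusers.
# Assumes `pip install diffusers transformers accelerate torch` and a CUDA GPU.
MODEL_ID = "damo-vilab/text-to-video-ms-1.7b"  # Hugging Face Hub model id


def generate(prompt: str = "Spider-Man is surfing", out_path: str = "generated.mp4") -> str:
    # Heavy imports are kept inside the function so the file stays importable
    # on machines without a GPU or without the model downloaded.
    import torch
    from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
    from diffusers.utils import export_to_video

    # The weights are fetched from the Hugging Face Hub repo named by MODEL_ID
    # on first use and cached locally; that is "where the code knows to download from".
    pipe = DiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, variant="fp16"
    )
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    pipe.enable_model_cpu_offload()  # offload idle submodules to save GPU memory

    # Run the denoising loop; depending on the diffusers version, the output's
    # .frames may be a flat list of frames or a batch needing .frames[0].
    frames = pipe(prompt, num_inference_steps=25).frames

    # Without output_video_path, export_to_video writes to the OS temp folder.
    return export_to_video(frames, output_video_path=out_path)
```

Note that if `output_video_path` is omitted, the clip lands in the OS temp directory (e.g. `AppData\Local\Temp` on Windows), which is why the file can seem to "disappear" after a run.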
Damo-vilab model: tinyurl.com/mundvt8n
Get your free AssemblyAI API token 👇:
www.assemblyai.com/?...
▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬▬
🖥️ Website: www.assemblyai.com
🐦 Twitter: / assemblyai
🦾 Discord: / discord
▶️ Subscribe: ru-vid.com?...
🔥 We're hiring! Check our open roles: www.assemblyai.com/careers
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#MachineLearning #DeepLearning

Published: 10 Jul 2024

Comments: 40
@user-tt7qz1ky8w 2 days ago
It's really helpful for me, thank you for sharing the valuable content.
@jstevh 11 months ago
Was interesting. I like these short, simple, and very to-the-point demo videos.
@khalidelgazzar 5 months ago
Great tutorial. Short & sweet
@360withpaul 11 months ago
Very very interesting, thanks for sharing! 🔥
@digitalcrore3589 11 months ago
Very good explanation, Smitha. Cool.
@jamesmuniu 11 months ago
Was finally able to set up the environment on my workstation (that took some doing). Went for a 'Lion Dancing' generation and got a Chinese dragon dance video :) The video is automatically saved in a hidden folder, User/AppData/Local/Temp/Videoname. Weird, but at least it works. Thanks again, awesome stuff 👏
@onandonline1715 6 months ago
awesome. thanks
@yashshah8002 7 months ago
Hi, how can we generate long videos? Do we need to use LangChain?
@YATENDRA3192 11 months ago
It would be helpful if you guys could share the Colab notebook as well.
@AliceWickham 17 days ago
❤❤ great
@brainystuck9278 10 months ago
very good thanks, you were amazing beautiful
@jayanthkothapalli9.2 11 months ago
Ma'am, the video is not showing in the folder after running the code. What should I do?
@baazarkibaatein 11 months ago
What's the use case for it?
@zimissscameras 2 months ago
Can you create longer videos with this?
@kevinehsani3358 11 months ago
Thanks for the video. One thing I do not understand: how does it know where to download from? I see the model name, but where is it being downloaded from, and how does the code know?
@valentinsanchez9011 7 months ago
Hi man, I had exactly the same question. Idk if you have used Hugging Face before, but it's a website with every kind of model you could think of. When you go there and pick a pre-trained model, you click the button that says "Use in Diffusers" to get the code for loading the model from the diffusers library.
@ranajigaming7172 3 months ago
Can you explain moviepy and gTTS?
@HomelessinBrighton 4 months ago
Why does the generated video display a watermark from "Shutterstock"? Are you sure you played the correct video?
@AssemblyAI 4 months ago
The model was trained on video data with watermarks so even in its generated video, we see this watermark being replicated (although not perfectly). We highly encourage you to run it with your own prompts to test it out.
@kevinzhu9305 11 months ago
Great one! Just one thing: please provide subtitles. I failed to understand some of the words; I'm not a native English speaker. Thanks!
@khalidal-reemi3361 11 months ago
You can enable them via the settings icon ⚙
@robertsutkowski3170 8 months ago
🤗🤗🤗🥰
@FokalAcademy 11 months ago
The video looks like it's from Shutterstock :))
@robertsutkowski3170 8 months ago
.... hmmm... Python???? Maybe 😊 a new experience for me 🙂
@vigneshwaransr1443 6 months ago
error: AssertionError: Torch not compiled with CUDA enabled
@hashidulislam2834 11 months ago
Was the video created, or was it taken from the web?
@pumpkin162 11 months ago
Def. Taken
@AssemblyAI 4 months ago
The model was trained on video data with watermarks so even in its generated video, we see this watermark being replicated. We highly encourage you to run it with your own prompts to test it out.
@gileneusz 11 months ago
I don't know what to think about Spider-Man surfing... it's confusing
@jayanthkothapalli9.2 11 months ago
Please share the code too ❤
@robertsutkowski3170 8 months ago
USA👍
@lifelonglearning1531 5 months ago
Your video is very good, but I have a problem:
TypeError: WebpageQATool._run() missing 1 required positional argument: 'question'
Traceback:
  File "C:\ProgramData\anaconda3\envs\chat_with_website\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\Taechatuch\Desktop\LangChainLLM\ChatbotWebsite\app.py", line 82, in
    final_answer = run_llm(input_url, your_query)
  File "C:\Users\Taechatuch\Desktop\LangChainLLM\ChatbotWebsite\app.py", line 62, in run_llm
    result = query_website_tool.run(url, query)
  File "C:\ProgramData\anaconda3\envs\chat_with_website\lib\site-packages\langchain_core\tools.py", line 373, in run
    raise e
  File "C:\ProgramData\anaconda3\envs\chat_with_website\lib\site-packages\langchain_core\tools.py", line 347, in run
    else self._run(*tool_args, **tool_kwargs)
@user-vr5kh4wb7m 4 months ago
Sorry for this comment, but to me it doesn't look like it's generating something new; it looks like parsing from stock footage.
@AssemblyAI 4 months ago
The model was trained on video data with watermarks so even in its generated video, we see this watermark being replicated. We highly encourage you to run it with your own prompts to test it out.
@awaisrafiquesukhera7885 11 months ago
What the hell is going on? No one notices the Shutterstock watermark 🤷‍♂️
@AssemblyAI 11 months ago
The model was trained on video data with watermarks so even in its generated video, we see this watermark being replicated. We highly encourage you to run it with your own prompts to test it out.
@awaisrafiquesukhera7885 11 months ago
@@AssemblyAI Okay 👍
@kevinehsani3358 11 months ago
I tried to experiment and gave the prompt "make image from /tmp/img.png laugh". It runs and creates a video, but the video displays "Shutterstock". Have you tried feeding it a picture as part of the text?