Build An Airflow Data Pipeline To Download Podcasts [Beginner Data Engineer Tutorial] 

Dataquest
59K subscribers · 31K views

We'll build a data pipeline that can download and store podcast episodes using Apache Airflow, a powerful and widely used data engineering tool. This is a beginner tutorial, so we'll start off by installing Airflow and covering key Airflow concepts.
Along the way, we'll learn how to create our first data pipeline (DAG) in Airflow, how to write tasks using Operators and the TaskFlow API, how to interface with databases using Hooks, and how to run the pipeline efficiently.
By the end of the tutorial, you'll have a good understanding of how to use Airflow, as well as a project that you can extend and build on. Some extensions to this project include automatically transcribing the podcasts and summarizing them.
You can find the full code for the project, along with an overview, here: github.com/dataquestio/projec...
Chapters
00:00 - Introduction
01:44 - Installing Airflow
07:17 - Creating the first task in our data pipeline with Airflow
17:11 - Using a SQL database with Airflow
25:30 - Storing data in a SQL database with Airflow
34:36 - Downloading podcast episodes with Airflow
38:17 - Looking at our complete data pipeline and next steps
---------------------------------
Join 1M+ Dataquest learners today!
Master data skills and change your life.
Sign up for free: bit.ly/3O8MDef

Published: 11 Jul 2024

Comments: 43
@manyes7577 · 2 years ago
I think you're the best data science lecturer so far. Keep going, and thanks for your hard work!
@Dataquestio · 2 years ago
Thanks :) -Vik
@lightman2130 · 2 years ago
What an amazing tutorial! Thanks a lot.
@demohub · 1 year ago
This video was a great resource. Thanks for the tutelage and your take on it.
@HieuLe-tw7qm · 1 year ago
Thank you very much for this amazing tutorial :D
@diasfetti8393 · 1 year ago
👍👍👍 Excellent tutorial. Thanks a lot.
@devanshsharma5159 · 1 year ago
Beautiful explanation and a great project to get me started! Many thanks, Vik!! One thing to add from my experience: I installed Airflow on my Mac M1 and it was working fine, but I couldn't run any of the tasks we performed here (not even the get_episodes task). To solve that, I made an EC2 instance, and with some tweaks everything ran :D
@lolujames7668 · 2 years ago
nice one @Vik
@mahmoodhossainfarsim6292 · 1 year ago
It was very useful, thank you. It would be really helpful if you covered Apache Hadoop, Spark, MLflow, Flink, Flume, Pig, Hive, etc. Thank you!
@lalumutawalli9497 · 1 year ago
Thank you for your tutorials. Could you let me know which Airflow version you use in this tutorial, so I can practice with it?
@parikshitchavan2211 · 1 year ago
Hello Vik, thanks for such a great tutorial; you made everything smooth like butter. Just one question: whenever we make a new DAG, will we have to add docker-compose-CeleryExecutor, docker-compose-LocalExecutor, and a config for that particular DAG?
@kiish8571 · 2 years ago
This is very educational, thanks a lot. I was wondering if you would be making a video on the automatic transcriptions?
@Dataquestio · 2 years ago
Yes, I will be doing a webinar for this tomorrow, and the video should be live next week on YouTube. -Vik
@dataprofessor_ · 1 year ago
Can you make more advanced Apache Airflow tutorials too?
@OBGynKenobi · 1 year ago
So where is the dependency chain where you set the actual task flow? I would have expected something like task1 >> task2, etc., at the bottom of the DAG.
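For context on this question: with Airflow's TaskFlow API, passing one task's return value into another is itself what creates the dependency edge, so an explicit `task1 >> task2` line is usually unnecessary. A minimal plain-Python sketch of the shape (illustration only; in real Airflow code these would be @task-decorated functions inside a @dag-decorated function, and the names here are hypothetical):

```python
# Plain-Python illustration of TaskFlow-style implicit dependencies.
# In Airflow, the act of passing get_episodes()'s return value into the
# downstream functions is what wires up the DAG edges.

def get_episodes():
    # Pretend this parsed the podcast feed.
    return [{"link": "https://example.com/ep1"}, {"link": "https://example.com/ep2"}]

def load_episodes(episodes):
    # Pretend this stored rows in a database.
    return len(episodes)

def download_episodes(episodes):
    # Pretend this downloaded the audio files.
    return [e["link"] for e in episodes]

def podcast_summary():
    episodes = get_episodes()           # upstream task
    load_episodes(episodes)             # depends on get_episodes via its argument
    return download_episodes(episodes)  # also depends on get_episodes

print(podcast_summary())
```

Because the data flow carries the dependency, no `>>` chain is needed at the bottom of the file.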
@user-vy9in2xs6c · 11 months ago
At 33:48, how did we get the 'Done loading. Loaded a total of 0 rows'? We haven't used this text in our code anywhere. Is this the work of hook.insert_rows?
@Funkykeyzman · 2 years ago
Debug tip #1: If you run into the error "conn_id isn't defined", use the Airflow browser interface to create the connection instead: select Admin --> Connections --> +.
Debug tip #2: If your Airflow runs fail, try logging out of the Airflow UI and restarting the Airflow server by pressing Ctrl + C and then running "airflow standalone".
@vish949 · 10 months ago
Whenever I run airflow standalone (or even airflow webserver), I get a ModuleNotFoundError for pwd. I'm running it on Windows; how do I solve this?
@Maheshwaripremierleague · 2 months ago
If you are facing an issue with creating the database, where your DAG runs but never completes, then put this line after importing the packages: os.environ['NO_PROXY'] = '*'. It will work then for sure.
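A sketch of this commenter's workaround, for clarity: the environment variable is set once at the top of the DAG file, before any network calls, so that Python's system proxy lookup (a known source of hangs, particularly on macOS) is bypassed for all hosts. This is the commenter's fix, not something from the video itself:

```python
# Workaround sketch: bypass system proxy detection for every host.
# Place this immediately after the imports in the DAG file.
import os

os.environ["NO_PROXY"] = "*"  # urllib/requests will skip proxy lookup entirely

print(os.environ["NO_PROXY"])
```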
@rohitpandey9920 · 1 year ago
I am stuck at 14:50 where you try to run the task in Airflow. You simply switched the screen from the PyCharm terminal to the git master terminal without any explanation, and I am unable to connect SQLite to the PyCharm terminal, nor could I establish a connection with Airflow. Please guide me through this.
@yousufm.n2515 · 1 year ago
When I change the 'dags_folder' path, everything in Airflow breaks. What could be the reason?
@bryancapulong147 · 1 year ago
My download_episodes task succeeds, but I cannot see the mp3 files.
@thangnguyen3786 · 7 months ago
Hi everyone. I have configured Airflow with Docker in a folder that includes the docker-compose YAML file. Now I want to use Airflow in another folder, so how can I do that without the YAML file? Must I configure it again in that folder?
@user-tm9ng7if3t · 8 months ago
Hello. I did everything as shown, but it fails and no logs are visible.
@investandcyclecheap4890 · 1 year ago
Really liked this tutorial. The download_episodes task is freezing on me. The task is "running", but it appears to be getting held up and has not actually downloaded the first episode for some reason.
@Dataquestio · 1 year ago
That's strange - can you access the podcast site in your browser? It may be blocking you for some reason. It's also possible that the airflow task executor isn't running properly.
@sungwonbyun5683 · 9 months ago
@Dataquestio I ran into the same issue, except on the very first task: get_episodes does nothing and eventually times out. I tested the script in a Python console and it returned the list of episodes just fine.
@sungwonbyun5683 · 9 months ago
The fix for me was to start the Airflow server as the root user: "sudo airflow standalone".
@dolamuoludare4383 · 1 year ago
Please kindly help: when I write my DAG in VS Code, it doesn't show in the web UI, and I keep getting a DAGNOTFOUND error.
@youssefelfhayel7078 · 11 months ago
Add these lines to your airflow.cfg config file: min_file_process_interval = 0 and dag_dir_list_interval = 30. The DAGs will then be picked up automatically. P.S.: Make sure your DAG file is in the dags folder.
@user-vy9in2xs6c · 11 months ago
I need help. For the final task it showed "audio_path: no such file or directory", so I used os.makedirs(audio_path, exist_ok=True). The DAG was a success, but I couldn't find any files in my episodes folder. Please help.
@Maheshwaripremierleague · 2 months ago
def download_episodes(episodes):
    for episode in episodes:
        filename = f"{episode['link'].split('/')[-1]}.mp3"
        audio_path = os.path.join("episodes", filename)
        if not os.path.exists("episodes"):
            os.makedirs("episodes")
        if not os.path.exists(audio_path):
            print(f"Downloading (unknown)")
            audio = requests.get(episode["enclosure"]["@url"])
            with open(audio_path, "wb+") as f:
                f.write(audio.content)
@Maheshwaripremierleague · 2 months ago
It will create the episodes folder if it has not been created yet.
@Maheshwaripremierleague · 2 months ago
It happens because your Airflow might not be pointing at the correct folder, so it creates the folder somewhere else, wherever it is pointing. You can then search for the folder to find out where it is.
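To make the point above concrete: a relative path like "episodes" resolves against the current working directory of whichever process runs the task, which for Airflow is usually not the folder containing the DAG file. One common remedy is to anchor the path to the DAG file's own location. A hedged sketch (the helper name is hypothetical, not from the video):

```python
# Sketch: build an absolute episodes path anchored to this file's directory,
# so the folder lands next to the DAG file rather than in the scheduler's cwd.
import os

def episode_dir():
    # Folder of this file; fall back to the cwd when __file__ is unavailable
    # (e.g. in an interactive console).
    if "__file__" in globals():
        base = os.path.dirname(os.path.abspath(__file__))
    else:
        base = os.getcwd()
    path = os.path.join(base, "episodes")
    os.makedirs(path, exist_ok=True)  # create the folder if it is missing
    return path

print(episode_dir())
```

Printing the resolved path from inside the task (visible in the task logs) is also a quick way to discover where the files actually went.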
@DayOneCricket · 6 months ago
Your first bit on setting the environment variable didn't work.
@omarhossam285 · 1 year ago
How did you change your terminal to git:(master)?
@Dataquestio · 1 year ago
I use a terminal called zsh. There is a plugin for zsh that can show you your git branch.
@omarhossam285 · 1 year ago
@Dataquestio thx man
@aminatlawal21 · 1 year ago
How did he get the web page metadata in XML?
@HJesse88 · 1 year ago
Look at the link in the video, type that link into a Chromium browser, and it should appear. Voila.
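Some background on this exchange: a podcast feed is plain RSS, which is XML, so opening the feed URL in a browser shows the episode metadata directly. The tutorial fetches the feed with requests and parses it with xmltodict; here is a stdlib-only sketch of the same idea, using a tiny inline feed instead of the real URL (the snippet and field names are illustrative, not the show's actual feed):

```python
# Parse a minimal RSS feed and pull out per-episode metadata.
import xml.etree.ElementTree as ET

FEED = """<rss><channel>
  <item><title>Episode 1</title><link>https://example.com/ep1</link></item>
  <item><title>Episode 2</title><link>https://example.com/ep2</link></item>
</channel></rss>"""

def parse_episodes(xml_text):
    root = ET.fromstring(xml_text)
    # Each <item> element in an RSS feed is one episode.
    return [{"title": item.findtext("title"), "link": item.findtext("link")}
            for item in root.iter("item")]

episodes = parse_episodes(FEED)
print(len(episodes))  # number of episodes found in the feed
```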
@rohitpandey9920 · 1 year ago
@HJesse88 It didn't appear for me.
@parkuuu · 2 years ago
Awesome tutorial! I just had some confusion about the transform and loading function, particularly this code: stored = hook.get_pandas_df('SELECT * FROM episodes;'). I thought you were querying the episode list that was returned from the extract function, but then realized it was the same as the table name in the database lol.
@Dataquestio · 2 years ago
Hi Park - that code is selecting from the sqlite3 database that we create. It's making sure the podcast hasn't been stored in the database yet (if it has, we don't need to store it again).
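The dedupe logic Vik describes (and the "Loaded a total of 0 rows" message asked about earlier in the thread) can be sketched with plain sqlite3. The tutorial itself uses Airflow's SqliteHook, so this is an assumption-laden stand-in for the same idea: query the links already stored, insert only episodes whose link isn't there yet, and report how many were new. On a second run with the same feed, zero rows are loaded:

```python
# Sketch of the load step's dedupe logic with plain sqlite3
# (the tutorial uses SqliteHook; the table and field names here mirror it).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE episodes (link TEXT PRIMARY KEY, title TEXT)")

def load_episodes(episodes):
    # Links already in the database.
    stored = {row[0] for row in conn.execute("SELECT link FROM episodes")}
    # Keep only episodes we have not seen before.
    new = [(e["link"], e["title"]) for e in episodes if e["link"] not in stored]
    conn.executemany("INSERT INTO episodes VALUES (?, ?)", new)
    conn.commit()
    print(f"Done loading. Loaded a total of {len(new)} rows")
    return len(new)

feed = [{"link": "ep1", "title": "One"}, {"link": "ep2", "title": "Two"}]
load_episodes(feed)  # first run: both episodes are new
load_episodes(feed)  # second run: nothing new, so 0 rows are loaded
```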