
26 Spark SQL, Hints, Spark Catalog and Metastore | Hints in Spark SQL Query | SQL functions & Joins 

Ease With Data
6K subscribers
2K views

Published: 5 Oct 2024

Comments: 21
@reslleygabriel 8 months ago
Fantastic, thanks for sharing this content!
@easewithdata 8 months ago
It will become more fantastic when you share it with your network on LinkedIn and tag us... 🤩 We definitely need some exposure ☺️
@ravidborse a month ago
Thank you.. 👍
@TechnoSparkBigData 9 months ago
Thanks for creating such awesome content.
@easewithdata 9 months ago
Thanks. Please make sure to share with your network 🛜
@KishoreReddy-c3v 5 months ago
Hi sir, will these topics be enough to learn PySpark?
@easewithdata 5 months ago
Yes, all of this should be sufficient for you to get started.
@DataEngineerPratik 3 months ago
What if both tables are very small, say one is 5 MB and the other is 9 MB? Then which DataFrame is broadcast across the executors?
@easewithdata 3 months ago
In that case it doesn't matter; however, AQE always prefers to broadcast the smaller table.
@DataEngineerPratik 3 months ago
@easewithdata Thanks! I've been following you for more than a month and it's been a great learning experience. We'd like you to make an end-to-end project in PySpark.
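Not from the video, but a minimal PySpark sketch of the broadcast discussion above, assuming Spark's default 10 MB spark.sql.autoBroadcastJoinThreshold; the table names and row counts are illustrative stand-ins for the 5 MB and 9 MB tables.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()

# Hypothetical small tables; names and sizes are illustrative only.
left = spark.range(1_000_000)   # stand-in for the ~9 MB table
right = spark.range(500_000)    # stand-in for the ~5 MB table

# Without any hint, both sides fall under the default 10 MB
# spark.sql.autoBroadcastJoinThreshold, and AQE broadcasts the smaller one.
auto = left.join(right, "id")

# Explicit DataFrame hint: force Spark to broadcast a particular side.
forced = left.join(broadcast(right), "id")

# The same hint in Spark SQL syntax:
left.createOrReplaceTempView("l")
right.createOrReplaceTempView("r")
sql_hint = spark.sql("SELECT /*+ BROADCAST(r) */ * FROM l JOIN r ON l.id = r.id")

forced.explain()  # look for BroadcastHashJoin in the physical plan
```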
@TechnoSparkBigData 9 months ago
Could you please create a video on the OOM exception: how to replicate it, in what scenarios we get it, and how to avoid it?
@easewithdata 9 months ago
Hello, I understand the request, but it will not be possible to capture all issues/scenarios in YouTube sessions. I will try to create a mini-series later that covers this topic. The easiest and most common way to create an OOM exception is to configure a driver with a small memory size, then read a dataset bigger than that and collect() it for display. collect() will try to fit all the data in driver memory, which results in OOM. To fix this OOM, use take() in place of collect(). Hope this helps.
@TechnoSparkBigData 8 months ago
@easewithdata I can understand. Thanks, you are my big data guru.
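A minimal sketch of the collect() vs take() scenario described above, assuming a fresh local session; the row count and memory setting are illustrative, and spark.driver.memory only takes effect if set before the driver JVM starts (e.g. via spark-submit --driver-memory).

```python
from pyspark.sql import SparkSession

# Small driver heap on purpose; must be set before the driver JVM starts
# (e.g. spark-submit --driver-memory 512m), shown here for completeness.
spark = (SparkSession.builder
         .appName("oom-demo")
         .config("spark.driver.memory", "512m")
         .getOrCreate())

big_df = spark.range(0, 200_000_000)  # far more rows than the driver can hold

# rows = big_df.collect()  # pulls every row into driver memory -> OOM
rows = big_df.take(20)     # fetches only the first 20 rows -> safe to display
print(rows)
```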
@nikhil6210-m1b a month ago
What happens to the table we saved in storage if we use the in-memory catalog? Will the table files get deleted after the session?
@easewithdata 28 days ago
If you are working with the in-memory catalog, the metadata will be lost once the compute or cluster is restarted (the table files themselves remain in storage). This is why it is recommended to have a permanent catalog.
@nikhil6210-m1b 23 days ago
@easewithdata Thank you. This is the best content I have seen about Spark.
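A minimal sketch of the in-memory vs persistent catalog behaviour discussed above; the table name is illustrative, and spark.sql.catalogImplementation must be set when the session is created.

```python
from pyspark.sql import SparkSession

# Default without Hive support: the in-memory catalog. Table *metadata*
# lives only in this session; files written by saveAsTable stay on disk.
spark = (SparkSession.builder
         .config("spark.sql.catalogImplementation", "in-memory")
         .getOrCreate())

spark.range(10).write.mode("overwrite").saveAsTable("demo_table")
print(spark.catalog.listTables())  # demo_table is visible in this session only

# For metadata that survives restarts, enable the Hive metastore instead:
# spark = SparkSession.builder.enableHiveSupport().getOrCreate()
```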
@yaswanthtirumalasetty7449 8 months ago
Hi, where can I get the Spark session master details in local Spark? I am using local[8]; I can see only the driver using all 8 cores, but no executors after defining them on the session. I believe it could be because of the master!
@easewithdata 8 months ago
Hello, local execution only supports a single node, which is the driver. It uses threads on your machine to execute tasks in parallel. If you need more executors, you have to configure a cluster and use it as your master. Please check out the beginning of the series to understand more.
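A small sketch of the local-mode point above; the host in the commented cluster master is a placeholder, not from the video.

```python
from pyspark.sql import SparkSession

# local[8]: a single JVM acts as driver *and* executor, using 8 threads.
spark = (SparkSession.builder
         .master("local[8]")
         .appName("local-demo")
         .getOrCreate())

print(spark.sparkContext.master)              # -> local[8]
print(spark.sparkContext.defaultParallelism)  # -> 8 in this mode

# Separate executor processes require a real cluster master, e.g.:
# .master("spark://<host>:7077")   # <host> is a placeholder
```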
@TechnoSparkBigData 9 months ago
How many more videos are to come in this course?
@easewithdata 9 months ago
Three more to go before a wrap-up.