Speed Up Your Spark Jobs Using Caching 

Afaque Ahmad
4.1K views

Published: 11 Sep 2024

Comments: 20
@mohitupadhayay1439 · 1 month ago
Hey Afaque, great tutorials. You should consider doing a full end-to-end Spark project with a big volume of data so we can understand the challenges faced and how to tackle them. Would be really helpful!
@afaqueahmad7117 · 1 month ago
A full-fledged, in-depth project using Spark and the modern data stack is coming soon, stay tuned @mohitupadhayay1439 :)
@HimanshuGupta-xq2td · 2 months ago
Content is useful. Please make more videos 😊
@afaqueahmad7117 · 2 months ago
Appreciate it @HimanshuGupta-xq2td, thank you :)
@deepakrawat418 · 11 months ago
Great explanation, please create one end-to-end project also.
@OmairaParveen-uy7qt · 11 months ago
Explained very well! Great content!
@AtifImamAatuif · 11 months ago
Excellent content. Very helpful.
@kunalberry5776 · 1 year ago
Very informative video. Thanks for sharing!
@hritiksharma7154 · 11 months ago
Great explanation. Waiting for new videos.
@mission_possible · 11 months ago
Thanks for the videos... keep going!
@RohanKumar-mh3pt · 11 months ago
Kindly cover Apache Spark scenario-based questions also.
@ManojKumarV11 · 8 months ago
Can we persist any DataFrame irrespective of the size of the data it has? Or are there any limitations on caching DataFrames?
@anirbansom6682 · 9 months ago
If we do not explicitly unpersist, what happens to the data? Would it be cleaned up by the next GC cycle? Also, what is the best practice: explicitly unpersisting, or leaving it to GC?
@afaqueahmad7117 · 9 months ago
Hey @anirbansom6682, the data is kept in memory until the Spark application ends, the context is stopped, or the data is evicted because Spark needs to free up memory to make room for other data. It may also be evicted during the next GC cycle, but that process is somewhat uncertain, as it depends entirely on Spark's own memory-management policies and the JVM's garbage-collection process. Leaving it to GC is a passive approach over which you have less control, and it is much more of a black box unless you're well aware of its policies. The best practice, however, is to explicitly unpersist datasets when they're no longer needed. This gives you more control over your application's memory usage and can help prevent memory issues in long-running Spark applications where different datasets are cached over time.
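To illustrate the pattern in this reply, here's a minimal PySpark sketch of explicit unpersisting; the input path and column names are hypothetical, not from the video:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-unpersist-demo").getOrCreate()

# Hypothetical input; substitute your own dataset.
orders = spark.read.parquet("/data/orders")

# Cache the DataFrame because several actions will reuse it below.
filtered = orders.filter(orders["amount"] > 100).cache()

filtered.count()                            # First action materializes the cache.
filtered.groupBy("country").count().show()  # Second action reads from the cache.

# Explicitly release the cached blocks once they're no longer needed,
# rather than waiting for Spark's eviction or the JVM's GC.
filtered.unpersist()
```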
@gananjikumar5715 · 11 months ago
Thanks for sharing. Small query: should we decide to cache based on the number of transformations being done on a DataFrame, or based on how many actions we perform on / uses we make of that DataFrame?
@afaqueahmad7117 · 11 months ago
Thanks @gananjikumar5715. Transformations are accumulated until an action is called, so it should be based on the number of actions: if you're performing several actions, it's better to cache the DataFrame first, otherwise Spark will re-run the DAG for every new action.
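A short sketch of this point, again with hypothetical paths and column names: without caching, each action re-runs the full lineage; caching materializes it once.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("actions-vs-caching").getOrCreate()

# Hypothetical dataset and columns, for illustration only.
events = spark.read.json("/data/events")

# Transformations are lazy: nothing executes yet.
enriched = (events
            .filter(F.col("status") == "ok")
            .withColumn("day", F.to_date("timestamp")))

# Without caching, each action below re-runs the whole lineage
# (read -> filter -> withColumn) from scratch.
enriched.count()
enriched.select("day").distinct().count()

# Cache first, so the lineage runs once and later actions hit the cache.
enriched.cache()
enriched.count()                           # Materializes the cache.
enriched.select("day").distinct().count()  # Served from cached blocks.
```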
@reyazahmed4855 · 11 months ago
Nice video. By the way, what device do you use to write on the screen for teaching, bro?
@afaqueahmad7117 · 11 months ago
Thanks @reyazahmed4855, I use an iPad.
@narutomaverick · 24 days ago
Want to understand it better? Read this summary:
1. **Why Use Caching?**
   - Caching can significantly improve performance by reusing persisted data instead of recomputing it
   - It helps avoid redundant computation on the same dataset across multiple actions
2. **Lazy Evaluation and Caching**
   - Apache Spark uses lazy evaluation: transformations are not executed until an action is triggered
   - Caching helps by materializing the result of a long sequence of transformations, avoiding recomputation
3. **Spark's Lineage Graph**
   - Spark tracks the lineage of transformations in a lineage graph
   - Caching lets Spark skip recomputing that lineage, shortening the effective graph and improving performance
4. **Caching vs. No Caching**
   - The demo shows a significant performance improvement when caching is used, as seen in the Spark UI
5. **Persist and Storage Levels**
   - The `persist()` method is used for caching, with different storage levels available
   - Storage levels like `MEMORY_ONLY`, `DISK_ONLY`, and their combinations control memory/disk usage and replication
   - Choose the appropriate storage level based on your requirements and cluster resources (see the sketch below this thread)
6. **When to Cache?**
   - Cache datasets that are reused multiple times, especially after a long sequence of transformations
   - Cache intermediate datasets that are expensive to recompute
   - Be mindful of cluster resources and cache judiciously
7. **Unpersist**
   - Use `unpersist()` to remove cached data and free up resources when no longer needed
   - Spark may automatically evict cached data if memory is needed
If you liked it, upvote it.
@afaqueahmad7117 · 3 days ago
Good summary :)
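To make the storage-level part of the summary concrete, here's a minimal PySpark sketch; the path and the level choices are illustrative, not the video's exact code:

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("persist-storage-levels").getOrCreate()

# Hypothetical input, for illustration only.
df = spark.read.parquet("/data/transactions")

# persist() takes an explicit storage level; for DataFrames, cache()
# defaults to a memory-and-disk level.
df.persist(StorageLevel.MEMORY_AND_DISK)

# Other levels trade memory for CPU or disk:
#   StorageLevel.MEMORY_ONLY   - fastest; partitions that don't fit are recomputed
#   StorageLevel.DISK_ONLY     - saves memory at the cost of disk I/O
#   StorageLevel.MEMORY_ONLY_2 - replicates each cached partition on two nodes

df.count()      # An action materializes the persisted data.
df.unpersist()  # Free the storage once the data is no longer needed.
```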
Up next
Why Data Skew Will Ruin Your Spark Performance (12:36)
How Salting Can Reduce Data Skew By 99% (28:55, 8K views)
How to Read Spark DAGs | Rock the JVM (21:12, 23K views)
The TRUTH About High Performance Data Partitioning (22:18)
Master Reading Spark Query Plans (39:19, 30K views)
Master Reading Spark DAGs (34:14, 15K views)
Shuffle Partition Spark Optimization: 10x Faster! (19:03)