
Advancing Spark - Give your Delta Lake a boost with Z-Ordering 

Advancing Analytics
32K subscribers · 27K views

One of the big features of Delta Lake on Databricks (over the open source Delta Lake at Delta.io) is the Optimize command, and with it the ability to Z-Order your data... but what does that actually do? Why would you use it?
In this week's AdvancingSpark, Simon takes us through Z-Ordering, what it is and how you can enjoy the benefits of Data Skipping!
For more info, and the Databricks demo notebook used in the video, see: docs.databricks.com/delta/opt...
As always, for more blogs, insights and to get in touch for consultancy & training, come and visit us at: www.advancinganalytics.co.uk/
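For context, the "Z" in Z-Ordering refers to the Z-order (Morton) curve: a way of mapping several columns onto a single sort dimension by interleaving the bits of each column's value, so rows that are close on *both* columns end up close together in the files. A minimal, hypothetical sketch in plain Python (an illustration of the curve itself, not Databricks' actual implementation):

```python
def z_order_key(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y into a Morton (Z-order) key.

    Sorting rows by this key keeps them clustered on BOTH columns at
    once, which is what lets Delta's per-file min/max statistics skip
    files for predicates on either column.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # even bit positions <- x
        key |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions  <- y
    return key

# Sorting a small 4x4 grid by its Morton key traces the
# characteristic "Z" pattern through the two-dimensional space.
points = [(x, y) for x in range(4) for y in range(4)]
points.sort(key=lambda p: z_order_key(p[0], p[1]))
```

Contrast this with a plain linear sort on one column: that clusters perfectly on the sorted column but scatters the other, while the Z-order key trades a little locality on each column for useful locality on all of them.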

Published: 6 Jul 2024

Comments: 23
@dheerajmuttreja
@dheerajmuttreja 9 months ago
Hey Simon, great explanation with proper use and demo.
@katetuzov9745
@katetuzov9745 1 year ago
Brilliant explanation, well done!
@nadezhdapandelieva3387
@nadezhdapandelieva3387 11 months ago
Hi Simon, I like your videos, they are super useful. Could you make some videos on how to optimise jobs and reduce run time, or how to investigate when a job needs optimisation?
@kingslyroche
@kingslyroche 3 years ago
good explanation! thanks.
@nayeemuddinmoinuddin2186
@nayeemuddinmoinuddin2186 2 years ago
Hi Simon - thanks for this awesome video. One quick question: do Optimize and Z-Order disturb the checkpoint in the case of Structured Streaming?
@nsrchndshkh
@nsrchndshkh 3 years ago
Thank you very much
@devanssshhh
@devanssshhh 2 years ago
Hey, thanks, it's a great video.
@DebayanKar7
@DebayanKar7 1 year ago
Awesome-ly Explained !!!!
@vt1454
@vt1454 1 year ago
Great videos, Simon. One suggestion on the background ribbons of the slides: the ribbons on your slide templates keep moving and are a bit uncomfortable on the eyes. It would be great if these could be static.
@the.activist.nightingale
@the.activist.nightingale 3 years ago
Simon is back!!!! Thank you for this awesome video :) Could you make one explaining how to profile a Spark script in order to identify tuning opportunities? I always go to the Spark UI but I'm completely lost. I know one thing for sure: too much swapping between nodes is bad news :)!
@AdvancingAnalytics
@AdvancingAnalytics 3 years ago
Oooh, ok, so a quick tour of the Spark UI and "some things to look out for" when diagnosing Spark performance problems? I'll add it to the list - need to think about what the top ones would be or it'll be two hours long! Simon
@the.activist.nightingale
@the.activist.nightingale 3 years ago
Advancing Analytics You’re the real MVP Simon! TY!!
@sarmachavali7676
@sarmachavali7676 2 years ago
Hi Simon, nice video and very useful. I have a quick question: we are replicating a huge amount of data from an MSSQL data warehouse to Delta Lake using DLT (including CDC changes) in continuous mode. As part of that, I have specified my Z-Order columns to be the same as the primary key; does this increase the performance of the merge operation (in the apply statement) or not? How can I check these performance metrics?
@vishalaaa1
@vishalaaa1 1 year ago
excellent
@ipshi1234
@ipshi1234 3 years ago
Thanks Simon for the great video! I'm curious: if I had to achieve Z-Ordering in Delta Lake on Synapse, how would I do it, given the Optimize command is only available on the Databricks Runtime? Thank you :)
@AdvancingAnalytics
@AdvancingAnalytics 3 years ago
Hey! On the file optimisation level, you could maybe achieve something similar using bucketing - but you wouldn't get the same data skipping benefits. Probably easier to just spin up a Databricks cluster over the same data and use that for maintenance jobs (again, Synapse wouldn't do the data skipping part, but your files would be arranged properly). For the indexing/query performance side - Microsoft have been building "Hyperspace", which is an indexing system separate from Delta. This might be the answer for where you can't optimize tables... but it's a very early product, I've not had a go at using it yet! Simon
@dmitryanoshin8004
@dmitryanoshin8004 3 years ago
Can I partition by date and Z-Order by event name? Or should partition and Z-Order be the same columns?
@PersonOfBook
@PersonOfBook 3 years ago
Can you use both partition by and zorder by, on the same column or different columns? And if so, would it be beneficial? Also, why do you enclose spark.read with brackets?
@AdvancingAnalytics
@AdvancingAnalytics 3 years ago
Hey - so you /can/ z-order by a column you've partitioned on, but it'll give no benefit, as your data is already sorted into those values by the partitioning! And brackets around the spark statement mean you can span multiple lines without needing a line escape '\' for every line!
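On the brackets point: this is plain Python syntax rather than anything Spark-specific. Any expression inside parentheses can span multiple lines without a trailing backslash, which is why the `(spark.read...)` pattern reads so cleanly. A generic illustration (using a string chain as a stand-in for a Spark reader chain):

```python
# Without parentheses, each continued line needs a trailing backslash:
total = 1 + \
    2 + \
    3

# Inside parentheses, Python allows implicit line continuation,
# so a method chain can be split with one call per line -
# the same trick the (spark.read ... .load()) pattern relies on:
result = (
    "  Advancing Spark  "
    .strip()
    .lower()
    .replace(" ", "-")
)
```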
@cchalc-db
@cchalc-db 3 years ago
Can you share the NYTaxi notebook?
@preethi7674
@preethi7674 2 years ago
In production environments, do we have to zorder the tables weekly to improve performance?
@workwithdata6659
@workwithdata6659 5 months ago
Yes, you will have to Z-Order on a regular basis. And there is no guarantee that only new files will be rewritten. Running OPTIMIZE on big tables which receive a good amount of incremental data can be counterproductive.
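The degradation described above can be illustrated with the file-level min/max statistics that data skipping actually reads. A hypothetical sketch in plain Python (standing in for Delta's per-file stats, not its real metadata format): right after clustering, each file covers a narrow value range and a point lookup skips most files; freshly appended, unclustered files tend to overlap the whole range, so skipping degrades until the next OPTIMIZE.

```python
def files_to_scan(file_stats, value):
    """Count files whose [min, max] stats range could contain `value`,
    i.e. the files a reader must actually open; the rest are skipped."""
    return sum(1 for lo, hi in file_stats if lo <= value <= hi)

# After OPTIMIZE ... ZORDER BY: each file covers a tight, disjoint
# range, so a point lookup touches one file and skips the other three.
clustered = [(0, 24), (25, 49), (50, 74), (75, 99)]

# New, unclustered appends tend to span the whole value range, so the
# same lookup must open them as well - until the table is re-optimized.
appended = clustered + [(0, 99), (0, 99)]
```

A lookup for value 60 scans 1 of 4 files in the clustered layout, but 3 of 6 once the wide-ranging appends arrive, which is why regular (but not over-eager) OPTIMIZE runs matter on tables with steady incremental loads.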
@AndreasBergstedt
@AndreasBergstedt 3 years ago
1st :)