
Comparing duckdb and duckplyr to tibbles, data.tables, and data.frames (CC279) 

Riffomonas Project
22K subscribers
2.5K views

Published: Oct 7, 2024

Comments: 43
@Riffomonas · 5 months ago (pinned comment)
People have been asking about arrow. Here are the benchmarks with get_arrow_single (rank 17) and get_arrow_three (rank 15) included. Code for the testing is included in the linked GitHub repository. Times are in nanoseconds (lower is faster):
1. get_msparseT_three() 129724738
2. get_msparseT_single() 128000176
3. get_tbl_three() 120202119
4. get_df_three() 83281619
5. get_dt_three() 83114421
6. get_which_three() 82046474
7. get_tbl_single() 42185986
8. get_msparseC_three() 20539934
9. get_msparseC_single() 20399386
10. get_which_single() 17729343
11. get_df_single() 17174326
12. get_dt_single() 17037181
13. get_msparseR_single() 8302705
14. get_msparseR_three() 8169640
15. get_arrow_three() 7671572
16. get_dbi_three() 5549166
17. get_arrow_single() 4361846
18. get_dbi_single() 2842756
19. get_duck_three() 2413116
20. get_duck_single() 1703386
21. get_dt_singlek() 447658
22. get_dt_threek() 428696
23. get_mfull_three() 202766
24. get_mfull_single() 137412
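The full benchmarking code lives in the linked repository; for readers who want to reproduce a comparison like this, here is a minimal sketch using microbenchmark. It assumes the repository's get_*() helper functions are already defined, and the four shown below are just a subset of the ones ranked above.

```r
library(microbenchmark)

# Each get_*() helper is assumed to come from the repository's benchmarking code
results <- microbenchmark(
  duck_single = get_duck_single(),   # duckdb/duckplyr approach
  dbi_single  = get_dbi_single(),    # DBI + SQL approach
  dt_single   = get_dt_single(),     # data.table approach
  tbl_single  = get_tbl_single(),    # tibble/dplyr approach
  times = 100
)

# Median time per approach, fastest first
s <- summary(results)
s[order(s$median), c("expr", "median")]
```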
@thespaniardinme · 5 months ago
One of those much awaited videos. Thank you, sir!
@Riffomonas · 5 months ago
My pleasure - thanks for tuning in!
@RubenMejiaCorleto · 5 months ago
Excellent work, thank you for taking my suggestion about duckdb into account.
@Riffomonas · 5 months ago
Absolutely! Thanks for the suggestion 🤓
@mabenba · 5 months ago
Great episode! I am starting to learn more about DuckDB as it seems a really useful tool, mostly used with dbt and large datasets.
@Riffomonas · 5 months ago
Yeah, it seems pretty awesome. As I understand it, they keep making it more performant. It seems like a great tool.
@ColinDdd · 5 months ago
Great video. Benchmarking is such a powerful tool. Of course people can game the benchmarks, but they go to show that you shouldn't get too attached to one particular tech, because everything can change once a new system shows better performance!
@Riffomonas · 5 months ago
Absolutely. Hopefully my recent benchmarking has shown that there are a lot of factors that can impact performance. It's really important to try to be clear about the assumptions that go into the test.
@vlemvlemvlem3659 · 5 months ago
It's between you, good sir, and Josiah Parry for the King of R content on YouTube. I love your stuff.
@Riffomonas · 5 months ago
Thanks a bunch!
@bulletkip · 5 months ago
Thank you sir! Your channel continues to be an excellent resource. Much appreciated
@Riffomonas · 5 months ago
My pleasure - thanks!
@rayflyers · 5 months ago
I learned about duckdb at posit::conf last year. It seems like a good tool, but I primarily use arrow when I need speed (for larger data) and DBI and dbplyr when I need to work with a database.
@Riffomonas · 5 months ago
Thanks for watching. Check out the pinned comment (be sure to expand it to see the whole thing) where I added arrow to the comparison. For this test, it is actually slower than duckdb!
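For comparison with the duckdb workflow, here is a minimal sketch of the lazy arrow + dplyr pattern mentioned in this thread. The "counts/" directory of Parquet files and the sample column name are hypothetical stand-ins.

```r
library(arrow)
library(dplyr)

# Point arrow at a (hypothetical) directory of Parquet files; nothing is loaded yet
ds <- open_dataset("counts/")

# dplyr verbs are translated to Arrow compute; only matching rows reach R
ds |>
  filter(sample == "sample_1") |>
  collect()
```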
@leonelemiliolereboursnadal6966 · 5 months ago
Great to see more of your videos!!!!
@Riffomonas · 5 months ago
Thanks!
@haraldurkarlsson1147 · 5 months ago
Pat, nice to see a video on DuckDB! I have been playing with the arrow package (another space-saving approach) but it recently stopped working on my Mac (M1). It is another package worth considering.
@Riffomonas · 5 months ago
Thanks for tuning in! Not sure why arrow wouldn't work on an M1. That's what I have and was able to get it to work. Check out the pinned comment (be sure to expand it to see the whole thing) where I added arrow to the comparison. For this test, it is actually slower than duckdb!
@haraldurkarlsson1147 · 5 months ago
@Riffomonas Pat, when I run arrow_info() I get FALSE on every item except the first (acero). I just updated R and RStudio, but that did not fix the issue.
@haraldurkarlsson1147 · 5 months ago
P.S. The instructions on the Arrow website are of little help to me.
@haraldurkarlsson1147 · 5 months ago
I am running arrow inside a Quarto book by the way. But it used to work there.
@haraldurkarlsson1147 · 5 months ago
I get this warning: "This build of the arrow package does not support Datasets," even after updating R and RStudio and re-installing arrow.
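That warning typically means arrow was installed as a minimal build without the optional features compiled in. A hedged sketch of the usual fix is to reinstall a full-featured build; the environment variables and arguments below follow the arrow installation vignette and may differ between arrow versions.

```r
# Request a full-featured libarrow build before reinstalling
Sys.setenv(NOT_CRAN = "true", LIBARROW_MINIMAL = "false")

install.packages("arrow")
# or use arrow's own helper (argument names may vary by version):
# arrow::install_arrow(minimal = FALSE)

# The capabilities section should now report TRUE for datasets, parquet, etc.
arrow::arrow_info()
```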
@spacelem · 5 months ago
That is remarkably satisfying, watching all of those benchmarks jostle for supremacy! I think one question that might still be good to examine (although I really don't know how you'd do it), given that your initial problem was that your data was too big to fit in memory, is how memory-efficient each of these methods is. "Slow but fits in memory" might beat "fast but my machine can't handle it".
@Riffomonas · 5 months ago
Great point! I'll try to follow up on this once I get to the real data
@mmcharchuta · 5 months ago
Exciting!
@Riffomonas · 5 months ago
Thanks for watching!
@joshstat8114 · 1 month ago
Thank you for showing the benchmark of their performance (I still recommend the `bench` package, though). How about `tidypolars` (the R package, not Python's polars)?
@Riffomonas · 1 month ago
I'll have to check out the tidypolars package; it was a new one to me. Thanks for watching!
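For reference, bench::mark() is used much like microbenchmark but also reports memory allocations, which speaks to the memory-efficiency question raised above. A minimal sketch, again assuming the repository's get_*() helpers exist:

```r
library(bench)

comparison <- bench::mark(
  duckdb     = get_duck_single(),
  data.table = get_dt_single(),
  tibble     = get_tbl_single(),
  check = FALSE,   # the helpers may return differently shaped objects
  iterations = 100
)

# Median time and memory allocated per approach
comparison[, c("expression", "median", "mem_alloc")]
```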
@haraldurkarlsson1147 · 5 months ago
Very nice and thought-provoking. My understanding of DuckDB is that it is basically a way to work with large datasets by storing them locally, thus not eating up RAM and slowing things down (the larger-than-memory selling point of DuckDB), and only loading in what you need, not the entire dataset. So maybe asking about speed compared to a matrix approach may be a bit of an apples-vs-oranges deal?
@Riffomonas · 5 months ago
Still learning about duckdb. It's an option for my project, so comparing it to any other possible option seems relevant to me.
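To illustrate the larger-than-memory point, here is a minimal sketch of the lazy DBI + duckdb + dplyr pattern. The on-disk database "counts.duckdb", its "samples" table, and the sample_id column are hypothetical; dbplyr must also be installed for tbl() to work on a database connection.

```r
library(DBI)
library(duckdb)
library(dplyr)

# Connect to an on-disk DuckDB database (hypothetical file name)
con <- dbConnect(duckdb(), dbdir = "counts.duckdb")

# tbl() builds a lazy reference; nothing is read into RAM yet
counts <- tbl(con, "samples")

# The filter is pushed down to DuckDB; only the matching rows come back to R
small <- counts |>
  filter(sample_id == "sample_1") |>
  collect()

dbDisconnect(con, shutdown = TRUE)
```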
@haraldurkarlsson1147 · 5 months ago
@Riffomonas Climate scientists have been using NetCDF files for decades. Those are supposed to be very memory efficient. Is that an option for you? I do realize that eventually you have to pick something and move on.
@mishmohd · 5 months ago
He understood the assignment.
@Riffomonas · 5 months ago
Thanks for tuning in!
@victorcat1377 · 5 months ago
Hello! Thanks a lot for your clarity and these useful tutorials! When I have large data to process, I sometimes try to parallelize my scripts with packages such as doParallel in R. Any thoughts on that?
@Riffomonas · 5 months ago
I have used the future and furrr packages in the past. These are great to make it easy to work with parallelization when trying to speed things up. Thanks for watching!
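A minimal sketch of the future/furrr pattern mentioned here; the worker count and the slow_summary() helper are arbitrary placeholders.

```r
library(future)
library(furrr)

# Run work in multiple background R sessions; workers = 4 is an arbitrary choice
plan(multisession, workers = 4)

# Stand-in for whatever per-sample computation is actually needed
slow_summary <- function(x) {
  Sys.sleep(0.1)
  mean(x)
}

inputs <- replicate(20, rnorm(1e4), simplify = FALSE)

# Drop-in parallel replacement for purrr::map()
results <- future_map(inputs, slow_summary)

plan(sequential)  # return to single-process execution
```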
@sven9r · 5 months ago
Hey Pat, great video! I see you scrolling a lot - wouldn't breaking the script into sections help, since your code is getting soooooo long (and the comments still suggest more benchmarking :P)?
@Riffomonas · 5 months ago
Yeah, well, I'd have to remember to do that then! 🤓 FWIW, we're done with benchmarking for a bit
@djangoworldwide7925 · 5 months ago
I decided not to go with duckplyr since the print output is a bit annoying. I couldn't see enough rows because of all the extra info there... how do you silence this?
@Riffomonas · 5 months ago
You can suppress the output with the duckdb.materialize_message option (see rdrr.io/github/duckdblabs/duckplyr/man/config.html for examples).
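A minimal sketch of how that option can be set, assuming the option name on the linked config page and the as_duckplyr_df() constructor from the duckplyr version current when this video was made (newer releases rename parts of this API).

```r
library(dplyr)
library(duckplyr)

# Silence duckplyr's "materializing" messages; option name taken from the
# config page linked above and may change between duckplyr versions
options(duckdb.materialize_message = FALSE)

mtcars |>
  as_duckplyr_df() |>
  filter(cyl == 6) |>
  summarise(mean_mpg = mean(mpg))
```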
@michaelmanti · 3 months ago
Comparing keyed data.tables to non-keyed, non-indexed duckdb tables seems unfair, since duckdb does support keys and indices. Have you tested keyed and/or indexed tables in duckdb? If I'm not mistaken, the duckdb un-keyed versions outperformed the data.table un-keyed versions?
@Riffomonas · 3 months ago
Thanks for watching! I'm not able to find duckdb/duckplyr documentation on setting keys. Can you point it out to me? But you are correct that dt without keys is slower than duckdb. I did this in the current (and previous) episodes. The get_dt_threek function is keyed and took 421k ns, get_dt_three (not keyed) took 104941k ns, and get_duck_three took 2474k ns.
@michaelmanti · 3 months ago
@Riffomonas I provided a direct link in an earlier comment, but YouTube appears to have dropped it. But if you search for "indexing" on the DuckDB website, you'll find that keys are "implicitly indexed" by adaptive radix trees (ARTs). I expect that keying the duckdb table will improve performance on your query benchmarks, but I'd be interested in learning how much.
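A hedged sketch of what an explicitly indexed duckdb table could look like through DBI; the table and column names are hypothetical stand-ins for the video's toy data. A PRIMARY KEY constraint would be indexed implicitly in the same ART-based way as the explicit index shown here.

```r
library(DBI)
library(duckdb)

con <- dbConnect(duckdb())

# Hypothetical long-format count table
dbExecute(con, "
  CREATE TABLE counts (
    sample VARCHAR,
    otu    VARCHAR,
    n      INTEGER
  )
")

# Explicit ART index on the lookup column
dbExecute(con, "CREATE INDEX counts_sample_idx ON counts (sample)")

# Point lookups like this are the kind of query the index should speed up
dbGetQuery(con, "SELECT * FROM counts WHERE sample = 'sample_1'")

dbDisconnect(con, shutdown = TRUE)
```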