
Massive DELETEs | Postgres.FM 093

PostgresTV 💙💛

[ 🇬🇧_🇺🇸 Check out the subtitles - we now edit them (ChatGPT + manual fixes)! You can also try YouTube's auto-translation of them from English to your language; try it and share it with people interested in Postgres! ]
Nikolay and Michael discuss doing massive DELETE operations in Postgres - what can go wrong, how to prevent major issues, and some ideas to minimise their impact (a batching sketch follows the links below).
Here are some links to things they mentioned:
* Article based on Nikolay’s talk, including batching implementation (translated to English) habr-com.translate.goog/en/ar...
* Our episode on WAL and checkpoint tuning postgres.fm/episodes/wal-and-...
* Egor Rogov’s book on Postgres Internals (chapter 10 on WAL) edu.postgrespro.com/postgresq...
* full_page_writes www.postgresql.org/docs/curre...
* TRUNCATE www.postgresql.org/docs/curre...
* Our episode on partitioning postgres.fm/episodes/partitio...
* Our episode on bloat postgres.fm/episodes/bloat
* Our episode on index maintenance postgres.fm/episodes/index-ma...
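The article linked above includes a full batching implementation; here is a minimal sketch of the idea the episode discusses. The table and column names (`events`, `created_at`) are hypothetical, and the batch size is just a starting point to tune.

DELETE FROM events
WHERE id IN (
    SELECT id
    FROM events
    WHERE created_at < now() - interval '1 year'
    ORDER BY id
    LIMIT 5000
);
-- Run this repeatedly (from the client or a loop) until it deletes 0 rows;
-- short transactions keep locks brief and bound WAL/replication pressure,
-- and pausing between batches lets checkpoints and autovacuum keep up.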
~~~
What did you like or not like? What should we discuss next time? Let us know in the comments, or by tweeting us on @postgresfm, @samokhvalov, and @michristofides
~~~
Postgres FM is brought to you by:
- Nikolay Samokhvalov, founder of Postgres.ai postgres.ai/
- Michael Christofides, founder of pgMustard pgmustard.com/
~~~
This is the video version. Check out postgres.fm to subscribe to the audio-only version, to see the transcript, guest profiles, and more.

Published: 18 Apr 2024

Comments: 4
@awksedgreep · 2 months ago
What you need is UUIDs across maybe 25 tables with FKs between each, no ON DELETE CASCADE, and a need to keep the data from all 25 tables elsewhere (archive schema). Getting
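One common pattern for this kind of archiving is moving rows with a data-modifying CTE, deleting and inserting in the same statement. A sketch, with hypothetical `public.orders` / `archive.orders` names; with FKs and no cascade, child tables would have to be archived first in the same fashion.

-- Move one batch of old rows from the live table into an archive schema.
WITH victims AS (
    SELECT id
    FROM public.orders
    WHERE created_at < now() - interval '2 years'
    ORDER BY id
    LIMIT 10000
), deleted AS (
    DELETE FROM public.orders o
    USING victims v
    WHERE o.id = v.id
    RETURNING o.*
)
INSERT INTO archive.orders
SELECT * FROM deleted;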
@nitish5924 · 2 months ago
What about massive UPDATEs? We recently had a use case with a Postgres database that has 250 million rows; we introduced a new date column and are facing many issues backfilling it today. It would be great if you could share your insights on how to handle such massive updates.
@NikolaySamokhvalov · 2 months ago
It's very similar - batching is very much needed. The additional complexity is index write amplification: all indexes have to be updated (unlike for DELETEs), unless it's a HOT UPDATE.
@kirkwolak6735 · 2 months ago
@NikolaySamokhvalov Excellent point on indexing adding writes. I would certainly add the column, batch some updates, and only when the updates are finished would I consider adding the index on that column. Otherwise it feels like a footgun!
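For the backfill case discussed in this thread, a minimal sketch of the batched approach, assuming a hypothetical `events` table gaining an `event_date` column derived from `created_at`:

-- Batched backfill of a new column, committing between batches so each
-- transaction stays short. Names and batch size are hypothetical.
DO $$
DECLARE
    rows_updated bigint;
BEGIN
    LOOP
        UPDATE events
        SET event_date = created_at::date
        WHERE id IN (
            SELECT id FROM events
            WHERE event_date IS NULL
            ORDER BY id
            LIMIT 10000
        );
        GET DIAGNOSTICS rows_updated = ROW_COUNT;
        EXIT WHEN rows_updated = 0;
        COMMIT;  -- transaction control in DO blocks works since Postgres 11
    END LOOP;
END $$;

A partial index such as `CREATE INDEX ON events (id) WHERE event_date IS NULL` can speed up finding each batch, though per the comment above you may prefer to defer any index on the new column itself until the backfill is done.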