
Even faster than DBMS_PARALLEL_EXECUTE 

SQL and Database explained!
15K subscribers
4K views

If you have a huge set of data and you need to efficiently divide it up into subsets, what is the best way to do it? Using an index probably isn't going to work because you're still scanning lots of data. Using a full scan to get each subset probably just makes the problem even worse.
DBMS_PARALLEL_EXECUTE has some usefulness, but can we go one step further? Can we find the optimal way to separate data into chunks?
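For context, the standard approach the video improves on looks roughly like this. This is a minimal sketch only: the table name, task name, chunk size, and the `processed` column are illustrative, not from the video.

```sql
-- Sketch of the standard DBMS_PARALLEL_EXECUTE workflow.
-- BIG_TABLE, 'demo_task' and the UPDATE statement are made up for illustration.
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'demo_task');

  -- Split the table into rowid-range chunks of ~100 blocks each.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'demo_task',
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => FALSE,      -- chunk_size is in blocks, not rows
    chunk_size  => 100);

  -- Run a statement once per chunk, binding each chunk's rowid range.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'demo_task',
    sql_stmt       => 'UPDATE big_table SET processed = ''Y'' ' ||
                      'WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);      -- number of concurrent job slaves

  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'demo_task');
END;
/
```

The video's point is that the chunks this produces are not necessarily aligned with where the table's data actually lives on disk.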
blog: connor-mcdonal...
twitter: / connor_mc_d
Subscribe for new tech videos every week
All other social media channels here: linktr.ee/connor
Are you serious? A free Oracle database forever ?!?!?!?! Hell yeah!!!
www.oracle.com...
Music: Night Owl (Broke For Free), Dyalla
#oracle #rowid #dbms_parallel_execute

Published: 6 Sep 2024
Comments: 13
@praveenkumar-fx5wx · 3 years ago
Great lesson, thanks!
@bzezinahapolania9086 · 2 years ago
You mention it is possible to use dbms_parallel_execute to do an ALTER INDEX REBUILD. Can you present an example of that?
@kaleycrum6350 · 3 years ago
Hi Connor! I don't understand how breaking it down by file helps. We're still doing table access by rowid range, right? Is the objective to ensure that multi-block reads are not interrupted by file breaks?
@DatabaseDude · 3 years ago
We guarantee that we won't ever have to scan a range of data that does not apply to this table. You only get multiblock read breaks for the first, smaller extents; once they hit 1 MB there will not be a break. And presumably you're only going to use this for tables of some significant size.
@kaleycrum6350 · 3 years ago
@@DatabaseDude why would we be scanning data outside the current table?
@berndeckenfels · 3 years ago
Your own list of chunks is not better than the parallel chunks; you still have multiple per file. It might only decrease the seeking for a given job, but then it has many more jobs with less predictable overall size. So I am not sure it's worth it (but the queries are neat; do they translate well to ASM and Exadata?)
@DatabaseDude · 3 years ago
The number of jobs is unrelated to the number of chunks - it is governed by the job queue parameters. It is not multiple chunks per file that we are trying to avoid; it is about guaranteeing that we won't ever have to scan a range of data that does not apply to this table.
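That guarantee comes from building the chunk list from the table's own extent map in the data dictionary. A minimal sketch of the kind of query involved (the table name is illustrative; the actual scripts are in the GitHub repo linked further down):

```sql
-- Illustrative only: list each extent of BIG_TABLE with its file and
-- block range. Chunks built from these rows can never span blocks
-- that belong to a different segment, unlike generic rowid splitting.
SELECT file_id,
       block_id                  AS start_block,
       block_id + blocks - 1     AS end_block,
       blocks
FROM   user_extents_view_or_dba_extents  -- e.g. DBA_EXTENTS (needs privileges)
WHERE  owner = USER
AND    segment_name = 'BIG_TABLE'
ORDER  BY file_id, block_id;
```

Each (file_id, block range) pair can then be turned into a rowid range, e.g. with DBMS_ROWID.ROWID_CREATE, so every chunk maps exactly onto the table's own storage.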
@berndeckenfels · 3 years ago
@@DatabaseDude Ah, I see - you mean DBMS_PARALLEL_EXECUTE does not skip over file extents which are not part of the table. That does look like an important possible improvement.
@berndeckenfels · 3 years ago
@@DatabaseDude But it produces multiple tasks per file if they have multiple non-consecutive extents (however, I guess it doesn't really matter if you access a single file in parallel or multiple; but since you explicitly mentioned that this happens with the standard method, it also happens with yours).
@laurentiuoprea06 · 3 years ago
Will this apply if I have a bigfile tablespace?
@SheetalGuptas · 3 years ago
Hi, thanks for this session. Is it possible for you to share the script used in this session?
@DatabaseDude · 3 years ago
Yes - it's here: github.com/connormcd/misc-scripts/tree/master/office-hours
@lizreen9563 · 3 years ago
Great site and scripts! I just can't find the one for this video.