
C++ Algorithmic Complexity, Data Locality, Parallelism, Compiler Optimizations, & Some Concurrency 

CppCon

cppcon.org/
---
Algorithmic Complexity, Data Locality, Parallelism, and Compiler Optimizations, Seasoned with Some Concurrency - A Deep Dive into C++ Performance - Avi Lachmish - CppCon 2022
github.com/CppCon/CppCon2022
In C++, efficiency is usually the name of the game, so what can we do to stay ahead?
In this talk, we will focus on the selection of algorithms and data structures and analyze their effect on program performance.
We will discuss the importance of data locality, proper data structures, and choosing the stack vs. the heap for runtime efficiency, taking into consideration tradeoffs such as space complexity vs. time complexity and setup time vs. run time.
We will present benchmarks that widen our perspective on those considerations.
Concurrency and parallelism will also be added to the mixture, so that the conclusions hold for multithreaded environments as well.
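To make the data-locality point concrete, here is a minimal, illustrative sketch (not code from the talk): summing the same contiguous row-major matrix in row order versus column order. Both functions do identical arithmetic; only the memory-access pattern differs, and the row-order version is the one that walks memory sequentially and benefits from cache lines and hardware prefetching.

```cpp
#include <cstddef>
#include <vector>

// Sum a matrix stored row-major in one contiguous vector.
// Row-order traversal touches memory sequentially (cache-friendly).
long long sum_row_major(const std::vector<int>& m,
                        std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            total += m[r * cols + c];   // stride of 1 element
    return total;
}

// Column-order traversal jumps `cols` elements between accesses,
// so for large matrices almost every access can miss in cache.
long long sum_col_major(const std::vector<int>& m,
                        std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            total += m[r * cols + c];   // stride of `cols` elements
    return total;
}
```

On typical hardware the strided version can be several times slower once the matrix no longer fits in the last-level cache, which is the kind of effect the talk's benchmarks quantify.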
---
Avi Lachmish
Avi is an expert in web and networking technologies, operating systems, and software development methodologies. He has extensive experience in C++, object-oriented analysis and design, and distributed architectures.
---
Videos Filmed & Edited by Bash Films: www.BashFilms.com
RU-vid Channel Managed by Digital Medium Ltd events.digital-medium.co.uk
#cppcon #programming #cpp

Science

Published: 30 Jul 2024

Comments: 6
@assafcohen5056 · 1 year ago
Great talk!
@haiphamle3582 · 1 year ago
Nice talk! Thank you!
@potter2806 · 1 year ago
Really good talk, thanks!
@mytech6779 · 1 year ago
I find a large amount of information on cache behavior on the read side, but I would like to hear more about the behavior of a multilevel cache on the write/store side of the CPU. When is the new data available for reuse by the same core, or by a different core on the same package? Is there any need to wait for the cache line to flush to main memory before reading it from a shared cache? Is it purely in L1d, or simultaneously copied to L2/L3?
@thewarhelm · 1 year ago
It depends on the cache coherency protocol. Some protocols won't write data back to main memory unless another core/thread asks for it; this is done for performance reasons: if no other thread needs that data, there's no need to waste cycles writing it to main memory.

The same applies to intermediate caches: when data is read from main memory, it is usually copied into L2 and L1 (and L3 if present). At some point it might be evicted from L1, but because L2 is bigger it may still be present there, so the next time you read that same data (provided it hasn't been modified by another thread), it will be served from L2.

There are other behaviours as well: with a write-through cache, a modified piece of data goes straight to main memory without touching the intermediate caches, and if you need to read it back, you will have to wait for main memory.

To learn more about cache coherency protocols, this is a good starting point: en.wikipedia.org/wiki/MESI_protocol. This course also has good lectures on cache coherency: 15418.courses.cs.cmu.edu/tsinghua2017/
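A related, easy-to-demonstrate write-side effect is false sharing: two threads writing to different variables that happen to share a cache line keep invalidating each other's copy under a MESI-style protocol. Here is a minimal sketch (illustrative, not from the talk; the 64-byte line size is an assumption that holds on most x86 parts, and C++17's std::hardware_destructive_interference_size reports it portably where implemented):

```cpp
#include <atomic>
#include <thread>

// Pad each counter out to its own cache line (64 bytes assumed) so two
// threads incrementing different counters don't invalidate each other's line.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
};

// Run two threads, each hammering its own counter, and return the total.
long run_counters(PaddedCounter& a, PaddedCounter& b, int iters) {
    std::thread t1([&] {
        for (int i = 0; i < iters; ++i)
            a.value.fetch_add(1, std::memory_order_relaxed);
    });
    std::thread t2([&] {
        for (int i = 0; i < iters; ++i)
            b.value.fetch_add(1, std::memory_order_relaxed);
    });
    t1.join();
    t2.join();
    return a.value.load() + b.value.load();
}
```

Dropping the alignas(64) typically places both counters on one cache line and slows the loop down noticeably; the padded layout lets each core keep its own line in the Modified state without ping-ponging it between cores.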
@stormingbob · 1 year ago
where can i find the slides for this talk? they are not present in the github