Introduction to Parallel Programming in OpenMP
Locks
23:42
7 years ago
Advanced Task handling
15:13
7 years ago
Matrix Multiplication using tasks
5:03
7 years ago
Parallel LU Factorization
14:13
7 years ago
Understanding LU Factorization
15:18
7 years ago
Assignment 1 Solutions
20:23
7 years ago
Recursive task spawning and pitfalls
11:09
7 years ago
Accessing variables in tasks
11:13
7 years ago
Task queues and task execution
10:54
7 years ago
Introduction to tasks
7:08
7 years ago
Race Conditions
6:15
7 years ago
OpenMP: About OpenMP
5:14
7 years ago
OpenMP: Basic thread functions
9:53
7 years ago
Context Switching
12:12
7 years ago
Program with Single thread
13:57
7 years ago
Comments
@KendriyaVidyalaya-e4z 21 days ago
Great job explaining the concepts clearly!
@datasmith4294 10 months ago
Turn down the decibels, please.
@alsgzika6977 1 year ago
Amazing, thanks!
@tmp544 1 year ago
So what is the solution if the sequential consistency model imposes so many constraints on the compiler? Will we just live with the decrease in performance?
@rajeevgupta4058 1 year ago
What about the cache data that is needed by a certain task? When a task is free to be executed at any time, the caches might not have the required data.
@samaelcode4094 1 year ago
Sir, my question is: can we avoid the for loop and call omp_get_num_threads() just once? I was trying to do it, but I failed. Or is it explained in upcoming videos?
@saicharan4669 1 year ago
Thank you sir, the video was very informative.
@andrejrockshox 1 year ago
Which language is this?
@hampazu 1 year ago
Very helpful 👍
@yuqingcui258 1 year ago
Really helpful!!! Thanks a lot.
@dexashish 1 year ago
What is the intro music? It's so addictive!
@zhaonanmeng7625 2 years ago
Impressive video!
@chandanhegde5183 2 years ago
Nice lecture!
@sharminsultana4643 2 years ago
Thank you sir.
@shahoodamir3829 2 years ago
Please share the source code.
@SumriseHD 2 months ago
#include <omp.h>
#include <stdio.h>

#define ARR_SIZE 600
#define STEP_SIZE 100

int main() {
    /* Computing array sum using tasks */
    int i;
    int sum = 0;
    int a[ARR_SIZE];  /* shared, declared before the parallel region */
    for (i = 0; i < ARR_SIZE; i++)
        a[i] = 1;     /* initialize so the sum is well-defined */

    #pragma omp parallel
    {
        #pragma omp for
        for (i = 0; i < ARR_SIZE; i += STEP_SIZE) {
            int j, start = i, end = i + STEP_SIZE - 1;
            printf("Computing Sum(%d,%d) in thread %d of %d\n",
                   start, end, omp_get_thread_num(), omp_get_num_threads());
            #pragma omp task
            {
                int psum = 0;
                printf("Task computing Sum(%d,%d) in thread %d of %d\n",
                       start, end, omp_get_thread_num(), omp_get_num_threads());
                for (j = start; j <= end; j++)
                    psum += a[j];
                #pragma omp critical
                sum += psum;
            }
            printf("Sum=%d\n", sum);
        }
    }
}
@inglesjest 2 years ago
Does this YouTube channel cover all the CSE subjects for the GATE exam?
@techhub9314 2 years ago
Thanks Sir.
@harishsubramaniangopal6463 2 years ago
Superb!!
@atulavhad8198 2 years ago
Where do we find the slides for the course?
@samiksharamteke2855 2 years ago
I thought it would be difficult, but it is damn easy.
@shivammehta9661 2 years ago
I have a doubt: dot is a shared variable, so I don't think the program will produce the required output, as there will be a race condition involved. If somebody could clear this up, that would be great.
@gunjankumar1943 2 years ago
Yeah. reduction(+:dot) should be added as a clause for the code to work.
@aarav3184 2 years ago
Explained well.
@whynesspower 2 years ago
Same thumbnail colors represent same-week lectures.
@Abhraneil_Bhattacharya 2 years ago
Yes.
@tarunpahuja3443 2 years ago
This is how ConcurrentHashMap is implemented in Java, by having a lock per segment.
@tarunpahuja3443 2 years ago
Amazing lecture series. If you could provide more details on cache coherence, it would be the cherry on top.
@tarunpahuja3443 2 years ago
And thanks to all the students for asking good questions. That answers many of my questions too.
@tarunpahuja3443 2 years ago
How do we have a different number of threads across different parallel regions? If I have 4 threads in the first parallel region and 3 threads in the second, which one is going to be killed (the first or the fourth)?
@tarunpahuja3443 2 years ago
Didn't get why it took 100 ns for the first 4 bytes but just 5 ns for the next 4 bytes. Shouldn't it be the same for all bytes?
@hrs7305 2 years ago
You are pipelining the data transfer through the bus, so essentially every 4-byte memory transfer took 100 ns, but the 2nd transfer completes 5 ns after the 1st, and so on.
@Aditya-ot7ib 2 years ago
Wow, this is amazing.
@Aditya-ot7ib 2 years ago
Again, great lecture!
@Aditya-ot7ib 2 years ago
Excellent lecture series. If you have knowledge of OS and Computer Architecture, then learning parallel programming is very interesting.
@wandersongomes8405 2 years ago
Incredible content. I don't understand the language, but the code is really great.
@abelashenafi6291 2 years ago
One of the most interesting lectures I've seen in a while. I can only imagine how lucky your students are. Kudos!!
@chchand1092 2 years ago
Really awesome.
@ahmadshahba3340 3 years ago
Just a word of caution! #pragma omp master does not have an implied barrier either on entry to, or exit from, the master construct.
@blueninja42069 3 years ago
Thank you for the subtitles!
@bendoe408 3 years ago
Very good lecture. Thank you very much.
@mohdzain1741 3 years ago
What is a branch instruction?
@tarunpahuja3443 2 years ago
Branch instructions implement constructs like if statements and for/while loops. Think of it as if (flag1) { flag2 = true; }. The statement "flag2 = true" depends on the result of the flag1 check. If "flag2 = true" is already in the pipeline and flag1 turns out to be false, it will be thrown away, which basically means wasting the CPU cycles spent on processing it.
@Shelly-kx2wz 3 years ago
Really helpful explanation.
@Bardhan02 3 years ago
You are the best teacher, sir. Please make some more videos on computer organization and architecture.
@smwikipediasmwikipedia5762 3 years ago
Great intro, thanks!
@smwikipediasmwikipedia5762 3 years ago
@Micheal Kleyman You look like an ad bot.
@smwikipediasmwikipedia5762 3 years ago
@Aaron Ronald And you are even worse.
@irocx8745 3 years ago
Great explanation sir 💯
@aniketmane1742 3 years ago
Thank you sir, this really helps me a lot.
@ramkrishansharma9821 3 years ago
Awesome way of teaching, thanks a lot!!
@ritikpratapsingh9128 3 years ago
The problem is with recursion. How brilliant that student must be! I thought of all three problems that might occur, but recursion was nowhere in my mind.
@harinijeyaraman8789 3 years ago
Isn't the bisection bandwidth of a 2D torus sqrt(n)?
@harinijeyaraman8789 3 years ago
This was so helpful!
@cagr21 4 years ago
#include <omp.h>
#include <stdio.h>
#include <unistd.h>

int tid;
#pragma omp threadprivate(tid)  /* each thread keeps its own tid across regions */

int main(int argc, char *argv[]) {
    // omp_set_dynamic(0);
    omp_set_num_threads(8);
    int numt = 0;
    #pragma omp parallel default(shared)
    {
        tid = omp_get_thread_num();
        if (tid == 0) {
            sleep(1);
            numt = omp_get_num_threads();
        }
    }
    #pragma omp parallel default(shared)
    printf("Hello World from thread %d of %d\n", tid, numt);
    return 0;
}