MIT 6.006 Introduction to Algorithms, Fall 2011 View the complete course: ocw.mit.edu/6-006F11 Instructor: Srini Devadas License: Creative Commons BY-NC-SA More information at ocw.mit.edu/terms More courses at ocw.mit.edu
Wow! I've never seen someone explain why we need BSTs this clearly. The approach of discussing all the data structures for the runway problem and filtering them out one by one was very nice.
01:50 motivation behind BST 08:30 shoot down known data structures 21:35 BST introduction 26:00 example BST insert 35:00 other BST operations 36:45 augmented BST example 45:30 example: compute rank(t) (how many planes are scheduled to land at a time less than or equal to t)
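The rank(t) query from the last timestamp can be sketched with a subtree-size-augmented BST. This is a minimal sketch of the idea, not the lecture's actual code; the example landing times are made up for illustration.

```python
# Size-augmented BST sketch for the runway problem:
# rank(t) counts how many scheduled times are <= t.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1  # number of nodes in the subtree rooted here

def insert(root, key):
    if root is None:
        return Node(key)
    root.size += 1  # the new key will land somewhere below this node
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def rank(root, t):
    """How many scheduled times are <= t (O(h) per query)."""
    if root is None:
        return 0
    if t < root.key:
        return rank(root.left, t)
    # this node and its entire left subtree are all <= t
    left_size = root.left.size if root.left else 0
    return left_size + 1 + rank(root.right, t)

root = None
for time in [49, 79, 46, 41, 64]:
    root = insert(root, time)
print(rank(root, 49))  # 3 -> times 41, 46, 49
```

The only extra bookkeeping over a plain BST is bumping `size` on the way down during insert, which is why the augmentation doesn't change insert's O(h) cost.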
Yep, he's doing a good job of keeping it simple at the level of the data structure, simply walking you through it using notation and helping you visualize how the operations affect the data structure and its mental-model representation :)
@@marsille0986 The lecture is extremely easy as a whole; just carry on and finish the video and you'll be able to put the pieces together. In other words, everything makes sense as a whole. While doing these MIT lectures, I often found myself blank during a 10-15 minute portion of a given lecture. But every time I bit the bullet and hung on, I was able to understand everything. If not, you can search for particular keywords to understand bits and pieces. Hope that helps!
For peeps who may initially be confused about why insertion was constant for the unsorted array and linear for the sorted array: in an unsorted array of [4, 2, 1, 6], if you want to insert 3, you simply add it to the end, so that's constant time. Then you have to iterate through the array to see which number is next to go on the runway (array.min), which costs linear time. In the sorted array, using binary search, you can find where to insert your new number in log time, but the act of inserting it costs linear time because in the worst case you have to add it at the very beginning, which shifts every element over one space. For example, in [2, 3, 4], inserting 1 at the very front requires shifting all 3 elements.
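The comment's two cases can be sketched with plain Python lists standing in for arrays (my own illustration, not the lecture's code):

```python
# Unsorted array: insert is O(1), but finding the min is O(n).
from bisect import insort

unsorted = [4, 2, 1, 6]
unsorted.append(3)     # O(1): just tack it onto the end
print(min(unsorted))   # 1 -- a full O(n) scan to find the next plane

# Sorted array: binary search finds the slot in O(log n),
# but the insert itself is O(n) because elements must shift.
srtd = [2, 3, 4]
insort(srtd, 1)        # slot found fast, then 2, 3, 4 all shift right
print(srtd)            # [1, 2, 3, 4]
```

`insort` hides the shifting, but under the hood the list still moves every element after the insertion point, which is exactly the linear cost the comment describes.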
@@manikanth2166 Because adding the element at the start forces you to move all the other elements in the array, which increases the time complexity.
Fantastic teacher! It's so great to have access to this content. At 31:17 he mentions there are no constraints on a BST. Other than the left/right rules, one should also ensure all values are unique.
My question is: if I want to add one item to a sorted binary tree, the only rule is that the left side is for the smaller range of data and the right side is for the larger range, and I follow the tree carefully. So in real code, do I just keep going one level down until I'm finished?
@@TrabelsiMarwen You have got to be kidding. He is explaining why you use BSTs vs. other data structures. It is brilliant in that it seems so basic because he explains it so well, but then all of these concepts become very, very clear.
@@TrabelsiMarwen Knowing why we should use a BST is as important as being able to use a BST. He is a good teacher and is really talking about important things.
Well, how will you check whether there is any element that lies in the range t-k to t+k? A heap's property only lets us say with certainty that the top element is the min or max (min-heap/max-heap); it doesn't tell us anything about the other elements. We would have to iterate through the heap to check for elements in that range.
He is a beast. BSTs cannot be explained better than this. I'm surprised people asked super easy questions around minute 32 when there were harder questions to ask.
Can someone tell me why the BST method is more efficient? I know BST insert has O(h) complexity, but before that, shouldn't we build the BST first? On top of that, that insertion costs a lot of time.
One of them asks at the end of the lecture what the value of k is in a problem, but the professor doesn't panic at all and explains again what 'k' means in the problem. Very nice professor.
Nice lecture, but if there were just integer precision, I'd implement it as a fixed-size array (or a 2D array of runways by minutes) for a space complexity of just n x 1400 and constant time complexity.
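The commenter's alternative can be sketched for a single runway with integer minutes; this is just my reading of the idea, with made-up names, and it only stays O(1) because the k-minute window has fixed size:

```python
# One runway, one slot per minute of the day. Checking the t-k..t+k
# window costs O(k), which is constant for a fixed separation k.
MINUTES = 1440  # minutes in a day (assumption; the comment says ~1400)
K = 3           # required separation, in minutes (assumption)

scheduled = [False] * MINUTES

def request_landing(t):
    """Grant time t only if no landing is booked within K minutes of it."""
    lo, hi = max(0, t - K), min(MINUTES - 1, t + K)
    if any(scheduled[m] for m in range(lo, hi + 1)):
        return False
    scheduled[t] = True
    return True

print(request_landing(100))  # True: slot is free
print(request_landing(102))  # False: within K=3 minutes of 100
```

The trade-off versus the BST is that this burns memory proportional to the whole day regardless of how many planes there are, and it only works when times are discrete.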
33:18 mentions duplicate values with a multiset. However, two 46 values would have to be differentiated, say as 46a and 46b, so the nodes themselves would be unique.
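Another common way to keep nodes unique while still allowing duplicate values (my own sketch, not something from the lecture) is to store a count on each node instead of inserting a second node:

```python
# BST where duplicate keys bump a per-node count rather than
# creating a second node with the same key.

class Node:
    def __init__(self, key):
        self.key, self.count = key, 1
        self.left = self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key == root.key:
        root.count += 1           # a second 46 just bumps the count
    elif key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for v in [49, 46, 79, 46]:
    root = insert(root, v)
print(root.left.key, root.left.count)  # 46 2
```

This keeps the "all keys distinct" invariant intact while the multiset semantics live in the counts.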
I don't quite understand: at 20:57 the professor says the algorithm is good, but not good enough, and that the problem with the algorithm is that insertion is not fast enough. He says we must find a solution for faster insertion, but I don't see him providing one. The rest of the lecture from that point is a description of how to find the place to insert (which we already know how to do?) ????
I knew what BSTs were in general; I just came here to revise. But it is today that I understood why BSTs exist in the first place, how they evolved, and what problems they solve that aren't possible with other data structures in the best time.
The lecture notes and other materials are available on MIT OpenCourseWare at ocw.mit.edu/6-006F11. They are with each video in a tab labelled Lecture Notes.
I don't have much data structures knowledge, like queues, stacks, and everything. So should I first learn them from somewhere and then come here, or am I okay to learn these first?
At 15:36 he said, "So the list does one thing right, but doesn't do the other thing right. The array does a couple things right, but doesn't do the shifting right." So what is the difference between the array and the list here? Does he mean a sorted array when he says "array"?
If you go to the video lectures page and click on a video link, you will find notes for the respective lecture at the bottom of the page. If this is what you mean by video notes, then I'm afraid there's nothing else apart from the recitation stuff. The subsequent course on algorithms (Design and Analysis of Algorithms, 2015) has better notes.
Might be a bit late, but I'll comment anyway for other readers. Although the elements in a min-heap/max-heap are ordered, you can't look up specific key values without going through the entire tree. E.g., this min-heap:

        4
      /   \
     9     89
    / \    / \
  10  90  92  100

Find(90): starting from the root, 90 could be anywhere. You only know that the values increase as you go down, but there is no difference between going to the left and going to the right. Because of that, you may need to go through almost the entire tree to find a specific element. In the worst-case scenario, that is O(n).
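The same tree makes the point concretely with Python's `heapq`, which stores the heap as a flat array (my own illustration):

```python
# A min-heap gives O(1) access to the min, but finding an arbitrary
# key still means scanning the whole underlying array.
import heapq

heap = [4, 9, 89, 10, 90, 92, 100]
heapq.heapify(heap)    # this array already satisfies the heap property

print(heap[0])         # 4 -- peeking at the min is O(1)
print(heap.index(90))  # locating 90 is a plain O(n) scan of the list
```

There is no `find` operation in `heapq` at all, precisely because the heap ordering gives no way to do better than a linear scan for arbitrary keys.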
Ouch, hope you're making progress... There's also the programiz website; it's very good as well if you want to take a look at it. I usually combine programiz with MIT 6.006 and the Rivest book.
Lists have pointers. For theta (not big O), the average case would be that you'll always find a midpoint somewhere (where "midpoint" here means no extremes at either end, depending on whether it's a doubly linked list or not). This means that there'll always be an element k that is s
I have a question: to get the nodes of a subtree, we need to visit each and every node at least once, so what's the point if we then answer the question afterwards in log n?
You can search for a particular key in O(1) with high probability, but you cannot search for the existence of keys that meet some criteria in O(1). So let's say you want to search for any key within distance 3 of 50. You can find 50 in O(1) time with high probability. But then you'd miss all the keys between 47 and 53, and if fractions are allowed there are infinitely many such keys to check. The only way then would be to iterate through all the existing keys and match them one by one to the criteria of being distance 3 from 50, which would be O(n).
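The contrast the comment draws can be sketched with a plain Python dict standing in for the hash table (my own illustration; the schedule values are made up):

```python
# Exact-key lookup in a dict is O(1) expected time, but a
# "is any key within distance 3 of 50?" query forces a full scan.

schedule = {41: "A1", 46: "B2", 49: "C3", 79: "D4"}

print(49 in schedule)  # True -- exact-match lookup, O(1) expected

def any_within(d, center, dist):
    # No choice but O(n): hashing scatters nearby keys arbitrarily,
    # so every key must be checked against the criterion.
    return any(abs(k - center) <= dist for k in d)

print(any_within(schedule, 50, 3))  # True (49 is within 3 of 50)
print(any_within(schedule, 60, 3))  # False
```

This is exactly the gap a BST closes: its ordering lets a range check follow one root-to-leaf path in O(h) instead of touching every key.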
A loop invariant is a condition in the algorithm that won't change. For example, in a max-heap, the invariant would be: an element y is always larger than (or equal to) either of its children.
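That invariant can be checked directly on an array-backed max-heap, where the children of index i sit at 2i+1 and 2i+2 (a small sketch of my own, not from the lecture):

```python
# Verify the max-heap invariant: every parent >= each of its children.

def is_max_heap(a):
    return all(a[i] >= a[c]
               for i in range(len(a))
               for c in (2 * i + 1, 2 * i + 2)
               if c < len(a))

print(is_max_heap([100, 90, 92, 9, 89, 4, 10]))   # True
print(is_max_heap([100, 90, 92, 95, 89, 4, 10]))  # False: 95 > its parent 90
```

A checker like this is handy in tests: any heap operation (insert, extract-max, heapify) should leave `is_max_heap` true before and after it runs.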