I've been listening to this guy teach algorithms for over 10 years, and it seems his aging algorithm is super efficient: his age is always constant, O(1).
I've been programming on and off since high school (class of 2003) and recently decided to catch back up with the MIT OpenCourseWare lectures... 10:05 to 11:17 is the first time I've ever had zero indexing explained in a way that makes it clear why it's done. Never in my entire life had anyone just said it was an offset in memory, and I'm kind of disappointed. That single simple fact about something I thought I knew well made a whole lot of memory-addressing knowledge click immediately.
I've found that when you listen to passionate people, you start wanting to become passionate yourself. That is the biggest thing I am taking away from this lecture right now.
Oh, oh, I'm so happy. Years ago I downloaded the old videos and tried watching them on the metro. I quit about a third of the way through; it was too hard for me. Now I'm back. Let me try learning this again.
There is a difference between an ADT (Abstract Data Type) and a DS (Data Structure). An ADT is the specification: it answers the questions "what data can be stored?" and "what can I do with it?". A DS is a concrete implementation of an ADT: it specifies how the data is stored (its layout) and what kinds of algorithms process it. A single ADT (e.g., a sequence) can be implemented by several DSs (linked list, static array, etc.).
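To make that split concrete, here's a minimal sketch of one ADT (a sequence supporting get_at) implemented by two different data structures. The class names ArraySeq and LinkedSeq are made up for illustration, not from the lecture:

```python
class ArraySeq:
    """Sequence ADT backed by a contiguous array (a Python list here)."""
    def __init__(self, items):
        self.data = list(items)

    def get_at(self, i):
        return self.data[i]          # O(1): direct offset into the array

class LinkedSeq:
    """Same ADT, backed by pointers: nodes linked via a 'next' field."""
    class Node:
        def __init__(self, item, next=None):
            self.item, self.next = item, next

    def __init__(self, items):
        self.head = None
        for item in reversed(list(items)):   # build by prepending
            self.head = self.Node(item, self.head)

    def get_at(self, i):
        node = self.head
        for _ in range(i):           # O(i): must walk the pointers
            node = node.next
        return node.item
```

Both satisfy the same specification ("what can I do with it?"), but get_at costs Theta(1) in one and Theta(i) in the other — that's the DS choice showing through.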
There are 2 main ADTs: Set and Sequence. A Set does not let you retrieve an item by index; a Sequence does ("what can I do with it?"). Notice that the absence of indices entails the inability to distinguish between two identical elements: a Set does not allow duplicates to be stored, while a Sequence does ("what data can be stored?").
There are 2 main approaches to constructing a DS: using an array or using pointers. In an array, the data is stored in a contiguous region of memory. In the pointer-based approach, each item holds links to some of the others; the physical addresses of the items are generally unknown.
The Static Sequence Interface (SSI) is an ADT, a variant of the Sequence. This interface maintains a fixed number (the length) of items x0, x1, ..., x(n-1), but the items themselves can be overwritten. The operations of the Static Sequence Interface: build(X): make a new DS from X, anything that can yield items one by one. len(): return n. iter_seq(): output the items in order. get_at(i): return item number i. set_at(i, x): store x as item number i.
A Static Array is a DS, the obvious, natural way to implement the Static Sequence Interface. From here on, we assume that our model of computation has RAM with w-bit cells, where w is the word length: the group of bits the processor can process in one step. Access to each cell takes equal time. The model also lets us allocate n sequential words of RAM in Theta(n) time. A Static Array is a consecutive, contiguous region of RAM of fixed length: array[0] = memory[address(array)], and array[i] = memory[address(array) + i] for i from 0 to n-1. The len, get_at, and set_at operations have Theta(1) time complexity; build and iter_seq have Theta(n).
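Here's a toy sketch of that word-RAM picture, assuming we fake the machine's memory with a Python list (MEMORY, build, get_at, set_at are all names I made up for the illustration):

```python
MEMORY = [0] * 1024           # the machine's RAM: one word per cell

def build(base, X):
    """Allocate n consecutive words starting at `base` and fill them -- Theta(n)."""
    for i, x in enumerate(X):
        MEMORY[base + i] = x
    return (base, len(list(X)) if not isinstance(X, list) else len(X))

def get_at(arr, i):
    base, n = arr
    assert 0 <= i < n
    return MEMORY[base + i]   # array[i] = memory[address(array) + i]: one access, Theta(1)

def set_at(arr, i, x):
    base, n = arr
    assert 0 <= i < n
    MEMORY[base + i] = x      # also a single memory access -- Theta(1)
```

The whole point is that get_at/set_at are a single address computation plus one memory access, which is why zero-based indexing (the offset) is natural.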
The Dynamic Sequence Interface (DSI) is an ADT, a variant of the Sequence. This interface maintains some number (the length) of items x0, x1, ..., x(n-1). The operations of the Dynamic Sequence Interface: [all the SSI operations] insert_at(i, x): transforms the sequence into y0, ..., y(n), where y0 = x0, ..., y(i-1) = x(i-1), y(i) = x, y(i+1) = x(i), ..., y(n) = x(n-1). delete_at(i): transforms the sequence into y0, ..., y(n-2), where y0 = x0, ..., y(i-1) = x(i-1), y(i) = x(i+1), ..., y(n-2) = x(n-1). The so-called convenience operations insert/delete_first/last can be implemented via special algorithms.
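A minimal sketch of insert_at/delete_at with those index shifts spelled out (written functionally — each call returns a new array, which also makes the Theta(n) copy explicit; the function names follow the lecture's interface, the rest is my framing):

```python
def insert_at(A, i, x):
    """Return a new array y0..yn: x lands at index i, everything after shifts right. Theta(n)."""
    return A[:i] + [x] + A[i:]

def delete_at(A, i):
    """Return a new array y0..y(n-2): index i removed, everything after shifts left. Theta(n)."""
    return A[:i] + A[i + 1:]

# The "convenience" operations are just special cases:
def insert_first(A, x): return insert_at(A, 0, x)
def insert_last(A, x):  return insert_at(A, len(A), x)
def delete_first(A):    return delete_at(A, 0)
def delete_last(A):     return delete_at(A, len(A) - 1)
```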
Please do this for 6.046 and 6.854 too — the algorithms trilogy. By the way, good to see you, Prof. Erik. During lockdown I spent my time with your courses: 6.006, 6.046, Advanced Data Structures, and lastly the lecture on online algorithms. Please, MIT OCW, it is my request.
Working with Python and C++, you count from 0: your 1 becomes 0, your 2 becomes 1, etc. Counting from 0 all the way to n would give you n+1 items (0,1,2,3,4); you want to stop at n-1 (0,1,2,3).
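A two-line check of that off-by-one, using Python's half-open range (stops before n):

```python
n = 5
indices = list(range(n))   # range(n) yields 0, 1, 2, 3, 4 -- it stops before n
# n items total, so the first index is 0 and the last is n - 1
print(len(indices), indices[0], indices[-1])
```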
I find 39:30 amusing in that I'm imagining the blackboard area as the array being talked about. The blackboard is of static size, however, so to store new elements, old elements have to be overwritten. Yet many of the points being made still apply.
The instructor goes so fast! I can't actually get the idea or the theory. It looks like I should have some prior knowledge. Thank you for the content, though; it's really appreciated.
My native language is not English, but I actually understood a lot of this. You can always rewatch to understand more, and to see/hear things you didn't notice previously.
This will help: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-2T-A_GFuoTo.html. Here is the whole playlist for that course: ru-vid.com/group/PLhQjrBD2T382_R182iC2gNZI9HzWFMC_8. It gives you a basic idea of how to use pointers in C. Ideally, you manage a block of memory in RAM to create your own data structure; you can allocate that memory statically or dynamically.
Yes. Many colleges seem to list prerequisites only as a way to make more money on tuition. For example, my local college requires an introduction to CS class before all other CS classes, but then they waste time in the following classes duplicating all of the lessons from the introduction class. They also require English 101 regardless of your placement test score, mainly to give money to the English department, while in math you may skip to the level of your test score. MIT is different: when they list a prerequisite for a class, it is because the class is truly designed with the assumption of specific existing knowledge. I ignored this one time, and I needed to take an emergency calculus class on the side so that I could keep up with my primary class.
@georgejetson9801 Very good. I was in secondary 1 in 1980, when I learned technical drafting skills, which I have used for my flowcharting and other diagram designs from 1992 to this day. I scored 100% on it back then.
🎯 Key Takeaways for quick navigation:
00:28 🧠 *Today's focus is on data structures, specifically sequences, sets, linked lists, and dynamic arrays.*
01:27 🗂️ *Interface defines what to do; data structure defines how to do it. Data structures involve storing and manipulating data with specified operations.*
02:51 🔄 *Two main interfaces: sets and sequences. Multiple data structures can solve the same problem, each with different advantages.*
05:43 📊 *Static sequence interface includes build, length, iteration, get, and set operations. Focus on static arrays as a natural solution.*
08:34 🧮 *Static array relies on the word RAM model, allowing constant time access. Memory allocation model assumes linear time for array creation.*
17:04 ➕➖ *Dynamic sequence interface adds insert and delete operations. Introduces the concept of insert_at to maintain indexing consistency.*
19:56 ⏮️⏭️ *Special cases like insert_first, insert_last, get_first, set_first, get_last, and set_last are introduced and can be more efficient to solve.*
21:20 🔗 *Linked lists, composed of nodes with item and next fields, are introduced as a data structure to implement dynamic sequences.*
23:17 📚 *Arrays and pointer-based data structures were discussed, highlighting the use of pointers as indices into the memory array.*
25:33 ⏭️ *Dynamic sequence operations were explored on static arrays and linked lists, revealing the challenges of insertion at the beginning for both.*
29:51 🔄 *Linked lists excel in insert and delete operations at the front but struggle with random access, making operations like get and set inefficient.*
33:51 🔄 *The lecture introduces dynamic arrays, aiming to combine the advantages of linked lists and static arrays for efficient operations.*
35:14 🧠 *Dynamic arrays relax the constraint that the array size equals the number of items, allowing for efficient insertions at the end in constant time.*
40:12 📏 *The lecture discusses resizing strategies for dynamic arrays, emphasizing the importance of choosing a constant factor larger than 1 to avoid frequent resizes.*
43:31 ⏱️ *The amortized analysis of resizing dynamic arrays is explained, revealing a geometric series summing to roughly linear time, emphasizing the efficiency of the strategy.*
45:51 🔄 *Geometric series are dominated by the last term, allowing for simplified analysis using theta notation, such as theta of the last term like 2 to the log n, which is theta n.*
46:45 📈 *Amortization is introduced as a way to analyze the average time of operations over a sequence, considering that while some operations may be expensive, they are balanced by cheaper ones, resulting in amortized constant time for certain operations.*
47:42 📊 *Amortization is described as an averaging concept over a sequence of operations, allowing for high-cost operations like resizing to be distributed across the cheaper ones, achieving almost constant time on average.*
49:38 🔄 *Dynamic arrays achieve constant amortized time for operations like insert_last and maintain constant time for get_at and set_at, showcasing a balance between the strengths of arrays and linked lists.*
Made with HARPA AI
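The resizing and amortization takeaways above can be sketched in a few lines — a toy dynamic array that doubles when full and counts how many items the resizes copy (the copies counter is my addition for the analysis, not part of the lecture's interface):

```python
class DynamicArray:
    """Table doubling sketch: allocated size may exceed the item count."""
    def __init__(self):
        self.size = 1                 # allocated capacity
        self.count = 0                # number of stored items
        self.data = [None] * self.size
        self.copies = 0               # total items moved by resizes (for analysis only)

    def insert_last(self, x):
        if self.count == self.size:   # full: allocate 2x the space and copy over
            self.size *= 2
            new = [None] * self.size
            for i in range(self.count):
                new[i] = self.data[i]
            self.copies += self.count
            self.data = new
        self.data[self.count] = x     # the common case: one write, O(1)
        self.count += 1
```

After n inserts the resize work is 1 + 2 + 4 + ... < 2n — the geometric series dominated by its last term — so the total is Theta(n) and each insert_last is amortized O(1).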
9:20 Modern computers (x86, x64) are byte-addressable, not w-bit-addressable (w = 32 or 64). There is some misinformation here about word size and addressing, at least for Intel/AMD CPUs.
I looked for so many ways to start programming and found myself in this course. I love the enthusiasm. I hope to work through this class. Does anyone have recommendations for where I can train myself to become a programmer? Where I can do basic stuff?
There are many great resources to learn programming. From our materials, we recommend you start with MIT 6.0001 Introduction to Computer Science and Programming in Python: ocw.mit.edu/6-0001F16. There is an edX version starting January 26: www.edx.org/course/introduction-to-computer-science-and-programming-7
I have been programming since I was a teen in the 80s, and there were no online resources back then. You had to be really motivated and push yourself to get books from bookstores and libraries. I have taught programming too. The best advice I can give you is to take an idea and start building it. Every time you hit a problem, research a solution. Yes, you need a certain baseline of knowledge, but don't spend years on that before starting to build something. All the schooling in the world is useless unless you can create something useful in code.
Such a great lecture in that it really gives you the why and how, not just a bunch of hows. One question: delete_last() on an array seems to me to take only O(1) constant time, if we choose to implement it that way. I understand insert_last(x) takes O(n) time, since a new array has to be created and all the old elements (plus the inserted x) copied into it. But deleting the last one would leave the old array untouched for the first n-1 elements, and all that needs to be done is to update len(static array) to n-1. Am I missing anything?
Around 27:00 to 28:00 he explains this. To sum it up: though it seems like an O(1) operation, it isn't, because a static array has a fixed length. If you remove the last element, you are changing the amount of memory you were assigned, which means the computer has to reallocate memory to satisfy the new length of your array. For that reason it's not convenient to use a static array for dynamic operations.
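Under that strict model, even delete_last means allocating a fresh block of exactly n-1 words and copying. A sketch of that interpretation (the function name static_delete_last is mine, and the Theta(n) cost is an assumption of the lecture's allocation model, not of real allocators):

```python
def static_delete_last(A):
    """Static arrays have fixed length, so 'deleting' the last item means
    allocating a fresh array of length n-1 and copying -- Theta(n), not Theta(1)."""
    n = len(A)
    B = [None] * (n - 1)      # fresh allocation of exactly n - 1 words
    for i in range(n - 1):    # copy the surviving prefix
        B[i] = A[i]
    return B
```

(Dynamic arrays escape this precisely by letting the allocated size differ from the item count.)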
Thank you MIT for your generosity. On a side note, is it still true that there is a special thick chalk great for fast writing on boards that Professor Demaine seems to be using that is no longer made and needs to be hoarded?
I hereby slap my brain for every bad thought it’s ever had about Prof. Demaine. This is peak American: aware, unpretentious, brilliant! also this is uncalled for but screw it the world is ending he has a sweet fashion sense 🤘😤
What are the prerequisites for this course? I found the concepts very hard to understand. FYI, I am doing Java programming and have basic knowledge of programming.
The purpose of the recitations is to expand upon course materials covered in lecture and allow students to practice working with the material in an interactive setting.
Here is the playlist for the series: ru-vid.com/group/PLUl4u3cNGP63EdVPNLG3ToM6LaEUuStEY. The ones labeled problem sessions are the recitations. See the course on MIT OpenCourseWare for more info and materials (Lecture notes, recitation notes, problem sets, etc.) at: ocw.mit.edu/6-006S20. Best wishes on your studies!
Why is the time complexity of insert_last(x) and delete_last(x) on a linked list linear? Shouldn't it be constant, since we are able to access the tail?
You don't store the tail in a regular linked list. Storing the tail is considered an "augmentation", and it does reduce insert_last(x) to constant time. delete_last(x) isn't so easy, though: you would need to fetch the second-to-last node and update its next pointer to null, so you would still need to walk the whole linked list. Doubly linked lists solve this by storing a prev pointer on every node (see 32:34).
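A minimal sketch of that asymmetry — a singly linked list augmented with a tail pointer (my own toy class, assuming the lecture's operation names), where insert_last is O(1) but delete_last still walks the list to find the second-to-last node:

```python
class Node:
    def __init__(self, item):
        self.item, self.next = item, None

class TailedLinkedList:
    """Singly linked list augmented with a tail pointer."""
    def __init__(self):
        self.head = self.tail = None

    def insert_last(self, x):
        node = Node(x)                      # O(1): tail pointer gives direct access
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def delete_last(self):
        item = self.tail.item               # still O(n): no prev pointer, so we
        if self.head is self.tail:          # must walk to the second-to-last node
            self.head = self.tail = None
            return item
        node = self.head
        while node.next is not self.tail:
            node = node.next
        node.next = None
        self.tail = node
        return item
```

A doubly linked list removes that walk: each node's prev pointer makes the second-to-last node reachable in O(1).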
Pretty late, but yeah, it's an attribute (I don't know if it's computed in build(), though). Python's len() returns a stored size variable for whatever object len() is called on, and that is an O(1) operation. So Python has to maintain that size variable as the object updates, which is (some) overhead.