I really enjoy his speaking rate. For anyone who doesn't: that's why YouTube has playback-rate adjustment. I use it to slow certain presenters down; others can use it to speed him up.
At 21:00: log2(1'000'000'000'000) ≈ 40 tests for a binary search, and each of these is ~200 times slower than a cache-friendly linear test. So 8'000 is still far faster than 500'000'000'000 (the average number of tests in a linear search). Big-O notation does make sense, or I didn't get the idea of the example.
He meant that a cache-friendly n log n algorithm might run faster than a cache-unfriendly linear algorithm:

linear, cache-unfriendly (200x comes from slow memory access): 1'000'000'000'000 * 200 = 200'000'000'000'000
linearithmic, cache-friendly (40x comes from the log factor): 1'000'000'000'000 * 40 = 40'000'000'000'000
200'000'000'000'000 / 40'000'000'000'000 = 5

So the n log n version could end up being about 5 times faster (or ~2.5 times on average, if the linear pass only has to touch half the data).
Your perspective is bizarre. He most likely makes a million or more a year programming. His viewpoint that he has developed over decades from real-world experience, reading books, studying code beyond what he needed, etc. is both highly valuable and justified. In a situation like this, people jealously associate success from hard work with stuff like arrogance. Sometimes, a person just knows what they are talking about.
In the windows hierarchy example, if a unique pointer were used instead of a shared one, wouldn't that make it a composite object? That would make class hierarchies perfectly fine. Am I missing something?
How about he uses some of these good data structures on Photoshop so that it doesn't take forever to load when it doesn't do any work that's useful to the user on startup?
This is one of the best talks I've heard in a while. You can put it on 1.25x or 1.5x if you think he speaks at too slow a pace for you. Love the statement at 20:50. It's something I've been arguing about with big-O-notation purists for a while.
So I feel a bit dumb after watching this. I did understand most of his ideas, but only up to a point in each case. Something gives out in my brain and I lose the feeling of comprehensive understanding.
Yeah, his use of vocabulary was a bit bothersome in some places. I'll have to re-watch with Google in another tab ... it will be a good thing, though, and bring me to another level.
These types of things aren't understood for free. He has done as he recommends, so he has dedicated hours to thinking about STL algorithms and their implementations. He also references an entire book he has carefully read as the source of many of the concepts he brought up. People often see a challenge and give up if things don't immediately click. The only way to be like him is through hard work. Many people who picked up programming on their own never studied data structures and algorithms for hours. I'd recommend getting an introductory book on those topics and diving right in if you are one of these people. It takes a good chunk of time and effort.
Just chiming in against the "too slow" complaints. I prefer the considered approach; x10000000 better than _some_ talks where they have death by PowerPoint (too many slides) and so much to say that they say, "Comments and questions at the end, please." Often, they never get to the end. Just compare this talk to any Lakos talk. Heaven!
Each node has two in-edges: leading (in the picture, the left side, pointing down) and trailing (the right side, pointing up). `begin(f)` points to the leading edge of `A`. `trailing_of(begin(f))` returns an iterator pointing to the trailing edge of `A`. After inserting `B, C, D`, the result is the image shown.
I liked this talk, but apparently this goes against many well-established and useful design patterns. Self-referential classes are really nice sometimes, and I think we should not make an explicit effort to avoid them when the abstract model of the problem is clearly self-referential.