Good job. Personally, I prefer real use cases combined with building projects. That's how to solidify the knowledge. That's exactly what I'm doing on my new channel.
So using "yield" seems to track the state of an iterable? Is that understanding correct? But what are the main cases where we would need to use it? I'm still not understanding.
When you are applying your iterable to an expensive function, have a very large iterable, and/or need a very low memory footprint, yield is a good candidate to consider for your solution. I use it often when I run large numbers of simulations, such as Monte Carlo runs.
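For a sense of the shape, here's a minimal, made-up sketch (the pi-estimation "trial" is just a stand-in for whatever expensive simulation you'd actually run): the generator yields one trial result at a time, so you can aggregate millions of trials without ever holding them all in memory.

```python
import random

def simulate_trials(n_trials: int):
    """Yield one Monte Carlo trial result at a time instead of
    building a list of n_trials results up front."""
    for _ in range(n_trials):
        # toy trial: sample a point in the unit square, check if it's in the quarter circle
        x, y = random.random(), random.random()
        yield 1 if x * x + y * y <= 1.0 else 0

# Only one trial result is in memory at any moment, even for millions of trials.
n = 1_000_000
pi_estimate = 4 * sum(simulate_trials(n)) / n
print(pi_estimate)
```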
I have seen generators used in a card game as a state machine, but I still don't grasp it. Is there a way to restart the generator or skip items programmatically? It just doesn't make sense to me. I'd rather continue to use a function which returns a value I expect to get.
no, that's good. You shuffle the deck into a collections deque, wrapped in your personal Deck class with a dunder __next__ that calls the deque's popleft(). That's nice. You can't go back in a deal. So for hold 'em you'd have a "burn" method which calls next(self), while deal(self, n, m) (n = players, m = hole cards, which is 2 for hold 'em, 4 for PLO) would deal n*m cards into some zip() construction.
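Roughly, a sketch of that idea could look like this (the class and method names are just my illustration of the description above, not anyone's actual code):

```python
import random
from collections import deque
from itertools import product

class Deck:
    """One-way deck: you can only draw forward, never go back in a deal."""

    def __init__(self):
        ranks = "23456789TJQKA"
        suits = "shdc"
        cards = [r + s for r, s in product(ranks, suits)]
        random.shuffle(cards)
        self._cards = deque(cards)

    def __iter__(self):
        return self

    def __next__(self):
        return self._cards.popleft()

    def burn(self):
        # discard the top card, hold 'em style
        next(self)

    def deal(self, n, m):
        # deal n*m cards round-robin into n hands of m cards each
        rounds = [[next(self) for _ in range(n)] for _ in range(m)]
        return list(zip(*rounds))

deck = Deck()
hands = deck.deal(4, 2)                    # 4 players, 2 hole cards each
deck.burn()
flop = [next(deck) for _ in range(3)]
print(hands, flop)
```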
@@y2ksw1 maybe just write some code that uses them? Like a toy card game, for instance. Learn by using? Then you'll see where they work better than containers (list, tuple...). Also: itertools' "tee" function is pretty cool if you have terms in a series that go to two different formulas (e.g. sum(x^2) and sum(x)^2 calculations). It makes a real difference when the computations are hard and the terms in the series are huge messes, e.g. some big data file or something. Actually, that's a good case: say you want to read 1000 data files of 100 kB each and compute a metric from them. You're not going to open 1000 files, load them into a list and then loop over that; rather, a generator yields each file into a metric-computing function, so the user just sees file names turned into a 1000-element array, and the dev sees each step separated out. The file read can be an iterator too, so you yield file-reading iterators straight into a function of their contents. idk... if that makes sense at all in a YT comment. Need a whiteboard.
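In case a whiteboard isn't handy, here's a small sketch of both ideas; the *.dat glob, the "data/" directory and the per-file "metric" are made up purely for illustration:

```python
from itertools import tee
from pathlib import Path

# 1) tee: feed one stream of terms to two different formulas,
#    e.g. sum(x^2) vs sum(x)^2, without recomputing or listing the terms.
def terms():
    for x in range(1, 1001):          # stand-in for expensive-to-compute terms
        yield x

a, b = tee(terms())
# Caution: tee buffers whatever one consumer has seen and the other hasn't,
# so fully draining `a` before touching `b` buffers the whole series;
# interleaved consumption keeps the buffer small.
sum_of_squares = sum(x * x for x in a)
square_of_sum = sum(b) ** 2

# 2) Files as a pipeline: yield one metric per file, so only one file's
#    contents is ever in memory, while the caller just sees "names in, metrics out".
def metrics(directory):
    for path in Path(directory).glob("*.dat"):
        with path.open() as f:
            yield sum(float(line) for line in f)   # toy per-file "metric"

# results = list(metrics("data/"))    # 1000 files -> 1000-element list of metrics
```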
Basically it's lazy evaluation packaged as someone else's function. You can get the same behaviour in your own code if you write the loop yourself; it's just that sometimes you want to call it at different intervals, and the lazy evaluation means you might use less memory and not take an inordinate amount of time. To use an example from C, it's the difference between repeatedly calling strtok() to split a string and having a function that you write which creates an array of strings and splits it all in one go. I'm not as versed in Python and don't know if you can change the inputs to a generator on the fly, but in C you can change the separators you pass to strtok() with any given call and it'll work correctly, whereas with an all-in-one function you'd either have to account for an array of multiple separator strings, or only get one set of separators. If that's not clear enough, it might be worth asking @Indently to make a video explaining the implementation details of the generator construct, if he hasn't already; he may have, I just haven't watched all of his videos.
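For what it's worth, Python generators do let the caller pass a value back in via .send(), which is roughly the moral equivalent of handing strtok() new separators on a later call. A minimal sketch (the tokens() helper is my own illustration, not anything from the video):

```python
def tokens(text, sep=" "):
    # Lazily split `text`; the caller can switch separators mid-stream by
    # send()-ing a new one, a bit like passing new separators to strtok().
    start = 0
    while start <= len(text):
        end = text.find(sep, start)
        if end == -1:
            end = len(text)
        new_sep = yield text[start:end]
        start = end + len(sep)        # skip past the separator we just consumed
        if new_sep is not None:
            sep = new_sep             # later tokens use the new separator

t = tokens("a b c;d;e")
print(next(t))       # 'a'
print(next(t))       # 'b'
print(t.send(";"))   # 'c'  -- from here on the separator is ';'
print(next(t))       # 'd'
print(next(t))       # 'e'
```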
Really? I just don't ever want to initialize an empty list and fill it with a loop (or things that are too complex for a comprehension, and things that are computationally hard).
@@tirthankarsarkar4206 for example, if a list is super long and takes up a huge chunk of memory, but you only need to access the elements one at a time, you should use yield, since it only takes up the memory space of one element at a time.
@@tirthankarsarkar4206 let's say you are looping through file names in a directory. Instead of loading all the file names into memory, you can compute and load them one at a time. This greatly reduces your memory overhead. Or, simply, when reading a file you don't load the whole thing into memory and process it; you process it chunk by chunk, only when you need to.
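A tiny sketch of the chunk-by-chunk idea (the file name and chunk size here are arbitrary placeholders):

```python
def read_in_chunks(path, chunk_size=64 * 1024):
    """Yield a file's contents chunk by chunk instead of reading it all at once."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

# Only one chunk (here 64 KB) lives in memory at any point,
# no matter how large the file is.
total_bytes = sum(len(chunk) for chunk in read_in_chunks("big_file.bin"))
```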
This is nowhere near an easy explanation... Only the ones who already know this will follow it... Not a good channel to learn Python... Watch Corey Schafer instead.
@@Indently honestly it feels like you are not trying to simplify it with the intention of teaching someone, but rather intentionally overcomplicating things to show off your skills... I am not a fanboy, but check Corey Schafer's video on the same topic and see the difference.