Great to have you back. You said there is a very small probability that the “next” offset is >255 and therefore cannot be stored in a single byte. But however small, the probability is positive, so it will eventually happen. Indeed, with a carefully constructed, albeit terrible, hash function you should be able to contrive this situation. So it seems to me that it’s an edge case that needs to be handled. What is the strategy in this case? It doesn’t need to be efficient, but I think it does need to be handled.
I guess you could assign a special meaning to next=255, such that when looking up a value, if you don’t find a hashcode-matching entry at bucket[currentIndex+255], you continue with a linear search.
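A toy sketch of that sentinel idea (in Python for brevity; `Entry`, `lookup`, and the `OVERFLOW` constant are all hypothetical names, not from the video): offsets of 1–254 jump directly to the next entry in the chain, and 255 means "the real next entry is too far away for one byte, so scan linearly from 255 slots ahead".

```python
OVERFLOW = 255  # sentinel: next entry is >= 255 slots away, fall back to linear scan

class Entry:
    def __init__(self, key, value, nxt):
        self.key, self.value, self.next = key, value, nxt  # next fits in one byte

def lookup(buckets, start, key):
    """Follow byte-sized offsets; on the OVERFLOW sentinel, scan linearly."""
    i = start
    while i < len(buckets):
        e = buckets[i]
        if e is not None and e.key == key:
            return e.value
        if e is None or e.next == 0:
            return None                       # end of chain, not found
        if e.next == OVERFLOW:
            # the exact offset didn't fit in a byte: scan forward from
            # the furthest position the byte could have encoded
            for j in range(i + OVERFLOW, len(buckets)):
                c = buckets[j]
                if c is not None and c.key == key:
                    return c.value
            return None
        i += e.next                           # normal one-byte jump
    return None
```

Lookups that stay under the 255 limit pay nothing extra; only the (rare) overflowing chain degrades to a linear scan.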
@mattbecker3066 My plan is to add code to track whether a rollover has happened and then trigger a resize once the key/value pair has been inserted. You have to be careful not to have that called too many times, which is why I haven't added the handler yet. It will be in an epilogue episode.
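A minimal sketch of that plan, under the assumption of a simple linear-probing table (the class and method names here are illustrative, not the actual implementation from the video): the insert completes first, and only then, if the probe distance would not have fit in a byte, does a resize run, so the resize happens at most once per insert.

```python
class ByteNextDict:
    """Toy linear-probing table. If an insert lands more than 255 slots
    from its home bucket (i.e. the offset wouldn't fit in a byte), the
    table is resized once, after the key/value pair is already in place."""
    MAX_OFFSET = 255

    def __init__(self, capacity=8):
        self.slots = [None] * capacity
        self.count = 0
        self.resizes = 0

    def _insert_raw(self, key, value):
        """Place (key, value); return the probe distance from the home bucket."""
        cap = len(self.slots)
        home = hash(key) % cap
        for dist in range(cap):
            i = (home + dist) % cap
            if self.slots[i] is None or self.slots[i][0] == key:
                if self.slots[i] is None:
                    self.count += 1
                self.slots[i] = (key, value)
                return dist
        raise RuntimeError("table full")

    def insert(self, key, value):
        if self.count * 2 >= len(self.slots):    # ordinary load-factor resize
            self._resize()
        dist = self._insert_raw(key, value)
        if dist > self.MAX_OFFSET:                # rollover: handled in the
            self._resize()                        # epilogue, after the insert

    def _resize(self):
        old = [s for s in self.slots if s is not None]
        self.slots = [None] * (len(self.slots) * 2)
        self.count = 0
        self.resizes += 1
        for k, v in old:
            self._insert_raw(k, v)

    def get(self, key):
        cap = len(self.slots)
        home = hash(key) % cap
        for dist in range(cap):
            i = (home + dist) % cap
            if self.slots[i] is None:
                return None
            if self.slots[i][0] == key:
                return self.slots[i][1]
        return None
```

Because the resize runs after the pair is stored, the insert path never has to re-enter itself, and the rollover case can't cascade into repeated resizes within a single insert.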
Great video! This was really cool. Could we mix this with Robin Hood eviction to minimize the jump sizes? Does that help? And in general, I saw the talk that you referenced, and I was wondering whether you think there should be multiple standard dictionaries, say in a collections.specialized-style package in the BCL?