One of the things you forgot to mention, or perhaps don't know, is that those layers of censorship instructions slow down normal performance and add to hallucination.
Hey bro, they don't add to hallucinations anymore. The latest approach is to fine-tune the model for censorship; there are no "layers of instructions" anymore. I haven't mentioned all this because I haven't taught fine-tuning yet. I'd request you to watch all the videos in this series to the end and not jump to conclusions. I'm only revealing information that won't confuse a viewer who's new to LLMs (my audience is all Golang or Rust folks; they're new to LLMs).