Very insightful test. Inspired by this, I also asked for the list of words from OpenHermes 2.5 7B and Dolphin 2.6 Mixtral 8x7B, with the same results. Brings these smaller LLMs back down to earth. ChatGPT 3.5 was OK. Thanks
Thanks for the detailed, informative video. Recently many people are talking about benchmarking the features of new LLMs, but it would be worth discussing real architecture design problems too. Many videos explain the basics, which is really informative, however an enterprise will not be playing with 2 PDFs and 1 GB of data.. it will be way beyond that. It would be good if you covered those areas as well. Thanks again.
You can create your own prompt template using LangChain's prompt template. Check the article "Microsoft PHI-2 + Hugging Face + Langchain = Super Tiny Chatbot" (you can lower the temperature and max length initially for faster testing).
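A minimal sketch of what that looks like with LangChain's PromptTemplate (the template text and variable name here are just illustrative, not from the article):

```python
from langchain.prompts import PromptTemplate

# Phi-2 responds well to an Instruct/Output style template
template = """Instruct: {question}
Output:"""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Fill in the variable to produce the final prompt string
print(prompt.format(question="Give me a list of 13 words, each 5 letters long."))
```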
Many thanks for the video and Colab, good Sir! Note and question: I put in prompt = """Give me a list of 13 words, each of which is 5 letters long.""" and got a decent answer at the start, but then strange output after that. Then I tried prompt = """If five people each give you one box, how many boxes do you have?""" and got back a fragment of a Python function which looked decent for giving the answer, but then a series of random other Python functions. Question: why is the initial correct (in my case) output followed by nonsense?
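My guess is that nothing tells the model to stop after the answer, so it just keeps generating until it hits the token limit, padding out the rest with whatever continuation seems likely (for a code-heavy model, more Python functions). A minimal sketch of the kind of call I mean, assuming a standard transformers text-generation pipeline (model name and parameters are illustrative, not taken from the Colab):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/phi-2")

prompt = "If five people each give you one box, how many boxes do you have?"

# With a small max_new_tokens, generation is cut off before it can wander;
# return_full_text=False drops the echoed prompt from the output.
out = generator(prompt, max_new_tokens=50, do_sample=False, return_full_text=False)
print(out[0]["generated_text"])
```

Is capping max_new_tokens (or adding a stop sequence) the right way to handle this, or is something else going on?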