In terms of fine-tuning, what's the benefit of this fine-tuning process as opposed to just using vanilla Gemini and prompting it: "For a real estate agency, give me a caption with an emoji and 2 hashtags"? After all, using fine-tuned models via the API is typically more expensive, right?
I just plain don't get it. Maybe I'm misunderstanding what fine-tuning means, and maybe I don't even need it for my use case. In the end I have one folder on my desktop with a measly 1.4 GB of markdown files, totaling over 3 million words of research, that I want Gemini 1.5 Pro to represent as a mouthpiece. The quickest way to explain it: master-level needle-in-a-haystack retrieval. I want it to take all the files into macro context for each question and give me a higher-order perspective across the files that only artificial intelligence could possibly keep a handle on, comprehend? How on earth can I achieve this, please!? Thank you.🙏
Is there a limit to the token size of the output? I'm thinking about training a model to output JSON files based on my input to control third-party software, but the JSON files might be kinda big.
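There is an output cap: Gemini 1.5 models top out at 8,192 output tokens per response (configurable downward via `max_output_tokens`, and JSON output can be requested with `response_mime_type="application/json"`). If a control file won't fit in one response, a common workaround is to request it one top-level section at a time and merge the parts client-side. A minimal merge sketch (the section structure is hypothetical):

```python
import json

def merge_sections(section_jsons):
    """Merge several JSON strings (one top-level object each, e.g. one
    per generation request) into a single config dict, refusing to
    silently overwrite a key that appears in two sections."""
    merged = {}
    for raw in section_jsons:
        part = json.loads(raw)
        overlap = merged.keys() & part.keys()
        if overlap:
            raise ValueError(f"duplicate keys across sections: {overlap}")
        merged.update(part)
    return merged
```

Each generation call then only has to stay under the limit for its own section, not for the whole file.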
Thanks for sharing 👏🏻 I do the same, but when I try to use my tuned model on Colab I get this error: 403 POST You do not have permission to access tuned model tunedModels. Can you provide a video for that? 😢
This is easy for simple use cases where the output is always similar in structure. Corbin, have you used AI Studio to fine-tune on unstructured data? i.e., inputting data to capture a person's unique writing style. I have 500 articles and writing samples from wiki pages I've written on software engineering. Each item has a category and a label that I've curated, but the body copy for each item varies widely. It may be the wrong approach to try to fine-tune on a person's writing style using AI Studio. Maybe I just need to create embeddings and store them in a vector DB first. Thoughts? Thanks Corbin, love the channel!
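For style capture specifically, supervised tuning can work if each curated article is turned into an input/output pair: a short brief as the input, the real article body as the output, so the model learns the voice rather than the facts. A minimal conversion sketch, assuming a list of dicts with the commenter's `category`/`label`/body fields (the `text_input`/`output` keys follow the Gemini tuning dataset format; the prompt wording is illustrative):

```python
import json

def to_tuning_examples(articles):
    """Convert curated articles into input/output pairs for supervised tuning.
    The input is a brief describing the piece; the output is the real body,
    so the tuned model imitates the author's writing style."""
    examples = []
    for art in articles:
        prompt = (f"Write a {art['category']} wiki article about {art['label']} "
                  f"in my usual writing style.")
        examples.append({"text_input": prompt, "output": art["body"]})
    return examples

def save_jsonl(examples, path):
    """Write one JSON example per line, a common import format."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

Embeddings in a vector DB solve a different problem (finding relevant past writing at query time); the two approaches can also be combined.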