I’ve watched over 10 tutorials and this is the first one that actually provided an explanation and a step-by-step walkthrough. Thank you! Can you do open agents next?
Great job Tyler! Will you do a video using LM Studio or a vLLM server to host a local version of an open-source large language model, and then configure Open Interpreter to interact with this local server instead of the OpenAI API?
Thank you, I appreciate that! Yes, I have a couple of videos lined up, one being LM Studio’s server connecting to AutoGen using open-source LLMs. But it does use the OpenAI API. I will do some research on getting Open Interpreter to interact with it!
Thanks for the clear, step-by-step instruction. Quick question though: what's the big advantage of AutoGen over, say, just ChatGPT? The latter can also code Pong, generate graphs (with the plug-ins), or solve math problems. Excuse me if my question seems a little simple; I'm just curious about the upcoming AI tools and eager to learn more.
Hey, thanks for watching! I appreciate that 🙏. Well, that is true, ChatGPT can do those things. I think AutoGen’s advantage is that it lets you use other LLMs and have them work together, with or without human input. Besides any difference in the actual responses, it’s the idea of having “workers”, assigning them tasks, and letting them communicate with each other. You can also have agents correct each other and run in “rounds”, so with human input you can offer suggestions along the way. It’s a good question! The ability to integrate other LLMs (albeit through another piece of software or a Git repo), plus multiple agents interacting instead of just us with ChatGPT, makes for more interesting use cases. Hey, I’m learning too!
Great video. I know it's just an example, but maybe use environment variables for things like API keys instead of hardcoding them. Thanks for the great walkthrough.
Thank you! And absolutely, having a nice way to inject properties into an AutoGen file (like app.py) and then just swap out properties would be easier. Thank you for the suggestion!
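For anyone following along, a minimal sketch of that idea (the helper function name and config shape here are hypothetical illustrations, not AutoGen's actual API):

```python
import os

# Hypothetical helper: read the API key from an environment variable
# instead of hardcoding it in app.py. Set it in your shell first:
#   export OPENAI_API_KEY="sk-..."
def load_llm_config(model="gpt-4"):
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    # Return a config list in the style AutoGen examples use
    return [{"model": model, "api_key": api_key}]
```

This way the key never lands in version control, and swapping keys between environments is just a matter of changing the variable.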
@theit-unicorn1873 Thanks, I appreciate hearing that! Well, I started off teaching beginner-to-advanced data structures and algorithms with a sprinkle of tech interview questions. I did that for about 1.5 years, then took a break, tried again this summer, and then decided I wanted to get into AI, so that's what I'm doing now! I actually have a data structures course I'm going to give away to anyone who's interested for free this holiday season. It's already on Udemy, but I'll give out the free coupons.
I tried to subclass the AutoGen agent to store some processing results locally, or, e.g., to let AutoGen fulfill some external task like web research. Do you plan a video on this as well?
I have never gotten one of these AutoGPT-style things to work for me. It either saves the first file and never saves again, or just never saves a file to begin with and plays dumb, giving me snippets instead of actual code.
The difference between AutoGen and ChatDev is human input at any point, is that correct? If the description was "Build a content generator for writing like Writesonic", could it generate the framework, with human input to craft the direction?
Hey, so they did create something called human-agent interaction, but it does seem there is more custom code in the phases.py file that enables the different phases you can use. There is an example in their GitHub repo, I believe, for the configuration called Human. I will say that AutoGen makes it easier, and the difference I liked is that you can have groups of agents talk to each other, whereas in ChatDev it's 1-on-1: you finish a conversation between two agents, then move to the next, and so forth.
@TylerReedAI Thank you, I will research the information. I am new to coding. If I provided a detailed sitemap description of every feature, would that be the best method? Could I then send the code to a UI designer? Your videos explain things very well. For example, this video titled "Smart AI Agents, Stardew Valley for coders, AI for Simcity, and training" has excellent sections on AutoGen vs ChatDev at 6:22-7:32, 7:33-8:05, 8:13-9:47, and 13:34-19:25. The section on DeepMind's Continuous Diffusion model at 22:04-28:03 is also great. However, nobody has shown a complete application with UI yet. It would be fantastic if you could demonstrate that in a video.
I will look at those videos and see what they say. I think one of the bigger things is how to properly prompt for what we need. Even if we use a different model made for different applications, we would need to understand how to prompt well for that use. I haven't seen a complete UI yet; somebody out there may have, but I probably haven't watched enough videos. It does seem possible, though: it comes down to creating the right agents, telling them what you need, and a good amount of testing. And welcome to coding! Glad you're doing it :). Even if it gave us the complete UI, it would need looking over for potential changes to the code or the look you wanted. TL;DR: yes, I think it would be, but it takes a decent number of agents and prompt testing.
I'm sorry to find you like 8 months after you made these. I am confused about what AutoGen, workflows, agents, etc. actually do. Is there a diagram? I am an old programmer with pretty dead languages, but I love coding.
Hey, well that's awesome to think that way; using AI to help in these processes is what makes it so powerful. Yes, I can explore that! I've been testing avenues, and SaaS is an exciting one.
Yes! I’m going to make a video on using different models, such as open-source ones. But for now you can use something called LM Studio and download a Llama model there; then we just connect that to AutoGen with a server (hosted inside LM Studio with a button click).
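To make that concrete, here's a sketch of the config I'd expect to use (LM Studio's local server defaults to http://localhost:1234/v1; the model name is a placeholder since LM Studio serves whichever model you've loaded, and the exact field name, `base_url` vs. the older `api_base`, depends on your pyautogen version):

```python
# Sketch: point AutoGen's OpenAI-style client at LM Studio's local
# server instead of api.openai.com. No real API key is needed, but
# the field usually has to be non-empty, so a dummy value is used.
config_list = [
    {
        "model": "local-model",                  # placeholder name
        "base_url": "http://localhost:1234/v1",  # LM Studio default
        "api_key": "lm-studio",                  # dummy value
    }
]
```

You would then pass this `config_list` into your agent's `llm_config` the same way as with the hosted OpenAI setup shown in the video.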
Yes! You can install pyautogen locally in a Python environment, or use a codespace like I have in the video and develop in the browser. I didn't pay for any of it. Even with the OpenAI key, I don't pay for ChatGPT.