
Multi-Agent Sys: NEW SUPER AGENT & MIT FutureYou 

Views: 3.6K

This new AI research introduces an agent-oriented planning framework for multi-agent systems (MAS), focusing on task decomposition, allocation, and execution in environments with multiple LLM-empowered agents. The framework centers on a new SUPER-agent that decomposes user queries into sub-tasks, assigns them to appropriate agents, and manages the collaborative process of generating a complete solution.
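The decompose-then-allocate loop outlined above can be sketched in a few lines. Everything here (class names, the keyword-based solvability check, the toy string-splitting decomposer) is an illustrative stand-in for the LLM calls the paper describes, not the authors' code:

```python
# Hypothetical sketch of the agent-oriented planning loop: a meta-agent
# decomposes a query into sub-tasks and assigns each to a capable agent.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set  # capabilities this agent advertises

    def can_solve(self, subtask: str) -> bool:
        # toy solvability check: any advertised skill appears in the sub-task
        return any(skill in subtask for skill in self.skills)

    def execute(self, subtask: str) -> str:
        return f"{self.name} solved: {subtask}"

@dataclass
class MetaAgent:
    agents: list

    def decompose(self, query: str) -> list:
        # stand-in for an LLM call that splits the query into sub-tasks
        return [part.strip() for part in query.split(" and ")]

    def allocate(self, subtasks: list) -> dict:
        # map each sub-task to the first agent that claims it is solvable
        plan = {}
        for task in subtasks:
            candidates = [a for a in self.agents if a.can_solve(task)]
            plan[task] = candidates[0] if candidates else None
        return plan

    def solve(self, query: str) -> list:
        plan = self.allocate(self.decompose(query))
        return [agent.execute(task) for task, agent in plan.items() if agent]

agents = [Agent("coder", {"code"}), Agent("mathematician", {"math"})]
meta = MetaAgent(agents)
print(meta.solve("write code for sorting and check the math"))
# → ['coder solved: write code for sorting', 'mathematician solved: check the math']
```

In the paper the decomposition and allocation are both done by the LLM-based meta-agent; the string matching here only stands in for that judgment.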
The key innovation lies in the integration of three design principles (solvability, completeness, and non-redundancy) that guide the decomposition and allocation processes, ensuring that sub-tasks are not only solvable by individual agents but also collectively sufficient to address the full query. A reward model is introduced to predict how well an agent will handle a sub-task without actually executing it, which significantly improves the framework's scalability and efficiency.
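The reward model's job is to score an (agent, sub-task) pair without running the task. A toy stand-in for that scoring, using simple keyword overlap in place of the learned model the paper trains, might look like:

```python
# Illustrative reward model sketch: score (agent, sub-task) fit without
# execution. The Jaccard overlap below is a hypothetical stand-in for the
# paper's trained reward model.
def reward(agent_profile: set, subtask_tokens: set) -> float:
    # overlap between advertised skills and sub-task keywords, in [0, 1]
    if not agent_profile or not subtask_tokens:
        return 0.0
    inter = agent_profile & subtask_tokens
    union = agent_profile | subtask_tokens
    return len(inter) / len(union)

def best_agent(profiles: dict, subtask: str) -> str:
    tokens = set(subtask.lower().split())
    scores = {name: reward(skills, tokens) for name, skills in profiles.items()}
    return max(scores, key=scores.get)

profiles = {
    "coder": {"python", "code", "debug"},
    "mathematician": {"integral", "proof", "algebra"},
}
print(best_agent(profiles, "debug this python code"))  # → coder
```

Because scoring is a cheap prediction rather than a full task execution, every candidate agent can be ranked for every sub-task before any work starts, which is where the claimed efficiency gain comes from.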
The study further incorporates mechanisms for dynamic refinement of the decomposition process through re-describe and plan-in-detail methods. These methods handle unsolvable sub-tasks by either refining their descriptions or breaking them down into smaller, more manageable components. Extensive experiments demonstrate the superiority of the proposed framework over baseline methods in both single-agent and multi-agent systems, with a notable improvement of over 10% in accuracy compared to single-agent systems.
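The re-describe and plan-in-detail fallbacks can be pictured as a small control loop: try the sub-task as-is, then try rewording it, then split it into smaller pieces. Both helper functions below are hypothetical stand-ins for LLM calls, not the paper's implementation:

```python
# Sketch of the refinement loop for unsolvable sub-tasks described above.
def redescribe(subtask: str) -> str:
    # stand-in for an LLM rewriting an ambiguous sub-task description
    return subtask.replace("???", "the given input")

def plan_in_detail(subtask: str) -> list:
    # stand-in for an LLM splitting one sub-task into smaller steps
    return [s.strip() for s in subtask.split(",")]

def refine(subtask: str, solvable) -> list:
    # solvable: predicate saying whether some agent can take the sub-task
    if solvable(subtask):
        return [subtask]
    reworded = redescribe(subtask)
    if solvable(reworded):
        return [reworded]
    # fall back to finer-grained decomposition, keeping the solvable pieces
    return [s for s in plan_in_detail(subtask) if solvable(s)]
```

For example, with `solvable = lambda t: "???" not in t`, the call `refine("sum ???", solvable)` succeeds via the re-describe path and returns `["sum the given input"]`.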
The inclusion of a feedback loop ensures ongoing enhancements to the SUPER-agent’s decision-making process, promoting robustness and adaptability in real-world problem-solving. This approach opens new pathways for more flexible, adaptive, and efficient multi-agent collaboration in complex, dynamic environments.
All rights w/ authors:
AGENT-ORIENTED PLANNING IN MULTI-AGENT SYSTEMS
arxiv.org/pdf/2410.02189
Future You: A Conversation with an AI-Generated
Future Self Reduces Anxiety, Negative Emotions,
and Increases Future Self-Continuity (MIT, Harvard)
arxiv.org/pdf/2405.12514
00:00 Complex queries and the new SUPER-AGENTS
03:49 Hard-to-Decompose Queries
05:20 SUPER-AGENT allocates sub-tasks to agents
07:35 NEW Reward Model evaluates Agent-subtask fit
10:32 What to do with unsolvable tasks
14:48 Hidden dependencies of Agents and SUPER-AGENT
18:51 Reduce the complexity (if possible)
21:20 Agent-oriented Planning in multi-agent systems
22:40 AI and Psychology (MIT and Harvard)
25:00 My live session w/ MITFutureYou (personal data)
#airesearch
#chatgpt
#aiagents

Science

Published: 6 Oct 2024
Comments: 13
@TheZEN2011 · 2 days ago
I definitely appreciate all your incredible lessons and insights!
@IvarDaigon · 2 days ago
It's an interesting solution, but I also agree that it seems a little over-engineered, because it relies on the "Allocation" layer being a pretrained model. That means if you want to add another agent, you need to retrain the Allocation model, which in turn means generating reams and reams of synthetic data just for the Allocation model to work as intended. A simpler method would be to pick the 3 most likely agents to complete the task, evaluate the results, and go with the one that is most plausible. Sure, that means doing 3x the inference at question time, but you aren't wasting 1000x inference at training time, plus however many man-hours it takes to get the Allocation model working properly, and then repeating the whole process whenever a new agent is created.
@dennisestenson7820 · 1 day ago
25:00 hey, why not? mental health is all about lying to yourself and rejecting reality just the right amount.
@cmw3737 · 6 hours ago
If a task is not solvable because of ambiguity then surely the answer is to ask for the unknown parameters like the size of the passenger plane in the early example. Either that or, along the lines of functional programming, the output is a function that takes the unknown parameters and outputs an answer. In English the correct answer would, like a lot of questions, start with 'it depends...' 'Re-describing' it to fit a nearest previous known answer is literally assuming things and akin to stereotyping and then imagining/hallucinating a hunch. Not great for removing biases.
@micbab-vg2mu · 2 days ago
thanks :)
@bubbajones5873 · 2 days ago
Great stuff as always!
@tiagotiagot · 1 day ago
Is it just my impression or does this come down to the Halting Problem?
@MusingsAndIdeas · 1 day ago
This sounds suspiciously like how our brains are wired...
@alanblitzer744 · 1 day ago
You will be such a positive and caring person when you get to your 60s xD
@uberpixzels · 2 days ago
The MIT bot pushing "diversity" lol. It looks like something made by marketing students.
@mrd6869 · 1 day ago
Feel free to do better...oh u cant...gotcha😂
@strangereyes9594 · 2 days ago
What the MIT Bot says is basically "Don't pay attention to reality. Create a magical picture of yourself and if you believe hard enough, everything will be fine". Funny enough, that's woke philosophy at its core and makes terrible life advice. How far MIT has fallen.