
Inside the Black Box of AI Reasoning 

code_your_own_AI
36K subscribers
2.3K views

See inside the "thought process" of an AI/LLM as it tackles complex questions. Based on new research, we explore a simple but elegant way to probe cross-level reasoning complexity in multiple open-source LLMs.
Cyber Thoughts: A Peek Inside AI's Mind.
Inside the Thought Process of an AI.
This video investigates the complex reasoning capabilities of large language models (LLMs) using a novel graph-based framework designed to assess and enhance the depth and accuracy of model reasoning across different knowledge levels. The study introduces DEPTHQA, a dataset that decomposes complex, real-world questions into a hierarchy of simpler sub-questions categorized into three distinct depths: factual and conceptual knowledge (D1), procedural knowledge (D2), and strategic knowledge (D3). This hierarchical structure allows a detailed evaluation of how well LLMs escalate their reasoning from basic facts to intricate, analytical problem-solving.
The framework quantitatively measures the model's performance at each level using forward and backward discrepancies: metrics that respectively capture the model's struggles with escalating complexity and its proficiency on complex queries relative to their simpler constituents. This dual assessment helps identify where LLMs fail to integrate or transition between different types of knowledge, illuminating gaps in both foundational understanding and advanced reasoning capabilities.
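The discrepancy idea can be sketched over a question hierarchy. Note this is a minimal illustration, not the paper's exact formulation: the class names, the per-question correctness flags, and the parent-child pair-counting convention below are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """A node in a DEPTHQA-style hierarchy (names are illustrative)."""
    qid: str
    depth: int       # 1 = factual, 2 = procedural, 3 = strategic
    correct: bool    # did the model answer this node correctly?
    children: list = field(default_factory=list)  # simpler sub-questions

def discrepancies(root: Question):
    """Rate of forward/backward discrepancies over parent nodes.

    forward:  all sub-questions answered right, but the harder parent wrong
    backward: harder parent answered right, but some sub-question wrong
    """
    forward = backward = parents = 0
    stack = [root]
    while stack:
        node = stack.pop()
        if node.children:
            parents += 1
            subs_ok = all(c.correct for c in node.children)
            if subs_ok and not node.correct:
                forward += 1
            if node.correct and not subs_ok:
                backward += 1
            stack.extend(node.children)
    return forward / parents, backward / parents

# Tiny example: a D3 question built from two D2 questions,
# each resting on D1 facts.
d2a = Question("d2a", 2, True,
               [Question("d1a", 1, True), Question("d1b", 1, True)])
d2b = Question("d2b", 2, False, [Question("d1c", 1, True)])
d3 = Question("d3", 3, True, [d2a, d2b])

fwd, bwd = discrepancies(d3)  # one forward miss (d2b), one backward miss (d3)
```

Here a low forward score suggests the model can climb from facts to strategy, while a low backward score suggests its answers to hard questions rest on solid simpler knowledge rather than shortcuts.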
Through rigorous experimentation with several state-of-the-art LLMs, the study explores the relationship between model capacity and reasoning discrepancies, revealing that larger models generally exhibit fewer discrepancies in both forward and backward dimensions, suggesting a better overall integration of knowledge layers. The research further delves into the impact of model training and architecture on discrepancy outcomes, indicating that models with more extensive training data and sophisticated architectures are more adept at bridging the gap between different knowledge depths.
Additionally, the introduction of a "predict solution" strategy, which involves using the model's own predictions as inputs for subsequent questions, underscores a method to enhance self-referential consistency and depth in reasoning. This approach not only tests the model's ability to utilize its previously generated outputs but also its capacity for self-correction and adaptive learning over multiple turns.
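The "predict solution" loop can be sketched as follows. The `answer` stub stands in for a real LLM call, and the shallow-to-deep ordering is an assumption for the sketch; only the feed-own-outputs-back-as-context pattern reflects the strategy described above.

```python
def answer(question: str, context: list[str]) -> str:
    """Stand-in for an LLM call (hypothetical helper). A real setup
    would prepend `context` to the prompt sent to the model."""
    return f"answer({question} | {len(context)} prior answers)"

def predict_solution(depth_ordered_questions: list[str]) -> list[str]:
    """Answer questions from shallow (D1) to deep (D3), feeding each
    of the model's own predictions back in as context for the next."""
    context: list[str] = []
    answers: list[str] = []
    for q in depth_ordered_questions:
        a = answer(q, context)
        answers.append(a)
        context.append(a)  # self-referential: reuse own outputs
    return answers

qs = ["D1: define X", "D2: how does X work?", "D3: design a system with X"]
outs = predict_solution(qs)
```

Because each turn sees the model's earlier answers, inconsistencies between levels become visible in context, which is what lets the model self-correct across turns.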
The insights garnered from this study highlight the critical importance of structured reasoning paths and the potential benefits of iterative, context-aware processing in improving the overall effectiveness and reliability of LLMs in complex problem-solving scenarios.
All rights w/ Authors:
Investigating How Large Language Models
Leverage Internal Knowledge to Perform Complex Reasoning
arxiv.org/pdf/2406.19502
#airesearch
#reasoning
#ai

Science

Published: 17 Jul 2024

Comments: 1
@algoritm3034 · 11 days ago
It's a great video, but I still haven't figured out how to apply it... Do you need to use their work to evaluate the model, see what types of questions it is wrong on, and, for example, tune it to improve it for them, or how? Can anyone clarify?