Join us for a continuous exchange of ideas about research in the era of general AI. *Register for the Microsoft Research Forum series:* aka.ms/ForumRegYT
Building AI applications with agentic workflows introduces significant challenges, especially when those workflows rely on large language models (LLMs). The core problem is that LLMs aren't deterministic: they don't always produce the same output for the same input. This unpredictability becomes even more problematic when additional complexity, like autonomous agents, is layered on top of an already unstable system. In many enterprise settings, where reliability and consistency are key, complex agentic workflows can cause more harm than good. Most tasks can be handled without this extra layer of abstraction. By using LLMs in a more controlled and straightforward way, you can get the job done without introducing unnecessary risk. Instead of over-complicating the architecture, focusing on simpler, tightly managed LLM-based solutions leads to more stable and reliable outcomes, which is exactly what critical enterprise operations need. That said, this presentation stays very high level and does not deal with actual use cases or how they can be solved.
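To make the "controlled and straightforward" point concrete, here is a minimal sketch of a single, tightly managed LLM call: temperature pinned to 0, output constrained to JSON, and a validation step that fails loudly instead of handing bad output to another agent. It assumes the OpenAI Python client; the model name, the `extract_invoice_total` helper, and the schema are illustrative, not from the talk.

```python
# Minimal sketch: one constrained LLM call plus validation, instead of an
# autonomous agent loop. Assumes the OpenAI Python client is installed and
# OPENAI_API_KEY is set; model name and schema are made up for illustration.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice_total(invoice_text: str) -> float:
    """Single tightly scoped call: fixed prompt, temperature 0, JSON output."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                 # any JSON-capable chat model
        temperature=0,                        # reduces (not eliminates) randomness
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Return JSON: {"total": <number>}. Nothing else.'},
            {"role": "user", "content": invoice_text},
        ],
    )
    data = json.loads(response.choices[0].message.content)
    total = data["total"]
    # Validate instead of chaining another agent: reject malformed output.
    if not isinstance(total, (int, float)) or total < 0:
        raise ValueError(f"Model returned an invalid total: {total!r}")
    return float(total)
```

The design choice is that the model is asked for one narrow artifact and everything downstream is ordinary deterministic code, so failures surface as exceptions rather than as an agent quietly improvising.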
WELL SAID, PinPointTest14p... but I'm inclined NOT to equate "unpredictability" with "instability". It IS possible to be unpredictable but stable (within bounds). Please know I DO understand your point, and you and I ARE in agreement on what I think you're trying to say; I only think "unstable" isn't the most accurate term for a multi-layered system built from non-deterministic components. Fuzzy x Fuzzy x Fuzzy doesn't demand "unstable" as in "catastrophic failure"; Fuzzy x Fuzzy x Fuzzy simply means "potentially VERY fuzzy". But it ALSO sets the stage for a multi-stage agent to actually achieve perfection as well... so there's that. ;-) -Mark Vogt, Principal Solution Architect/Data Scientist [AVANADE]
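For what it's worth, the "Fuzzy x Fuzzy x Fuzzy" intuition can be made concrete: if each stage independently does the right thing with probability p, an n-stage pipeline succeeds end-to-end with probability p^n, which falls off quickly, while per-stage validation with bounded retries can push reliability back up. A toy sketch, with made-up probabilities purely for illustration:

```python
# Toy illustration of compounding per-stage reliability in a multi-stage
# pipeline. The probabilities are made-up numbers, not measurements.
def pipeline_success(per_stage_p: float, n_stages: int) -> float:
    """End-to-end success if each stage independently succeeds with per_stage_p."""
    return per_stage_p ** n_stages

def with_retries(per_stage_p: float, retries: int) -> float:
    """Effective per-stage success when a validator catches failures and the
    stage is retried up to `retries` extra times (independence assumed)."""
    return 1 - (1 - per_stage_p) ** (1 + retries)

for n in (1, 3, 5):
    raw = pipeline_success(0.95, n)
    checked = pipeline_success(with_retries(0.95, 2), n)
    print(f"{n} stages: raw {raw:.1%}, with validation + 2 retries {checked:.1%}")
```

Three 95%-reliable stages drop to roughly 86% end-to-end, but with checking and retries each stage can approach "perfection", which is both halves of the point above.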