Each new generative UI example from you guys is implemented with a different pattern, from manually intercepting response types to the latest `streamUI` + tools. Does the team feel the present pattern is mature, or are they unhappy with it and going to redo it next month?
Hey, AI SDK maintainer here! I'm not sure what you're referring to by "intercepting response types", but `streamUI` is pretty stable. It's the same as the previous experimental `render` function, just with a name consistent with other APIs like `streamText` and `streamObject`. Also worth mentioning that the Generative UI APIs are designed to be general enough to fit into any UI pattern and AI pipeline, which means there isn't only one way to do Generative UI. For example, you can use `streamUI` + tools to handle LLM + UI, or combine low-level utilities like `createStreamableUI`/`createStreamableValue` with your existing pipeline for flexibility. Happy to answer any questions!
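For reference, a minimal sketch of that low-level route with `createStreamableUI` (the `fetchWeather` pipeline step is a stub standing in for whatever you already have, e.g. RAG or LangChain):

```tsx
import { createStreamableUI } from 'ai/rsc';

// Stub standing in for any custom pipeline step (RAG, LangChain, a plain fetch, ...).
const fetchWeather = async (city: string) => 21;

export async function getWeatherUI(city: string) {
  // Start with a loading state; the client receives this immediately.
  const ui = createStreamableUI(<p>Loading weather…</p>);
  // Kick off your own async pipeline and update the UI as results arrive.
  (async () => {
    const temperature = await fetchWeather(city);
    ui.done(<p>{city}: {temperature}°C</p>);
  })();
  return ui.value;
}
```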
@shuding What are your recommendations for maintaining type safety in the application, especially around the server actions and the use of the `useActions` hook? Unless I'm missing something, it seems like the required use of the hook defeats one of the major benefits of server actions: end-to-end type safety. Is there maybe a lower-level approach that I could take to bypass the hook?
Could you please do a tutorial on using the AI SDK together with the LangChain adapter, e.g. for simple RAG? From your documentation it is not clear how to implement it properly.
Is it just me, or do this tutorial and the repo not work? I got `Error: useUIState must be used inside an <AI> provider`. To resolve it, add `import { AI } from "./action";` to the root layout and wrap `children` with `<AI>` inside `RootLayout`.
If you are calling `getAIState` or `useUIState`, it has to happen inside of `<AI>`. So you could create a new component, do something like `<AI><Chat /></AI>`, and inside of `Chat`, that's where you would use `useUIState`.
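A sketch of that structure (file paths illustrative), assuming `AI` is the provider exported from `./action` as in the tutorial:

```tsx
// app/layout.tsx — wrap the tree in the <AI> provider
import { AI } from './action';

export default function RootLayout({
  children,
}: Readonly<{ children: React.ReactNode }>) {
  return (
    <html lang="en">
      <body>
        <AI>{children}</AI>
      </body>
    </html>
  );
}

// app/chat.tsx — hooks like useUIState now work, because Chat renders inside <AI>
'use client';
import { useUIState } from 'ai/rsc';

export function Chat() {
  const [messages] = useUIState();
  return <div>{messages.map((m: any) => m.display)}</div>;
}
```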
This is cool and all, but it seems that around every corner in this SDK (especially the RSC stuff) all the types are just `any`. In my opinion, you can't really call your library "The AI Framework for TypeScript" and then not have strong types. This is especially annoying because in my eyes it defeats one of the major benefits of server actions: end-to-end type safety. Is there a way to bypass some of the abstractions, like the `useActions` hook?
I added $10 to test OpenAI with the AI SDK, got "unknown error" on 100% of my calls, and $8 was used?! What in the world, it wasn't like this before. There are loads of retries in the background (that should be opt-in from the start).
Hey, awesome demo, thanks. We're using LlamaIndex in Python for our LLM backend, which uses RAG. I want to use tools that pass React components to the frontend. How would I accomplish this? Thank you
I could not find an example in the docs where the model can use tools and return RSCs while also being able to stream a response when no tool is used. All the examples I could find use `generateText()` or `streamUI()`, so the text response is not streamed. Should I use a combination of `streamText()` + tools + `createStreamableUI()` to stream text and have tools that can return RSCs?
Hey! With `streamUI`, if no tool is used, the text response is streamed via the component returned from the `text` function. Is that what you're looking to do?
@nicoalbanese10 Yes, that's correct. I would like to stream the text token by token when the model does not use a tool. Can I achieve that using `streamUI`?
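A minimal sketch of that setup, assuming the `ai`, `@ai-sdk/openai`, and `zod` packages; when the model picks no tool, the `text` callback is invoked repeatedly with the accumulated content as tokens arrive, so the response renders incrementally:

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function continueConversation(input: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt: input,
    // Called incrementally while the model streams plain text,
    // so the answer appears token by token when no tool is used.
    text: ({ content }) => <p>{content}</p>,
    tools: {
      // Illustrative tool returning an RSC instead of text.
      getStockPrice: {
        description: 'Show the price of a stock',
        parameters: z.object({ symbol: z.string() }),
        generate: async ({ symbol }) => <p>Price card for {symbol}</p>,
      },
    },
  });
  return result.value;
}
```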
You can run any asynchronous JavaScript code within a tool's `execute` function. So you would first want to find the exact location based on the search query (e.g. via OpenStreetMap), then pass that to a weather API (e.g. Open-Meteo) and return the resulting temperature 😊
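A plain-TypeScript sketch of that two-step lookup. The two API calls are injected as functions (names are illustrative) so real `fetch()` calls against e.g. Nominatim and Open-Meteo can be swapped in:

```typescript
type GeoResult = { lat: number; lon: number };

// Two-step tool logic: resolve the place name first, then fetch the temperature.
// The lookups are passed in so production code can supply real fetch() calls.
export async function getTemperature(
  query: string,
  geocode: (q: string) => Promise<GeoResult>,   // e.g. OpenStreetMap Nominatim
  fetchTemp: (loc: GeoResult) => Promise<number>, // e.g. Open-Meteo current weather
): Promise<string> {
  const loc = await geocode(query);
  const temp = await fetchTemp(loc);
  return `It is ${temp}°C in ${query}.`;
}
```

The same body can sit directly inside a tool's `execute` function, with the two lookups inlined.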
For example, an AI has a `getWeather` tool, and the conversation goes like this:

user: hello
ai: Hello! How can I assist you today?
user: How's the weather today?
tool: getWeather("local")
tool_result: {"weather":"sunny","maxTemperature":35,"minTemperature":0}
ai: The weather is good today, but the temperature difference is a bit large, so please keep warm.

I'd like the client page to show users, in sequence: that the AI wants to use the getWeather tool, the result of the getWeather call, and the AI's answer based on that result. How can I achieve this?
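One way to get that three-step sequence on the client (a sketch, not the only approach) is to make the tool's `generate` an async generator with `streamUI`: each `yield` replaces the currently streamed UI, and a second model call produces the final answer. `fetchWeather` here is a stub:

```tsx
import { streamUI } from 'ai/rsc';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Stub standing in for a real weather API call.
const fetchWeather = async (location: string) => ({
  weather: 'sunny',
  maxTemperature: 35,
  minTemperature: 0,
});

export async function ask(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt,
    text: ({ content }) => <p>{content}</p>,
    tools: {
      getWeather: {
        description: 'Get the weather for a location',
        parameters: z.object({ location: z.string() }),
        generate: async function* ({ location }) {
          // 1. Show that the model decided to call the tool.
          yield <p>Calling getWeather("{location}")…</p>;
          const data = await fetchWeather(location);
          // 2. Show the raw tool result.
          yield <pre>{JSON.stringify(data)}</pre>;
          // 3. Have the model answer based on the result.
          const { text } = await generateText({
            model: openai('gpt-4o'),
            prompt: `Given this weather data: ${JSON.stringify(
              data,
            )}, answer the user in one sentence.`,
          });
          return (
            <div>
              <pre>{JSON.stringify(data)}</pre>
              <p>{text}</p>
            </div>
          );
        },
      },
    },
  });
  return result.value;
}
```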
Does anyone know how to improve responses when using the OpenAI API? It seems like the ChatGPT web app gives much better results. The API returns very similar responses; in this case, asking "tell me a joke" gives the same answer over and over again. Otherwise a great API, btw
As always with Next: over-engineered and over-complicated. We just need four functions: streamUI, receiveUI, streamText, receiveText. Everything else is much easier to do without your helper functions.