
OpenAI Assistants in Bubble - A first look 

Launchable AI
1.4K subscribers
3.8K views

In this tutorial, we integrate the new OpenAI Assistants API into our Bubble app using API Connector; no third-party plugins needed.
We look at creating and modifying assistants, uploading files, starting threads, and running agents.
This was my first look at the API, so there are some rough spots along the way.
Leave a comment below if you want to see more content like this, or leave suggestions for topics you'd like to see covered!
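
For reference, each call set up in API Connector is a plain HTTP request to the Assistants endpoints. Here is a minimal Python sketch of the same requests (the model name and header values are assumptions based on the Assistants beta, not taken from the video):

```python
# Rough equivalents of the API Connector calls: create an assistant, start a
# thread, add a message, and run the assistant on that thread.
import os
import requests

BASE = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v1",  # required while Assistants is in beta
}

# 1. Create an assistant (model name here is an assumption)
assistant = requests.post(f"{BASE}/assistants", headers=HEADERS, json={
    "model": "gpt-4-1106-preview",
    "name": "Bubble demo assistant",
    "instructions": "Answer questions about the uploaded files.",
    "tools": [{"type": "retrieval"}],
}).json()

# 2. Start a thread and add the user's message to it
thread = requests.post(f"{BASE}/threads", headers=HEADERS, json={}).json()
requests.post(f"{BASE}/threads/{thread['id']}/messages", headers=HEADERS, json={
    "role": "user",
    "content": "What does the attached document say about pricing?",
})

# 3. Run the assistant on the thread; poll the run id until it completes
run = requests.post(f"{BASE}/threads/{thread['id']}/runs", headers=HEADERS, json={
    "assistant_id": assistant["id"],
}).json()
print(run["id"], run["status"])
```

In Bubble, each of these becomes one API Connector call with the same URL, headers, and JSON body.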

Science

Published: 13 Nov 2023

Comments: 29
@PubgSpeed-wl8yo 7 months ago
Bro, thanks for the tutorials; you are the only one on YouTube who has studied this topic in depth. Keep it up, you have no competition. Trust me, I've spent two months gathering information about artificial intelligence and connecting it to websites and apps, and there are only a few people like you; not everyone goes as deep as you do.
@LaunchableAI 5 months ago
Thanks for the kind words!
@malekaimischke2444 8 months ago
Thank you very much for making this video @Launchable AI. Seeing how you use and think about these APIs is really helpful (particularly for largely non-technical folks like me). Appreciate it!
@syhintl 8 months ago
Thanks for the great content! Particularly the parts on uploading files and creating an assistant file. Would like to see a deep dive on creating multiple assistant files from the Bubble frontend soon!
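
On the file side, the two calls from the video map to one multipart upload and one JSON request. A minimal sketch assuming the Assistants v1 endpoints (the file path and assistant id are placeholders):

```python
# Upload a file with purpose=assistants, then attach it to an existing assistant.
import os
import requests

BASE = "https://api.openai.com/v1"
AUTH = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v1",
}

# 1. Upload the file (multipart form data, not JSON)
with open("pricing.pdf", "rb") as f:  # placeholder file
    uploaded = requests.post(
        f"{BASE}/files",
        headers=AUTH,
        data={"purpose": "assistants"},
        files={"file": f},
    ).json()

# 2. Create an assistant file, i.e. attach the upload to an assistant
assistant_id = "asst_abc123"  # placeholder id
requests.post(
    f"{BASE}/assistants/{assistant_id}/files",
    headers={**AUTH, "Content-Type": "application/json"},
    json={"file_id": uploaded["id"]},
)
```

Creating multiple assistant files from the Bubble frontend would just repeat step 2 in a workflow, one call per uploaded file id.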
@luminrabbit9488 8 months ago
Fantastic Video, Thank You!
@user-sv9hj6gf2s 6 months ago
Well done folk, you are a legend!
@vcapp. 8 months ago
Great explanatory video @LaunchableAI - Thank you
@LaunchableAI 8 months ago
Glad it was helpful!
@MrJackywong8712 8 months ago
Great sharing. Thanks
@sitedev 8 months ago
Thanks for making this video - I would have struggled over the whole 'threads' bit. It makes sense now.
@LaunchableAI 8 months ago
Glad it was helpful!
@sitedev 8 months ago
@@LaunchableAI I've been experimenting with RAG quite a bit (Bubble/Pinecone/Flowise). I see these assistants only allow a maximum of 20 files attached to a given assistant, and I believe each file has a maximum of 100k characters. My initial thinking is that the implementation of RAG inside an assistant isn't ideal, in that there doesn't appear to be any method of controlling or directing the retrieval process (compared to Pinecone with metadata, for instance). I'm keen to know your thoughts on this. I'm tending toward experimenting with creating a tool the assistant can use, where it 'hands off' the user's queries to a 'Pinecone tool' along with a prompt explaining the tool's role in the whole RAG process (it simply returns relevant chunks and document references), which the assistant then uses to synthesise the response as in a typical RAG pipeline.
@LaunchableAI 8 months ago
@@sitedev You make some excellent points. I was talking to a client about exactly this just this morning. We've built a bunch of Pinecone-based storage and pre-processing pieces, and are thinking about the Assistants + Files APIs as replacements. My thoughts currently are in line with yours: there's not quite enough flexibility in the current OpenAI options for some more complex use cases (e.g., we're doing database and S3 retrievals with LangChain and passing results to Pinecone; that sort of thing isn't an option yet, and I can imagine other cases that wouldn't work either). That said, I suspect they'll increase the file limit and the per-file size limit over time, so perhaps it's not the right option for some projects now but will be viable soon enough. There's also some concern about vendor lock-in: if you do all your data storage and indexing via OpenAI, it becomes tougher to use other models or platforms, so depending on your industry and use case, that's something else to keep in mind. And lastly, your point about passing some helper context to the Assistants API and using Pinecone as a "tool" is probably an excellent idea. I hadn't thought of turning Pinecone into a tool that the assistant could call directly, but it sounds like a topic that's ripe for a tutorial video ;) If you try it, please let us know how it goes!
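
A rough sketch of the "Pinecone tool" idea from this exchange, assuming the Assistants v1 function-calling flow; query_pinecone() is a hypothetical stand-in for your own embedding and index-lookup code:

```python
# The assistant declares a function tool; when a run pauses with
# requires_action, we answer the tool call with Pinecone results and resume.
import json
import os
import time

import requests

BASE = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v1",
}

# Tool definition passed in "tools" when creating the assistant
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": "Return document chunks relevant to the user's question.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def query_pinecone(query: str) -> str:
    """Hypothetical helper: embed the query, hit your Pinecone index,
    and return the top chunks joined into one string."""
    raise NotImplementedError

def run_with_pinecone_tool(thread_id: str, run: dict) -> dict:
    # Poll the run; whenever it requires action, supply the tool output.
    while run["status"] in ("queued", "in_progress", "requires_action"):
        if run["status"] == "requires_action":
            calls = run["required_action"]["submit_tool_outputs"]["tool_calls"]
            outputs = []
            for call in calls:
                args = json.loads(call["function"]["arguments"])
                outputs.append({
                    "tool_call_id": call["id"],
                    "output": query_pinecone(args["query"]),
                })
            run = requests.post(
                f"{BASE}/threads/{thread_id}/runs/{run['id']}/submit_tool_outputs",
                headers=HEADERS,
                json={"tool_outputs": outputs},
            ).json()
        else:
            time.sleep(1)
            run = requests.get(
                f"{BASE}/threads/{thread_id}/runs/{run['id']}",
                headers=HEADERS,
            ).json()
    return run
```

The assistant then synthesises its reply from whatever the tool returns, which matches the typical RAG flow described above.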
@charliekelland7564 6 months ago
Great content, thank you. I don't need this yet but may well do at some point and it's good to know it's here. I'm currently using a plugin but I don't think it does everything I'm going to need so... thanks again - subbed 👍
@LaunchableAI 5 months ago
Glad it was helpful!
@guillaume6761 8 months ago
Cool!
@link0171 24 days ago
Incredible, your teaching style is really great. I'm from Brazil and sometimes it's a bit hard to follow the video, but with patience I manage to understand it well. I'm not experienced with APIs, but I wanted to know whether it's possible to build this whole system integrated with n8n; would that use fewer WU? And then how would I connect it to Bubble?
@OutTitan 7 months ago
Hey Korey, thanks so much for the video. I don't know if the documentation has changed or something, but when I try to use the "Get Threads" endpoint like you showed in the video, I'm hit with this error: "error": { "message": "Your request to GET /v1/threads must be made with a session key (that is, it can only be made from the browser). You made it with the following key type: secret.", "type": "invalid_request_error", "param": null, "code": "missing_scope" }. But when I pass in the thread id, it works fine.
@LaunchableAI 7 months ago
Yeah, I ran into this problem too and spent a while figuring it out. I initially thought the call had to be made client-side, so I tried that. Later on in the tutorial (it might be in part 2), I think I discuss that you can't actually make the GET threads request; it's not supported. You need to store your thread IDs on your own and use those. Last I looked, there was an open issue on OpenAI's developer/community forum with people discussing (ahem, complaining about) this.
@lukekoletsios3236 4 months ago
@@LaunchableAI The same issue still exists :( Just gonna continue with the video and hope you say how to fix it lol
@lukekoletsios3236 3 months ago
I think I fixed the issue. Simply change it from GET to POST.
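
For anyone hitting the same error: listing threads (GET /v1/threads) isn't available with a secret key, and switching the method to POST creates a new thread rather than listing existing ones. The workable pattern is the one described above: create the thread, store its id yourself (for example on a Bubble Thing), and fetch it later by id. A minimal sketch:

```python
# Create a thread, keep its id in your own database, retrieve it by id later.
import os
import requests

BASE = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v1",
}

# POST /v1/threads creates a new, empty thread and returns its id
thread_id = requests.post(f"{BASE}/threads", headers=HEADERS, json={}).json()["id"]

# ...save thread_id wherever you keep state (a Bubble Thing, a database row)...

# GET /v1/threads/{id} works fine with a secret key once you have the id
thread = requests.get(f"{BASE}/threads/{thread_id}", headers=HEADERS).json()
print(thread["id"], thread["created_at"])
```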
@Olwen89 3 months ago
Thanks for the video! Wondering if you have an updated video on how to stream the responses from OpenAI (it seems there are some recent updates that allow streaming).
@LaunchableAI 28 days ago
Yep, the latest plugin versions and recent tutorial videos cover streaming. I may also release a tutorial on how to build streaming from scratch.
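
For the from-scratch route mentioned here: at the time of writing, the Assistants API accepts "stream": true when creating a run and returns server-sent events. A rough sketch (the thread and assistant ids are placeholders, and the beta header value is an assumption about the current API version):

```python
# Stream a run's text deltas as server-sent events and print them as they arrive.
import json
import os

import requests

BASE = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",  # assumed current Assistants version
}

thread_id = "thread_abc123"    # placeholder: an existing thread with a user message
assistant_id = "asst_abc123"   # placeholder: an existing assistant

with requests.post(
    f"{BASE}/threads/{thread_id}/runs",
    headers=HEADERS,
    json={"assistant_id": assistant_id, "stream": True},
    stream=True,
) as resp:
    event = None
    for raw in resp.iter_lines():
        if not raw:
            continue
        line = raw.decode("utf-8")
        if line.startswith("event:"):
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:") and event == "thread.message.delta":
            payload = json.loads(line.split(":", 1)[1].strip())
            for part in payload["delta"]["content"]:
                if part["type"] == "text":
                    print(part["text"]["value"], end="", flush=True)
```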
@guillaume6761 8 months ago
Is that the start of a series?
@LaunchableAI 8 months ago
If there's interest in expanding, maybe I'll make some more on the topic, sure.
@kashishvarshney2225 7 months ago
I want to create a chatbot with dynamic data and gpt-3.5. How can I do that with Bubble? Please reply.
@LaunchableAI 5 months ago
You can try using a plugin. Our plugin "ChatGPT Toolkit" has various functions for extracting text from files and websites; maybe that would do the trick? It's a paid plugin ($10/mo), but you may be able to find some free alternatives if that price is too high. If you don't want to use a plugin, you'll probably want to find an API that can accept files or websites and return text. Then you'd pass that content to ChatGPT when you make a request, as in the sketch below.
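
A plugin-free sketch of that approach, for reference: extract the dynamic text however you like, then pass it as context in an ordinary chat completion. extract_text() below is a hypothetical placeholder for whatever extraction API or service you choose:

```python
# Pass freshly extracted text to gpt-3.5-turbo as context for each question.
import os

import requests

def extract_text(source_url: str) -> str:
    """Hypothetical placeholder: call a text-extraction API and return plain text."""
    raise NotImplementedError

def ask(question: str, source_url: str) -> str:
    context = extract_text(source_url)
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": f"Answer the user using only this content:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
    )
    return resp.json()["choices"][0]["message"]["content"]
```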
@user-sv9hj6gf2s 6 months ago
Just one detail: GPT-3 is not compatible, and a GPT-4 subscription plan costs a fortune. USD 22 is too much to run assistants.
@LaunchableAI 5 months ago
Yep, it's kind of pricey, esp. if you're outside North America or Europe. I use gpt-4 pretty much every day for my work, so it's worth it for me, but I can see that it wouldn't be worth it if you're only using it occasionally or only for this one feature.