
Berlin Unstructured Data Meetup, September 5, 2024

Zilliz

🎥 Once a month, we'll meet, socialize, and hear speakers present topics on unstructured data and generative AI. This event was sponsored by Zilliz. Thank you, Hello Fresh, for hosting us!
Timeline:
00:06 - Introduction to Milvus/Zilliz by Stephen Batifol
04:18 - Welcome from Hello Fresh
08:35 - Speaker Stephen Batifol, Multi-Agent Systems with Mistral AI, Milvus and llama-agents
46:08 - Speaker Meghana Satish, The Journey of Large Language Models at GetYourGuide
01:08:21 - Speaker Bo Wang, From CLIP to JinaCLIP: General Text-Image Representation Learning for Search and Multimodal RAG
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🎥 Playlist • Unstructured Data Meetup
🖥️ Website: www.meetup.com...
X Twitter - / milvusio
🔗 Linkedin: / zilliz
😺 GitHub: github.com/mil...
🦾 Invitation to join discord: / discord
~~~~~~~~~~~~~~ MEETUP VIDEO CONTENTS ~~~~~~~~~~~~~~
1. Host & Speaker: Stephen Batifol, Developer Advocate at Zilliz
LinkedIn: / stephen-batifol
Title: Multi-Agent Systems with Mistral AI, Milvus and llama-agents
Abstract: Agentic systems are on the rise, helping developers create intelligent, autonomous systems. LLMs are becoming more and more capable of following diverse sets of instructions, making them ideal for managing these agents. This advancement opens up numerous possibilities for handling complex tasks with minimal human intervention in so many areas. In this talk, we will see how to build agents using llama-agents. We’ll also explore how combining different LLMs can enable various actions. For simpler tasks, we'll use Mistral Nemo, a smaller and more cost-effective model, and Mistral Large for orchestrating different agents.
Slides: www.slideshare...
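The abstract above describes pairing a larger orchestrator model with cheaper specialist models. As a rough illustration of that routing pattern (this is a hypothetical sketch in plain Python, not the llama-agents API shown in the talk; all names here are invented):

```python
# Minimal sketch of the orchestration pattern: a "router" stands in for
# the large model (Mistral Large in the talk) and dispatches tasks to
# cheaper specialist agents (the Mistral Nemo role). All names are
# hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]

def summarize(task: str) -> str:
    return f"summary of: {task}"

def search_docs(task: str) -> str:
    return f"top hit for: {task}"

AGENTS = {
    "summarize": Agent("summarizer", summarize),
    "search": Agent("retriever", search_docs),
}

def route(task: str) -> str:
    """Stand-in for the orchestrator LLM: pick an agent by keyword."""
    key = "search" if "find" in task.lower() else "summarize"
    return AGENTS[key].handle(task)

print(route("find the Milvus quickstart"))   # dispatched to the retriever
print(route("condense this meeting note"))   # dispatched to the summarizer
```

In a real system the `route` function would itself be an LLM call, and each agent would wrap its own model, tools, and vector store (e.g. Milvus) behind the same interface.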
2. Speaker: Meghana Satish
Title: The Journey of Large Language Models at GetYourGuide
Abstract: Integrating Large Language Models (LLMs) into our workflows at GetYourGuide has been quite the adventure. In this talk, I’ll share our experience with LLMs, focusing on the products we’ve built, the challenges we faced, and the impact on our business.
Slides: www.slideshare...
3. Speaker: Bo Wang
Title: From CLIP to JinaCLIP: General Text-Image Representation Learning for Search and Multimodal RAG
Abstract: CLIP (Contrastive Language-Image Pretraining) is commonly used to train models that can connect images and text by representing them as vectors in the same embedding space. These models are crucial for tasks like multimodal information retrieval, where you need to search and match across both images and text. However, when it comes to purely text-based tasks, CLIP models don’t perform as well as models that are specifically built for text. This causes inefficiencies because current systems often need to maintain separate models and embeddings for text-only and multimodal tasks, which adds complexity. In this talk, Bo will explain the multi-task contrastive training scheme behind JinaCLIP, discuss the modality gap between different data types, and introduce JinaCLIP V2, our latest and most capable multilingual multimodal embedding model.
Slides: www.slideshare...
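For context on the contrastive training the abstract mentions: CLIP-style models are trained with a symmetric cross-entropy (InfoNCE) loss over an image-text similarity matrix, pulling matched pairs together and pushing mismatched ones apart. A simplified NumPy sketch of that standard objective (JinaCLIP's multi-task scheme adds text-text training on top of this):

```python
# Sketch of the symmetric contrastive loss used to train CLIP-style
# models: matched image/text pairs sit on the diagonal of a cosine
# similarity matrix and are treated as the "correct" class in a
# cross-entropy over each row and column.

import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_loss(img_emb: np.ndarray, txt_emb: np.ndarray,
              temperature: float = 0.07) -> float:
    """Symmetric cross-entropy over the image-text similarity matrix."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature        # (N, N) scaled cosine sims
    labels = np.arange(len(logits))           # i-th image matches i-th text

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the image->text and text->image directions
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

emb = np.eye(4, 8)                 # 4 orthonormal toy "embeddings" of dim 8
print(clip_loss(emb, emb))         # matched pairs -> loss near 0
print(clip_loss(emb, emb[::-1]))   # shuffled pairs -> large loss
```

Because text-only retrieval never exercises the image tower, a model trained only on this image-text objective tends to lag text-specialized embedding models on text tasks, which is the gap the talk addresses.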

Published: Oct 3, 2024
