Lucidate
A channel focused on applications of Artificial Intelligence and DeFi to Capital Markets.

Software used:
Audio: GarageBand
Video Editing: FCPX
Animations: Manim
No code AI: Orange Data Science

Ambient Music: Muzyka Do Medytacji
pixabay.com/users/relaxingtime-17430502/
'Cosmic Bloom'. Available under a Creative Commons non-commercial, no-derivatives license. (c) Ketsa. freemusicarchive.org/music/Ketsa/cosmic-blossom




Witness AI Magic: Risk Insights in Seconds
21:40
10 months ago
Build your own Finance AGI!
16:21
1 year ago
Comments
@atulbhardwaj90
@atulbhardwaj90 13 days ago
You are leagues apart when it comes to explaining complex concepts! Thanks and please never stop :)
@lucidateAI
@lucidateAI 13 days ago
Thank you for your kind words and compliments. Greatly appreciated!
@may4081
@may4081 13 days ago
This is an underrated video. Matching the solution to the right use case and perhaps the right combinative strategy towards a use case are the most interesting/promising approaches IMO.
@lucidateAI
@lucidateAI 13 days ago
Thanks @may4081. Glad you found the video informative and the content compelling. Greatly appreciated!
@Mari_Selalu_Berbuat_Kebaikan
@Mari_Selalu_Berbuat_Kebaikan 26 days ago
Let's always do a lot of good ❤ Nam myoho renge kyo
@lucidateAI
@lucidateAI 26 days ago
@@Mari_Selalu_Berbuat_Kebaikan Agreed
@HuwAllen
@HuwAllen 29 days ago
This could be so valuable. I start the day going through my email inbox, and because it's such a cumbersome, disorganised process, it can take up to 3 hours to complete (well, 'practically complete', to use a construction term). The depressing thought is that I have to action a proportion of them in the afternoon and, like Groundhog Day, start the whole process again the next day.
@lucidateAI
@lucidateAI 29 days ago
Glad you found it useful! Appreciate the comment. GenAI can add huge value here. Right now you have to ‘roll your own’ but my assumption is that this type of technology will soon be baked into all the popular email apps.
@ahishverma181
@ahishverma181 1 month ago
Hi, I built a similar product for a client. But what should I charge them?
@lucidateAI
@lucidateAI 29 days ago
What is it worth to them? What is their next best alternative to what you have built? If you know the answers to these questions then pricing for any product is straightforward. Without answers to these questions you are just guessing.
@Ony_mods
@Ony_mods 1 month ago
It would be better without animations from the 90s, to be honest.
@lucidateAI
@lucidateAI 1 month ago
Nice!
@The...0_0...
@The...0_0... 1 month ago
This was great thanks 🎉
@lucidateAI
@lucidateAI 1 month ago
Glad you found it useful.
@ocin3055
@ocin3055 1 month ago
Thanks, that was super helpful!
@lucidateAI
@lucidateAI 1 month ago
Glad to hear you found it useful!
@zengxiliang
@zengxiliang 1 month ago
Wow this is stunning ❤
@lucidateAI
@lucidateAI 1 month ago
Glad you found it useful! Do you have additional use-cases of your own for this text-to-code (in this case text-to-SQL) approach?
@zengxiliang
@zengxiliang 1 month ago
@@lucidateAI Absolutely, I am focused on building AI agents that automate BI dashboard creation, currently working in the private equity fund investments space.
@NithinDinesh-l3h
@NithinDinesh-l3h 1 month ago
What an awesome video. Probably the best video on the internet for positional encodings. Loved every bit of it.
@lucidateAI
@lucidateAI 1 month ago
Glad you enjoyed it!
@whodat8528
@whodat8528 1 month ago
What’s good
@lucidateAI
@lucidateAI 1 month ago
It’s all good
@stevenicfred
@stevenicfred 2 months ago
The Lucidate series are excellent - so rare to find the combination of Richard's deep understanding harnessed with his excellent communication skills. Highly recommended to follow.
@lucidateAI
@lucidateAI 2 months ago
That is extremely kind of you to say so. I’m glad you are enjoying the materials and finding them useful. Any suggestions for material that Lucidate hasn’t covered yet?
@anthonyzeal6263
@anthonyzeal6263 2 months ago
Most cop out and assume there's enough content on step one for people to look into. Thanks for being comprehensive.
@lucidateAI
@lucidateAI 2 months ago
You are welcome. Appreciated. Glad you found it informative and comprehensive.
@hoangnam6275
@hoangnam6275 2 months ago
Great source of knowledge. I registered for your channel with a tier 2 membership back in the first/second quarter of 2023 because of your comprehensive knowledge for my case, and I'm going to re-register in the next few months due to job requirements. Great work 🎉, thanks for your contribution.
@lucidateAI
@lucidateAI 2 months ago
Thanks. I’m glad you find the content on the channel useful. Best wishes in utilizing AI productively in your job.
@thegoldenvoid
@thegoldenvoid 2 months ago
Really well explained! Thanks!
@lucidateAI
@lucidateAI 2 months ago
Glad you enjoyed it!
@gmax876
@gmax876 2 months ago
😐
@CharlesOkwuagwu
@CharlesOkwuagwu 2 months ago
I left a comment on the provided repo. I've been unable to extend it for classes beyond NEGATIVE, POSITIVE. It seems the 'distilbert-base-uncased-finetuned-sst-2-english' model is designed for just these two classes.
@lucidateAI
@lucidateAI 2 months ago
You are correct. The classification head on this model is for binary classification. At around 14:00 in this video, one of the suggestions for extending this app is to look at a classification head for multiple classes. This discussion thread on HF has some links that you might find useful: discuss.huggingface.co/t/multilabel-classification-using-llms/79671. If you are happy to share your results and experiences I'd love to hear how you get on.
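For anyone following this thread, here is a minimal sketch of swapping in a multi-class head with Hugging Face Transformers. The base checkpoint and the number of labels are placeholders for illustration, not the exact setup from the video.

```python
# Hypothetical sketch: load a DistilBERT base model with a fresh multi-class
# classification head instead of the binary SST-2 head. num_labels is an
# assumption - set it to however many classes your dataset actually has.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"   # base model, not the SST-2 fine-tune
num_labels = 5                           # placeholder class count

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=num_labels,               # builds a new, randomly initialised head
)

# The new head is untrained, so the model must be fine-tuned on labelled
# multi-class data (e.g. with the Trainer API) before its predictions mean anything.
```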
@CharlesOkwuagwu
@CharlesOkwuagwu 2 months ago
@@lucidateAI Seen the discussion. Clearly text classification is the domain of encoder-based models, or BERT-like models. I'll keep searching. But I'm studying the Hugging Face tutorials from scratch; I'll revisit this once I have a good grounding in the Hugging Face pipelines and their intended use from the ground up.
@lucidateAI
@lucidateAI 2 months ago
Makes sense. I think the HF YT tutorials are excellent. Time well spent.
@CharlesOkwuagwu
@CharlesOkwuagwu 2 months ago
Hi, please can you tell us about the effects of imbalanced data on fine-tuning?
@lucidateAI
@lucidateAI 2 months ago
Hi @CharlesOkwuagwu. Clearly each model and training set will have its own idiosyncrasies, so it is naturally impossible to say for certain, but you would expect to see bias in the results at inference time, where the model performs well on the majority data that it has seen but poorly on the minority classes. It will also generalise poorly to real-world unbiased examples, as the model has been trained on biased data that does not reflect the distribution in the real world. The performance metrics will also likely be skewed - check out this video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-a2oZwdwo0M0.html on performance metrics, which explains how metrics like accuracy can be misleading. In imbalanced datasets, a model can achieve high accuracy by simply predicting the majority class most of the time. This does not mean the model is performing well on all classes. More informative metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (ROC-AUC) should be used to evaluate the model's performance more effectively.
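To make the accuracy point above concrete, here is a small illustrative sketch (assuming scikit-learn is available; the 95/5 split and labels are invented purely for illustration) showing how a degenerate "always predict the majority class" model scores high accuracy while collapsing on precision, recall and F1 for the minority class.

```python
# Illustrative only: a degenerate classifier that always predicts the majority
# class (0) on a 95/5 imbalanced label set.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = np.array([0] * 95 + [1] * 5)   # 95% majority, 5% minority
y_pred = np.zeros_like(y_true)          # always predict the majority class

print("accuracy:", accuracy_score(y_true, y_pred))   # 0.95 - looks great
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[1], zero_division=0
)
print("minority precision/recall/F1:", prec[0], rec[0], f1[0])  # all 0.0
```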
@CharlesOkwuagwu
@CharlesOkwuagwu 2 months ago
@@lucidateAI Thanks for the response. I have a curated dataset of customer chats, each labeled by a human, with a total of 130 classes. The number in each class ranges from 5 to over 6,000, and there are over 30,000 records in total. I'm trying to see if maybe I should use an LLM to synthesize chat samples where we have fewer real samples, to get a better balance. Your thoughts? 🤔💭
@lucidateAI
@lucidateAI 2 months ago
I guess the first question to ask is: "Are the statistics of the dataset representative of the real world?" (And I'm certain that you have already posed that question!!) Clearly if they are, then there is little value in generating synthetic data. If not, then I'd first ask whether there is a way to get a more representative dataset before generating synthetic data. While generating synthetic data is a common approach, and with the right controls reasonably safe, I'm leery of it. The problem is that AI models will find patterns, whether there is a pattern there or not. If there are biases in the synthetic data production that introduce artificial artefacts into the synthetic dataset, then the LLM (or frankly any other AI/ML system) will almost always "discover" them. This can massively contaminate performance during inference. In capital markets many firms generate synthetic prices, and unless you are very careful, models trained on synthetic prices perform poorly on real-world data. Then consider LLMs themselves, trained on vast corpora of data sourced from the public Internet. At first it is a reasonable assumption that the idioms of language they were learning were genuine human language. As more and more content becomes LLM-generated, there is clearly a chance that all LLMs learn is "LLM-ese". So a long-winded answer (but you did ask!) is to use synthetic data as a last resort and be very careful with its construction. Far better to try and source a representative dataset if you can. Good luck!
@BABEENGINEER
@BABEENGINEER 2 months ago
How do you generate these graphics/animations in your videos? They're too good!
@lucidateAI
@lucidateAI 2 months ago
Thanks @BABEENGINEER. I use Manim (MAthematical ANIMation) docs.manim.community/en/stable/tutorials/quickstart.html. This video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-WoittT72pgA.htmlsi=euGXRqlxBjGuaa1e at 8:51 shows how I've fine-tuned an LLM, CodeLlama in this case, to help write the classes that produce the animations a little faster (and in some cases much better!) than I can craft by hand.
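For reference, here is a minimal Manim scene along the lines of the community quickstart linked above; the scene name and shapes are arbitrary examples, not animations from the Lucidate videos.

```python
# Minimal Manim (community edition) sketch, loosely following the quickstart.
# Render with: manim -pql scene.py CircleToSquare
from manim import Scene, Circle, Square, Create, Transform, PINK

class CircleToSquare(Scene):
    def construct(self):
        circle = Circle()                    # create a circle mobject
        circle.set_fill(PINK, opacity=0.5)   # give it a translucent fill
        square = Square()                    # target shape
        self.play(Create(circle))            # animate drawing the circle
        self.play(Transform(circle, square)) # morph the circle into a square
        self.wait()                          # hold the final frame
```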
@lucidateAI
@lucidateAI 2 months ago
Did you get a chance to check out manim?
@BABEENGINEER
@BABEENGINEER 2 months ago
Really good explanation and the background music kept my focus flowing 👏👏👏
@lucidateAI
@lucidateAI 2 months ago
Glad you found it insightful!
@nickwang4777
@nickwang4777 2 months ago
Does this project have a GitHub repo?
@lucidateAI
@lucidateAI 2 months ago
Hi @nickwang4777. No it does not.
@volkerengels5298
@volkerengels5298 3 months ago
Without seeing the video yet -> *creates real problems for the world*. Rob Miles isn't an idiot. Seen it now: "More CO2 is good for plants" is similarly half-true.
@lucidateAI
@lucidateAI 3 months ago
I’m probably guilty here of a misleading title. In this video the “problems” that are being solved are problems that pertain only to the Big Tech companies themselves. A more representative title might be “Unleashing the Power of Machine Learning: How Big Tech Solves _their own_ Real-World Problems.” But this is perhaps a bit too long. Or have I misunderstood the point you are making?
@volkerengels5298
@volkerengels5298 3 months ago
@@lucidateAI In short - my dog wants out :) "their own" is more fitting. I also think it makes sense to make people realize that AI, ML can actually work, make money and solve problems. On the really grim side: Have you seen Rob Miles' new video? It's worth it... We cannot live in a world where 5 companies hold all the power and money - with no control whatsoever. We cannot implement AI in THIS world in peace. imo Thanks
@lucidateAI
@lucidateAI 3 months ago
Thanks. I’ve not yet seen Rob Miles’ new video. I’ll check it out.
@adamelkhanoufi6126
@adamelkhanoufi6126 3 months ago
Is there any chance you can share the source code for your Streamlit app? I've been looking to create my own LLM benchmarking tool on Streamlit as well, and when I saw you pull out your benchmarking app I got super excited. But unfortunately there's no link in the description :(
@lucidateAI
@lucidateAI 3 months ago
github.com/mrspiggot/LuciSummarizationApplication With thanks and apologies. I've just updated the description. Enjoy the repo!
@adamelkhanoufi6126
@adamelkhanoufi6126 3 months ago
@@lucidateAI No, thank you for the rapid response. You sir just earned another subscriber👍
@lucidateAI
@lucidateAI 3 months ago
Thanks! I hope you enjoy the other videos on the channel as much as this one.
@lucidateAI
@lucidateAI 2 months ago
How have you got on with the code in the repo? Have you been able to use it as a platform to add your own functionality?
@Barc0d3
@Barc0d3 1 month ago
@@lucidateAI❤
@hoangnam6275
@hoangnam6275 3 months ago
Nice to have u back
@lucidateAI
@lucidateAI 3 months ago
Did you miss me?
@hoangnam6275
@hoangnam6275 2 months ago
@@lucidateAI Your videos provide comprehensive knowledge in this field with technical details, and your viewers must be people who are very passionate about AI.
@lucidateAI
@lucidateAI 2 months ago
Thanks!
@pmatos0071
@pmatos0071 3 months ago
Great video, thank you for the share
@lucidateAI
@lucidateAI 3 months ago
Glad you enjoyed it
@Eltaurus
@Eltaurus 3 months ago
10:15 - This is not true, though. Euclidean distance does not only depend on the lengths of the vectors added, but also on the angles between the added encoding vectors and the original embedding vectors, which won't be the same if words are swapped. That can easily be checked with a direct computation. In the first case the distance between the vectors corresponding to the words "swaps" and "are" is equal to
√[(-35.65-19.66)² + (59.47+61.65)² + (35.25-34.55)² + (-21.78-88.36)² + (33.44-50.35)²] = 173.627
while in the second case it equals
√[(-36.65-20.66)² + (60.47+62.65)² + (35.25-34.55)² + (-21.78-88.36)² + (33.44-50.35)²] = 175.671
So with one-hot positional encoding the distances depend on the positions of words in a sentence just as well. The reason for not using one-hot encodings for positions is actually a completely different one.
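For anyone who wants to reproduce the arithmetic in the comment above, here is a short check (numpy assumed); the per-dimension values are copied directly from that comment, not recomputed from the video's embeddings.

```python
# Reproduces the two Euclidean distances quoted in the comment above,
# using the per-dimension differences given there.
import numpy as np

# "swaps" vs "are" with the words in their original positions
diff_case_1 = np.array([-35.65 - 19.66, 59.47 + 61.65, 35.25 - 34.55,
                        -21.78 - 88.36, 33.44 - 50.35])
# the same pair of words after they are swapped
diff_case_2 = np.array([-36.65 - 20.66, 60.47 + 62.65, 35.25 - 34.55,
                        -21.78 - 88.36, 33.44 - 50.35])

print(np.linalg.norm(diff_case_1))  # ~173.63
print(np.linalg.norm(diff_case_2))  # ~175.67
```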
@ricardofernandez2286
@ricardofernandez2286 3 months ago
Hi, this is the first time I've watched one of your videos, and I've found your explanations mind-opening. In this video you mention other videos that are recommended in order to better understand some complex concepts. I searched your channel for a sort of "series" but I could not find one that glues all these videos together. As a newbie, however eager to learn on the topic, I was unable to determine that myself. Would you be so kind as to mention which videos, and in which order, we should watch them in order to get a comprehensive understanding of the topic, from the most basic concepts to the current state of development? It will be much appreciated!! Best regards! Ricardo
@lucidateAI
@lucidateAI 3 months ago
Thanks @ricardofernandez2286 for your kind words. I'm glad you enjoyed the video. This particular video is part of this larger playlist -> ru-vid.com/group/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY
@lucidateAI
@lucidateAI 3 months ago
You can find a list of all the Lucidate playlists here -> www.youtube.com/@lucidateAI/playlists
@lucidateAI
@lucidateAI 3 months ago
Take a look at these as well ru-vid.com/group/PLaJCKi8Nk1hzqalT_PL35I9oUTotJGq7a&si=cDgVTll8TiWNK4RV and
@ricardofernandez2286
@ricardofernandez2286 3 months ago
@@lucidateAI You deserve it!! And thank you very much for your comprehensive and fast response. I will certainly look at the playlists you recommended! Best regards!!
@lucidateAI
@lucidateAI 3 months ago
I can't wait to hear what you think!
@jayhu6075
@jayhu6075 3 months ago
What a great explanation about this topic.
@lucidateAI
@lucidateAI 3 months ago
You are welcome! Glad you enjoyed it!
@zengxiliang
@zengxiliang 3 months ago
Exciting to see the potential of specialized and enhanced LLMs!
@JoshuaCunningham-vg7xg
@JoshuaCunningham-vg7xg 3 months ago
Agreed!
@lucidateAI
@lucidateAI 3 months ago
Me too and I think that the potential is going to increase exponentially. Appreciate the comment as well as your membership and subscription. Richard.
@lucidateAI
@lucidateAI 3 months ago
Glad you agree with @zengxiliang. I agree too (naturally...). Are there any areas of focus you are interested in? Mine is predominantly capital markets - which is probably evident. But in my consulting business I see interest from a wide variety of industry sectors outside of finance, and I'm always curious and excited to see where people are utilising generative and agentic AI. Appreciate the comment and the support of the channel. Richard
@zengxiliang
@zengxiliang 3 months ago
@@lucidateAI Thanks Richard! I work at a pension fund, we are actively exploring applications of LLMs now, your content is very inspiring and helpful!
@lucidateAI
@lucidateAI 3 months ago
Glad you are finding the material useful.
@joshuacunningham7912
@joshuacunningham7912 3 months ago
So good! Thank you for educating in a way that’s easy to understand. 👏
@lucidateAI
@lucidateAI 3 months ago
You are welcome. Delighted you found the content useful.
@Blooper1980
@Blooper1980 3 months ago
CANT WAIT!!!!!!!
@lucidateAI
@lucidateAI 3 months ago
Glad you found it useful. Videos 2 and 3 are already complete and should be on general release next week. (Currently they are available to Lucidate members at the VP, MD or CEO levels.) I'm just finishing off the LoRA video as I type. That should be out the week after next. Appreciate the support and I hope you found the content insightful.
@AbdennacerAyeb
@AbdennacerAyeb 3 months ago
You are a gem. Thank you for sharing knowledge.
@lucidateAI
@lucidateAI 3 months ago
Thanks @AbdennacerAyeb! Greatly appreciated. I'm glad you enjoyed the video!
@jon4
@jon4 3 months ago
Another great video. Really looking forward to this series
@lucidateAI
@lucidateAI 3 months ago
You are welcome. Really glad you found it useful.
@encapsulatio
@encapsulatio 4 months ago
Which LLM of all you have tested up to now (in general, not only the ones you talked about in this video) is currently the best at breaking down university-level subjects using pedagogical tools? If I ask the model to read 2-3 books on pedagogical tools, can it properly learn how to use these tools and actually apply them to explain the subjects more clearly and effectively?
@lucidateAI
@lucidateAI 4 months ago
This video is focused on which models perform best at generating source code (that is to say Java, C++, Python etc.). On the other hand, the subject of this video -> Text Summarisation Showdown: Evaluating the Top Large Language Models (LLMs) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8r9h4KBLNao.html is text generation/translation/summarization etc. Perhaps the other video is more what you are looking for? In either event the key takeaway is this: by all means rely on public, published benchmarks. But if you want to evaluate models on your specific use-case (and if I correctly understand your question, I think you do) then it might be worth considering setting up your own tests and your own benchmarks for your own specific evaluation. Clearly there is a trade-off here. Setting up custom benchmarks and tests isn't free. But if you understand how to build AI models, then it isn't that complex either.
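As an illustration of what such custom tests might look like, here is a minimal, hypothetical harness sketch. The `call_model` and `score` functions, and the model list, are placeholders for whatever API client and evaluation metric you actually use; nothing here is taken from the video.

```python
# Hypothetical benchmark harness: run the same task prompts through several
# models and score the outputs with your own metric.
from typing import Callable, Dict, List

def run_benchmark(models: List[str],
                  prompts: List[str],
                  call_model: Callable[[str, str], str],
                  score: Callable[[str, str], float]) -> Dict[str, float]:
    results = {}
    for model in models:
        total = 0.0
        for prompt in prompts:
            output = call_model(model, prompt)   # e.g. an OpenAI/Anthropic/local call
            total += score(prompt, output)       # e.g. ROUGE, exact match, a rubric
        results[model] = total / len(prompts)    # mean score per model
    return results
```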
@encapsulatio
@encapsulatio 4 months ago
@@lucidateAI I reformulated a bit my inquiry since it was not clear enough. Can you read it again please?
@lucidateAI
@lucidateAI 4 months ago
Thanks for the clarification. The challenge with reading 2 or 3 books will be the size of the LLM's context window (the number of tokens that can be input at once). Solutions to this involve using vector databases - example here -> ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-jP9swextW2o.html This involves writing Python code and using development frameworks like LangChain. You may be an expert at this, in which case I'd recommend some of the latest Llama models and GPT-4. Alternatively you can use Gemini and Claude 3 and feed in sections of the books at a time (up to the token limit of the LLM). These models tend to perform the best when it comes to breaking down complex, university-level subjects. They seem to have a strong grasp of pedagogical principles and can structure explanations in a clear, easy-to-follow manner. That said, I haven't specifically tested having the models read books on pedagogical tools and then applying those techniques. It's an interesting idea though! Given the understanding these advanced models already seem to have, I suspect that focused training on pedagogical methods could further enhance their explanatory abilities. My recommendation would be to experiment with a few different models, providing them with sample content from the books and seeing how well they internalize and apply the techniques. You could evaluate the outputs to determine which model best suits your needs.
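To illustrate the "feed in sections of the books at a time" route, here is a rough chunking sketch. The 4-characters-per-token rule of thumb and the chunk size are assumptions for illustration, not exact token counts; a real tokenizer would be more precise.

```python
# Rough sketch: split a long text into chunks that should fit within a model's
# context window. Uses a crude ~4 characters-per-token heuristic rather than
# a real tokenizer, so treat the sizes as approximate.
from typing import List

def chunk_text(text: str, max_tokens: int = 3000, chars_per_token: int = 4) -> List[str]:
    max_chars = max_tokens * chars_per_token
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)           # close the current chunk
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be sent to the LLM in turn, e.g.
# for chunk in chunk_text(book_text): ask_the_model_about(chunk)
```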
@sandstormfeline3664
@sandstormfeline3664 4 months ago
I was looking for a video to help get my head around tree of thought with a working example, and I found it. Great work, thanks :)
@lucidateAI
@lucidateAI 4 months ago
You are very welcome. I’m glad you found it insightful. ru-vid.com/group/PLaJCKi8Nk1hyvGVZub2Ar7Az57_nKemzX&si=JwiUaQ-UojUXoOwA here are some other video explainers on other Prompt Engineering techniques that I hope you find equally informative.
@joshuacunningham7912
@joshuacunningham7912 5 months ago
This is one of the most underrated AI YouTube channels by far. Thanks Richard for another phenomenal video.
@lucidateAI
@lucidateAI 5 months ago
Appreciate that! Thanks! Glad you found this video and other content on the channel insightful.
@paaabl0.
@paaabl0. 5 months ago
Well, you didn't explain a thing about AutoGPT here :/
@lucidateAI
@lucidateAI 5 months ago
Sorry @paaabl0, but thanks for leaving a comment. Let me try, if I may, from another angle. The inputs and outputs of LLMs are natural language. Human text. (Yes, literally they are vectors of subword tokens, but I hope you will forgive the abstraction.) If you type text into an LLM, you get text out. AutoGPT works by using this feature of LLMs and putting an LLM into a loop. As the inputs and outputs are both natural language, you can use clever prompts to control and direct this loop. While there are many prompting techniques you can use, 'Plan & Execute' as well as 'ReAct' (REasoning & ACTion) are popular choices here. They work by first instructing the LLM to go through a sequence of steps, such as: 1 Question, 2 Thought, 3 Action, 4 Action Input, 5 Observation (repeat the previous steps until) 6 Thought == 'I now know the answer to the original question', 7 Divulge answer. See an example of this type of prompt here:

Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}

This is authored by Harrison Chase, founder of LangChain, and you can access it at the LangChain hub under 'hwchase17/react'. This is the heart of AutoGPT (and other similar attempts at AGI). By using 'input is language / output is also language / prompt the LLM into a loop where early stages are about thinking and planning, middle stages are about reasoning and action, and final stages are about conclusion and output', you achieve the type of behaviour associated with tools/projects like AutoGPT. Perhaps this different explanation helped a little, perhaps not. Clearly there are a good many great YT sites on AI, and I hope one of them is able to answer your questions around AutoGPT better than I'm able. With thanks for taking the time to comment on the video.
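A stripped-down sketch of the "LLM in a loop" idea described above is shown below. The `call_llm` and `run_tool` functions are hypothetical stand-ins for a real model API and tool dispatcher, and the prompt is abbreviated from the hwchase17/react template quoted in the reply; this is an illustration of the pattern, not AutoGPT's actual implementation.

```python
# Minimal ReAct-style agent loop sketch. `call_llm` and `run_tool` are
# hypothetical placeholders - swap in a real LLM client and real tools.
def react_loop(question: str, call_llm, run_tool, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        prompt = (
            "Answer the question using Thought/Action/Action Input/Observation steps.\n"
            f"Question: {question}\n{scratchpad}Thought:"
        )
        step = call_llm(prompt)                      # model proposes a Thought (+ Action)
        scratchpad += "Thought:" + step + "\n"
        if "Final Answer:" in step:                  # model decided it knows the answer
            return step.split("Final Answer:")[-1].strip()
        if "Action:" in step:                        # otherwise run the requested tool
            action = step.split("Action:")[-1].split("Action Input:")[0].strip()
            action_input = step.split("Action Input:")[-1].strip()
            observation = run_tool(action, action_input)
            scratchpad += f"Observation: {observation}\n"
    return "No final answer within the step limit."
```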
@SameerGilani-zy6sf
@SameerGilani-zy6sf 5 months ago
I am not able to install langchain.experimental.plan_and_execute. Can you please help me?
@joshuacunningham7912
@joshuacunningham7912 5 months ago
Dear @LucidateAI, Pay no attention to @avidlearner8117. They obviously lack a fundamental understanding of business and public social interaction. I am very appreciative of your content and always look forward to it.
@avidlearner8117
@avidlearner8117 5 months ago
OK, so you went from analysis to pushing your product on every new video? SMH...
@lucidateAI
@lucidateAI 5 months ago
Don't break your neck!
@avidlearner8117
@avidlearner8117 5 months ago
@@lucidateAI Oh, I hit a nerve. Get it?
@lucidateAI
@lucidateAI 5 months ago
Then I'd stop shaking if I were you!
@avidlearner8117
@avidlearner8117 5 months ago
@@lucidateAI You thought I was talking about my neck! Ah well.
@lucidateAI
@lucidateAI 5 months ago
And a beautiful neck it is, I'm sure! @@avidlearner8117
@DannyGerst
@DannyGerst 6 months ago
That is nice! Would you be interested in sharing the code? Your videos seem promising, but it is only talk without anything that I can play with. That would be really great!!
@lucidateAI
@lucidateAI 6 months ago
Hi Danny. Yes and no. The code for this video is not currently available, but the video in the works is a code walkthrough with a link to the GitHub repo that contains the code. However, this will be paywalled and only available to members of the Lucidate channel at the Managing Director and CEO levels. So while it will be available, it will not be "freely available" (which I think is what you were asking).
@abenjamin13
@abenjamin13 6 months ago
This is a fantastic blueprint for creating a "quality" output document 📄. I appreciate you 🫵
@markettrader911
@markettrader911 6 months ago
Good shit man
@lucidateAI
@lucidateAI 6 months ago
Thanks dude!
@banzai316
@banzai316 6 months ago
What about creating videos, or creating docs (& summaries) from videos?
@lucidateAI
@lucidateAI 6 months ago
Is this Fine Tuning GPT-3 & Chatgpt Transformers: Using OpenAI Whisper ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Qv0cHcfFHM8.html useful?
@banzai316
@banzai316 6 months ago
@@lucidateAI , yes, completely forgot about this transformer. I will look again.
@neurojitsu
@neurojitsu 6 months ago
Quick question: does Claude2 have a similar capability to turn text into a vectorised docstore in order to do what it does? If so, then is the added value of your app the eradication of the context window limit, or better 'tuning' of the workflows for this purpose, or some other magic sauce?! Trying to get my head round the value of your MD tier beyond the tools I'm learning to use at the moment. Thanks in advance.
@lucidateAI
@lucidateAI 6 months ago
Hi @neurojitsu. All LLMs use embeddings, transforming words (or more precisely sub-words called tokens) into vectors. If you are unfamiliar with this process then these videos will get you up to speed - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-6XLJ7TZXSPg.html and ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-RAIUJ3VFXmI.html. But that is different from taking a document, vectorising it and putting it into a docstore or vector database. I use FAISS in this video, but other vector databases include Pinecone, Weaviate and Chroma. So Claude2 does not natively create a docstore (nor do GPT, Gemini, Llama2, Coral etc.). This is a separate action, and you can link the docstore to the LLM using an AI framework like LangChain or AutoGen. With large context windows - currently GPT has a 125k token context window, Claude2 200k and Gemini 1MM - there are a lot of documents that you can load into the prompt of an LLM for zero-shot or one-shot learning. ZSL and OSL are simple techniques whereby you temporarily "train" an LLM with content in its prompt. Think of it like a short-term memory. So you are 100% correct: with a large enough context window you would not need to use a docstore. However, if the size of your documents in tokens exceeds the size of the context window, your LLM will "forget" some of the material. Furthermore, if you are using a chat model and repeatedly querying and questioning the corpus of data in the prompt, then the context window will fill up and again the LLM will forget some of the earlier content. Both of these problems are eliminated by using a docstore, which acts as a longer-term memory for crucial information. Whether my MD tier is worthwhile is a tough question to answer as I'm biased. Sadly the only way to find out for real if it is useful for you is to try it out. With over 7Bn people on the planet and only a tiny, tiny fraction signed up as MDs, the overwhelming vote from humanity is that the Lucidate MD tier is useless and a waste of time. So if you want to go with the herd then my advice is to avoid it like the plague. But the good news is that you can cancel at any time and only pay up to the month you have cancelled. So if you want to take a chance to find out if there are useful pieces of information in there, then my advice might be different and I'd say give it a go! What have you got to lose other than one month's subscription? (But as I said, I'm biased.) Probably not the answer you were looking for, and 100% unhelpful, but honest.
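As a rough illustration of the docstore idea described above (a FAISS index over document-chunk embeddings, queried at question time), here is a sketch. The `embed` function is a random-vector placeholder, not a real embedding model; substitute something like a sentence-transformers or OpenAI embeddings call in practice. This is not the app from the video, just the general pattern.

```python
# Sketch of a FAISS-backed docstore acting as "long-term memory".
# `embed` is a placeholder: substitute a real embedding model that returns
# float32 vectors of a fixed dimension.
import faiss
import numpy as np

def embed(texts):                       # placeholder embedding function
    rng = np.random.default_rng(0)      # random vectors, for illustration only
    return rng.random((len(texts), 384), dtype=np.float32)

chunks = ["chunk one of the document...", "chunk two...", "chunk three..."]
vectors = embed(chunks)

index = faiss.IndexFlatL2(vectors.shape[1])   # exact L2 similarity index
index.add(vectors)                            # store the chunk vectors

query_vec = embed(["what does the document say about X?"])
_, ids = index.search(query_vec, 2)           # retrieve the 2 nearest chunks
retrieved = [chunks[i] for i in ids[0]]       # these go into the LLM prompt
```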
@neurojitsu
@neurojitsu 6 months ago
@@lucidateAI many thanks for the detailed answer, that's very helpful. Happy to give your MD service a whirl I think, but it's not the cost so much as the utility/time savings in research and accelerated learning that I'm weighing up; are there other small company MDs in the community? I'm guessing frankly - since we're being honest - that I'm unlikely to become a consulting client of yours, so the value to me is also in learning from a community of others like me. And accessing help when I get stuck, plus some inspiration/guidance for how to adopt the right AI tools as things are moving so fast. My background field professionally is learning/talent and organisational change, and I'm currently researching and working on product development for my own business. I'll take a look at your tiers info...
@lucidateAI
@lucidateAI 6 months ago
Frankly I do not know what the make-up of the Lucidate membership is. But you raise a good point and perhaps I should ask a question / set up a poll on the Lucidate Discord and find out. Thanks for the motivation to set up a poll! The key benefit of being an MD over a VP is access to some sample code in private GitHub repos along with some exclusive content (largely code walkthrough videos). If your prime motivation is to learn from others in the community then the VP level grants access to the Discord; no need to be an MD. You can of course game the system a little. Join as a VP, and if you want access to the videos and code you can join as an MD for a month, clone the latest code from the repos, watch the MD-only videos and then downgrade your account to a VP at the end of the month, still retaining access to the community discussions on the Discord.
@neurojitsu
@neurojitsu 6 months ago
@@lucidateAI many thanks again! I'll give it a whirl...
@lucidateAI
@lucidateAI 6 months ago
See you on the Discord!