
How Cohere will improve AI Reasoning this year 

Machine Learning Street Talk
134K subscribers
16K views

Aidan Gomez, CEO of Cohere, reveals how they're tackling AI hallucinations and improving reasoning abilities. He also explains why Cohere doesn't use any output from GPT-4 for training their models.
Aidan shares his personal insights into the world of AI and LLMs, Cohere's unique approach to solving real-world business problems, and how their models are set apart from the competition. He describes how Cohere is making major strides in AI technology, discussing everything from last-mile customer engineering to the robustness of prompts and future architectures.
He also touches on the broader implications of AI for society, including potential risks and the role of regulation. He discusses Cohere's guiding principles and the health of the startup scene, with a particular focus on enterprise applications. Aidan provides a rare look into the internal workings of Cohere and their vision for driving productivity and innovation.
cohere.com/
x.com/aidangomez
Check out Cohere's amazing new Command R* models here
cohere.com/command
Pod: podcasters.spotify.com/pod/sh...
TOC:
00:00:00 Intro
00:01:48 Guiding principles of Cohere
00:02:31 Last mile / customer engineering
00:04:25 Prompt brittleness
00:06:14 Robustness and "delving"
00:10:12 Command R models and catch up
00:12:32 Are LLMs saturating / specialisation
00:16:11 Intelligence
00:21:28 Predictive architectures, data vs inductive priors
00:25:55 Agentic systems
00:28:11 Differentiation
00:33:35 X-Risk / Bostrom
00:39:30 Changing relationship with technology
00:45:08 Policy
00:49:01 Startup scene
00:52:44 Biggest mistake?
00:53:50 Management style
00:56:38 Culture in different Cohere offices?
Disclaimer: This is the second video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.

Science

Published: 2 Jul 2024

Comments: 89
@MachineLearningStreetTalk · 3 days ago
What is the future in the LLM enterprise space?
@snarkyboojum · 3 days ago
Probably a trough of disillusionment in about 12 months. The industry has overhyped the capability too much at this point so it will need a correction. It’ll probably be a slow correction too. I don’t see any personnel reduction happening any time soon. More business process augmentation, and human augmentation. There’ll be more humans in the process I think, not fewer. Also, more use cases with LLM and classical ML hybrids for broader intelligent apps use cases. Enterprises will be looking to quantify ROI for their initial investments and will struggle to find it. There will be more marketing around it but I don’t think boards or shareholders will be satisfied with the current marketing numbers.
@stevengill1736 · 3 days ago
I think so too; a lot of hysteria up till now, but it'll steady out. Sounds like a great outfit. Thank you kindly for the interview.
@toadlguy · 2 days ago
@@Peter-dd3br And all you have to worry about is your customers getting a little glue in their pizza 🤣
@snarkyboojum · 3 days ago
Seems like a nice fellow. Really interesting how hand-wavy he gets when you put him on the spot. For example, he reaches for "synthetic data" a lot when you push him on future model improvements. Data augmentation with synthetic data will solve all their problems, except where it won't, like the short discussion about video and physics data. He doesn't provide the detail or show the intellectual horsepower I'd expect of someone who is leading the way with their models. The answer to your question about what mistakes he has made and advice for others is another good example. His answer was 'everything'. No insights, no detail. As a result he comes across as doing the same things as every other company in the domain. Thanks for the discussion!
@XShollaj · 3 days ago
I agree with your sentiment. However, even though Aidan undoubtedly has the intellectual horsepower (he is one of the OG authors of the Transformer paper), as a CEO he cannot keep up with all the technical details; pretty sure his day consists more of executive decisions, partnerships, tons of meetings and traveling. Cohere's engineers seem solid, and their models are quite capable :)
@snarkyboojum · 3 days ago
@@XShollaj Fair point.
@Enedee007 · 3 days ago
@@XShollaj To me, Cohere is one of those foundation-model builders who have clearly defined their niche and are optimising towards it. Very developer-friendly: they allow you to test and pilot as much as you want and support you to go to production. Very competent engineers and great customer support. IMHO, they are too quiet.
@JohnSmith-gp9ut · 3 days ago
tbf neither does Elon Musk when you hear him talk
@drmartinbartos · 2 days ago
Soo... I tried using LLMs for a judgement call.. (context at end of next post.. for which you may take the below comment as a tl;dr warning..)

"YES or NO. Given the choice of posting or not posting (no further alternative actions available), is it worth posting the following YouTube comment?"

GPT4o: No. The comment is very long, detailed, and complex, which might deter readers from engaging with it. Simplifying and focusing on key points could make it more effective.

Claude 3.5: NO. While your comment contains thoughtful analysis and interesting points, it is excessively long and unfocused for a YouTube comment. YouTube comments are typically brief and to-the-point. This extensive commentary would be better suited for a blog post, forum discussion, or personal essay. Posting such a lengthy comment is unlikely to engage most viewers and may be overlooked or seen as overwhelming in the context of a YouTube comment section.

Human (me): Ach, go on.. YOLO.. YES. OK, so, it's a bit all over the place and it's a shame (though I'm biased) that for brevity(!) you've dropped points like the Mixture of Cameras model concept... and the apt-but-utterly-tangential rant on UK medical training following Calmanisation reforms 15+ yrs ago and its likely impact on consultant domain experts' breadth of knowledge from the original.. but at least you will have parked most of the material contextually somewhere if you want to come back to it for e.g. a blog post. And it may get into training data and make sense to an LLM.. And the potential human audience where you plan to post it is narrow, peculiarly relevant, possibly thoughtful/amused enough to read it, or intelligent enough to give up quickly if uninterested, or kind enough to forgive if they read the end bit first.. so maybe won't be too annoyed... Why not ask your critics to sense-check? How about: "Are there any obviously stupid or erroneous comments in the following:"

Claude 3.5: After carefully reviewing the text, I don't see any obviously stupid or erroneous comments. The content appears to be a thoughtful, nuanced discussion of several complex topics related to AI development, climate change, and risk assessment. While the author acknowledges that their writing may be influenced by their current circumstances (being in a hospital bed), the points made seem reasoned and well-considered, even if somewhat rambling in style. The author touches on various subjects, including:
1. AI model development and training data strategies
2. The challenges of assessing and communicating existential risks
3. The difficulties in bridging different thinking styles and domain expertise when addressing complex issues
4. The potential for AI to help policymakers better understand complex, multi-domain problems
While some of the opinions expressed may be debatable, they don't appear to be obviously erroneous. The author also demonstrates self-awareness about the length and potentially meandering nature of their comment. If anything stands out, it's the casual mention of using an AI assistant (Claude 3.5) to help reduce the character count of their comment. This isn't erroneous, but it's an interesting meta-commentary on AI use within a discussion about AI development. Overall, the text presents as a sincere, if somewhat stream-of-consciousness, exploration of complex ideas rather than containing any clearly stupid or erroneous statements.
GPT4o: {long irascible conversation omitted by human} chatgpt.com/share/9881f0fb-6241-4440-8ffd-27f53283bb02

Human (me) writes: GPT4o's response was so off-topic and irritating that after a protracted angry discussion I put the whole chat into Claude 3.5 for independent comment, which came back as follows:

Thank you for sharing this interaction and asking for my observations. Here are my key takeaways:
1. The original comment was quite long, detailed and covered multiple complex topics related to AI development, data usage, and potential risks. It was written in a somewhat informal, stream-of-consciousness style.
2. The AI assistant (not me) initially focused heavily on issues of clarity, structure, and potential ambiguity in the comment, rather than strictly addressing factual errors as requested. This seems to have frustrated the human user.
3. The human had to repeatedly clarify and emphasize that they were looking specifically for known factual errors, not issues of style or clarity. They expressed frustration that the AI was not directly addressing their actual request.
4. Even after multiple clarifications, the AI assistant continued to include opinions and subjective assessments in its responses, rather than strictly focusing on verifiable factual errors as requested.
5. The human pointed out several instances where the AI assistant seemed to misread or misinterpret parts of the original comment, such as the discussion of game engines.
6. There was a discussion about potential bias related to Microsoft funding, which the AI denied but which is an interesting point to consider regarding potential influences on AI systems.
7. Overall, this interaction highlights some challenges in AI-human communication, particularly around following specific instructions, distinguishing between fact and opinion, and accurately interpreting nuanced human writing.
The AI assistant seemed to struggle with precisely addressing the human's request, leading to a somewhat frustrating exchange where the human had to repeatedly clarify and redirect the conversation. This demonstrates the importance of AIs being able to accurately interpret and follow specific human instructions, especially when dealing with complex, nuanced topics.
@toadlguy · 2 days ago
That was a really great interview. I like the guys at Cohere. They seem a bit more grounded than a lot of people working in AI. I certainly appreciate that someone is concerned with the problem of LLMs becoming boring and pedantic. That may be a greater danger than Superintelligence 😬
@itsSandraKublik · 2 days ago
So proud to be working at Cohere under Aidan's leadership. What a great interview, Tim; they just keep getting better! ❤‍🔥
@MachineLearningStreetTalk · 2 days ago
Thank you Sandra!
@redacted5035 · 3 days ago
TOC: 00:00:00 Intro 00:01:48 Guiding principles of Cohere 00:02:31 Last mile / customer engineering 00:04:25 Prompt brittleness 00:06:14 Robustness and "delving" 00:10:12 Command R models and catch up 00:12:32 Are LLMs saturating / specialisation 00:16:11 Intelligence 00:21:28 Predictive architectures, data vs inductive priors 00:25:55 Agentic systems 00:28:11 Differentiation 00:33:35 X-Risk / Bostrom 00:39:30 Changing relationship with technology 00:45:08 Policy 00:49:01 Startup scene 00:52:44 Biggest mistake? 00:53:50 Management style 00:56:38 Culture in different Cohere offices?
@tomenglish9340 · 3 days ago
The best part was gaining some acquaintance with Aidan as a person. He seems like a great guy.
@dataops1468 · 3 days ago
I think delivering substantial progress on 'System 2' thinking and longer term planning in the next ~24 months will be required to justify the continued level of investments we are seeing in the space.
@hunterkudo9832 · 3 days ago
Yup. And I doubt it's that difficult. I think these AI companies are dragging their feet with it in order to get more investment from investors.
@TheReferrer72 · 3 days ago
Not true. This has legs even with little incremental improvement from now on! Just getting a computer to read in a natural voice is a huge achievement; it's those little improvements that would have been huge a decade ago that will improve productivity for years to come.
@lupf5689 · 3 days ago
18:20 "In the same way that hallucination used to be an existential threat to this technology ..." Past tense? Why? The models I've been playing around with still very much have that problem. Is he talking about stuff that hasn't been released to the public? Every couple of months or so, I try to use these models for software-related stuff, and usually it doesn't take long until they start making things up, like telling me to set certain configuration parameters that are available in a different library than the one it suggested using a moment ago. Maybe it's me being too stupid to prompt these things, but hallucinations don't seem to be a solved problem.
@MachineLearningStreetTalk · 3 days ago
I think he means that while they do still hallucinate, it's a relatively small number of cases when the model knows a lot about the topic the prompt is querying. App builders still need to do a tonne of engineering to address this, i.e. RAG and other stuff, but it's not a deal-breaker for many practical applications.
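To make that mitigation concrete, here is a minimal RAG sketch. It assumes Cohere's v1 Python SDK and the chat endpoint's documents parameter; the API key, model choice, and document snippets below are placeholders for illustration, not anything stated in the video.

```python
# Minimal RAG sketch, assuming Cohere's v1 Python SDK (`pip install cohere`).
# Key, model name, and documents are placeholders.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical key

# Grounding documents retrieved by your own search layer (made-up content).
docs = [
    {"title": "Config guide", "snippet": "Set `pool_size` in mylib >= 2.0 only."},
    {"title": "Changelog", "snippet": "mylib 1.x has no `pool_size` option."},
]

resp = co.chat(
    model="command-r",
    message="Which library versions support the pool_size option?",
    documents=docs,  # model is asked to ground its answer in these snippets
)

print(resp.text)       # answer grounded in the supplied documents
print(resp.citations)  # spans tying claims back to specific documents
```

The point is the shape of the workflow: the application retrieves snippets, and the model answers from and cites them, which is what keeps hallucinations from being a deal-breaker in practice.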
@simianbarcode3011 · 3 days ago
I appreciate Aidan's perspective and approach
@____2080_____ · 3 days ago
I'm one of those fellows who has been working on programs that combine several different large language models with other machine-learning tools to create something similar to this inner-monologue conversation, as well as modeling dynamic-systems interaction. It's exciting to see that one of the packages we are using for this, with its own ecosystem, is looking into this for its own LLM.
@XOPOIIIO · 2 days ago
I think the repetitive behavior of LLMs has a different cause. The algorithm is required to predict the next word, and that makes the text boring: thoughts that are mundane and have been told many times before are easier to predict, so generating original and creative thoughts is discouraged by the algorithm.
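A toy numeric sketch of this effect, with made-up logits and plain numpy: low sampling temperature concentrates probability mass on the most mundane continuation, which is one mechanism behind the repetitiveness described above.

```python
# Toy illustration: greedy-ish next-word prediction favors the most common
# continuation; temperature trades predictability against diversity.
import numpy as np

def token_probs(logits, temperature=1.0):
    """Softmax with temperature over a vector of logits."""
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())  # subtract max for numerical stability
    return p / p.sum()

# Made-up logits for continuations of "The sky is ..."
tokens = ["blue", "clear", "falling", "a database"]
logits = [5.0, 3.5, 1.0, -2.0]  # mundane continuations score highest

for t in (0.2, 1.0, 1.5):
    p = token_probs(logits, temperature=t)
    print(f"T={t}: " + ", ".join(f"{w}={q:.2f}" for w, q in zip(tokens, p)))
# At T=0.2 nearly all mass sits on "blue"; higher T spreads probability
# toward rarer, more "creative" continuations, at the cost of coherence.
```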
@BrianMosleyUK · 1 day ago
You have the most amazing guests; Aidan seems very grounded. Thank you so much.
@sebby007 · 1 day ago
Great conversation to listen to! Thanks for posting this.
@CipherOne · 1 day ago
My new favorite team of AI engineers! 🎉
@TheNerd484 · 3 days ago
While this seems like an interesting project, I would categorize it as just another LLM/feed-forward multimodal model, especially since his answer for innovation boils down to "throw more training data at it". What I'm looking forward to, and think will be an actual innovation in the space, is when we can create a model that can be trained to produce useful behavior with minimal data/examples/instruction, similar to a human or other animal. In my opinion, the only time we'll see true machine reasoning is when we get something like an RNN that can run for an arbitrary number of iterations, or if the symbolic ML guys figure out something cool.
@diegocalderon3221 · 2 days ago
I like that the thought is to make new models smaller; anything bigger than 32B is overkill and only goes toward data-center profit. Algorithms for condensing training datasets (similar to what Microsoft did for Phi-3) are the way, along with dimension reduction (PCA or some equivalent). You might lose some accuracy just prompting the LLM, but scaling an agentic workflow will be robust.
@dr.mikeybee · 1 day ago
The logits created by an LLM's weights are not log-odds. They are not the result of a statistical function; the function is far more complex. In fact, it's intractable. We do treat these as statistical results, however, via the softmax, and that's appropriate. These weights are modeling human reason in subnetworks, things like comparison functions. Yes, these models are reasoning; it's rudimentary reasoning nevertheless. BTW, most synthetic training data that is being created is coming from chat logs where people work with models to create content.
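A small numeric check of the softmax point above, with illustrative values only: logits are treated as unnormalized log-probabilities, so shifting them all by a constant leaves the softmax output unchanged, which is one reason they are not uniquely log-odds.

```python
# Logits are only defined up to an additive constant under softmax,
# unlike true log-odds. Values below are made up for illustration.
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])
print(softmax(logits))          # e.g. [0.78 0.17 0.04] (approximately)
print(softmax(logits + 100.0))  # identical: a uniform shift changes nothing
```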
@dr.mikeybee · 1 day ago
I believe that LLMs are modeling something like Platonic forms. They have a better error function than surprise. It's purer. They necessarily model the best systems with the fewest errors. They are free from the cognitive dissonance of chemical context injections.
@ragnarherron7742 · 3 days ago
Time and again, the hard lesson of the Enlightenment is that we only make grudging progress through conjecture and criticism.
@XOPOIIIO · 2 days ago
I never saw anybody seriously challenge the concrete x-risk arguments. All the criticism of the doomsday scenario has been limited to "they are cultists, they are ideologically brainwashed" instead of hearing and answering the concrete arguments they present.
@ragnarherron7742 · 3 days ago
He's on the right track seeing problem solving as DSL market ecosystems: LLMs that produce synthetic markets, leveraging divide-and-conquer techniques that transform prompts into generated RAG DSL pipelines.
@stevengill1736 · 3 days ago
"How do you teach these devices common sense", I was thinking as you both started talking about it, thanks.
@robbiero368 · 3 days ago
I've noticed the interviewer model often repeats the phrase "lots of people say". I wonder what dataset it was trained on. Maybe videos of MLST?
@DelandaBaudLacanian · 3 days ago
43:10 - What is this enfeeblement pie chart, and why is enfeeblement such a difficult topic to search for?
@ej3281 · 3 days ago
24:00 I think Gödel and Turing might disagree a little... Best of luck tho 😅
@mfpears · 3 days ago
Yes
@tomk.9803 · 3 days ago
Dr. Tim, thank you very much for this interview! Personally, I use Cohere's Command R model pretty often and it is very good, totally comparable to the other SOTA models that we currently have. The fact that they share openly, including weights, is something I really like about them.
@420_gunna · 3 days ago
Will there be a third episode with Ivan? 🤔
@MachineLearningStreetTalk · 3 days ago
I certainly hope so! We haven't filmed with him yet
@aveenof · 3 days ago
Such a breath of fresh air. Grounded, no hype or scare-mongering, dedicated to the craft, taking reasonable steps one at a time and learning from the process.
@minhuang8848 · 3 days ago
also saying absolutely nothing
@Enedee007 · 3 days ago
@@minhuang8848 He said so much. Just listen. He's a whole ocean technically sounder and more competent than many a high-sounding bamboozler in this space.
@SLAM2977 · 3 days ago
A nice no-hype, no-BS business model aimed at solving real problems companies have.
@kanalarchis · 3 days ago
What's this "EAK" or "YEAK" countermovement, or whatever the spelling is? I have heard of the EA cult (effective altruism) but I want to look into the other thing he mentioned. Where do all these "movements" happen and nobody tells me about it?
@arunkumar_ra · 3 days ago
It's called e/acc. You'd probably find people talking about it on Twitter or Reddit.
@user-mv3or2mc7w · 3 days ago
'Augment'
@sirius-ai · 3 days ago
An amazing company with some amazing people! I am officially a fan! Thanks to MachineLearningStreetTalk for another great episode!
@covle9180 · 3 days ago
It's concerning that people can say their company is trying to influence policy and no one is like "hey ehh maybe don't?"
@MachineLearningStreetTalk · 3 days ago
See it from their perspective: there are those who are trying to kill their business with irrational fears of "AI existential risk". There *are* really important risks though; this is nuanced.
@toadlguy · 2 days ago
I think it is important that smaller entrepreneurs are speaking with regulators. You know that the largest players (who are also the largest companies in the world) have major lobbying efforts. All you need to do is look at Sam Altman playing up the risk of an "existential threat" before Congress to see their disinformation. They would like to see a regulatory scheme that allows only the largest players to play (like we already have in many industries) while ignoring real risks like misinformation, economic disruption and wholesale misappropriation of IP.
@Peter-dd3br · 3 days ago
They used to teach reasoning in universities, so there are textbooks, like Logic by Isaac Watts. I wonder if such texts could provide a useful framework for AI reasoning 🤔
@mikezooper · 3 days ago
They tried that for years and it failed.
@hunterkudo9832 · 3 days ago
@@mikezooper It's because they are trying to derive logic from data, rather than letting the model build its own logic little by little through experience.
@ronilevarez901 · 2 days ago
@@hunterkudo9832 Experience, as in living day by day, which takes too long. What entrepreneur has the time to wait 30 years for a full-grown AI trained on "experiences"? They want money NOW. And that's all this is about: money. Progress is a side effect.
@andrewsilber · 3 days ago
I haven't finished watching the video, so I don't know if this is addressed later on, but with respect to agents which employ a Mixture of Experts approach, is there not a concern that by specializing models and reaching out to them via some general-purpose orchestrator, you might lose cross-disciplinary creativity? Perhaps engineering borrowing some idea from nature, etc.? Mightn't it be the case that really out-of-the-box creativity would only come from all the knowledge being together in one latent space, so it can intermingle to identify patterns which might not be achieved by a specialized MoE?
@Enedee007 · 3 days ago
A true MoE involves a combination (mixture) of specialised models with deep domain knowledge (experts), not of different generalists.
@andrewsilber · 3 days ago
@@Enedee007 Right, but if they only have knowledge of their own domains, then how will there be true cross-discipline "inspiration"? The engineering expert might not think to borrow an idea from the animal kingdom, for example. Why? Because whatever is coordinating the agents and routing the questions to the appropriate expert will not itself have the breadth of understanding to connect the dots between two highly distinct disciplines. Maybe a structural-engineering expert might not think to look to honeycomb structures from beehives. But if they were all in the same monolithic model, the associations could be made at some layer of latent representation.
@Enedee007 · 3 days ago
@@andrewsilber That's correct. I think our points are more similar than they differ. The coordinating agent should be the generalist in that architecture; that's the agent that's able to carry out cross-discipline activities. That's also a specialisation: it affords the opportunity to deeply train a specific model for that exact purpose, and it should be well grounded in its ability to learn from different areas of specialisation and make critical connections.
@toadlguy · 2 days ago
@@Enedee007 @andrewsilber There is not just one MoE implementation, and I think as currently used it only activates certain weights (the experts) in some of the layers, probably not so much to silo the information but to reduce parameters and increase performance without any loss of acuity. The difficulty with cross-discipline inference is actually baked into the models themselves unless it is in the training data or in an expression like "honeycomb structure". No amount of "temperature" will get the probabilities to point just anywhere in the model.
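For readers following this thread, here is a toy sketch of the gating idea described in the comment above: a router activates only the top-k "expert" subnetworks per input, cutting compute rather than siloing knowledge. Shapes, expert count, and weights are arbitrary; this illustrates the mechanism, not any production model's implementation.

```python
# Toy top-k MoE gating layer in plain numpy; all values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 8, 4, 2

W_gate = rng.normal(size=(d, n_experts))            # router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    scores = x @ W_gate                              # one score per expert
    top = np.argsort(scores)[-k:]                    # indices of top-k experts
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                               # softmax over chosen experts
    # Only the selected experts run; the rest are skipped entirely.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

x = rng.normal(size=d)
print(moe_layer(x).shape)  # (8,): same width, roughly k/n_experts of the compute
```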
@Enedee007 · 2 days ago
@@toadlguy I totally agree with you that there is not just one implementation of MoE. However, the underlying principles are basically the same. MoE in this case, however, is in the context of multi-agents.
@oculusisnevesis5079 · 3 days ago
Despite his calm voice, this guy doesn't know anything.
@ragnarherron7742 · 3 days ago
Gomez is misguided here. Hilbert's decision problem, which sought a perfect problem-solving machine that could answer any yes-or-no question, failed because of Gödel's incompleteness theorems and Turing's halting problem.
@bladekiller2766 · 3 days ago
Nothing new. Other than François Chollet, all the other "AI experts" are running in circles, spitting out the same things: "It needs to have reasoning, we don't need general AI, we can have modular architectures with specialized domain experts, on-the-fly learning." We will have those and still not reach AGI; not a single expert is addressing the question of what we are going to do about representing abstractions. They are literally running in circles.
@ronilevarez901 · 2 days ago
Maybe some problems already have solutions, but those solutions aren't quite ready yet. Too much compute, too low profits, for now.
@bladekiller2766 · 2 days ago
@@ronilevarez901 Yeah, that is correct, but the stupid thing is to make claims that those approaches will lead to AGI. Until we reach an architecture that learns quickly, like children do from very few examples, we have a long way to AGI.
@mr.e7379 · 3 days ago
Refreshing! An intelligent, mature professional... no "ums", "ahs", artificially accelerated speech, or ANY of the typical Bay Area speech pathologies, which are getting unbearable. We need more grown-up, deliberate professionals like this in the field. Thank you, Aidan.
@gatsby66 · 3 days ago
"How Cohere SAYS IT will improve ...." We don't know it will improve anything. All CEOs say things about their product(s) that never happen.
@MarkDStrachan · 3 days ago
They should have titled this "San Francisco sucks."
@netscrooge · 3 days ago
Sounds as if he lacks sufficient background in psychology to understand the risks of automated, individualized persuasion.
@Dr.UldenWascht · 3 days ago
I trust and pray for Mr. Jesus.
@420_gunna · 3 days ago
"Is [the modal collapse of LM personalities...] because they're eating each other's poop?" "Yeah, it's some sort of human-centipede effect... everything collapsing into [GPT-4's] personality."
@threeNineFive0 · 2 days ago
Seems very overconfident.
@Grahfx · 3 days ago
And what if reasoning is linked to consciousness, which isn’t computational? In that case, these models may never achieve human-level intelligence.
@bobhanger3370 · 1 day ago
Consciousness is still a computation
@Grahfx · 1 day ago
@@bobhanger3370 No. Consciousness is quantum; it is the zero point of calculation. Computers are actionable; you are not. You would not have free will otherwise.
@nanotech_republika · 3 days ago
Which part of the conversation deals with "AI reasoning this year"? I watched the first 10 minutes and gave up watching.
@bossgd100 · 2 days ago
The title 😂
@alexanderbrown-dg3sy · 3 days ago
The incremental progress is hard to define? Super cap. Can it generalize to different reasoning lengths? Not at all… there aren't small gaps, they're gaping holes in current capabilities. Solvable, yes, but this isn't reflective of the true reality… at all. It can't even tell me that, given a=b, b=a… again, solvable… but we have to stop the cap. No true signs of compositional generalization… again, I do believe it is solvable. It isn't just a data issue, it's an architecture problem too. RoPE is trash for modern LMs, all that uniformity in high-dimensional space; let's start there 😂.
@robertmunger · 8 hours ago
This needs to be far more clearly labeled as a sponsored video. Viewers will lose trust if you are not transparent with them. You also harm the entire YouTube ecosystem. People need to know when they are watching a paid advertisement.
@MachineLearningStreetTalk · 8 hours ago
Did you not see the clearly marked "paid promotion" label at the start of the video? How much clearer could it be? Also, we added a disclaimer: "This is the second video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview." And if you look at our back catalog, we have been interviewing Cohere and their researchers for years without any commercials. I honestly couldn't think of a better possible revenue model for the channel, unless you want us to spam you with ads about NordVPN? To demonstrate our genuine interest in Cohere, look at our back catalog with their team: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-i9VPPmQn9HQ.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Dm5sfALoL1Y.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7oJui4eSCoY.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ooBt_di8DLs.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-sQFxbQ7ade0.html
@robertmunger · 7 hours ago
@@MachineLearningStreetTalk Thank you for taking the time to respond. I'm afraid I missed that, as I listen to interviews like this on my phone while I do chores, so I didn't see the visual cue. Ideally there would be an audio cue for people like me, but you're following YouTube policies, so I'm all good. Apologies if I brought you down!
@djcardwell · 3 days ago
Tries to talk like Sam. A wannabe Sam.
@TheReferrer72 · 3 days ago
This guy has knowledge and a sane worldview, so no.