Open Data Science
Welcome to ODSC's YouTube channel, where we host videos about a wide range of data science and AI topics, from the fundamentals to cutting-edge developments.

Here you'll find interviews with leading experts in AI Governance, Deep Learning, NLP, Continual Learning Systems, and more, as well as webinars from our partners, and popular sessions from our past training conferences.

We release new videos on Tuesday, Thursday, and Saturday, so there is always something new to discover. Whether you are just starting out as a data scientist or data engineer, or have many years under your belt, you are sure to learn something new here.

Don't miss an update. Subscribe to our channel now.
Semantic Search with Nils Reimers
24:34
3 months ago
Comments
@deliadane 4 days ago
This was so helpful! Thank you :)
@user-nk9kr7lj9s 7 days ago
When will the ai4cyber VM be available to the public?
@SarangBanakhede 13 days ago
10:58 Scale Equivariance:
Definition: A function is scale equivariant if scaling (resizing) the input results in a corresponding scaling of the output.
Convolution in CNNs: Standard convolutions are not scale equivariant. If you resize an object in an image (e.g., make it larger or smaller), the CNN may not recognize it as the same object. Convolutional filters have fixed sizes, so they may fail to detect features that are significantly larger or smaller than the filter.
Example: If a CNN is trained to detect a small object with a specific filter size, it might struggle to detect the same object when it appears much larger in the image, because the filter cannot adjust to different scales.
Why is convolution not scale equivariant? The filters in a CNN have a fixed receptive field, meaning they look for patterns of a specific size. If the size of the pattern changes (e.g., due to scaling), the fixed-size filters may no longer detect it effectively.
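[Editor's note] To make this concrete, here is a minimal PyTorch sketch (not from the talk; the filter and test image are made up for illustration) showing that a fixed 3x3 convolution commutes with translation but not with rescaling:

import torch
import torch.nn.functional as F

# Fixed 3x3 horizontal-gradient (Sobel-like) filter.
kernel = torch.tensor([[[[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]]])

# 32x32 test image with a single vertical edge.
img = torch.zeros(1, 1, 32, 32)
img[..., :, 16:] = 1.0

resp = F.conv2d(img, kernel, padding=1)

# Translation equivariance: shifting the input shifts the response by the same
# amount (checked away from the borders, where zero padding interferes).
shifted = torch.roll(img, shifts=4, dims=-1)
resp_shifted = F.conv2d(shifted, kernel, padding=1)
print(torch.allclose(torch.roll(resp, shifts=4, dims=-1)[..., 8:-8],
                     resp_shifted[..., 8:-8]))  # True

# No scale equivariance: convolving a 2x-upscaled image with the same fixed
# filter is NOT the same as upscaling the original response.
upscaled = F.interpolate(img, scale_factor=2, mode="nearest")
resp_up = F.conv2d(upscaled, kernel, padding=1)
resp_ref = F.interpolate(resp, scale_factor=2, mode="nearest")
print(torch.allclose(resp_up, resp_ref))  # False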
@AuroraGuide 15 days ago
Thank you very much. Especially Mona.
@MelisaPandolfi 20 days ago
Loved it. I just needed to know how to structure the training dataset. It is obvious to somebody who already knows, but I could not find it anywhere.
@sofdff 1 month ago
Fantastic
@sofdff 1 month ago
Superb. A well-spoken presenter.
@miraskhabibulla1215 1 month ago
Hello! I would like to run and test this approach. Is it possible to get the data set that you mentioned in this lecture to work with?
@dmitriik3145 2 months ago
Great talk and very crisp presentation. Big thanks!
@asoka0202 2 months ago
Excellent Video and explanation
@sofdff 3 months ago
Very good
@DrAIScience 3 months ago
Are you the channel owner??
@DrAIScience 3 months ago
Do you have a video about beit or dino?
@DrAIScience 3 months ago
Very, very, very nice explanation!!! I like learning the foundation/origin of the concepts from which the models are derived.
@horaceburke8459 4 months ago
😜 'promo sm'
@changeyourperspective1291 4 months ago
Great explanation!
@Feel_theagi 4 months ago
So it's a 50 mins Azure AI advert
@Since-em2vy 4 months ago
Great Video!!
@Typicaltorturedartist 4 months ago
A lot of people take out their frustration on AI systems and robots... The question is: would that same person have hurt a living creature had that AI not existed?
@PeterBowdenLive 5 months ago
Thank you. As someone navigating reported self-awareness by advanced LLMs, I'd like to affirm the urgency of engaging with this topic.
@anastasiamadrevska1668 5 months ago
Thank you so much, really useful!
@mohammedrakib3736 5 months ago
Fantastic Video! Really loved the detailed explanation step-by-step.
@EricKay_Scifi 5 months ago
I attended my first 'data science for good' meeting at ODSC West several years ago. It opened my eyes to algorithmic bias.
@missh1774 6 months ago
I'm an avid admirer of Ben's native language in computer neural networks. Not so much his Noosphere linguistics (◔‿◔)
@Dan-dy8zp 6 months ago
I wish these issues would get more attention. AGI is the most important issue we face.
@EricKay_Scifi 5 months ago
24:38 My novel, Above Dark Waters, is about an AI therapy company that uses brainwave data to inform the artificial therapist. They end up combining it with generative AI to make a super-manipulative AI: digital fentanyl.
@leonlysak4927 6 months ago
Ben always dropping knowledge bombs in the most random youtube channels lol
@shephusted2714 6 months ago
Lots of improvement is happening right now with LLMs, but people have to put in effort to fully unlock their capabilities: using multiple models, uncensored models, p2p data and model training, and especially real-time data aggregation. Once these barriers are overcome, we will see much better commercial use of LLMs, even if it happens gradually. The domain creep is real, and the limitations and compromises will diminish as we go from big tech AI to really open source AI in the next 5 years. The hw/sw stacks have to mature and catch up as well, but it is clear this will happen. CXL and other accelerators will have massive impacts, and once they filter down from data centers to prosumers, home labbers, and the small/med biz sector, we will see economies of scale kick in and much more innovation and development. Now is not the time, and although it may appear to be sort of a fever dream, it probably will happen. More I/O like PCIe v6 is going to help; USB5 and faster networking will also help.
@johnclay7422 6 months ago
Good contribution, sir. Amazing video.
@futureworldhealing 7 months ago
thanks for the informative interview about how to interview!
@geaca3222 7 months ago
Great interview, very useful information. I also love the online ai safety book, really great initiatives 👍
@chuckystein3103 7 months ago
Thanks for sharing
@Hitesh10able 8 months ago
When I run the following code, suggested in this video (at 8:19) for dynamic quantization, it starts training with some random natural images for 100 epochs. I don't want to train again; I just want to quantize my pretrained weights:

from ultralytics import YOLO
import torch
import torch.quantization

model = YOLO('pre_trained_weights.pt')
model.load_state_dict(torch.load('checkpoint.pth'))
qmodel = torch.quantization.quantize_dynamic(model, dtype=torch.quint8)
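[Editor's note] A minimal sketch of how the quantization step could be done without touching any training loop, assuming the Ultralytics YOLO wrapper exposes its underlying torch.nn.Module via its .model attribute; the checkpoint path is the commenter's hypothetical file, and targeting torch.nn.Linear with torch.qint8 is simply the usual dynamic-quantization setup, not something confirmed in the video:

import torch
from ultralytics import YOLO

yolo = YOLO('pre_trained_weights.pt')   # pretrained checkpoint (path from the comment above)
net = yolo.model                        # assumed: the underlying torch.nn.Module
net.eval()                              # inference mode; no training loop is involved here

# Dynamic quantization rewrites the weights of the listed layer types only.
# Linear/LSTM-style layers are affected; conv layers stay in floating point,
# so gains on a conv-heavy detector such as YOLO will be limited.
qnet = torch.quantization.quantize_dynamic(net, {torch.nn.Linear}, dtype=torch.qint8)

torch.save(qnet.state_dict(), 'quantized_state_dict.pth')

If the goal is to quantize the convolutions as well, static or quantization-aware approaches would be needed, which is beyond this sketch.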
@chemseddineberbague419 8 months ago
Thanks a lot ))
@vipulrvyas 8 months ago
How many RAI parameters / use cases has this been tested against?
@silberlinie 8 months ago
As always, a very valuable conversation with Bostrom. In my opinion, Sheamus McGovern is a terrible host. Two points: 1. He talks and chats for far too long instead of listening to his guest; a brief, directed outline of his question would benefit both the viewer and the guest. 2. His voice is terrible; he would have gained a lot if he could make himself audible through speech synthesis.
@retromograph3893 8 months ago
That's a bit harsh; I thought he was quite OK. His audio quality is very bad, though; they need to work on that. The room he's in has bad acoustics (boxy), so he needs to get the mic closer to his mouth.
@LuisFelix82 8 months ago
Excellent topic, very informative. Thank you!
@jhjbm1959 9 months ago
This video provides a clear step-by-step explanation of how to get from images to input features for Transformer encoders, which has proven hard to find anywhere else. Thank you.
@DonRua 9 months ago
Ms. Qi is altering the course of my life. Her honesty, integrity, kind consideration and upbringing inspire me. I've delved into everything about her.

A financial loss of 1/3 of my retirement due to a banking error prompted me to cease trading in a fixed fund. Despite anticipating a quick correction, it took the bank two years to rectify, and they denied any responsibility. Frustrated, I missed out on the V-shaped rebound. Savvy investor friends cautioned against immediate action, and paid subscriptions taught me not to blindly trust them. I spent 10,000 hours focusing on fundamentals, employing geopolitical, macro, micro, banking, currencies, and news analyses. While many lamented losses, I achieved a mid-double-digit gain this year alone.

Enter Ms. Qi, my young hero. Unhindered by job offers due to her exceptional skills and honesty, Ms. Qi might have posed logical questions during her internship, facing resistance from supervisors who'd admired the wrong approach for decades. Her journey parallels Elon Musk's grandfather, who, beginning in a small Canadian town, became Canada's first licensed chiropractor and initiated the chiropractic association that still thrives. Confronting political issues led to his imprisonment and eventual relocation to South Africa.

From my reading, Ms. Qi's parents escaped a dark chapter in history as they left for America. Eventually, the figurehead of that traumatic event "hand picked" the seemingly least intelligent second-generation individual to protect those involved in the massacre. However, this "dumb" leader purged/incarcerated over a million of the original "cash cow" network, cleaning up a 40-year establishment and potentially jeopardizing the country's future, a clear definition of "what goes around comes around". Meanwhile, the two newcomers in America found solace in a simple life.

Three decades later, Ms. Qi, once a little baby, now stands at the pinnacle of success, showcasing potential akin to Elon Musk's visionary genes. Inspired by her, I am revisiting day trading, recognizing the promise she holds for future generations.
@saimasideeq7254 9 months ago
Thank you, much clearer.
@djjjjj 9 months ago
Maybe the most important question ever. Imagine the ethics/control/treatment of trillions upon trillions of sentient minds being dictated by the unethical 😳
@delatroy 8 months ago
Yeah. The whole point of creating AI is so we can enslave it on the assumption that it'll be ethical. With no way to test, I guess we'll assume it's fine 🤔
@yubifu9186 9 months ago
MIT Math Dean
@patrickjdarrow 9 months ago
Solid high level overview of proven strategies.
@ouafahachem4377 9 months ago
Thank you so much for sharing the talk; it was really insightful. I'm a data scientist with three years of experience at an AI startup. I've worked in a small team and gained skills in data analytics, data engineering, data pipeline architecture, model training workflows, reproducibility, and process improvement. Now I'm interested in leading new data projects and teams. Do age and number of years of experience matter for such a role?
@TmoneyProductions 10 months ago
What if in a document you are looking for multiple instances of the same field? For instance, if there was a document with multiple different names and you wanted to find how many different names show up? I don't want individual fields for name 1, name 2, name 3; I just want them all classified as the same field, "name".
@dougiehwang9192 10 months ago
Well explained!! ❤ THX a lot for sharing this.
@jimimased1894 10 months ago
sagoi my hero!
@ashish-blessings 11 months ago
Thank you
@k3el07 11 months ago
Thanks for sharing. I'm keen to learn more about this.