
🧠 Mind-Reading Stable Diffusion Paper 

koiboi
14K subscribers · 7K views

A walkthrough of a recent research paper in which participants viewed images, and those images were then reconstructed using Stable Diffusion and fMRI readings of the participants' brains. This was made possible by excellent work from the Natural Scenes Dataset team.
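For anyone curious how such a pipeline fits together, here is a minimal sketch of the idea as I understand it from the paper: linear (ridge) regressions map fMRI voxel responses to Stable Diffusion's image latent and text-conditioning embedding, and the predictions then go through the normal diffusion decode. All names, shapes, and voxel counts below are illustrative assumptions, not the authors' code.

# Minimal sketch, assuming the paper's approach of fitting linear decoders
# from fMRI voxels to Stable Diffusion's representations (toy sizes throughout).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials = 100                                    # training trials with known stimuli
X_early = rng.standard_normal((n_trials, 300))    # early visual cortex voxels (hypothetical, toy size)
X_higher = rng.standard_normal((n_trials, 400))   # higher visual cortex voxels (hypothetical, toy size)
Z = rng.standard_normal((n_trials, 4 * 64 * 64))  # flattened SD VAE latents of the seen images
C = rng.standard_normal((n_trials, 768))          # pooled text embeddings (SD really conditions on a 77x768 sequence)

# One linear decoder per target representation.
z_decoder = Ridge(alpha=100.0).fit(X_early, Z)
c_decoder = Ridge(alpha=100.0).fit(X_higher, C)

def predict_representations(x_early, x_higher):
    """Predict z and c for a held-out fMRI trial; Stable Diffusion would then
    noise the predicted z and denoise it conditioned on c to get the image."""
    z = z_decoder.predict(x_early[None]).reshape(4, 64, 64)
    c = c_decoder.predict(x_higher[None])[0]
    return z, c

z_hat, c_hat = predict_representations(X_early[0], X_higher[0])
print(z_hat.shape, c_hat.shape)  # (4, 64, 64) (768,)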
======= Links =======
High-resolution image reconstruction with latent diffusion models from human brain activity: www.biorxiv.org/content/10.11...
Natural Scenes Dataset: naturalscenesdataset.org/
Good intro to the Natural Scenes Dataset: • 2021-02-11 Kendrick Kay
Yatharth's twitter: / reporterweather (get on it)
A nice diagram: / the_stable_diffusion_m...
======= Music =======
Music from freetousemusic.com
‘Butter’ by LuKremBo: • lukrembo - butter (roy...
‘Affogato’ by LuKremBo: • lukrembo - affogato (r...

Science

Published: 19 Mar 2023

Comments: 45
@roman-tdv · 1 year ago
In 5 years, we'll be able to watch our own dreams as if they're movies. Besides all the stuff that will be possible with this technology.
@mariokotlar303 · 1 year ago
Freaking insane! Not only does this prove it's going to be possible to control stable diffusion by literally imagining, but it also implies we'll be able to record and watch our dreams.
@EddieAdolf · 1 year ago
Good luck with your Neuralink BCI surgery!
@danfroal8057 · 1 year ago
Awesome vid. "Still frame from Full Metal Alchemist" made me cry tho.
@msampson3d · 1 year ago
Really happy to see you cover this one! When I saw the news I was super skeptical, figuring there was some major gotcha they were cheating with to get such good results. For example I was positive they would have been feeding the text description to the model and only using the brain scan for the latent noise portion. To see it explained and learn that this wasn't the case truly is impressive! Thanks once again for providing an entertaining and informative vid on a fascinating topic!
@zaadworks · 1 year ago
I always thought of Stable Diffusion like this: the latent space steps look like when you close your eyes and there's just noise in your mind... Mind blowing
@user-pc7ef5sb6x · 1 year ago
This is why AI images look like images from your dreams.
@YVZSTUDIOS · 1 year ago
Oh boi, remember when this technology was in its infancy and someone claimed to do this with dreams? They had a very similar-looking fold-out 2D map of the brain 🧠 But the tech just wasn't there yet to produce images well
@Qubot · 1 year ago
Now invert the process and give vision to blind people.
@lewingtonn · 1 year ago
wait what? hahaha I don't think that's what this research is about
@ludgi1 · 1 year ago
Holy smoke, never thought about that. One step closer to Matrix simulation
@kingpin4152 · 1 year ago
@koiboi If you put the model inside the brain, connected to the right parts of the brain, idk why it shouldn't work. (I am no brain expert tho)
@Qubot · 1 year ago
@kingpin4152 The optic nerve maybe? Just guessing, I am dumb compared to those genius researchers.
@lucamatteobarbieri2493 · 1 year ago
The problem is that there is no way to activate specific cortical areas without electrodes. Brain surgery to insert electrodes is dangerous, and electrodes don't integrate well into the brain over long periods. It is possible in principle but very invasive. fMRI only indirectly reads oxygen consumption in brain voxels. Also keep in mind that some blind people have damaged visual cortices and perfect eyes, so not every blind person would be suitable. But still a possible outcome in the long run.
@magejoshplays · 1 year ago
Imagine training a model to use that reverse prediction method to substitute for the human in RLHF, so that the linear model setup would be self-training the base model with every creation it makes, as an extension for Stable Diffusion.
@takif8756 · 1 year ago
Thanks for the video, it's a bit harder to understand what's going on with this one! :)
@hakuhyo174 · 1 year ago
Imagine if the predictor of the text embedding and latent, given the MRI image, were not linear but a specifically crafted model with billions of parameters 😳
@vickanis2234 · 1 year ago
You have so many good vids explaining concepts. When will you do one on the basics of how base Stable Diffusion actually works, for beginners?
@MilesBellas · 1 year ago
Team Emad
@lucabonomo241 · 1 year ago
Hi man, can I ask whether it's possible for a supercomputer to exist that decodes the mind as we think? What does it mean to predict, decode, and see the mind? I'd be grateful for an answer!!! Thanks again. Cheers 😮
@yomaze2009 · 1 year ago
This is nuckin futs! Imagine if neuroscientists could use this to find out if you have brain damage.
@MilesBellas · 1 year ago
Point-E next?
@lupinsensei7456 · 1 year ago
All great, but that is not a screenshot from FMA, it's from One Piece.
@alterego1509 · 1 year ago
time to wear a colander hat .. for real
@lewingtonn · 1 year ago
Like I needed an excuse
@Laszer271 · 1 year ago
I'm impressed by the brain-activity-to-image mapping. However, image-to-brain-activity mapping is overrated imho. Showing results that are better than random shows nothing. I bet you could train a pigeon to predict some voxels with better-than-random accuracy. Heck, even if you predict all voxels randomly, there is a high chance that some outliers will still show up as "significantly better than random" in the statistical test, simply because of the sheer number of voxels you are predicting. Btw, there was a study about training an ensemble of pigeons to predict cancer from a photo, and they had pretty good results, better than the average doctor. So training pigeons is actually not as stupid as it may sound :P
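To make the multiple-comparisons point concrete, here is a tiny illustration; the voxel count and the per-voxel correlation test are my own assumptions, not anything from the paper. With thousands of voxels, purely random "predictions" still produce a batch of hits at p < 0.05.

# Sketch: per-voxel significance tests on many voxels yield spurious
# "significant" hits even for completely random predictions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_voxels, n_trials = 5000, 100

hits = 0
for _ in range(n_voxels):
    truth = rng.standard_normal(n_trials)        # measured voxel response
    prediction = rng.standard_normal(n_trials)   # completely random "prediction"
    r, p = pearsonr(truth, prediction)
    if p < 0.05:
        hits += 1

# Expect roughly 5% of voxels (about 250 here) to look "better than chance"
# at p < 0.05, which is why corrections for multiple comparisons matter.
print(f"{hits} of {n_voxels} voxels spuriously significant at p < 0.05")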
@lewingtonn · 1 year ago
Bahahaha that's so goood: www.google.com/amp/s/www.bbc.com/news/science-environment-34878151.amp
@lucamatteobarbieri2493 · 1 year ago
birds are smarter than you think
@KingTI192 · 1 year ago
They can read your thoughts in real time. This is nothing new
@RikkTheGaijin · 1 year ago
You look exactly like my childhood family doctor. Make of that what you will.
@lewingtonn · 1 year ago
Oh yeah, I also wanted to tell you: you have measles
@RikkTheGaijin · 1 year ago
@lewingtonn I thought I had ligma
@HB-kl5ik · 1 year ago
@RikkTheGaijin ligma balls
@amiththomas3884 · 1 year ago
@RikkTheGaijin LIGMA? HMMMM WHAT'S THAT?
@janerikjakstein · 1 year ago
This tech is crazy! Can they read my thoughts now?
@BobaFit · 1 year ago
😂 Haha, don't worry! While this tech is indeed crazy, it's not quite at the level of reading minds... yet! So, for now, your thoughts are safe, and you can continue plotting world domination in peace! 😜🌍🔒
@optimoos · 1 year ago
@lewingtonn · 1 year ago
\(º □ º l