
Anomaly detection with TensorFlow | Workshop 

TensorFlow
602K subscribers
106K views

Learn how to go from basic Keras Sequential models to more complex models using the subclassing API, and see how to build an autoencoder and use it for anomaly detection with an electrocardiogram dataset to find abnormal heart rhythms.
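The pipeline described above can be sketched in a few lines. This is a sketch rather than the exact workshop code: the 32/16/8 layer widths follow the linked Intro to Autoencoders tutorial, 140 is the length of one ECG trace in that dataset, and `normal_train_data` is a hypothetical name for the normal-rhythm training split.

```python
import tensorflow as tf
from tensorflow.keras import layers

class AnomalyDetector(tf.keras.Model):
    """Autoencoder built with the Keras subclassing API (a sketch,
    not the exact workshop code)."""
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([
            layers.Dense(32, activation="relu"),
            layers.Dense(16, activation="relu"),
            layers.Dense(8, activation="relu"),   # compressed representation
        ])
        self.decoder = tf.keras.Sequential([
            layers.Dense(16, activation="relu"),
            layers.Dense(32, activation="relu"),
            layers.Dense(140, activation="sigmoid"),  # reconstruct the trace
        ])

    def call(self, x):
        return self.decoder(self.encoder(x))

model = AnomalyDetector()
model.compile(optimizer="adam", loss="mae")
# Train on NORMAL rhythms only (hypothetical variable name), so that
# anomalous rhythms reconstruct poorly and show a high error:
# model.fit(normal_train_data, normal_train_data, epochs=20)
```

Because the model only ever learns to reproduce normal rhythms, the reconstruction error on an abnormal rhythm is large, which is what the anomaly threshold later exploits.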
Resources:
Intro to Autoencoders → goo.gle/3eheOXi
Speaker: Laurence Moroney
Watch more:
TensorFlow at Google I/O 2021 Playlist → goo.gle/io21-T...
All Google I/O 2021 Workshops → goo.gle/io21-w...
All Google I/O 2021 Sessions → goo.gle/io21-a...
Subscribe to TensorFlow → goo.gle/Tensor...
#GoogleIO #AI #ML
product: TensorFlow - General; event: Google I/O 2021; fullname: Laurence Moroney; re_ty: Livestream;

Published: 21 Aug 2024
Comments: 73
@jstello · 3 years ago
Hands down my favorite ML teacher
@LaurenceMoroney · 3 years ago
Thanks, Juan!
@jaggyjut · 3 years ago
The 45-minute workshops are really good. Please continue to make more of them.
@LaurenceMoroney · 3 years ago
Good to know, glad you enjoyed!
@cantblockjo3193 · 1 year ago
Fantastic video! You made neural networks very simple to understand.
@timoose3960 · 3 years ago
This use of generative AI is just brilliant!
@LaurenceMoroney · 3 years ago
Thank you :)
@rommeltito123 · 2 months ago
What a great learning experience! Thank you, Laurence Moroney.
@thiagoribeiro4733 · 3 years ago
Simply awesome!!! Thank you so much Laurence, it's always a pleasure to learn from you!!
@LaurenceMoroney · 3 years ago
Thank you! :)
@hdm_vision · 3 years ago
He is my teacher on Coursera. Awesome teacher. He taught me alongside Andrew Ng.
@nitishthakur3644 · 3 years ago
I love Generative modeling and your teaching style!
@laurencemoroney655 · 3 years ago
Thanks!
@eng-khalil · 2 years ago
Nice workshop, love the explanation and the materials given.
@rajarams3722 · 11 months ago
Thanks. ECG is a time series. Can it be a simple feed-forward autoencoder? Shouldn't it be an RNN autoencoder?
@TheMegaEzio · 3 years ago
Awesome content, very helpful for my research. More of this.
@GeorgeZoto · 1 year ago
Great content and well designed example on using an autoencoder for anomaly detection.
@PedroAcacio1000 · 2 years ago
Hi, first of all, thank you very much for the content. My question is whether this would be possible with a simple classification model. If so, what are the pros and cons of each approach? Thanks in advance.
@lzdps · 3 years ago
Thank you for teaching this! I was confused for a long time...
@laurencemoroney655 · 3 years ago
Welcome!
@jeremysapienza7277 · 1 year ago
Amazing, thanks for sharing your knowledge with us!
@inkytbe26 · 3 years ago
More of these! One on decision trees would be great as well.
@ludwigstumpp · 3 years ago
For decision trees you might be better off checking out scikit-learn.
@ManpreetSinghMinhas · 3 years ago
Awesome workshop! I really liked how you selected the threshold as mean + std_dev. But if we make the Gaussian assumption, do you think that mean + 2*std_dev would be a safer value?
@lfmtube · 2 years ago
Excellent video, and thanks a lot! Please allow me to ask a question: is the method shown in the video recommended for training and detecting anomalies in financial transactions? Is comparing a transaction against a previously calculated threshold recommended, or do you recommend another approach? I am new to ML, so any example you could point me to would be incredibly appreciated.
@willykitheka7618 · 2 years ago
Super useful content!
@SaifAhmed81 · 2 years ago
Thanks for the great content. Could anyone help me with this: I don't understand why the Dense layer starts with 128 units and not 784?
@lysapala2812 · 2 years ago
I like your videos a lot! I have a question about the time series analysis. As I understood it, the shape of your network depends on the length of your data. But what about the case where single trials have different durations, for example when one heartbeat lasts longer than another?
@rajarams3722 · 11 months ago
It should be an RNN autoencoder.
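For readers curious what the recurrent variant suggested in this thread might look like: below is a minimal LSTM autoencoder sketch (one common pattern, not the workshop's approach, which uses plain Dense layers on fixed-length traces). The encoder compresses a sequence into one fixed vector, `RepeatVector` feeds that vector to every decoder step, and the decoder reconstructs the sequence.

```python
import tensorflow as tf
from tensorflow.keras import layers

timesteps, features = 140, 1   # one ECG trace treated as a 140-step sequence

model = tf.keras.Sequential([
    layers.Input(shape=(timesteps, features)),
    layers.LSTM(32),                         # encode to one fixed vector
    layers.RepeatVector(timesteps),          # repeat it for each output step
    layers.LSTM(32, return_sequences=True),  # decode back to a sequence
    layers.TimeDistributed(layers.Dense(features)),
])
model.compile(optimizer="adam", loss="mae")
```

A recurrent model also makes variable-length beats easier to handle (e.g. via padding plus `layers.Masking`), which a fixed-width Dense autoencoder cannot do, addressing the duration question above.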
@jonathonyee6028 · 2 years ago
At 34:31, under the history code section, shouldn't the validation_data be normal_test_data instead?
@gpgk_1841 · 1 year ago
Thanks, these videos will help me!
@bhavinmoriya9216 · 2 years ago
Awesome! Why don't you have to add an input layer?
@nazaramin8635 · 1 year ago
Very valuable, many thanks...
@fahemhamou6170 · 2 years ago
My sincere greetings, thank you.
@2LazySnake · 3 years ago
Hi! Thanks for the great content! Is it necessary for the decoder to mirror the encoder? Why does it mirror the encoder's layers?
@prasadkakade1529 · 2 years ago
Hello, could you please let me know what the Dense layer sizes (like 32, 16, 8) should be if we have only 8 variables instead of 140 ECG indicators?
@pasqualeburo4170 · 3 years ago
@Laurence Moroney Very interesting, thank you! But what if we are in an unsupervised scenario? In this case, I trained the autoencoder on the whole unlabeled dataset and made predictions on the same data, trying to spot anomalies. Given that the data are unlabeled and that anomalies should make up less than about 1% of the dataset, would it be correct to train the autoencoder on the whole dataset, predict on the whole dataset, and then look at the reconstruction errors to spot anomalies (the points with the highest reconstruction error)?
@laurencemoroney655 · 3 years ago
Maybe! I think it would depend on your results from the initial training, but the approach seems sound.
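The fully unsupervised variant discussed in this thread boils down to ranking by reconstruction error. In the sketch below the errors are simulated (in practice they would come from the autoencoder trained on the whole unlabeled dataset), with a few large-error "anomalies" injected for illustration.

```python
import numpy as np

# Stand-in per-example reconstruction errors over the whole dataset.
rng = np.random.default_rng(1)
errors = rng.normal(0.02, 0.005, size=10_000)
errors[:20] += 0.05            # inject a few large-error "anomalies"

# Flag the worst 1% of reconstructions as candidate anomalies,
# matching the ~1% contamination rate assumed in the question.
cutoff = np.quantile(errors, 0.99)
candidates = np.flatnonzero(errors > cutoff)
```

This works when the contamination rate is low: a handful of anomalies barely influences what the autoencoder learns to reconstruct, so they still stand out with high error.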
@pasqualeburo4170 · 3 years ago
Thank you!
@anatoly-k · 1 year ago
Could you share an example of your approach?
@livesinging3924 · 3 years ago
Very good content
@LaurenceMoroney · 3 years ago
Thanks!
@Max-eo6vx · 2 years ago
Thank you, Laurence. People would naturally worry about the 3 false negatives that happen to show low losses. Is there any remedy for that?
@paix311 · 7 months ago
Thanks, coach.
@maloukemallouke9735 · 8 months ago
Perfect.
@eagle43257 · 3 years ago
Thank you very much
@islamandislam8945 · 3 years ago
@Laurence Moroney Where can we see the IPython code? Could you please attach it?
@LaurenceMoroney · 3 years ago
The URL is shown in the slides around 28:50
@aguelejoseph5753 · 1 year ago
When I trained my model I had 96% accuracy, but when I deployed it to an Android application the predictions were totally different from what I got in Python. What could be the reason for this? Any answer would be highly appreciated.
@ms.b.deepthiasst.professor3987 · 2 years ago
Which works better for anomaly detection in time series: an autoencoder or a variational autoencoder? Which one would you suggest?
@rajavallib-wi3fn · 9 months ago
So, data from a high-dimensional solution space is reduced to a lower-dimensional solution space and then the error is computed. Somehow related to Proper Orthogonal Decomposition. Is that right?
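The analogy in this comment is apt: PCA/POD is essentially a linear autoencoder. The sketch below (with random stand-in data) projects onto a k-dimensional subspace via SVD and measures the per-sample reconstruction error, exactly the quantity the workshop's autoencoder computes nonlinearly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 140))   # stand-in for 200 traces of length 140
Xc = X - X.mean(axis=0)           # center, as PCA/POD requires

# SVD gives the principal directions in the rows of Vt.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 8                             # subspace dimension (cf. the 8-unit bottleneck)
X_hat = Xc @ Vt[:k].T @ Vt[:k]    # project down to k dims and back up
errors = np.mean(np.abs(Xc - X_hat), axis=1)  # per-sample reconstruction error
```

An autoencoder with nonlinear activations generalizes this to curved manifolds; with linear activations and squared-error loss it learns essentially the same subspace as PCA/POD.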
@Aristocle · 3 years ago
Is there an example of anomaly detection in images (like the Fashion MNIST dataset)?
@xenophon167 · 2 years ago
Is multivariate anomaly detection possible with deep learning, for a dataset that includes readings from different sensors?
@richardfinney2548 · 1 year ago
Can anyone explain why we validate with both the normal and anomalous data?
@shariqueansari9921 · 3 years ago
Sir, could you suggest any TensorFlow book covering topics such as Kalman filters, BERT, LSTMs, and other advanced topics?
@sann6688 · 2 years ago
Thank you very much for your very useful video!
@TheThunderSpirit · 3 years ago
I have done the same thing on Kaggle, using the ECG data from PhysioNet.
@LaurenceMoroney · 3 years ago
Cool!
@NISHANTKUMAR-yw4dm · 3 years ago
Thanks! I am actually working on a project on anomaly/fault detection in power systems, and this really helps a lot. Can you suggest something along the lines of my project (links or articles will do)?
@ireziqn97 · 2 years ago
How did it go? I'm starting something similar.
@sreejapp1945 · 3 years ago
Hello, can you please explain how we find the number of layers and neurons in neural networks?
@laurencemoroney655 · 3 years ago
It's very much trial and error at first until you get the best results. Over time you'll get more of a 'feel' for it while you build and train more models. You can also use a tool like Keras tuner to find the optimal settings.
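The trial-and-error loop described in this answer (which tools like Keras Tuner automate) can be sketched manually. Random data stands in for the real training set here, and the candidate widths are arbitrary examples.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)
x = rng.random((64, 140)).astype("float32")   # stand-in for the ECG traces

val_losses = {}
for width in (8, 16, 32):                     # candidate bottleneck widths
    model = tf.keras.Sequential([
        layers.Dense(width, activation="relu"),
        layers.Dense(140, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mae")
    hist = model.fit(x, x, epochs=3, validation_split=0.25, verbose=0)
    val_losses[width] = hist.history["val_loss"][-1]

# Keep the architecture with the lowest validation loss.
best_width = min(val_losses, key=val_losses.get)
```

Keras Tuner does the same thing more systematically: you declare hyperparameter ranges, and it searches them against a validation objective.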
@islamandislam8945 · 3 years ago
@TensorFlow Could you please reply with where we can look at the code?
@LaurenceMoroney · 3 years ago
The URL is shown in the slides around 28:50
@sashaqaz5622 · 3 years ago
Where can we get the code?
@LaurenceMoroney · 3 years ago
The URL is shown in the slides around 28:50
@Jirayu.Kaewprateep · 1 year ago
📺💬 He's listening... 📺💬 The decoding part works very well on the extended type of coding. 🥺💬 You are right that it is like a transformer/generator; there are many references about them explaining why encoding and decoding are required. 🥺💬 They are not required to have symmetric layers, but they use generator-style functions toward the objective; you can merge them into one model, but the translation still has to be learned. In conclusion, the objective and the input are different, and the transformed information can be used in the target environment by the transformer part. 🥺💬 The same goes for cryptography-style models: transformers are widely used for many objectives, and the generator transforms the input into messages. The word/text generators in your online class are the same idea. 📺💬 I use ECG. 🥺💬 It is similar to a Fourier transform, which we use with repeating signals such as frequencies, but brain or heart activity has an identity, with many frequencies working at the same time; examining the heart's function requires opposite or fully connected channels, unlike voices or other natural signals. 📺💬 It counts the number of products; it's not Fourier. 🥺💬 I was describing the transcoder (transformer-encoder); see the recursive function. I am not trying to name theories to make everything look hard, but when an input sequence passes through multiple layers of neurons, it squeezes and expands: weak connections are expanded and strong connections are densified. See the output graph, or derive the formula. 📺💬 (In Thai:) Yui says the teacher is a show-off; she's a relative of Sun's. 🥺💬 (In Thai:) Nobody is accusing anyone of deceiving anyone; I'd rather keep watching than watch people argue. Thank you, you teach well, though as the teacher said, some parts are quite hard.
@chris_kouts · 1 year ago
This audio is bad.