
TensorFlow Tutorial 3 - Neural Networks with Sequential and Functional API 

Aladdin Persson
80K subscribers
130K views

Published: Sep 24, 2024

Comments: 132
@anandiborade6349 3 years ago
This is the most underrated TensorFlow tutorial series I have ever seen.
@hananxh 3 years ago
True
@tejasindani1760 2 years ago
Well said!
@akshanshsingh3766 3 years ago
This is the best TensorFlow tutorial series I have found; it's better than all the other platforms like Coursera, Udemy, etc. Thank you!
@thevoid5181 10 months ago
Even better than TensorFlow's own tutorials?
@junaiddooast7435 1 month ago
I have visited all these platforms, but this is still the best of all.
@henkjekel4081 2 years ago
Thank you for the videos, man, really helpful! Some comments, correct me if I'm wrong:
1. When doing x_train.reshape you should use x_train.reshape(60000, -1). In your video you use x_train.reshape(-1, 784), stating that the -1 will keep the 60000 the same. Actually, the -1 makes reshape infer the 784 automatically without you having to compute 28*28, so to take full advantage of the syntax it's easier to use x_train.reshape(60000, -1).
2. You mention that the dtype of the data is float64, but it is actually uint8. Therefore, I don't think we become computationally more efficient by changing to float32.
3. Add this to the first lines of your script if you want clean terminal output:
   import os
   os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
   from os import system
   system("clear")
4. Add this to your VS Code settings (settings.json) if you want everything to work nicely:
   {
     "workbench.settings.editor": "json",                // open settings in JSON format
     "workbench.settings.openDefaultSettings": false,    // don't open default settings alongside user settings
     "python.pythonPath": "C:\\ProgramData\\Anaconda3\\python.exe",  // the default Python
     "python.disableInstallationCheck": true,
     "editor.tabCompletion": "on",                       // to be able to tab out of ''
     "breadcrumbs.enabled": false,
     "workbench.startupEditor": "newUntitledFile",
     "workbench.editorAssociations": { "*.ipynb": "jupyter-notebook" },
     "workbench.colorTheme": "Default High Contrast",    // doesn't show the file path at the top of the code file
     "editor.fontSize": 17,
     "editor.fontWeight": "500",
     "debug.console.fontSize": 17,
     "terminal.integrated.fontSize": 17,
     "terminal.integrated.fontWeight": "600",
     "kite.showWelcomeNotificationOnStartup": false,
     "python.formatting.provider": "autopep8",
     "editor.formatOnSave": false,
     "python.formatting.autopep8Args": ["--ignore", "E402"],
     "code-runner.executorMap": { "python": "$pythonPath -u $fullFileName" },  // the Code Runner extension gives clear output in the terminal
     "code-runner.clearPreviousOutput": true,
     "code-runner.showExecutionMessage": false,
     "code-runner.saveFileBeforeRun": true,
     "code-runner.runInTerminal": true,
     "python.showStartPage": false,
     "python.condaPath": "C:\\ProgramData\\Anaconda3\\_conda.exe",
     "python.defaultInterpreterPath": "C:\\ProgramData\\Anaconda3\\python.exe",
     "notebook.cellToolbarLocation": { "default": "right", "jupyter-notebook": "left" }
   }
@reimartsarmiento8364 2 years ago
Thanks
@hamzajaved5283 2 years ago
Thanks for this, I always enjoy reading comments that offer alternative suggestions. Just my two cents:
1. Nice spot, it certainly seems neater to do it this way! To be more general, one could also write x_train = x_train.reshape(x_train.shape[0], -1). Assuming the first dimension corresponds to the samples, different x_train sets will still be reshaped correctly this way, rather than having to hard-code the exact number of samples each time.
2. Actually, by rescaling the pixel values to be between 0.0 and 1.0 (accomplished by the /255.0 operation), the datatype does by default become float64. Manually setting the datatype to float32 therefore cuts memory usage in half. One can check this with x_train.dtype (which will return uint8, float64, or float32 depending on which transforms have been applied). To get the actual size of the object in memory, you can use: import sys; sys.getsizeof(x_train)
3. & 4. Didn't look into these, as I'm not too bothered by the warning messages and am not a VS Code user :)
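The reshape and dtype points above can be sketched with plain NumPy (the array sizes mirror MNIST, but the pixel data here is random):

```python
import numpy as np

# MNIST-shaped stand-in: 60000 images of 28x28 uint8 pixels
x = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)

# -1 tells reshape to infer that axis: both calls yield (60000, 784)
a = x.reshape(60000, -1)
b = x.reshape(x.shape[0], -1)   # generalizes to any sample count
assert a.shape == b.shape == (60000, 784)

# dividing by 255.0 promotes uint8 to float64...
scaled = a / 255.0
assert scaled.dtype == np.float64

# ...so casting down to float32 halves memory usage
small = scaled.astype(np.float32)
assert small.nbytes * 2 == scaled.nbytes
```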
@You-7860 28 days ago
@@hamzajaved5283 Bro, I am not getting the things used inside those built-in functions. As I am new, I have not watched the theory part. What would be your suggestions?
@You-7860 28 days ago
@@hamzajaved5283 Bro, I am not able to understand the function parameters. I am new and have not studied the theory of neural networks properly. I just started watching this TensorFlow course. What would be your suggestion?
@praneethbhat7977 27 days ago
@@You-7860 First watch the videos he suggests in this video; you will get an idea.
@SizigiaTaps 1 year ago
Great tutorial! The best part, obviously, is when you said "número tres".
@AladdinPersson 1 year ago
Agreed, I listen to this part on repeat
@im-Anarchy 8 months ago
@@AladdinPersson Why did you say "número tres" instead of neural nets?
@anujprasad001 3 years ago
One of the best sets of tutorials for TensorFlow.
@puneethj9920 3 years ago
This is the best TensorFlow series I have seen on the internet. Thanks, man! Cheers
@cutyoursoul4398 3 years ago
As a beginner, this series is perfect. Thanks a lot!
@GabisFutureLab 2 months ago
Thank you for these tutorials, and hello from 2024!
@balakrishnakumar1588 4 years ago
Superb, enjoyed the tutorial. Waiting for the playlist to grow.
@PraYogiz 3 years ago
I think this tutorial is the best; it explains and covers everything needed.
@iantaggart3064 7 months ago
Accuracy increased by 0.07% with the inclusion of another layer of size 128, and by a further 0.34% with seven epochs instead of five.
@mayankarya7045 2 years ago
Hi Aladdin, you proved it is the best one. Thanks ❤️
@kohinoortanishq3968 3 years ago
Thank you so much, this helped me a lot. Previously I was not aware of the Functional API, and I badly needed it for my project.
@yashvardhannegi5909 4 years ago
You, sir, deserve more views and likes
@kanishkgandhi101 3 years ago
Using the exact same code, I get x_train.shape = (60000, 28, 28), but when I run the model I see 1875/1875 in each epoch rather than 60000/60000. Why is this happening?
@johnhawkins8914 2 years ago
1875 iterations with 32 samples (the batch size) per iteration = 60000
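The arithmetic behind the 1875 number, as a quick check (the 64-batch case from a comment further down is included too):

```python
import math

total_samples = 60000
batch_size = 32

# Keras shows steps (batches) per epoch, not individual samples
steps_per_epoch = total_samples // batch_size
assert steps_per_epoch == 1875
assert steps_per_epoch * batch_size == total_samples

# for sizes that don't divide evenly, the step count is rounded up
assert math.ceil(60000 / 64) == 938
```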
@alternativepotato 3 years ago
You should watch StatQuest for the theory, seriously, guys. Although Aladdin's resources are good, StatQuest is by far more concise and understandable.
@jeremynx 3 years ago
I am so happy to have found these videos!
@emotionblur7214 23 days ago
I'm trying with TensorFlow 2.17.0, and the line inputs = keras.Input(shape=(28*28)) produces the error "cannot convert '784' to a shape". Solution: apparently shape has to be explicitly a tuple, so shape=(28*28,) (with a comma) does it.
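The fix above hinges on Python tuple syntax rather than anything Keras-specific; a minimal illustration (the keras.Input line is left as a comment since it needs TensorFlow installed):

```python
# (28*28) is just the integer 784 wrapped in grouping parentheses;
# (28*28,) with the trailing comma is a 1-element tuple, which is
# what newer Keras versions require for a shape
not_a_shape = (28 * 28)
a_shape = (28 * 28,)

assert isinstance(not_a_shape, int) and not_a_shape == 784
assert isinstance(a_shape, tuple) and a_shape == (784,)

# inputs = keras.Input(shape=(28 * 28,))  # accepted on TF 2.17+
```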
@ahmedmohamedmohamedmohamed282 2 years ago
Great explanation and great video
@vidyakadam4021 3 years ago
Very nicely explained the code and its uses. Thanks a lot.
@cx4917 3 years ago
Hey, when you train the model, why does it show 60k observations even though you are using a batch size?
@shlepeekeg1412 2 years ago
I have started this after coding in SQL for a year
@akashbhoi1951 3 years ago
Best tutorial ever
@michelchaghoury870 2 years ago
MANNNN, so useful. Please do more and more of these videos, and keep going!
@joysanimationstudio2375 3 years ago
4:06 I can't compile; it gives me the error "list index out of range" from the tf.config.experimental.set_memory_growth(physical_devices[0], True) line. Please help me.
@RiadAhmed-ce6qo 4 months ago
You are very good, thanks
@NayarJoolfoo 1 year ago
Such a clear explanation (y)
@nerymarques42 1 year ago
I could not thank you enough! Superb content
@matinfazel8240 3 years ago
Thanks, it was awesome
@essamgouda1609 3 years ago
God bless you, sir!
@tsegaamanuel5907 3 years ago
Really superb tutorial
@coding10yearold 3 years ago
Shouldn't each epoch have 60,000 (total size) / 32 (batch size) = 1875 steps?
@tanvik5427 2 years ago
Yep
@nandhagopalcs5608 3 years ago
You are the best, sir
@dhawals9176 3 years ago
Why is it showing 60000/60000? Wasn't your batch size 32?
@jeremynx 3 years ago
Thank you very much!!! You are really great!
@delyartabatabai9636 2 years ago
Great explanation! Thanks!
@JearBear6896 2 years ago
For the people confused about tensorflow.keras.layers: the new way is keras.datasets.
@lukehyde6942 5 months ago
The provided code doesn't work. If I modify it to work with the latest TensorFlow, I get 0.15 accuracy. I don't have a GPU to use (it's an older laptop); does that change the answer? Currently I'm using the Sequential method.
@lukehyde6942 4 months ago
I think the issue was the inputs=inputs line, etc.; that is very critical. Hopefully this is useful info for someone.
@dhruvnegi422 3 years ago
Hey, I used Adadelta and RMSprop as my optimizers, and in both cases the training accuracy was above 0.99 with a loss of around 0.0062, but when evaluating, the loss is pretty high (1.32 and 1.42 respectively) with an accuracy of 0.95 for both. What might be the reason for this huge deviation? Is it due to overfitting, or some other concept I am missing?
@AladdinPersson 3 years ago
Sounds like overfitting to me. Try adding dropout, L2 regularization, and/or data augmentation, which we cover in future videos :)
@dhruvnegi422 3 years ago
@@AladdinPersson Ohh cool, thanks mate
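For readers curious what the suggested L2 regularization actually adds to the training loss, here is a small NumPy sketch (the weight values and lambda are made up for illustration):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])  # hypothetical layer weights
lam = 0.01                      # regularization strength (lambda)

# L2 regularization adds lam * sum(w^2) to the loss, penalizing
# large weights, which discourages overfitting the training set
l2_penalty = lam * np.sum(w ** 2)  # 0.01 * (0.25 + 1.0 + 4.0)
assert abs(l2_penalty - 0.0525) < 1e-12
```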
@meethansaliya4885 3 years ago
When doing hands-on practice, I got an error about the values of y_true and y_pred in the loss. Can anyone help me out? Thanks in advance.
@watcharakietewongcharoenbh6963 2 years ago
I don't understand what you mean by "one input and one output", and why the Sequential API cannot handle any other case.
@im-Anarchy 8 months ago
Same, but do you understand it now?
@mahneh7121 1 year ago
I noticed that the number of parameters comes from the cross links, 512*28*28, and just as I was writing I realized that the remaining 512 have to be the biases. Interesting that there are so many, as you actually need just one for optimizing the shift.
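The parameter count for a Dense(512) layer on flattened 28x28 inputs works out as follows:

```python
inputs = 28 * 28   # 784 input features per flattened image
units = 512        # neurons in the first Dense layer

weights = inputs * units  # one weight per input-to-neuron connection
biases = units            # one bias (shift) per neuron

# 401408 weights + 512 biases = 401920, matching model.summary()
assert weights + biases == 401920
```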
@utpalpodder-pk6vq 3 years ago
Sir, in the last portion of this lecture you mentioned extracting features from the layers of the model. My doubt is: when should we extract the model features? Can we extract them both before and after training? While building a CNN model, I found that we can extract layer features both before and after training, and can even predict inputs using those features. So I'm confused about how we are able to predict inputs using intermediate-layer features even before training the model. Please help me resolve this confusion.
@shikhargupta7080 1 year ago
It's possible, but the output will not be correct in that case.
@abhishekbhosale5310 11 months ago
I've been trying to train exactly the same model, but for some reason I was able to get a max accuracy of only about 92 percent. I even tried tuning the hyperparameters, but the results were the same. Can you tell me what the probable issue might be?
@dicesdw 2 years ago
If we remove the normalization from the data, will we get the same results but take more time to compute?
@donfeto7636 2 years ago
Can we use extracting specific layer features for transfer learning?
@__-op4qm 2 years ago
You can select which parameters/tensors to compute derivatives for and train (i.e., step/update) only them, while the others are kept fixed.
PS (long tangent): Generally, you can use the derivative of anything with respect to any tf.Variable tensors for whatever purpose when using tf.GradientTape, whether for training a NN or for anything else (like fitting arbitrary parametric functions, or anything where derivatives are useful). You can save (e.g., as pickles), load, swap, or manually update the trainable weights (which are all of type tf.Variable), or any other Variable tensors, using the .assign() method, which is generally convenient. Any consecutive steps (possibly a sequence of methods called from several classes) that need to happen fast (e.g., during training) can be wrapped in a function decorated with @tf.function; this frees up the whole pipeline to be handled/debugged eagerly (i.e., NumPy-like behavior) outside of training. Since graph mode is on only inside @tf.function executions, you also keep the option to train eagerly/interactively, just more slowly. Moreover, hands-on control of the Jacobians when running tf.GradientTape in eager mode allows you to check for and skip gradient updates containing tf.nan values, which would otherwise break the model. The Jacobians can also be manually adjusted before being passed to the optimizer: you can clip them, normalize them, maybe even reweight them per data point. In summary, tf.GradientTape and @tf.function, mixed with the .Layer and .Model inheritance tools, allow you to do almost whatever you want with this library.
@donfeto7636 2 years ago
@@__-op4qm Thank you for taking the time to write this comment, I appreciate it. Can you recommend a book on TensorFlow for beginners?
@__-op4qm 2 years ago
@@donfeto7636 This channel is awesome! Also, on Coursera there is a useful course called 'custom-models-layers-loss-functions-with-tensorflow'. [My first reply got deleted, probably because I put the full link in.] The main thing I was saying is that some tf decorators have nuances to consider in order to train fast and correctly. Googling forums etc. and trying things out until it trains as well as the official best-practice code, step by step, was a good exercise when pushing the library to do unusual custom things. I really like that tf now supports eager-mode functionality very nicely, meaning things can be selectively run in graph mode only when and where that's needed for speedup.
@tahaa1994 3 years ago
You are the best
@aaaqaaaa2720 2 years ago
Hi sir, when I execute the code I get the error "name 'Sequential' is not defined". Why? Thanks in advance.
@granatapfel6661 3 years ago
Why aren't the commands (import tensorflow as tf and so on) highlighted? I think I'm using the right interpreter. Can someone help me?
@samthrimavithana8243 3 years ago
Hi, when I print(x) it gives an error: TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
@malik_fa 3 years ago
How can we load a .csv or image dataset from a local directory instead of using the built-in MNIST dataset?
@Rohankumar-dd2ss 3 years ago
For CSV, use pandas and convert it to NumPy
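A minimal sketch of that pandas route (the column names and the in-memory CSV here are made up for illustration; with a real file you would pass its path to read_csv):

```python
from io import StringIO

import numpy as np
import pandas as pd

# stand-in for a real file, e.g. pd.read_csv("digits.csv")
csv_data = StringIO("label,p0,p1,p2\n0,0,128,255\n1,64,32,16\n")
df = pd.read_csv(csv_data)

# split labels from pixel columns and convert to NumPy
y = df["label"].to_numpy()
x = df.drop(columns="label").to_numpy(dtype="float32") / 255.0  # scale pixels

assert x.shape == (2, 3)
assert y.tolist() == [0, 1]
assert x.max() <= 1.0
```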
@olo259 5 months ago
Thanks, sir, for the video
@dijkstra4678 2 years ago
Unfortunately, in TensorFlow 2.8.0 the Functional API is broken. After consulting Google, it appears the Functional API has recently been producing the same error message, and nobody knows how to solve it.
@jeremynx 3 years ago
The best!
@utkar1 3 years ago
Hey, thanks for this awesome lesson. A query, though: the same model with the Functional API gives me 9.7-9.8% accuracy, while the Sequential one gives me 97-98% accuracy.
@nikai4249 1 year ago
Try from_logits=True
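The ~10% accuracy (chance level on 10 classes) is the classic symptom of a loss/activation mismatch: with from_logits=False the loss assumes the model's outputs are already probabilities. A NumPy sketch of what from_logits=True applies internally:

```python
import numpy as np

def softmax(z):
    # subtracting the max is a standard numerical-stability trick
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw, unnormalized model outputs
probs = softmax(logits)

# softmax turns logits into a valid probability distribution,
# which is what the cross-entropy loss actually expects
assert abs(probs.sum() - 1.0) < 1e-12
assert probs.argmax() == 0  # the largest logit wins
```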
@apocalypt0723 4 years ago
Thanks for the video.
@IseseleVictor 6 months ago
The tutorial is very helpful, though I'm getting an error when trying to print the model summary following tutorial 3: print(model.summary()) raises ValueError: Cannot convert '784' to a shape.
@WildSurferYT 5 months ago
I have the same problem
@thetensordude 3 years ago
Nice tutorial! Can you also make a video about TensorFlow Core?
@philippfrogel9355 3 years ago
Get this guy more subs
@cedricmanouan2333 4 years ago
Very good!
@AladdinPersson 4 years ago
I appreciate you, man!
@neillunavat 3 years ago
How do I print only the accuracy while training a Keras Functional API model? Please help! I am trying to compare 3 different output layers with different activation functions. The problem is, I only want the accuracy, not the loss, while training. It works, but the line is too LONG. I want to compare each layer's accuracy.
Epoch 1/5
1875/1875 - 4s - loss: 3.7070 - Sigmoid_loss: 1.1836 - Softmax_loss: 1.2291 - Softplus_loss: 1.2943 - Sigmoid_accuracy: 0.9021 - Softmax_accuracy: 0.9020 - Softplus_accuracy: 0.5787
@grahamastor4194 3 years ago
Question: when building a model without a final activation layer and allowing the loss function to apply the last activation, what does the model.predict code look like? Thanks.
@sofyanmahmoud4776 3 years ago
You are amazing
@XhunterDragon96 3 years ago
Hi, I have a question. Why is the shape of the input 28*28? I understand that the images are 28 by 28 pixels, but I thought there were 60k entries? I don't understand exactly how this part works.
@SaiKiranAdusumilli 1 year ago
60k is the number of images, and each image consists of 28*28 pixels, i.e., 784 values. So for each image we have 784 values.
@Ven0mm04 7 months ago
Hey, is there any possibility that I can get 2 outputs? Like, I give one picture in, and the output is 2 new pictures?
@Ven0mm04 7 months ago
Never mind, I hadn't watched the complete video xD
@asherabecassis9575 3 years ago
For Apple users: os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
@moussa5495 1 year ago
VERY NICE AND CLEAR
@Amir-gi5fn 4 months ago
ValueError: Cannot convert '784' to a shape. AttributeError: 'NoneType' object has no attribute 'items'
@Amiths18 29 days ago
Add "," after 784
@shudharsanmuthuraj1076 3 years ago
Hello, why does it show 60000 samples while printing, instead of a number based on the batch size (60000/64)?
@navalsurange3588 3 years ago
Yes, the same is happening with me. Did you get it resolved?
@iva1389 2 years ago
Wouldn't it be more practical to use a Flatten layer instead of reshaping x_train?
@DiaaHaresYusf 2 years ago
The division by 255 is not for performance; it's because of the weights assigned to the neural network. The weights are generated randomly between 0 and 1, and if you keep your X values big, the network may not learn at all. Thanks.
@magicalpotato196 1 year ago
I understand why the last Dense layer is 10, but why is the first layer 512 and the second layer 256? I'm completely new to this, so if anyone could give an explanation for dummies, I'd appreciate it :)
@SaiKiranAdusumilli 1 year ago
It's a somewhat arbitrary number, but there is some logic to shrinking the dense layer sizes. For example, take a human image that needs to be recognized: the first dense layer produces results that detect fingers, hands, legs, hair, eyes, etc. (identification of small parts). In the second dense layer, those fingers are combined to form a hand or a leg, and the eyes and hair are used to identify a head. In the final output layer, all these values (hand, face) are combined to predict that it's a human image. That's roughly how it works.
@akashk7390 6 months ago
Can you please provide the code?
@juanluismagana9043 3 years ago
Thank you so much, I really understood everything (I'm not a native English speaker)
@bhaviksharma326 3 years ago
To normalize, you divided by 255.0; where did that number come from? Did you already know the max value?
@Rohankumar-dd2ss 3 years ago
Pixel values range from 0 to 255
@abuabdullah9878 3 years ago
@@Rohankumar-dd2ss Thank you!!!
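What the /255.0 normalization does, concretely:

```python
import numpy as np

# 8-bit grayscale pixels span exactly 0..255, hence the divisor
img = np.array([[0, 128, 255]], dtype=np.uint8)
assert np.iinfo(np.uint8).max == 255

# after scaling, every pixel lies in [0.0, 1.0]
scaled = img.astype("float32") / 255.0
assert scaled.min() == 0.0 and scaled.max() == 1.0
```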
@HomonculusPort 3 years ago
Why does my y_test have only 10000 samples, compared to 60000 in y_train?
@johnhawkins8914 2 years ago
For model training we need a lot of data, hence the 60000 samples in y_train; for evaluation a smaller dataset is enough, hence only 10000 samples in y_test.
@amruthavarshini8094 4 months ago
Check out CampusX for once
@navalsurange3588 3 years ago
Dude, how is your training time so low? Mine is 69-75 sec per epoch. Also, with batch size = 32, only 60000/32 training steps are shown, not 60000, so I changed the batch size to 1, and now it takes 69-75 sec.
@philippfrogel9355 3 years ago
I think this is because he uses a GPU and you don't? It can be tricky to activate it sometimes, I think. Also, a batch size of 1 doesn't make sense; then you do each training step with only one image. If your batch size is e.g. 32, normally all 60000 images are still used within one epoch. Be careful with these answers, I'm not a pro myself.
@kaylaparys7146 3 years ago
Hello, mine also takes only about a second each, and I am running a GTX 1060. Since TensorFlow appears to run on the GPU, I would assume the stronger the GPU, the faster the computations.
@randyrabinzengui9174 4 years ago
What application is used for that code? PyCharm or something else?
@AladdinPersson 4 years ago
PyCharm, yes
@randyrabinzengui9174 4 years ago
@@AladdinPersson But I'm trying with my PyCharm, and I think it has some problems! Please, how do I get the code 🙏
@randyrabinzengui9174 4 years ago
@@AladdinPersson Is it on Mac or Windows?
@Namenlos-r8f 1 month ago
Your voice sounds so familiar
@shivamanand8998 4 years ago
Got an accuracy of 98.1 using ReLU for layers 1 and 2 and sigmoid for layer 3
@furkatsultonov9976 3 years ago
You can use the sigmoid function for binary classification, like cat / no cat. For the MNIST problem you should use the softmax activation function for the output layer, as we have multiple labels.
@moussa5495 1 year ago
Do you have a website?
@rudela9900 2 years ago
Never mind, a parenthesis was in the wrong place. Thanks anyway.
@dailyupdatelesson4097 4 years ago
Bro, we need more advanced lessons. Are these codes already on GitHub or elsewhere?
@AladdinPersson 4 years ago
I hear you, I'm doing more advanced ones too, but we have to think about the newbies too ;)
@nikhildr4441 3 years ago
Yeah! What about beginners 😔
@manishahajare2470 2 years ago
5:46
@Rindik 1 year ago
Thanks for the tutorial, but f*cking PyCharm was killing my nerves. VS Code helped me, though.
@thevoid5181 10 months ago
If you had problems importing Keras like me (from tensorflow import _tf_uses_legacy_keras_), do this instead: import keras
@thevoid5181 10 months ago
from keras import layers
from keras.datasets import mnist
@FillyRoid 2 months ago
@@thevoid5181 from keras.datasets import mnist doesn't work; it doesn't find datasets. You can get the datasets via "from keras import datasets", but "from keras.datasets" doesn't work.
@BeautifulFeets10 3 years ago
Burning question: what PC (and GPU model) do you use?!