Welcome to Hackers Realm, your ultimate destination for learning all things in tech! 💻🌐 From Python tutorials to machine learning and deep learning projects, web scraping, blockchain concepts, data structures and algorithms, and much more, we've got you covered! 💯
Don't hesitate to share your queries in the comment section, and don't forget to subscribe to stay updated with the latest content! 🔔 Subscribe: bit.ly/hackersrealm 🗓️ 1:1 Consultation with Me: calendly.com/hackersrealm/consult
Remember, God is a Hacker, not an Engineer! 🙌
Follow us on our social media channels and website for more resources.
And if you want to support the channel, make a small donation to our UPI ID or PayPal. 🙏 🆙 UPI ID: hackersrealm@apl 💲 PayPal: paypal.me/hackersrealm ₿ Binance Pay ID: 238629676
The important thing to keep in mind is that if we approach this problem with a plain map = dict(), looking up arr[i + 1] will raise a KeyError (because that key may not be in the map). However, if we use Counter(arr) to build the dictionary, there is no KeyError: it simply returns 0 for any key not in the map.
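The difference is easy to see in a few lines (a minimal sketch; the list `arr` here is just an arbitrary example):

```python
from collections import Counter

arr = [1, 1, 2, 2, 3]
counts = Counter(arr)

# A plain dict raises KeyError for a key that was never inserted:
plain = dict(counts)
try:
    plain[99]
except KeyError:
    print("plain dict raised KeyError")

# Counter returns 0 for any missing key instead of raising:
print(counts[99])  # → 0
```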
Thank you so much!!! A question: for what types of distributions can the box plot be used? For example, if the data follows a uniform distribution, does it make sense to look for outliers? What do you recommend?
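One way to see why the 1.5×IQR box-plot rule rarely flags anything for a bounded uniform distribution is to try it on simulated data (a minimal sketch; the sample size and seed are arbitrary):

```python
import random

# For Uniform(0, 1), Q1 ≈ 0.25 and Q3 ≈ 0.75, so the box-plot fences
# (Q1 - 1.5*IQR, Q3 + 1.5*IQR) ≈ (-0.5, 1.25) lie entirely outside
# the data's [0, 1] range — so no point is ever flagged as an outlier.
random.seed(0)
data = sorted(random.random() for _ in range(10_000))

q1 = data[len(data) // 4]
q3 = data[3 * len(data) // 4]
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if x < lower or x > upper]
print(len(outliers))  # → 0 for bounded uniform data
```

For heavy-tailed or skewed data the same rule can flag many legitimate points, so the answer depends on the distribution, not just the plot.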
i am not getting the program to run correctly, this is the code (note the return only works inside a function, and the loop bounds assume the usual 6x6 grid):

def hourglass_sum(arr):
    maxsum = -99  # below the minimum possible hourglass sum (-63 for values in -9..9)
    for i in range(4):
        for j in range(4):
            top = sum(arr[i][j:j+3])
            mid = arr[i+1][j+1]
            bot = sum(arr[i+2][j:j+3])
            hourglass = top + mid + bot
            maxsum = max(hourglass, maxsum)
    return maxsum
I have a scenario of extracting the most relatable audios (the ones with the most matching words) given an input audio. What's the best way to approach this problem? If it is embeddings, how would I go about it; and if not, which approach would be better?
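If "more words matching" is literally the criterion, one simple baseline before reaching for embeddings is word-set overlap on transcripts. This sketch assumes each audio has already been transcribed by some speech-to-text step (not shown); the filenames and sentences are made-up placeholders:

```python
def word_overlap(query: str, candidate: str) -> float:
    """Jaccard similarity between the word sets of two transcripts."""
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0

# Hypothetical transcripts keyed by audio filename.
query = "turn on the living room lights"
candidates = {
    "a.wav": "please turn on the lights in the living room",
    "b.wav": "what is the weather today",
}

# Rank candidate audios by overlap with the query transcript.
ranked = sorted(candidates, key=lambda k: word_overlap(query, candidates[k]),
                reverse=True)
print(ranked[0])  # → 'a.wav'
```

Embeddings (of the transcripts, or of the audio directly) become worthwhile when you need semantic matches rather than exact word matches.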
As far as I know, we initially use a smaller learning rate because at the start of training the model takes larger, less stable steps toward the minima, and the steps naturally slow down as training progresses. So if we are dealing with a large dataset, we can use a higher batch size (like 8192) together with learning-rate warmup (gradually increasing the learning rate early in training), so that the model not only converges faster but can also generalize about as well as the smaller batch sizes of 32 or 64 usually do.
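A linear warmup schedule like the one described can be sketched in a few lines (the 0.1 base learning rate and 1000 warmup steps are arbitrary placeholders, and this is just one common warmup variant):

```python
def warmup_lr(step: int, base_lr: float = 0.1, warmup_steps: int = 1000) -> float:
    """Linearly ramp the learning rate from ~0 up to base_lr over
    warmup_steps optimizer steps, then hold it constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

print(warmup_lr(0))     # tiny LR at the very start of training
print(warmup_lr(499))   # halfway through warmup → half the base LR
print(warmup_lr(2000))  # full base LR once warmup is done
```

In practice the constant phase is usually replaced by a decay (step, cosine, etc.) after warmup ends.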
Thanks for the video. I am currently working on EfficientNet and topological data analysis. I have to extract features from the TDA and from EfficientNet, then combine them and run an SVM on the result. Can it work there too? Thanks again
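The usual way to combine two feature extractors is simple column-wise concatenation before the classifier. A minimal sketch with made-up shapes (100 samples, 1280 EfficientNet features and 32 TDA features per sample are placeholders, not real extractor outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two feature matrices, one row per sample.
cnn_feats = rng.random((100, 1280))  # e.g. EfficientNet pooled features
tda_feats = rng.random((100, 32))    # e.g. persistence-based TDA features

# Concatenate along the feature axis into one design matrix.
combined = np.hstack([cnn_feats, tda_feats])
print(combined.shape)  # → (100, 1312)

# The combined matrix can then be fed to a classifier,
# e.g. sklearn.svm.SVC().fit(combined, labels)
```

Since SVMs are sensitive to feature scale, it is usually worth standardizing each block (or the combined matrix) before fitting.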
Time series analysis: some classical time series forecasting methods, machine learning time series methods, and how to use Facebook Prophet for forecasting.
@@JonSnow-gs1hg I have already covered the fbprophet model; please search for the traffic forecast analysis video to check it. I will try to cover the remaining topics soon.
Second question: at the end you are testing your model with images that come from X, and X has been used to train the model, so aren't you testing the model on images it has already seen?
you could also try a new image with the same preprocessing techniques. Also, not all the images were used for training, since a validation split is held out for evaluation. I will make a note and use new images in a future video, thanks.
Thank you so much for this. I have been trying to install opencv with cuda for a few days now and there was always some new error that I could not resolve. This is the first tutorial that worked without any issues. Thank you so much!!
Hey Hackers, from minute 26-28 the screen was frozen during recording while the audio is still available; you can see the code in the notebook after that point. Thanks for your patience!!!
bro thanks a lot! initially i got so many errors that it took me 2 days, but i used ChatGPT, Google, and YouTube to resolve all those issues, and in the end i got it working.
i want to generate 256x256 images and the input images are also 256x256, but during epoch training it shows 32/32 steps while for you it shows 674/674. how do i increase it from 32/32 to 674/674?
I'm getting this error: "y contains previously unseen labels: angry". (I've named each classified sub-folder in the test folder by its expression.) The error arises when calling the label encoder's transform function: it runs successfully on y_train but throws this error on y_test. Please support if possible.
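That error usually means the LabelEncoder was fitted only on y_train, so any class that appears only in the test split is "unseen". A minimal sketch of one fix, with made-up labels ('angry' deliberately appears only in the test split):

```python
from sklearn.preprocessing import LabelEncoder

y_train = ["happy", "sad", "happy"]
y_test = ["angry", "sad"]   # 'angry' never appears in y_train

le = LabelEncoder()
# Fit on the union of both splits so every class is known to the encoder.
le.fit(y_train + y_test)

train_enc = le.transform(y_train)
test_enc = le.transform(y_test)   # no "unseen labels" error now
print(list(le.classes_))  # → ['angry', 'happy', 'sad']
```

When the full set of emotion classes is known in advance, it is cleaner to fit the encoder on that fixed class list rather than on the test data.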
Don't worry about it; even pro coders have to read the question carefully 2-3 times to understand it better. Use the sample input and output to understand it quickly, that's how I do it.
Is the denoiser only good at denoising images that actually contain a digit (since digits are the only data it was trained on)? What would happen if you input something else (e.g. a letter or another symbol)? Would it try to morph it into the closest digit, or would it be able to denoise it properly?
I answer my own question: I tried feeding letters to an autoencoder trained only on digits. TL;DR: it does not work very well. The output is mostly digits, usually with a shape as close as possible to the input letter. For example, "Z" will usually output "2", "C" some sort of "0", and so on.