Hey Babbili, try accessing from different devices. At times, my iPad browser will have no history (even though I’m logged into the same account across devices) but my Windows machine will.
Hi, I'd love to create a website, and there's an existing site that does what I want. My concern is that ChatGPT keeps giving me a violation message. What's the best way to prompt it to build a site like that?
ChatGPT doesn't let me create a Poker HUD (heads-up display showing the statistics of every player you've played with) for PokerStars. Can you figure out how to make it build one? Thank you for the content you bring in your videos.
Can't you just make a config file that refers to all 50 of your HTML files and save every parsed record into a single dataframe? I haven't tried it yet, but I'm pretty sure it can be done. That way you don't have to rename every single page.
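A minimal sketch of that config-file idea: list all the saved pages in one place, parse each with BeautifulSoup, and collect every record into a single pandas DataFrame. The file names and the `div.item` / `span.name` / `span.price` selectors are hypothetical; a real Amazon page would need different selectors. Two tiny pages are inlined here so the sketch runs without files on disk.

```python
from bs4 import BeautifulSoup
import pandas as pd

# In practice the "config file" would just be a text file listing one
# saved HTML path per line; here we inline two hypothetical pages.
pages = {
    "page_01.html": "<div class='item'><span class='name'>Phone A</span>"
                    "<span class='price'>199</span></div>",
    "page_02.html": "<div class='item'><span class='name'>Phone B</span>"
                    "<span class='price'>299</span></div>",
}

rows = []
for fname, html in pages.items():
    soup = BeautifulSoup(html, "html.parser")
    # Selectors are placeholders; adapt them to the real page structure.
    for item in soup.select("div.item"):
        rows.append({
            "source_file": fname,
            "name": item.select_one("span.name").get_text(),
            "price": item.select_one("span.price").get_text(),
        })

# All 50 (here: 2) pages end up in one DataFrame, no renaming needed.
df = pd.DataFrame(rows)
print(df)
```

With real files you'd replace the inline dict with `open(path).read()` for each path listed in the config file.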
Correct me if I'm wrong, but this is actually parsing, not web scraping. The code doesn't scrape the information directly from the website or handle its bot protection. Nonetheless, nicely done.
Maybe... but you misunderstood. You can't scrape "directly". When you write regular Python code, you send a request ---> the server might send back a response containing the HTML of that page ---> you store that HTML in a variable (e.g. res = ..., which just keeps the HTML page in your computer's RAM while the program runs) ---> now you parse it with bs4. Did you get that? You are never scraping it on the server; it's always on your side. Now, some websites like Amazon and LinkedIn are protected to block bots and requests sent through scripts. That's why I send the request through a web browser, save the response to disk as an HTML file, and then parse it. So I'd say web scraping never meant scraping "directly" or scraping on the server.
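The flow described above can be sketched in a few lines. To keep the sketch runnable offline, the "response" is a canned HTML string; the commented lines show where the real network request would go. The URL and page contents are placeholders.

```python
from bs4 import BeautifulSoup

# In a live run, steps 1-3 would be:
#   import requests
#   res = requests.get(url)   # step 1-2: request -> server returns HTML
#   html = res.text           # step 3: HTML stored in a variable (RAM)
# Canned response so this runs without a network connection:
html = ("<html><head><title>Phone listing</title></head>"
        "<body><p class='price'>$199</p></body></html>")

# Step 4: parsing happens on YOUR machine, never on the server.
soup = BeautifulSoup(html, "html.parser")
print(soup.title.get_text())                    # page title
print(soup.select_one("p.price").get_text())    # extracted field
```

The same `BeautifulSoup(html, ...)` call works identically whether `html` came from `requests`, from Selenium, or from a browser-saved file on disk, which is the point being made above.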
@@ChatGPT-AI Yes, I 100% get it and appreciate the detailed explanation. I was referring to a use case where you have mass-scale production that requires scraping hundreds of thousands of URLs. In that scenario it isn't reasonable to open a browser and save each page as HTML like a real user; there should be an automated process that accesses the URL, gets past its protection, and then parses the info. I may have suggested a far-fetched use case, but that was on my mind when I searched for videos and found yours. Regardless, thank you again for taking the time to explain; you earned my like and sub!
@@ChatGPT-AI It's easier to write a single script to scrape all the pages rather than doing this. I scraped all the pages using Selenium and BeautifulSoup and didn't get any errors.
Hey, awesome tutorial man. I'm having a problem: when I try to scrape all the phones across, say, 50 pages, I stop getting any results after about 5 pages. I don't understand why. How can I solve that?
url = URL of the first page, then write a for loop (i in range 1 to 50) { target_link = url + 'page=' + str(i), then send a request to target_link from your Python code every 5 minutes }. You need to wait a few minutes between requests, otherwise Amazon will give you an error or an empty array as a response.
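A runnable version of that loop, assuming a placeholder listing URL (the real Amazon URL and query parameters would differ). Note that the page number must be interpolated as the loop variable, not the literal string 'i'. The actual fetch-and-wait part is left commented so the sketch runs offline.

```python
# Placeholder base URL; substitute the real first-page listing URL.
BASE_URL = "https://www.example.com/s?k=phones"

def build_page_urls(base_url, n_pages):
    # f-string inserts the page NUMBER, not the letter 'i'.
    return [f"{base_url}&page={i}" for i in range(1, n_pages + 1)]

urls = build_page_urls(BASE_URL, 50)
print(urls[0])    # first page URL
print(urls[-1])   # fiftieth page URL

# Fetch loop, per the advice above (commented out for an offline sketch):
# import time, requests
# for url in urls:
#     res = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
#     ...  # parse res.text with BeautifulSoup here
#     time.sleep(300)  # wait ~5 minutes between requests to avoid blocks
```

The `time.sleep(300)` pause is the "every 5 minutes" spacing suggested above; without it, Amazon tends to start returning errors or empty results after a few pages, which matches the problem described in the question.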
You can keep a lookout for price drops and get notified to buy as soon as the price drops. Bonus tip: you can then resell it on other marketplaces for the difference between the original and sale price, because not everyone shops on Amazon.
@@one_autumn_leaf69 Yes, there are, though I believe they cost money. Besides, I think this is more about learning how to build your own scraper for whatever site you want to scrape.