Mr John sir, I would really like to thank you for the service you have given us online. YOU ARE TRULY VALUED, THANK YOU FROM SOUTH AFRICA. You have seriously become a role model and your teaching style is awesome!!!
Most of the time, what I do is use Selenium to get me where I want, then extract what I want by making soup of the page with BeautifulSoup, pulling out specific tag info, and then using pandas to save the list data in a DataFrame and export it as a CSV or Excel file.
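That workflow can be sketched roughly like this. In real use the HTML would come from Selenium via `driver.page_source`; here an inline snippet stands in for it, and the table structure and class names are just made-up examples.

```python
# Sketch of the Selenium -> BeautifulSoup -> pandas workflow described above.
# In practice, page_source would be driver.page_source from Selenium; the
# inline HTML and its class names are placeholder assumptions.
from bs4 import BeautifulSoup
import pandas as pd

page_source = """
<table>
  <tr><td class="name">Widget</td><td class="price">9.99</td></tr>
  <tr><td class="name">Gadget</td><td class="price">4.50</td></tr>
</table>
"""

soup = BeautifulSoup(page_source, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    rows.append({
        "name": tr.find("td", class_="name").get_text(strip=True),
        "price": float(tr.find("td", class_="price").get_text(strip=True)),
    })

df = pd.DataFrame(rows)
df.to_csv("items.csv", index=False)  # or df.to_excel("items.xlsx")
```

The nice part of this split is that each tool does one job: Selenium renders, BeautifulSoup parses, pandas handles the export.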
I have been using requests along with bs4, and I had heard about Scrapy. I agree it isn't good for beginners - I was a beginner at the time and it really was daunting. But now I think it's Scrapy time.
Another very educational video of yours. Thanks!! Question: what would be the best method to scrape a site that has a huge table spread across many pages, such that (for example) only 30 rows are shown on the first page (say 1-30), then the next 30 on the 2nd page (31-60), etc., all the way up to 10,000 rows or more? Can I move to the next page using Scrapy or BS4, or do I need Selenium for that purpose?
That depends on the way the page is getting the data. I suspect it's via Ajax - check out my newer video on scraping JavaScript tables, that might help you.
Overall I think one should take the time to learn Scrapy if you need to web scrape for a job. It will be worth it in the long run. What do you think? (as I move over to your Scrapy for beginners video) LOL
The principles are the same: you make a request and receive data. I don't really know JavaScript well enough to comment - but as far as I'm concerned you can't go wrong with Scrapy. It's built specifically to scrape data, after all.
I am new to scraping. If you have a dynamic website that requires you to input dates or numbers and click on buttons, what else besides Selenium works? Does Beautiful Soup work? Very interested.
Sir, I am getting an error while running my Scrapy project. The error is: Scrapy 2.4.1 - no active project / Unknown command: crawl / Use "scrapy" to see available commands
You mentioned that if you needed to click a button or input into a field, then Selenium could be what you're after. Does that mean that you _can't_ accomplish that with, say, Scrapy and some addons?
Well, yes you could - depending on what it is. You can write Lua scripts for Splash that can simulate that, or if you can find a way around having to actually click something - like getting the data elsewhere, or finding the URL that the data comes from - you could get around it. There are some libraries that allow some control over these things, but they are all based around a browser somehow, like MechanicalSoup.
Do you have a video that goes over the best scraper/tool for websites that have a constantly changing text element? Stock prices are the most well-known example of this. I'm making something to scrape a "freefall auction" (the price drops until someone buys, or until it hits a predetermined low) and gather the lowest prices reached for multiple auction lots. I love using requests-html, but it seems that it only captures the initial state of a rendered page, rather than any updates that occur once loaded. My plan is to do the basic info gathering with requests-html, then grab the prices with Selenium, which is my current approach, but I wanted to check with the expert!
@@JohnWatsonRooney May I email you about a similar thing as well? I found it difficult with scripts - I couldn't reach the page source with Python (I think they rejected me because it's headless) and couldn't render it with requests-html...
Thanks for the information! I found another method which is not very efficient but worked for me on a small dynamic website page. I ran Selenium in the background, sent keys (Ctrl+A, Ctrl+C), then used pyperclip.paste() into a variable. Then I used the re module on the string to take the information I needed. I also used the split method on the newlines to convert the string into a list of strings.
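The parse-the-clipboard part of that idea can be sketched with the standard library alone. Here a sample string stands in for what `pyperclip.paste()` would return after Selenium sends Ctrl+A / Ctrl+C; the text layout and regex are made-up assumptions for illustration.

```python
# Sketch of the copy-and-parse approach: the clipboard text would come from
# pyperclip.paste() after Selenium's Ctrl+A / Ctrl+C; this sample string and
# the price pattern are placeholder assumptions.
import re

copied = "Item: Widget  Price: $9.99\nItem: Gadget  Price: $4.50\nFooter text"

lines = copied.split("\n")                      # split on newlines into a list of strings
prices = re.findall(r"\$(\d+\.\d{2})", copied)  # pull out just the price values
```

It's fragile compared to parsing the HTML directly, but as the comment says, it can be good enough for a small page.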
Thanks for your videos. I can understand you very well, thank you for taking care of your pronunciation, I am Spanish and we do not have Spanish-speaking channels as good as yours. Keep it up.
Hi John, I missed you lately because I got busy. I want to wish you a Happy New Year. And I want to ask how I can benefit financially from web scraping. Regards, Waleed
Happy new year to you too. To start I’d say try to get some paid work scraping data that people need, then try to build something useful with data you scrape and charge for the service
You mentioned that selenium sends information about itself to websites being scraped, so that websites could detect that selenium is being used. I'm curious if you know more about this and any workarounds?
Thanks for all the content! I have a question, and it would be very helpful if you can support: I have to scrape a dynamic website. If I scroll down, more objects load onto the page (always 50 new ones). When I look in my browser's developer tools, I find the data I need under the "XHR" tab, and with every scroll for 50 new objects there is a new file called "730" with the new 50 objects in JSON format. I need all the 730 files. Do you know how to scrape them?
Sure - check out this video I did, it covers how to get that information: Always Check for the Hidden API when Web Scraping ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-DqtlR0y0suo.html
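Once you've found that hidden endpoint in the Network tab, paging through it usually comes down to bumping an offset parameter until no more items come back. This is a hedged sketch of that loop - the `offset`/`limit` parameter names and the JSON-list response shape are assumptions; copy whatever URL and query parameters the browser actually sends. The fetch function is injected so you can plug in `requests` (or anything else) yourself.

```python
# Sketch of paging through a hidden JSON/XHR API. The "offset" and "limit"
# parameter names and the plain-list response are assumptions - check the
# real request in your browser's Network tab.
import json
from urllib.parse import urlencode

def collect_all(fetch, base_url, page_size=50):
    """Call fetch(url) repeatedly, bumping the offset, until a page comes back empty."""
    items = []
    offset = 0
    while True:
        url = f"{base_url}?{urlencode({'offset': offset, 'limit': page_size})}"
        batch = json.loads(fetch(url))
        if not batch:          # empty page means we've reached the end
            break
        items.extend(batch)
        offset += page_size
    return items
```

In real use, `fetch` could be something like `lambda url: requests.get(url).text` - and hitting the JSON endpoint directly is far faster than scrolling a browser.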
@@ayeshavlogsfun Selenium is really good for automating tasks such as browsing, interacting with elements on a page, etc. I had more trouble setting it up purely to scrape results, as that isn't its primary focus.
There are some things I can't understand. What if I have to scrape a website that uses JavaScript (if I make a request, I receive only part of the content of the page)? Is the only solution to use Selenium, or can you handle it without problems with Scrapy?
You’ll need to use something to render the JavaScript out for you, and return all the page data as html to parse. That thing is a browser - but in some cases a smaller lighter version of a browser that runs headless (we don’t see it). That’s like splash or puppeteer (that’s what requests-html uses) or it could be selenium that we can control
@@JohnWatsonRooney So now I am finding that BeautifulSoup is really good for extracting text and parsing HTML - better than Scrapy - which also confirms what you said in your video! So right now for my task I am using BeautifulSoup (for parsing) and Scrapy (for running a headless browser), which works fine, but I am curious if you know any easier techniques to parse HTML using Scrapy, especially to get text. Please let me know if you have any thoughts. Thank you!
How do I scrape from reebonz.com? They added a layer of protection from a vendor (which I can’t remember) that renders their site almost impossible to scrape.
I was trying to get things done for the last 20 minutes with BeautifulSoup, but I have to press an accept button on wozwaardeloket.nl, and the site is made in JSP. So that means BeautifulSoup will not be able to post form data to one page and then post other form data to the new page, right?
Can you make a video about the recent scrapy-playwright bug and how to implement the scrapy-playwright settings, and recommend some books or resources to learn Scrapy?
Hey bro, I need some help. I'm working on a project; part of it is getting some data from Instagram and putting it into my web app - and of course it must always be up to date. In this case I think Selenium is slow, but I need it to connect to the Instagram account, and I also need HTTP requests... So please advise me.
It seems easier to use Selenium to scrape Google Maps by searching different zipcodes for gas prices, but it's too slow. Can Scrapy interact with the website, like searching different inputs, or is it better to just use the Google API?
Hello John, thanks for the video! I have two questions: what's the best tool for scraping a website with a login or authentication? And when the website uses an API with authentication, what can I use?
Easiest option is to use selenium or playwright, but it can be done with requests too - you’d need to find the login endpoint and send the credentials over.
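Finding the login endpoint and posting credentials might look something like the sketch below. The `/login` path and the `username`/`password` field names are assumptions - inspect the form's POST request in your browser's network tab to find the real ones. The session is passed in so the cookie handling stays with `requests.Session`.

```python
# Hedged sketch of logging in with requests as described above. The endpoint
# path and form field names are assumptions - copy them from the actual
# login request your browser sends.
def login(session, base_url, username, password):
    """POST credentials to the login endpoint; the session keeps the auth cookies."""
    resp = session.post(
        f"{base_url}/login",
        data={"username": username, "password": password},
    )
    resp.raise_for_status()
    return session  # reuse this session for authenticated requests

# Real use would be something like:
#   import requests
#   s = login(requests.Session(), "https://example.com", "user", "pass")
#   s.get("https://example.com/account")
```

Because the session carries the cookies, every later `s.get(...)` is made as the logged-in user, with no browser involved.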