Hey JWR, JRW here. I've been "scraping" for 20 years now. Amazing how the tools have matured. Dead simple these days. That said, this video is a fantastic example of a cherry picked site to demo these tools. Few real world websites are this simple to parse using CSS. Please dedicate some time to digging through more challenging selectors. Thanks in advance - John
Hey! Cherry-picked examples are unfortunately part of it; people simply won’t watch a long video where I am trying to work stuff out. Also, this video was more of a demo of how different tools work but end up at the same result. I’m not sure I agree about the CSS parsing part though; I don’t often find an issue on sites that are mostly HTML and CSS with minimal JS.
@JohnWatsonRooney I think if you did some master classes where you tackled some complex sites and worked through things, you would get a fair few views on those videos. The easy options you use are VERY useful for specific tips, but I would love to see you work through some really difficult situations as well. But your content is awesome, so believe me, I am not complaining! Thank you so much.
@JohnWatsonRooney I guarantee there are people who will watch a long video to see a professional trying to figure things out. Most true learners are just sick of the magic and smooth programming experience many videos show, when in reality, trying to do it ourselves, we just end up struggling a lot. That's even unhealthy for those starting out in the area, who come to think everything is always that simple just because of the sheer amount of such cherry-picked content. Just make clear from the beginning that it is "advanced" content, an example of when things are not that simple, so people can relate to it and feel compelled to watch.
Hey, nice video. But the biggest problem these days with scraping is the amount of JS/non-HTML content, which makes a lot of things difficult to get. But overall, thanks for sharing.
Thanks for sharing such important information; I didn't know there are modules other than BeautifulSoup and requests. I have a question: can you make a video on how to use an authenticated proxy with Selenium? I have used options.add_argument('--proxy-server=ip:port') but it doesn't work for me; it doesn't show the alert box where I should input the username and password.
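One workaround worth sketching here (my own suggestion, not from the video): Chrome's --proxy-server flag has no way to carry a username or password, but many clients, such as httpx's proxy support or the selenium-wire package, accept credentials embedded in the proxy URL. The host and credentials below are made up:

```python
def proxy_url(user: str, password: str, host: str, port: int, scheme: str = "http") -> str:
    # --proxy-server=ip:port cannot carry credentials, which is why no
    # auth prompt appears; embedding them in the URL is the form that
    # credential-aware clients (httpx, selenium-wire) accept instead.
    return f"{scheme}://{user}:{password}@{host}:{port}"

# Made-up credentials and host, for illustration only:
print(proxy_url("myuser", "secret", "203.0.113.10", 8080))
# http://myuser:secret@203.0.113.10:8080
```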
Hey, thank you so much for your explanation! What happens when products have this structure? ... ... ... ... ... I can't get anything from this site. I tried everything, like html.css("div.Fractal-ProductCard__productcard--container") or html.css("div.productcard--container") or html.css("div.t:m|n:productcard|v:default"), and much more.
I'm starting with Python and web scraping, and this video is amazing and taught me a lot of basic things! Thank you a lot for such a fantastic video.
Hey John, great video here. I wasn't getting the price in my results; it was just empty for some reason. I messed around and used the span selector as well, but it just returned "no text found". Any ideas? Thanks for everything, your videos are great.
Love the content, and especially how you write clean and neat code. The best channel in my 5-year YouTube journey. May Allah give you more power; we are enjoying the best content. Thanks!
@JohnWatsonRooney I already searched and have used it, thanks very much. But I have another problem now: I am scraping some data, and one of the fields is prices. There are sometimes two prices; both are always in the HTML, but sometimes only one is displayed on the page. I can't find any class or anything to differentiate between them to get only the element that appears on screen.
John, thanks for your tutorials. Enjoying the web scraping; I am planning to dig into this more. Curious: this tool used CSS to get the data. Are there other tools to get "dynamic" data or JS data? Just wondering, thanks.
Hi John, I love your videos. What if view-source doesn't return all the HTML on the page that we want? Do we need to use a headless browser and wait for elements to load? Or is there a good requests library we can use instead? Thanks
HTTPX works just like requests, but it can be async when needed. Selectolax is faster and more focused (CSS selectors only) than BS4. I always say use what you prefer; I have found, after exploring different tools, that these two work best for me!
Using requests and BeautifulSoup up to now, it's great to discover competitive alternatives. Would appending data to a JSON file instead of CSV also work?
Always good to see other options in case they fit your needs better. Yes, you can append to a JSON file; look at JSON Lines too, it might be better for you.
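A quick stdlib-only sketch of the JSON Lines idea (file name is arbitrary): each record is its own line, so appending never requires rewriting the file the way a single JSON array would.

```python
import json

def append_jsonl(path: str, records: list) -> None:
    # One JSON object per line; "a" mode makes repeated runs safe
    with open(path, "a", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

def read_jsonl(path: str) -> list:
    # Skip blank lines so a trailing newline never breaks the read
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

append_jsonl("results.jsonl", [{"name": "Mug", "price": "£9"}])
append_jsonl("results.jsonl", [{"name": "Plate", "price": "£4"}])
print(read_jsonl("results.jsonl"))
```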
Great video, John. Thanks for that. One question comes to mind: why do you use dataclasses first and then transfer them to dictionaries? Why not use dictionaries from the beginning? What's the advantage of dataclasses here besides IDE hints? I don't have much experience with them, that's why I'm asking.
Thanks. In this case there wasn’t much of a benefit I am just in the habit of using them now. The benefit comes in the validation you can do with them when accepting data in and out of your program
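To illustrate the validation point with a sketch (the field names are made up, not from the video): __post_init__ gives dataclasses a natural hook to check and coerce scraped values on the way in, which a plain dict can't do for you.

```python
from dataclasses import dataclass, asdict

@dataclass
class Product:
    name: str
    price: float

    def __post_init__(self):
        # Validate/coerce data on the way in: the main win over a plain dict
        self.name = self.name.strip()
        self.price = float(self.price)  # raises early if the scrape grabbed junk
        if self.price < 0:
            raise ValueError(f"negative price for {self.name!r}")

item = Product(name="  Mug  ", price="9.99")  # scraped values are often strings
print(asdict(item))  # {'name': 'Mug', 'price': 9.99}
```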
Thanks for the video, John. I was finally able to run my code successfully following the steps in this video. I was following the older videos for selenium and playwright but couldn't get the results you displayed in the video. I think the html code on the websites had changed since you recorded the video. The only issue I ran into for this one is my csv file has a blank row between every exported line. So instead of 300 rows, I have 600. Any idea why?
Thanks - yes, unfortunately that is part of it; websites change, so my examples often expire. I try to show the methods as much as I can. As for your CSV, a blank row between every record usually means the file wasn't opened with newline="" (try open("results.csv", "a", newline="")), though stray newline characters in your data can do it too - add .strip() to each field to see which one it is!
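A small sketch showing both checks at once (the data is made up): the csv docs require opening the file with newline="", otherwise Windows writes \r\r\n and every record is followed by what looks like a blank row; stray newlines inside scraped fields cause the same symptom on any OS.

```python
import csv

rows = [
    {"name": "Mug\n", "price": "£9"},   # scraped text with a trailing newline
    {"name": "Plate", "price": "£4\n"},
]

# Clean each field first; a stray "\n" inside a value splits rows
clean = [{k: v.strip() for k, v in r.items()} for r in rows]

# newline="" is required by the csv docs for correct line endings
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(clean)

with open("results.csv", newline="") as f:
    lines = f.read().splitlines()
print(len(lines))  # 3: header + 2 rows, no blanks in between
```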
Great video as always. I have this error scraping a site with Python Playwright: "… intercepts pointer events, retrying click action, attempt #57". It's a heavy JavaScript site and I am trying to click a button. Thanks.
Hi @John, I've been following you for a long time and watching all your scraping videos with Python. I have started to create a scraper, but the website is not allowing me access as it considers my script a bot. Though I have changed the user-agent to the latest Chrome, that website still recognizes me as a bot. My question is: which combo should I use for scraping slightly complex JS/AJAX/bot-aware websites? People say Selenium is good for that purpose, but you say Selenium is not a good option nowadays as it is slow. So what do you suggest? Which combo fits most scenarios, if not all? Looking forward! Thanks.
Can we run this from Spyder or Jupyter? Also, could you please try to scrape a Tesco link? I had tried it and it was working fine for some time, but now due to the restrictions my code doesn't work.
Hey John, great video! I am a beginner at webscraping and vscode in general. I saw that your import csv part got added automatically, I think? Can you please tell me how to do that? Is that some extension like Auto Import?
@JohnWatsonRooney Oh, okay, got it. No problem. I ended up searching Google and came to know about this Auto Import extension; I included it in my VS Code, and it is indeed able to add those import lines by itself.
So I've been learning Scrapy basics and following your channel for quite a while. As per this video, is this the latest method to scrape pages? What about the old Scrapy start_urls and responses to get the whole page, and link extractors and follow to get to the next pages and such? I mean, do we still need them at some point, or could we replace them with this method altogether? Thanks for sharing new ways to scrape. Cheers.
If your scraper behaves similarly to a human then it's fine. If it totally smashes some poor sod's ecommerce page then no. If it's a well-built webapp then they're going to be throttling you anyway, IMHO.
Hey man I've been watching your content for the past couple of weeks and it's fire. A good content idea would be to create a beginner series where you go through the absolute basics, I'm proposing this because nearly all of the videos on the topic are very outdated. Cheers.
Respected Sir, I need your help getting links to, or downloading, FB Marketplace images using a scraping tool. I did a lot of work but I am confused: even though I got links to all the listed items, I couldn't get the links to the images in the individual listing shown in the new tab. Please guide me.
@JohnWatsonRooney I did! I've got a longer question regarding my Amazon scraping project, which I am currently doing with Selenium. Everything works fine, it's just quite slow. Is it okay to hit you up with a direct message/email?
This was a good example of how to get started, but I still had some questions: - In your opinion, why are httpx and selectolax better than requests and BeautifulSoup? - There are so many places where things can fail - status code =/= 200, website sends you to a "I'm busy" page, etc. - that are missing here. If you are communicating with an unreliable website, this code may fail even with the hobby application, much less something that is scraping professionally. Is there anything in httpx/selectolax that helps with the exception handling compared to requests/BS4?
httpx has async ready for you when you need it, and selectolax is a much faster parser than BS4. It still comes down to preference: use what works for you! Yes, in this video I didn't flesh it out fully with error handling, retries, and other parts that would make the script more complete for professional use. I didn't want to cover too much in one go, and I also wanted to reach as many people as possible.
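Neither library retries for you out of the box, so as a sketch of the kind of error handling being described (the retry logic is my own, not from the video; the fetch function is injected, e.g. httpx.get, so the demo runs without a network):

```python
import time

def get_with_retries(fetch, url, retries=3, backoff=0.5):
    """Call fetch(url) (e.g. httpx.get) until a 200, with simple backoff.

    Returns the response, or None after exhausting retries; the caller
    decides whether that aborts the run or just skips one page.
    """
    for attempt in range(retries):
        try:
            resp = fetch(url)
            if resp.status_code == 200:
                return resp
        except Exception:
            pass  # connection errors etc. count as a failed attempt
        time.sleep(backoff * (attempt + 1))
    return None

# Fake fetch for the demo: fails twice, then succeeds
class FakeResp:
    def __init__(self, code):
        self.status_code = code

calls = []
def flaky(url):
    calls.append(url)
    return FakeResp(503 if len(calls) < 3 else 200)

resp = get_with_retries(flaky, "https://example.com", retries=5, backoff=0)
print(resp.status_code, len(calls))  # 200 3
```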
Hi John, thanks for the awesome video! I really like your clear explanations. I was trying to run your code on a Google Search page but got into some difficulties. I was hoping you could please tell me what I'm doing wrong. The div class I'm trying to grab (which represents a Google Search result) returns an empty list, []:

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)

I therefore cannot grab the nested "div", "h3", and "cite" classes which hold the information I need to populate my dataclass attributes (website address, website title, website text). For example: address --> title --> text --> (*) As you suggested, I also looked at the page source and did find this "MjjYud" div class. My code:

import csv
from dataclasses import dataclass, asdict

import httpx
from selectolax.parser import HTMLParser

@dataclass
class Query:
    website: str
    title: str
    information: str

def get_html():
    url = "https://www.google.com/search?q=data+science+courses"
    resp = httpx.get(url)
    return HTMLParser(resp.text)

def parse_queries(html):
    queries = html.css("div.MjjYud")
    results = []
    for item in queries:
        new_item = Query(
            website=item.css_first("cite.iUh30.qLRx3b.tjvcx").text(),
            title=item.css_first("h3.LC20lb.MBeuO.DKV0Md").text(),
            information=item.css_first("div.VwiC3b.yXK7lf.MUxGbd.yDYNvb.lyLwlc.lEBKkf").text(),
        )
        results.append(asdict(new_item))
    return results

def to_csv(res):
    with open("results.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["website", "title", "information"])
        writer.writerows(res)

def main():
    html = get_html()
    res = parse_queries(html)
    to_csv(res)

main()

Thank you very much for taking the time to read my comment 🙏🏼