
Best Web Scraping Combo? Use These In Your Projects 

John Watson Rooney
86K subscribers
43K views

Published: 29 Sep 2024

Comments: 124
@nuritas8424 a year ago
the way you explain is so clean. thanks a lot
@drac.96 a year ago
Nice to see a new intro and the step by step explanation is really good
@rics6035 a year ago
John! Thanks so much for your amazing videos, they are super useful and interesting to watch!
@y2kdeuce2 a year ago
Hey JWR, JRW here. I've been "scraping" for 20 years now. Amazing how the tools have matured. Dead simple these days. That said, this video is a fantastic example of a cherry picked site to demo these tools. Few real world websites are this simple to parse using CSS. Please dedicate some time to digging through more challenging selectors. Thanks in advance - John
@JohnWatsonRooney a year ago
Hey! Cherry picked examples are unfortunately a part of it; people simply won't watch a long video where I am trying to work stuff out. Also this video was more of a demo of how different tools work but end up at the same result. I'm not sure I agree about the CSS parsing part though; I don't often find an issue on sites that are mostly html and css with minimal js
@contrarypagan a year ago
@@JohnWatsonRooney I think if you did some master classes where you tackled some complex sites and worked through things that you would have a fair few views on those videos. The easy options you use are VERY useful for specific tips but I would love to see you work through some real difficult situations as well. But your content is awesome so believe me I am not complaining! Thank you so much.
@AmodeusR a year ago
@@JohnWatsonRooney I guarantee there will be people who will watch a long video to see a professional trying to figure things out. Most true learners are just sick of the magic and smooth programming experience many videos show, when in reality, trying to do it ourselves we just end up struggling a lot. That's even unhealthy for those starting in the area, who think everything is always that simple because of the sheer amount of such cherry picked content. Just make clear from the beginning that it is "advanced" content, an example of when things are not that simple, so people can relate to it and feel compelled to watch.
@gabiie9839 a year ago
nice work john, web scraping lord.
@JohnWatsonRooney a year ago
thanks for watching!
@rosaarzabala5189 a year ago
Great combo! Thanks for ur videos 🙌 almost didn't get the doge 👀
@JohnWatsonRooney a year ago
thanks ;D
@joaoalmirante4268 a year ago
hey, nice video. But the biggest problem these days with scraping is the amount of js/non-html stuff that makes it a lot harder for us to get the data. But overall thanks for sharing
@francius3103 a year ago
It says venv/bin/activate doesn't exist; there's only a file called python and another one called python3 there :(
@rovshenhojayev1843 a year ago
can we add the link and image of that product with this lib?
@JohnWatsonRooney a year ago
Yes it would work the same way
@tanekapace8080 a year ago
I'm interested in a bot that can fill out online forms at multiple websites. Kindly respond if you could help me
@shivambajaj6228 a year ago
I usually encounter Error 429 scraping web pages. Is there any way I could bypass that?
@hossamgamal8661 a year ago
Thanks for sharing such important information. I didn't know that there are modules other than beautifulsoup and requests. I have a question: can you make a video on how to use an authentication proxy with selenium? I have used options.add_argument('--proxy-server=ip:port') but it doesn't work for me; it doesn't show the alert box where I should input the username and password
@JohnWatsonRooney a year ago
I'm going to do some more selenium vids, I will try to cover this in those, but I'm not sure exactly why that doesn't work
@hossamgamal8661 a year ago
@@JohnWatsonRooney Thanks
@y2kdeuce2 a year ago
@@JohnWatsonRooney how about a Firefox AWS lambda function with a rotating proxy? :D
@nicolasalarcon58 a year ago
Hey, thank you so much for your explanation! What happens when products have this structure? ... ... ... ... ... I can't get anything from this site. I tried everything, like html.css("div.Fractal-ProductCard__productcard--container") or html.css("div.productcard--container") or html.css("div.t:m|n:productcard|v:default") and much more
A year ago
I'm starting with Python and web scraping and this video is amazing and taught me a lot of basic things! Thank you a lot for such a fantastic video.
@jasonbischer4568 10 months ago
Hey John, great video here. I wasn't getting the price in my results, it was just empty for some reason? I messed around and used the span code as well but it just returned "no text found". Any ideas? Thanks for everything, your videos are great
@karimbenamar362 3 months ago
same here 😢 any idea how to solve the issue?
@lucianocarvajal6698 a year ago
And can httpx scrape info from dynamic / javascript web pages? Because what I see in the video is it being used on a normal html website
@JohnWatsonRooney a year ago
you can, if you can find the backend API, or otherwise you will need to render the page with browser automation like playwright
@hayat_soft_skills a year ago
Love the content and especially how you write code clean and neat. The best channel in my 5 years youtube journey. May Allah give you more power; we are enjoying the best content. thanks!
@artabra1019 a year ago
nice, the asdict method saves so much time
@itzcallmepro4963 a year ago
I got errors, and searching I found that the site I am trying to scrape uses Cloudflare protection. Is there any way to bypass that?
@JohnWatsonRooney a year ago
Try cloudscraper, it can have some good results
@itzcallmepro4963 a year ago
@@JohnWatsonRooney I already searched and have used it, thanks very much. But I have another problem now: I am scraping some data, one of which is prices. There are sometimes 2 prices; both are always in the html, but sometimes only one is displayed on the page. I can't find any class or anything to differentiate between them to get only the element that appears on the screen.
@juliohernandez4890 11 months ago
Thanks for the wonderful material. Maybe it's me, but right now the price is not saved. Thanks
@Septumsempra8818 a year ago
My business got funding!!! Thank you Mr Rooney.
@JohnWatsonRooney a year ago
great! thanks for watching!
@hrvojematosevic8769 a year ago
Are you hiring? :)))
@Septumsempra8818 a year ago
@@hrvojematosevic8769 developers, Yes.
@roshanyadav4459 a year ago
Can I join you? I have worked with scrapy, playwright, beautifulsoup, selenium. I am an intermediate programmer
@hrvojematosevic8769 a year ago
@@Septumsempra8818 it's a broad term T_T
@eternogigante685 8 months ago
Does this work on SPAs rendered by frameworks like react and such?
@RonWaller a year ago
John, thanks for your tutorials; enjoying the web scraping and I am planning to dig into this more. Curious: this tool used "css" to get the data. Are there other tools to get "dynamic" data or JS data? Just wondering, thanks
@jasonbischer4568 10 months ago
Also, a csv file is not being created when I run the script? Any idea?
@ramarajesh9554 a year ago
First
@GelsYT a year ago
OH MY GOODNESS! THANK YOU! THIS IS SO MUCH EASIER TO COLLECT THE DATA AND CONSTRUCT IT IN A DATA STRUCTURE LIKE A DICT. THANK YOU!
@JohnWatsonRooney a year ago
thanks I'm happy to help!
@arabymo 5 months ago
Putting your mind and thinking into the code! What a way to explain and learn. Thank you.
@thorstenwidmer7275 10 months ago
May I ask which Lenovo this is?
@JohnWatsonRooney 10 months ago
It's an old x200. I've put an SSD and more RAM in it and it's a workable machine
@thepoorsultan5112 a year ago
Already have been using the selectolax and httpx combo
@frynoodles1274 a year ago
Hi John, I love your videos. What if view-source doesn't return all the HTML on the page that we want? Do we need to use a headless browser and wait for elements to load? Or is there a good requests library we can use instead? Thanks
@JohnWatsonRooney a year ago
if it doesn't, you have a few options: headless browser is one, or seeing if there are AJAX requests you can use too
@OPPACHblu_channel a year ago
Thank you for sharing your experience, very interesting and helpful!👍
@bakasenpaidesu a year ago
Great video... Btw you can use pandas to convert a dictionary to CSV.
@JohnWatsonRooney a year ago
thanks - yes it's much easier too, but pandas is a big library to import
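For reference, the stdlib route stays dependency-free; a minimal sketch of writing a list of dicts with csv.DictWriter (StringIO here just makes the output visible without touching disk):

```python
import csv
import io

rows = [
    {"name": "shirt", "price": 19.99},
    {"name": "hat", "price": 9.99},
]

# csv.DictWriter covers the dict-to-CSV case without pulling in pandas;
# pandas' DataFrame(rows).to_csv(...) would do the same in one call
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "price"])
writer.writeheader()
writer.writerows(rows)
output = buf.getvalue()
```

Swap the StringIO for open("results.csv", "w", newline="") to write a real file.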
@gisleberge4363 a year ago
Why did you "leave" requests and beautifulsoup? Just curious about their downsides compared to the ones you recommend here.
@JohnWatsonRooney a year ago
HTTPX works just like requests, but it can be async when needed. Selectolax is faster and more focused (CSS selectors only) than BS4. I always say use what you prefer; after exploring different tools I have found that these 2 work the best for me!
@nztitirangi a year ago
very cool. I kept getting timeouts so did this to solve:
client = httpx.Client(timeout=None)
resp = client.get(url)
return HTMLParser(resp.text)
@dungphung2252 a year ago
Hi, I just want to know if it works on all websites. Thanks
@7Trident3 a year ago
No dickin around, meat and potatoes! This should be the gold standard on how to make a programming vid.
@JohnWatsonRooney a year ago
Thanks I appreciate it!
@guillaumebignon6957 a year ago
Having used requests and beautifulsoup up to now, it's great to discover competitive alternatives. Would appending data to a json file instead of csv also work?
@JohnWatsonRooney a year ago
Always good to see other options in case they fit your needs better. Yes you can append to a json file; look at json lines too, it might be better for you
@pypypy4228 a year ago
I like this approach. Thank you!
@michakuczma4076 a year ago
great video John, thanks for that. One question came to my mind: why do you use dataclasses first and then transfer them to dictionaries? Why not use dictionaries from the beginning? What's the advantage of dataclasses here besides IDE hints? I don't have much experience with them, that's why I'm asking.
@JohnWatsonRooney a year ago
Thanks. In this case there wasn't much of a benefit, I am just in the habit of using them now. The benefit comes in the validation you can do with them when accepting data in and out of your program
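A small illustration of that validation point (field names invented for the example): __post_init__ runs on every instance, so bad rows fail loudly at the boundary instead of landing silently in your output file:

```python
from dataclasses import dataclass, asdict

@dataclass
class Product:
    name: str
    price: float

    def __post_init__(self):
        # coerce scraped strings like "19.99" and reject junk immediately
        self.price = float(self.price)
        if self.price < 0:
            raise ValueError(f"negative price for {self.name!r}")
```

A plain dict would happily carry price="N/A" all the way to the CSV; the dataclass raises at construction time, and asdict() still gives you the dict when it's time to write.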
@zelt7466 a year ago
requests_html one love)
@garymichalske2274 a year ago
Thanks for the video, John. I was finally able to run my code successfully following the steps in this video. I was following the older videos for selenium and playwright but couldn't get the results you displayed; I think the html code on the websites had changed since you recorded them. The only issue I ran into for this one is my csv file has a blank row between every exported line, so instead of 300 rows, I have 600. Any idea why?
@JohnWatsonRooney a year ago
thanks - yes unfortunately that is part of it; websites change so my examples often expire. I try to show the methods as much as I can. As for your CSV, some of your data probably has a newline character at the end, try adding .strip() to each line to see which one it is!
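Another cause worth checking for the blank-row symptom (an assumption about the setup, since the comment doesn't say): on Windows, opening the output file without newline="" makes the csv module write \r\r\n line endings, which spreadsheet apps display as an empty row after every record. The fix is in the open() call, which the csv docs themselves recommend:

```python
import csv

rows = [{"name": "shirt", "price": 19.99}, {"name": "hat", "price": 9.99}]

# newline="" lets the csv module manage line endings itself,
# avoiding the doubled \r\r\n that produces blank rows on Windows
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
```

If blank rows persist after that, the .strip() check on the scraped values is the next thing to try.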
@joshman844 a year ago
does this work in google colab?
@digitalbangladesh6977 a year ago
hello sir, what are the downsides of scrapy with respect to this project?
@JohnWatsonRooney a year ago
None really - I just sometimes feel like it's overkill for a small project and think of it more for larger scrapers and crawlers
@bittumonkb866 a year ago
What about sites that need login?
@yoanbello6891 a year ago
great video like always. I have this error scraping a site with python playwright: "... intercepts pointer events, retrying click action, attempt #57". It is a heavy javascript site and I am trying to click a button. Thanks
@UniquelyCritical a year ago
1000th like here at 5:12 AM CST. Thanks!!!
@JohnWatsonRooney a year ago
Thank you!!
@yawarvoice a year ago
Hi @John, I've been following you for a long time and watching all your scraping videos with Python. I have started to create a scraper but the website is not allowing me access, as it considers my script a bot; I have changed the user-agent to the latest chrome, but the website still recognizes me as a bot. My question is: which combo should I use for scraping slightly complex JS/AJAX/bot-aware websites? People say that selenium is good for that purpose, but you say selenium is not a good option nowadays as it is slow. So what do you suggest: which combo can fit many scenarios, if not all? Looking forward! Thanks.
@Relaxing_Sounds_Rain a year ago
Hi, thank you john. This works very well on Windows but on Linux Ubuntu it does not work. Help me please
@rajkumargerard5474 a year ago
Can we run this from Spyder or Jupyter? Also requesting you to please try and scrape the Tesco link. I had tried it and it was working fine for some time, but now due to the restrictions my code doesn't work.
@JKnight a year ago
That was fantastic. Corey Schafer tier content. Love to see it.
@JohnWatsonRooney a year ago
Thank you - he's the best so very happy to be included there!
@pranavanand24 a year ago
Hey John, great video! I am a beginner at webscraping and vscode in general. I saw that your import csv part got added automatically, I think? Can you please tell me how to do that? Is that some extension like Auto Import?
@JohnWatsonRooney a year ago
i'm gonna be honest with you... i think i typed it in but forgot and edited that part out... sorry
@pranavanand24 a year ago
@@JohnWatsonRooney oh, okay, got it. No problem. I ended up searching Google and came to know about this extension Auto Import and included it in my vscode, which is indeed able to add those import lines by itself.
@azwan1992 a year ago
I love you man.
@mitchconnor8764 a year ago
Great video
@karthikshaindia a year ago
Good one. However, requests-html might be a more comfortable replacement for BS4. That one has to be decoded specifically
@tarik9563 a year ago
maybe a stupid question: what about scraping data that is only generated from a request + captcha?
@arsalan0561 a year ago
so I've been learning scrapy basics and following your channel for quite a while. As per this video, is this the latest method to scrape pages? What about the ol' scrapy start_urls and responses to get the whole page, and link extractors and follow_url to get to the next pages and stuff? I mean, do we still need to use them at some point, or could we replace them with this method altogether? And thanks for sharing new ways to scrape. cheers
@jfk1337 a year ago
Why no async?
@pythonprogrammer2186 a year ago
Very nice!
@karlblau2 a year ago
Great video
@disrael2101 a year ago
Source code
@philtoa334 a year ago
Thanks.
@lucasmoratoaraujo8433 a year ago
Nice video! Thank you for sharing your knowledge with others!
@mectoystv a year ago
Hello, in order to use web scraping, must you ask permission from the owner of the content of the web page?
@nztitirangi a year ago
if your scraper behaves similar to a human then it's fine. If it totally smashes some poor sod's ecommerce page then no. If it's a well built webapp then they're going to be throttling you anyway, IMHO
@losefaithinhumanity8238 a year ago
Hey man, I've been watching your content for the past couple of weeks and it's fire. A good content idea would be to create a beginner series where you go through the absolute basics; I'm proposing this because nearly all of the videos on the topic are very outdated. Cheers.
@Lahmeinthehouse a year ago
Nice video! What do you use for screen recording?
@JohnWatsonRooney a year ago
OBS! It's free
@Lahmeinthehouse a year ago
Great, thanks! Also, do you have LinkedIn?
@CreativeCorners a year ago
Respected Sir, I need your help to get links to or download FB marketplace images using the scraping tool. I did a lot of work but I am confused: even though I got links to all listed items, I couldn't get the links to the images in the individual listing shown in the new tab. Please guide me
@CreativeCorners a year ago
I'm still waiting for your favorable reply please
@mirkolantieri a year ago
Nice video John!
@JohnWatsonRooney a year ago
thanks for watching!
@MrRementer a year ago
Yes, happy to see a new video!
@JohnWatsonRooney a year ago
thanks, i hope you enjoyed it!
@MrRementer a year ago
@@JohnWatsonRooney I did! I've got a longer question regarding my Amazon scraping project, which I am currently doing with Selenium. Everything works fine, it's just quite slow.. Is it okay to hit you up with a direct message/email?
@MrPaynealex6 a year ago
Any recommendations to avoid rate limiting aside from rotating proxies?
@jhonjuniordelaguilapinedo2746
I guess keep using the requests library, because the get function lets you put the header and the proxy you want.
@scottmiller2591 a year ago
This was a good example of how to get started, but I still had some questions:
- In your opinion, why are httpx and selectolax better than requests and BeautifulSoup?
- There are so many places where things can fail (status code =/= 200, website sends you to an "I'm busy" page, etc.) that are missing here. If you are communicating with an unreliable website, this code may fail even in the hobby application, much less something that is scraping professionally. Is there anything in httpx/selectolax that helps with exception handling compared to requests/BS4?
@JohnWatsonRooney a year ago
Httpx has async ready for you when you need it, and selectolax is a much faster parser than bs4. It still comes down to preference; use what works for you! Yes, in this video I didn't flesh it out fully with error handling, retries and other parts that would make the script more complete for professional use. I didn't want to cover too much in one go, and wanted to reach as many people as possible
@scottmiller2591 a year ago
@@JohnWatsonRooney Thanks for your prompt and useful reply!
@nztitirangi a year ago
@@scottmiller2591
client = httpx.Client(timeout=None)
resp = client.get(url)
return HTMLParser(resp.text)
@nadavnesher8641 a year ago
Hi John, thanks for the awesome video! I really like your clear explanations. I was trying to run your code on a Google Search page but got into some difficulties. I was hoping you could please tell me what I'm doing wrong. The div class I'm trying to grab: (which represents a Google Search result). But what's returned is an empty list: []

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)

I therefore cannot grab the nested "div", "h3", and "cite" classes which hold the information I require to populate my dataclass attributes (website address, website title, website text). For example: address --> title --> text --> (*)

As you suggested, I also looked at the page source and did find this "MjjYud" div class.

My code:

import httpx
from selectolax.parser import HTMLParser
from dataclasses import dataclass, asdict
import csv

@dataclass
class Query:
    website: str
    title: str
    information: str

def get_html():
    url = "www.google.com/search?q=data+science+courses"
    resp = httpx.get(url)
    html = HTMLParser(resp.text)
    return html

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)
    results = []
    for item in queries:
        new_item = Query(
            website=item.css_first("cite.iUh30 qLRx3b tjvcx").text(),
            news_title=item.css_first("h3.LC20lb MBeuO DKV0Md").text(),
            textual_info=item.css_first("div.VwiC3b yXK7lf MUxGbd yDYNvb lyLwlc lEBKkf").text()  # the inside
        )
        results.append(asdict(new_item))
    print("new_item")
    return results

def to_csv(res):
    with open("results.csv", "a") as f:
        writer = csv.DictWriter(f, fieldnames=["website", "news_title", "textual_info"])
        writer.writerows(res)

def main():
    html = get_html()
    res = parse_queries(html)
    to_csv(res)

main()

Thank you very much for taking the time to read my comment 🙏🏼