
Industrial-scale Web Scraping with AI & Proxy Networks 

Beyond Fireship
390K subscribers
693K views

Learn advanced web scraping techniques with Puppeteer and BrightData's scraping browser. We collect ecommerce data from sites like Amazon then analyze that data with ChatGPT.
#javascript #datascience #chatgpt
Get $10 Credit for BrightData get.brightdata.com/fireship
Puppeteer Docs pptr.dev

Published: Apr 23, 2023

Comments: 611
@beyondfireship • 1 year ago
Use this link to get a $10 credit, enough cash to scrape thousands of pages get.brightdata.com/fireship
@DeanDavisMarketing • 1 year ago
@Reddblue • 1 year ago
This man selling wood and iron to shovel makers
@anze • 1 year ago
@beyondfireship ad link doesn't work
@NoahKalson • 1 year ago
@anze Worked for me. Try now.
@tamasmajer • 1 year ago
The pricing page says $20/GB. I checked how big the pricing page itself was: it loaded 4 MB, so that would be $20 for roughly 250 pages? That seems very expensive. Or how should I calculate the price?
@rvft • 1 year ago
I like how he didn't use "cheap" during the entire video, because my god, the pricing is absolute madness on the advertised product.
@brunopanizzi • 1 year ago
Industrial scale!!!
@koba2160 • 1 year ago
Scraping ain't cheap, but there are many ways to make it much cheaper.
@mrgyani • 1 year ago
@arteuspw What do you mean by 1 GB/$1? You mean browsing 1 GB of data for a dollar with a single proxy? How many proxies do you get for $1?
@user-kj2kt8jt4n • 1 year ago
@arteuspw Please tell me where to buy them at this price.
@mantas9827 • 1 year ago
Is $20 per GB considered expensive? I wonder how much you could scrape from a site like Amazon for that GB... surely a lot?
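For what it's worth, a back-of-the-envelope estimate using only the numbers quoted in this thread (none of them verified against Bright Data's actual pricing tiers):

```js
// Rough cost estimate from the figures mentioned above (assumptions, not official pricing)
const pricePerGB = 20;                       // USD, the quoted rate
const avgPageSizeMB = 4;                     // one commenter's measurement of a single page
const pagesPerGB = 1024 / avgPageSizeMB;     // ≈ 256 pages per GB
const costPerPage = pricePerGB / pagesPerGB; // ≈ $0.08 per page
console.log({ pagesPerGB, costPerPage });    // { pagesPerGB: 256, costPerPage: 0.078125 }
```

If you block images, fonts, and stylesheets (as suggested further down the comments), the bytes per page drop sharply and the effective cost per page goes down with them.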
@albiceleste101 • 1 year ago
As a freelance dev I get contacted all the time for scraping; it's definitely one of the most requested jobs, along with WordPress (which I also don't work with).
@cymaked • 1 year ago
Interesting - 8 years of freelancing and never had one such request 😮
@dinoscheidt • 1 year ago
And with a freelancer, the business has the advantage that YOU break the terms and conditions of the companies you scrape (and are legally liable and suable). Not the business 😊 So a cheap code monkey and legal scapegoat all in one 💪
@mrgyani • 1 year ago
Where do you get these projects from?
@VividCoding • 1 year ago
@dinoscheidt Wait, can they really do that? They are the ones who wanted to scrape the data in the first place.
@dabbopabblo • 1 year ago
I'm not even a freelancer and I can't count on two hands the number of times I've been asked to make someone a website. They think that because I'm a web developer I'm just some guy who goes around making websites willy-nilly. And the few times I have actually gone through with helping someone out, they want everything Wix or WordPress provides and have the audacity to suggest I shouldn't be asking so much in pay when a drag-and-drop builder would suffice... THEN USE THE BUILDER GOD DAMMIT. My knowledge is wasted on front-end work anyway.
@YuriG03042 • 1 year ago
Toward the end of the video, Jeff suggests that you can grab all the links and then make requests to those links. It gave me flashbacks of another video on the main channel where a company did this and ended up with a $70k+ GCP bill after one night of web scraping, because their compute instance kept recursing forever and could scale up to 1000 instances lmao
@alexcasillas2488 • 1 year ago
This reminds me of when I solved 100 captchas manually so that I could download some data files from a website for an AI. I got a server message temporarily banning me from the website, saying that I must be a bot. I learned my lesson and stuck to only solving 99 captchas each day from then on until I had enough data files.
@EliteGamerpk • 1 year ago
As a web scraping tool developer, one thing to note about the ChatGPT code for extracting product names etc. is that it's not going to work in all cases. What I mean by that is we can see there are some random class names like '._cDEzb', and these classes can vary from page to page. So your code for one listing page might not work for another. The way I do this is by using some advanced query selectors that don't rely on unreliable classes. Can go into more detail if required.
@CrackedPlayz • 1 year ago
Please do!
@RiChYFanatics • 1 year ago
Don't be shy :p
@myhitltd5826 • 1 year ago
So that's why I copy the full selector of the element and work with it in Puppeteer.
@MrNsaysHi • 1 year ago
AFAIK puppeteer doesn't support finding elements by XPath, so what do you guys use?
@thrand • 1 year ago
@MrNsaysHi Well, real men write their own HTML parser and query language. But peasants like myself use CSS selectors with document.querySelectorAll.
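To make the thread above concrete, here is a minimal sketch given an existing Puppeteer `page`: selectors that lean on attributes and structure instead of obfuscated class names, plus the XPath options Puppeteer does expose (the `data-asin` attribute is just an illustrative example, and the XPath API varies by Puppeteer version):

```js
// Prefer attributes and structure over generated class names like "._cDEzb"
const titles = await page.$$eval("div[data-asin] h2 a", (links) =>
  links.map((a) => a.textContent.trim())
);

// Plain DOM queries inside the page context also work and come back as JSON
const headings = await page.evaluate(() =>
  [...document.querySelectorAll("h2")].map((el) => el.textContent.trim())
);

// XPath is supported too: page.$x() in older Puppeteer releases,
// or the 'xpath/' selector prefix in newer ones
const [firstHeading] = await page.$x("//h1");
```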
@Autoscraping • 4 months ago
An extraordinary piece of video material that has proven highly useful for our new team members. Your generosity is immensely appreciated!
@Maneki-Nico • 1 year ago
Your videos are somehow exactly relevant to the code I am writing every week - interesting for sure!
@xanderbarkhatov • 1 year ago
If I'm not mistaken, page.waitForSelector(selector) already returns the element handle, so you don't need to use page.$(selector) after that. Anyway, great video, as always. Thank you! ❤
@yvanguemkam4739 • 1 year ago
You're right, I wanted to say that... But I don't have money to spend on the browser. Is there an alternative?
@cyberzjeh • 1 year ago
@yvanguemkam4739 You can host Puppeteer yourself and pay for a proxy service if you need it; might come out cheaper (but more work, obviously).
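A minimal sketch of that self-hosted route, wiring an external proxy into a local Puppeteer instance and using the element handle that waitForSelector already returns (the proxy address and credentials are placeholders):

```js
import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  args: ["--proxy-server=http://proxy.example.com:8000"], // placeholder proxy
});
const page = await browser.newPage();
await page.authenticate({ username: "user", password: "pass" }); // only if the proxy needs auth
await page.goto("https://example.com/");

// waitForSelector resolves with the element handle, so no extra page.$() call is needed
const heading = await page.waitForSelector("h1");
console.log(await heading.evaluate((el) => el.textContent));

await browser.close();
```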
@Loubensdoriscar • 4 months ago
Zeus Proxy's specific emphasis on session management is a key factor that resonates with my goal of executing data retrieval tasks with a focus on mimicking genuine user behaviors.
@meansnada • 1 year ago
I love how there are legit businesses to bypass captchas and mess with data :)
@dislike__button • 1 year ago
Scraping isn't illegal
@Tylersmodding • 1 year ago
and individuals
@aresakmalcus6578 • 1 year ago
@dislike__button If it's against the Terms of Service of the given site, it is.
@Bruceylancer • 1 year ago
@aresakmalcus6578 I'm not a lawyer, but how can it possibly be illegal? It can be against ToS, sure, and then the website owners can act accordingly, i.e. ban your account on said website, ban your IP address, and so on. But illegal? Are there any laws out there that prohibit collecting public data? Are there any cases of people getting sued for scraping? I haven't heard of any; maybe you can provide some examples. Also, there are 8-figure businesses built on scraping, like Ahrefs or Semrush.
@Bruceylancer • 1 year ago
@Andrew-zy7jz Exactly! Very good example.
@unknownlordd • 1 year ago
Web scraping is still my favourite type of project; it's so fun and "meaningful" to me, and with the help of AI I can see it becoming much, much easier.
@0187 • 1 year ago
Same, gives me a shitton of satisfaction.
@GeekProdigyGuy • 1 year ago
thanks Jesus
@alejandroarango8227 • 1 year ago
Unfortunately GPT-4 is still too expensive to use in projects, and GPT-3.5 is still too stupid.
@unknownlordd • 1 year ago
@alejandroarango8227 It's stupid enough that you still do much of the work yourself; in the end it's just a tool to help, and personally it helps me enough.
@unknownlordd • 1 year ago
@0187 Exactly what I feel.
@yashkhd1100 • 1 year ago
To be frank, out of all YouTubers Fireship has the most interesting and to-the-point videos and gives the most value for time spent. I just wonder how he keeps track of all these varied topics and is able to make the most out of them.
@julienwickramatunga7338 • 1 year ago
He already has five prototypes of Neuralink chips plugged into his brain, linked to the Web via 5G, and he is using digital clones of himself (coded in JS of course) to make more video content (with the help of ChatGPT). That makes him the most powerful being on the planet. Praise the Cyber-Jeff! 👾
@RobinhoodCFO • 3 months ago
With ChatGPT of course
@AdamBechtol • 2 months ago
Mmm
@BharadwajGiridhar • 1 year ago
One thing, Jeff: these websites change CSS class names on every refresh, so it's better to write code with selectors that don't change, like id or aria-label.
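A short sketch of what that looks like in practice, given an existing Puppeteer `page`; the id and label values below are made up for illustration:

```js
// ids and aria-labels tend to survive redeploys better than hashed class names
const price = await page.$eval("#priceblock", (el) => el.textContent.trim()); // hypothetical id
const cartButton = await page.$('[aria-label="Add to cart"]');                // hypothetical label
await cartButton?.click();
```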
@prabhavkhera4959 • 1 year ago
Thanks Jeff. I was planning on building a project that uses web scraping and this video absolutely dropped at the perfect time. Appreciate it. I love your videos and hope for more such content in the future :)
@DanielLavedoniodeLima_DLL • 1 year ago
I remember web scraping being a nightmare to deal with, especially doing the proxy rotation ourselves. This tool is not cheap, though, so at least here in Brazil (and other emerging countries alike), companies will still be doing it the old way. The captcha solving was actually done by real people back when I worked at a company that mined this kind of data a few years ago, but I guess that can be automated with GPT-4 tools now.
@abishekbaiju1705 • 8 months ago
Thanks for making this video. I am actually working on a project where users can add Amazon products, track price changes, and get notified about them. My objective was to learn web scraping.
@shawnvirdree8593 • 1 year ago
Wow, you’re on the cutting edge of technology 🤯
@danieldosen5260 • 1 year ago
I never thought of returning data as JSON... that's obvious and brilliant...
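The idea reads roughly like this in Puppeteer: whatever the function passed to page.evaluate returns is serialized back to Node as plain JSON (the selectors and attributes below are illustrative, assuming an existing `page`):

```js
const products = await page.evaluate(() =>
  Array.from(document.querySelectorAll("div[data-asin]")).map((item) => ({
    asin: item.getAttribute("data-asin"),
    title: item.querySelector("h2")?.textContent.trim() ?? null,
  }))
);
console.log(JSON.stringify(products, null, 2)); // ready to feed to ChatGPT or store as-is
```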
@ikedacripps • 1 year ago
When I first saw Puppeteer while learning Node.js, this is exactly the kind of use case I wanted to apply it to. Specifically, I wanted to scrape CSV files and have some AI learn them and make some sense out of them. I think it's now more than possible.
@DemPilafian • 1 year ago
Downloading CSV files would typically not be considered "scraping". You don't have to scrape the data out of a CSV file -- it's already data.
@ikedacripps • 1 year ago
@DemPilafian You just want to falsify my statement, but scraping for CSV files is as valid as scraping for PDF files. I specifically wanted to scrape soccer analytics websites for those CSV files. Hope that puts it into perspective for you.
@Jeanseb23 • 1 year ago
You've foiled my plan 5 years in the making. At least now I have a free $10 credit for BrightData to catch up. Thanks Fireship!
@gatonegro187 • 2 months ago
How much did you end up spending?
@Ruf4eg • 1 year ago
Man, you are reading my thoughts! This video came at the right time, just when I wanted to scrape some websites!!!!
@bossdaily5575 • 1 year ago
Virgin API users vs Chad Web scrapers
@user-bp9dx1ir7w • 10 months ago
Thank you for teaching me Puppeteer and Bright Data; this beats all other content on the internet.
@CODE_YOUR_TYPE • 4 months ago
I love you, man. I was trying for so long, and you are the only one who gave the solution. Thank you so much.
@selimachour • 1 year ago
I usually block the fetching of images, CSS, and fonts (and JavaScript if the website can run without it), which speeds up the page load by a lot!
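In Puppeteer that trick looks roughly like this, given an existing `page`: abort the heavyweight resource types before they are fetched.

```js
await page.setRequestInterception(true);
page.on("request", (request) => {
  const blocked = ["image", "stylesheet", "font", "media"];
  if (blocked.includes(request.resourceType())) {
    request.abort();      // skip bytes we don't need for the data
  } else {
    request.continue();
  }
});
await page.goto("https://example.com/", { waitUntil: "domcontentloaded" });
```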
@desertislanddivs • 1 year ago
This is a great spell for Hogwarts AI Academy. Thanks, Professor Fireship ^^
@pythoneatssquirrel • 8 months ago
I have built hundreds of scrapers in both VBA and Python using Selenium. Everything can be done; this video is just an ad for one of the hundreds of service providers of this kind.
@VaibhavShewale • 1 year ago
Damn, that was really amazing. I was actually thinking of taking a snippet of the page, extracting the data, then deleting that page and repeating.
@abz4852 • 1 year ago
Fireship, you are uploading videos faster than new JavaScript frameworks get released.
@felixmildon690 • 1 year ago
Best video yet, thanks Fireship. This will introduce me to Puppeteer and the services BrightData offers (BrightData's prices are a concern based on the comments section).
@kinglane8634 • 1 year ago
Thanks for always helping us devs keep our workflow clean and simple!!! If you plan on starting a subscription service, I'd love to see what you're offering.
@trickster6254 • 1 year ago
He has a website offering courses. I bought the Angular one myself and it was really good.
@classmanOfficial • 1 year ago
Selenium has a headless mode :) if you guys want to try it out; it works well enough for multithreading.
@nichtolarchotolok • 1 year ago
Been using Puppeteer for a few years for freelance web scraping. Puppeteer and Playwright have been a saving grace in many circumstances.
@donirahmatiana8675 • 10 months ago
Could you give some tips on not getting IP banned?
@nichtolarchotolok • 10 months ago
@donirahmatiana8675 The puppeteer-extra library and the puppeteer-extra-plugin-stealth plugin. If that doesn't work, you'd need a rotating proxy like Bright Data's, as mentioned in the video.
@jacekpaczos3012 • 5 months ago
@nichtolarchotolok Are you not using Scrapy? I always thought of Scrapy as the most convenient solution.
@nichtolarchotolok • 5 months ago
@jacekpaczos3012 I started off on the Node.js route and haven't had the need to try the Python way of doing this. I do remember trying Scrapy in my early days, but for some reason Puppeteer felt more intuitive to me. That is probably because I felt more comfortable writing JavaScript code.
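A minimal sketch of the puppeteer-extra + stealth setup suggested earlier in this thread (assuming both packages are installed; it reduces obvious headless fingerprints, it is not a guarantee against bans):

```js
import puppeteer from "puppeteer-extra";
import StealthPlugin from "puppeteer-extra-plugin-stealth";

puppeteer.use(StealthPlugin()); // patches common headless giveaways (navigator.webdriver, etc.)

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.goto("https://example.com/");
// ...scrape as usual; add a rotating proxy on top if this alone isn't enough
await browser.close();
```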
@beefykenny • 1 year ago
This video has a lot of value.
@d3layd • 1 year ago
Thank you for this! I used ChatGPT to write a puppeteer script for me the other day and it was fucking slick
@KabbalahredemptionBlogspot • 9 months ago
OK that was way cooler than I thought
@estebancordoba555 • 1 year ago
In my country, some products are more expensive than on Amazon. I built a scraper to get the products and prices with params like the brand or names, but Amazon blocked me a couple of times. This is a really nice solution!
@Jason-nv6ku • 1 year ago
You're amazing! Many thanks!
@kasparsc • 1 year ago
Sir, you are a legend 🔥🔥🔥
@rstar899 • 1 year ago
Amazing video as always 🎉
@blaizeW • 1 year ago
Another gold gem for daddy fireship 🤑🔥
@NathanDodson • 1 year ago
See. This is why I watch all your videos, Jeff. I'm a super shit JS coder, but I'm pretty decent with Python. This gives me an idea for my own eBay business, and scouring those tool docs for Python SDKs to do the same thing. Honestly, it's been your videos that have kept me in the coding space. You always have these creative "concept/idea" videos and a good majority of them have me opening up VSC to do some tinkering. Thanks for all your content brother.
@priapulida • 1 year ago
there's Pyppeteer
@maskettaman1488 • 1 year ago
@BeBop No, it's Pyppeteer
@minhuang8848 • 1 year ago
@bebop355 *pyppeteer tho
@JGBreton • 1 year ago
did this materialize?
@tonymudau3005 • 11 months ago
@JGBreton lmao 😂 asking myself the same thing
@hamza-325 • 1 year ago
I worked for a digital shelf company that scrapes data from Amazon and other websites. They use many proxy services, but one of the most expensive ones was BrightData, so the more experienced workers always instructed us not to use BrightData unless it was really necessary.
@sciencenerd8326 • 1 year ago
What are the others that are better?
@hamza-325 • 1 year ago
@sciencenerd8326 The company made some cheap proxies using AWS machines, for example (they don't have many IPs, but they do the job for many websites). And I think there are cheaper services like ProxyRack.
@fhnvcghj1587 • 7 months ago
@hamza-325 I have a Selenium bot task: I have 1000 accounts but need one IP per account to make requests to the website and do the work. Any idea or paid service for that?
@EuricoAbel • 2 months ago
Incorporating Zeus Proxy into your SEO strategy ensures efficient and effective monitoring and data gathering processes.
@wandenreich770 • 1 year ago
Very insightful
@AbuBakar-pc2fp • 1 year ago
Awesome Explanation
@MrKrzysiek9991 • 10 months ago
The Microbots AI Chrome extension helps with building prompts with the HTML code included. Check it out if you want to write automation code faster.
@forbiddenera • 1 year ago
Puppeteer is the source of non-stop memory leak nightmares for me. Fortunately I got it down to under about 30 MB a day, but originally it was around 30 MB per leak and 250+ MB leaked per day (and it was mostly just loading 2 pages back and forth).
@alejandroarango8227 • 1 year ago
I avoid using it as much as possible; it is a waste of server resources.
@andy12379 • 1 year ago
You could just close the browser and open a new one every time you use it to avoid memory leaks
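One way to keep that habit honest is to make the close unconditional, e.g. something like:

```js
import puppeteer from "puppeteer";

async function scrapeOnce(url) {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "domcontentloaded" });
    return await page.content();
  } finally {
    await browser.close(); // always release the Chromium process, even when the scrape throws
  }
}
```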
@ehsanpo • 1 year ago
Web scraping with Ruby and Rails is one of the best ways.
@wlockuz4467 • 1 year ago
Remote browser as a service is actually a genius idea. Oftentimes when you want to scrape at scale, the most painful part is hosting and using effective proxies. But with this you can literally leave the scraper running on your machine and let BrightData take care of the proxies. You don't even need good specs because the browser runs on a different server.
@quickkcare605 • 1 year ago
Well thought!
@klapaucius515 • 1 year ago
smells like ad
@wlockuz4467 • 1 year ago
@klapaucius515 Do you mean my comment or the video?
@arrvee7249 • 10 months ago
ikr, then you can just pay brightdata $10,000 and go on to make $52 for the data you've scraped.
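For reference, connecting a local script to a remote scraping browser (the setup described at the top of this thread) boils down to puppeteer.connect with a WebSocket endpoint; the endpoint string below is a placeholder, not Bright Data's actual URL format:

```js
import puppeteer from "puppeteer-core"; // no local Chromium needed when the browser is remote

const browser = await puppeteer.connect({
  browserWSEndpoint: "wss://USER:PASS@scraping-browser.example.com:9222", // placeholder endpoint
});
const page = await browser.newPage();
await page.goto("https://example.com/");
console.log(await page.title());
await browser.close(); // ends the remote session
```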
@danvilela • 1 year ago
Brooo, this is awesome!
@Victor4X • 1 year ago
Stuff isn't censored properly at 3:00, but I assume those creds are temporary anyway.
@cymaked • 1 year ago
There are many Fireship videos where he jokes about living dangerously and letting the creds be seen 😂 obviously temp stuff
@thie9781 • 1 year ago
@cymaked Or just F12 to let somebody waste their time.
@Kevgas • 1 year ago
You should create a course on how to do this, I'd pay for that!
@KhaledAlMola • 10 months ago
That is a cool website to use. I'll try it one day
@aseluxestays • 10 months ago
I'm here because I need to hire someone who can provide this service for me. Great video!
@TheHassoun9 • 5 months ago
Hi, I'm willing to help! I'm a dev looking for commission.
@daniamaya • 1 year ago
Gold. Just pure gold.
@forbiddenera • 1 year ago
...While Puppeteer can run headless, you don't have to run it headless. It may still feel headless in the everyday sense of the word, but headless or not is a config option for Puppeteer, and running with headless disabled can sometimes help beat bot detection.
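In other words, the launch option is roughly:

```js
import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  headless: false,        // spawn a visible Chrome window instead of headless mode
  args: ["--start-maximized"],
  defaultViewport: null,  // use the real window size instead of the 800x600 default
});
```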
@rid9 • 1 year ago
This feels like the kind of programming work a Ferengi would be involved with.
@manfredcomplex366 • 1 year ago
Freaking Money Glitch. Love you man❤
@luxurycondobbmg • 1 year ago
I remember my first time scraping a website - except back then, we didn't have ChatGPT proompts to do it for us. We had to physically read the documentation and actually understand the code we wrote
@robertwitzke6134 • 1 year ago
great video!
@chaseclingman • 1 year ago
I liked how you showed the timeout as 2 * 60 * 1000 so beginner friendly haha
@mrgalaxy396 • 1 year ago
I mean, that's way more readable than 120000; this is a pretty common practice.
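For the same reason you will often see the value pulled into a named constant, e.g. (assuming an existing Puppeteer `page` and `url`):

```js
const TWO_MINUTES_MS = 2 * 60 * 1000; // 120000, but the intent is obvious at a glance

page.setDefaultNavigationTimeout(TWO_MINUTES_MS);
await page.goto(url, { timeout: TWO_MINUTES_MS, waitUntil: "domcontentloaded" });
```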
@nskiran • 1 year ago
We used to use Selenium WebDriver (web actions) and PhantomJS to scrape data. IP problems were solved with Nohodo. The good old 2014 stack.
@felixmildon690 • 1 year ago
Tutorial starts at 2:15
@AnshTiwari-fx2yq • 4 months ago
May god bless you
@calmgee • 8 months ago
This was gold
@exploringcrypto6609 • 1 year ago
Jeff, how can you process data so fast?
@katykarry2495 • 1 year ago
Can you share the code in the description, for us to test and edit to our own needs? Loving your videos!
@daniel_q40 • 1 year ago
Data is the new gold
@summonlucifer3603 • 5 months ago
If you use Selenium to open a browser window, you can easily scrape from any website.
@panther_puneeth • 1 year ago
Went over my head, it was so fast.
@maxivy • 1 year ago
Awesome video - I will have to rewrite it in Python though ;) because I am a human bean
@NicolaiWeitkemper • 1 year ago
BeautifulSoup is better anyways :P
@priapulida • 1 year ago
@danielsan901998 or Pyppeteer
@NicolaiWeitkemper • 1 year ago
@danielsan901998 Correct, that's not an even comparison. However: BeautifulSoup >> Cheerio
@rallysahil • 3 months ago
Awesome!
@CandyLemon36 • 6 months ago
I'm impressed by the depth of this material. A book with corresponding themes was a key influence in my life. "AWS Unleashed: Mastering Amazon Web Services for Software Engineers" by Harrison Quill
@TPAKTOPsp • 1 year ago
Any reason why you used Puppeteer over Playwright? I see Bright Data has support for both.
@adityag6022 • 11 months ago
Thank you sir
@garywaddell6309 • 1 year ago
Brilliant
@gregheth • 1 year ago
Wow. Thanks
@kevinbraga9526 • 11 months ago
Great video. I have a question for you: how do you know that this is the industry standard for modern web scraping? Like, how can you find out this kind of information?
@JustBR0 • 1 year ago
Bright data is throwing their money!!
@TheLime1 • 1 year ago
Good money making right there
@kevinbatdorf • 1 year ago
Some of those query selectors look like they'd break in a week. Maybe you need to add OpenAI to the workflow more directly.
@RichardHarlos • 1 year ago
It's a proof of concept/tutorial, not an explicit recommendation for bulletproof boilerplate. Context, eh? :)
@yellowboat8773 • 1 year ago
Maybe output the HTML every time to OpenAI, have it pick the query selector, then insert that into the script. You do have to be very specific with your prompt, because it often replies with: "The query selector is: a.carousel"
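A rough sketch of that idea with the openai Node SDK; the model name and the prompt wording are assumptions, and the returned selector still needs validation before you trust it:

```js
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function pickSelector(html, description) {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model name; use whatever you have access to
    messages: [{
      role: "user",
      content:
        `Reply with ONLY a CSS selector (no prose, no quotes) that matches ${description} ` +
        `in this HTML:\n${html.slice(0, 8000)}`,
    }],
  });
  return res.choices[0].message.content.trim();
}

// e.g. const selector = await pickSelector(await page.content(), "the product title");
```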
@3rawkz • 1 year ago
Scrapy all day baby!
@kairee1093 • 1 year ago
thanks
• 1 year ago
For this topic alone it's worth learning Python along with Scrapy.
@v1s1v • 8 months ago
Nice tutorial, but there are AI tools now like Kadoa that can do all of this for you. In the time it takes for you to watch this video, you can get an AI scraper up and running.
@SkySesshomaru • 1 year ago
o.o that's some impressive shit right there
@tw-wp5uv • 1 year ago
Bright Data is quite expensive, with average success rates for websites with strong protection measures. Keep that in mind if you want to scale your scraping.
@sebastianacostamolina9593 • 9 months ago
really cool
@oblivion_2852 • 1 year ago
Could we have a vid on the difference between Selenium and Puppeteer?
@UmanPC • 1 year ago
Great!!!
@Xld3beats • 1 year ago
Guess it's time to write a program that applies to every job on the internet.
@kalelsoffspring • 1 year ago
Presumably this can be used to DDoS as well. Do you know if there are any protections in place, or how blame is handled if someone does cause something like that? Like, if Amazon starts giving 403s, does it automatically get a fresh clean IP? Those aren't infinite, so I'm curious if you'd be charged for going through too many IPs at a particular service.
@xetera • 1 year ago
Bright Data is insanely expensive, so that's the protection against DDoS lol. You'll run out of money before you even have the chance to send enough traffic to cause a problem.
@Dev-Siri • 1 year ago
Just as I thought the AI videos had ended
@shaharyarkhan7553 • 5 months ago
Can you please make a video on how to efficiently fetch divs with dynamic classes, etc.?
@TheMalcolm_X • 1 year ago
This video felt like one giant sponsored ad.
@progamer1196 • 1 year ago
As soon as I saw the thumbnail I knew this was an ad for BrightData.
@profsacin • 1 year ago
The subtle sarcasm made me giggle 🤓 .. a bit
@hermanplatou • 1 year ago
Doesn't Amazon rotate the classes and ids, effectively breaking your selectors? Not sure how the most advanced RPA bots work, but I'm hoping some of them offer an AI that grabs screenshots and parses them instead. Would be interesting as a follow-up!
@makkusu3866 • 1 year ago
Yeah, I think classes should be auto-generated, at least after every deployment if not every request. A quick and dirty solution would be to use the OpenAI SDK to prompt ChatGPT to generate the document query code and eval it.
@trappedcat3615 • 1 year ago
@makkusu3866 You can select elements based on attributes or the lack of attributes, or you can use pseudo-classes such as :nth-of-type. There are dozens of them.
@iljazero • 1 year ago
@trappedcat3615 Yeah, that's how I wrote the scraper for another website: I targeted div elements with style X, which often... doesn't change, cuz... why would it ;D
@arthurchazal3064 • 1 year ago
Most websites with random id/class names still have a common and repetitive structure. Axios + regex and you'll process ~10 times as many pages as with Puppeteer, with minimal bandwidth by default and simpler code. Just validate the output with a strict schema (as you always should) and you'll maybe have to update it once a year at most. Puppeteer's only real advantage is the TLS fingerprint.
@cyberzjeh • 1 year ago
@arthurchazal3064 You can also use something like cheerio as a middle ground between an entire headless browser and parsing HTML with regex (chad move tho ngl).
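A compact version of the axios/cheerio approach described in this thread, with the strict output schema the earlier reply recommends (joi is just one choice of validator, and the selectors are illustrative):

```js
import axios from "axios";
import * as cheerio from "cheerio";
import Joi from "joi";

const { data: html } = await axios.get("https://example.com/product/123");
const $ = cheerio.load(html);

const product = {
  title: $("h1").first().text().trim(),
  price: $('[data-testid="price"]').first().text().trim(), // illustrative attribute
};

// Fail loudly when the page structure drifts instead of silently storing garbage
const schema = Joi.object({
  title: Joi.string().min(1).required(),
  price: Joi.string().min(1).required(),
});
const { error } = schema.validate(product);
if (error) throw new Error(`Scraper output drifted: ${error.message}`);
```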
@mlnima • 1 year ago
I was using Node.js before, but now it's only Python with the Chrome WebDriver + an ad-blocker extension on Selenium, and it gets the job done. I can also get a free proxy list and use it in the script. I call that advanced, not what you show here.