
The Biggest Mistake Beginners Make When Web Scraping 

John Watson Rooney
78K subscribers
103K views

The first 1,000 people to use the link or my code johnwatsonrooney will get a 1 month free trial of Skillshare: skl.sh/johnwatsonrooney05221
The biggest mistake I see beginners make is trying to scrape from the front end of a site instead of the backend.
The most common thing I see when I'm asked how to scrape a certain site is people struggling to get data from the front end of a modern website. These sites are single-page applications (SPAs) that rely heavily on JavaScript or a JS framework to load their data. Scraping the rendered front end, whilst sometimes possible, is often not the best option for extracting the data. We need to understand a little about how these sites work so we can use the right approach or method to access the data.
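For illustration, a minimal sketch of that backend-first approach, assuming the site exposes a JSON endpoint visible in the browser's Network tab; the URLs, consent-button selector and endpoint below are placeholders rather than the exact values used in the video:

```python
# Grab the cookies a real browser would have (e.g. after the consent click),
# then call the hidden JSON endpoint directly with requests - no HTML parsing.
import requests
from playwright.sync_api import sync_playwright

PAGE_URL = "https://example.com/list"           # page you would normally browse
API_URL = "https://example.com/page-data.json"  # hidden endpoint seen in the Network tab

def get_cookies() -> dict:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context()
        page = context.new_page()
        page.goto(PAGE_URL)
        page.click("button.accept-cookies")      # placeholder consent-button selector
        cookies = {c["name"]: c["value"] for c in context.cookies()}
        browser.close()
    return cookies

def main():
    cookies = get_cookies()
    r = requests.get(API_URL, cookies=cookies, timeout=30)
    r.raise_for_status()
    data = r.json()                              # structured data, ready to use
    print(len(data))

if __name__ == "__main__":
    main()
```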
github.com/jhnwr/billionaires...
Scraper API I use: www.scrapingbee.com/?fpr=jhnwr
DISCORD (NEW): / discord
Patreon: / johnwatsonrooney
Proxies: proxyscrape.com/?ref=jhnwr
Hosting: Digital Ocean: m.do.co/c/c7c90f161ff6
-------------------------------------
Disclaimer: These are affiliate links and as an Amazon Associate I earn from qualifying purchases
-------------------------------------
Video
Camera: amzn.to/3b5N1ej
Lens: amzn.to/3QAm7LT
Lighting: amzn.to/3Oivn64
Audio
Interface: amzn.to/3OqEqRQ
Microphone: amzn.to/3xISfEC
Headphones: amzn.to/3zQrwZv
PC
Case: amzn.to/3dEz6Jw
PSU: amzn.to/3kc7SfB
CPU: amzn.to/2ILxGSh
Mobo: amzn.to/3lWmxw4
Ram: amzn.to/31muxPc
GFX card: amzn.to/2SKYraW
32" monitor: amzn.to/3sqgq98 (main)
27" monitor: amzn.to/2GAH4r9 (vertical)
24" monitor: amzn.to/3jIFamt (spare)
Dual monitor arm: amzn.to/3wpBvlg
Mouse: amzn.to/2SH1ssK
Keyboard: amzn.to/3A1B8jU

Science

Published: 3 May 2022

Comments: 136
@JohnWatsonRooney 2 years ago
The first 1,000 people to use the link or my code johnwatsonrooney will get a 1 month free trial of Skillshare: skl.sh/johnwatsonrooney05221
@josele7666 a year ago
Would it be possible to scrape any website with Playwright using a headless browser? For example williamhill or bet365?
@thetransferaccount4586 a year ago
One of the best tutorials I've seen on web scraping. I wish you would make more of these. Thank you.
@CodePhiles 2 years ago
this is brilliant and opens a new door for us, thank you John for your great work
@MrViolentKoala 9 months ago
You genius! At first I was skipping over the video like "what is he doing, that's not scraping, that's no help"... but then I actually took a look at the website I wanted to scrape, and there it is! All the info I want, nicely formatted into a JSON! I copied your code from GitHub and after a few minutes everything was working perfectly. Thank you a lot! (Also for me it was the same, I only needed the cookie for the request to work.)
@dmitriyneledva4693 a year ago
your channel and this video are the best things youtube has ever suggested to me ^^
@SunDevilThor 2 years ago
Pretty cool trick. Thanks for sharing!
@iamkian 2 years ago
Thank you. I'm going to be testing a little bit with this.
@fransubaru 2 years ago
Amazing content John! Thanks for sharing!
@JohnWatsonRooney 2 years ago
Thanks!
@MiguelRodriguez2010 3 months ago
What a great explanation, thanks man!
@hrvojematosevic8769 2 years ago
Nice videos, keep creating quality content John :) Just a quick question: can you not do the same via requests by using .Session() and .cookies?
@sweatdog 6 months ago
I also wondered why Session() wasn't used to track the cookies for the request. Perhaps he was demonstrating that the initial "accept cookies" button needs to be interacted with using Playwright, and then you are off to the races.
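For reference, a minimal sketch of the requests.Session() idea raised here, with placeholder URLs; note that if the consent cookie is only set by JavaScript after clicking a button, a plain session won't pick it up, which is where the Playwright step comes in:

```python
# A Session stores any cookies the server sets and resends them automatically.
import requests

with requests.Session() as s:
    s.get("https://example.com/")                    # server may set cookies here
    print(s.cookies.get_dict())                      # inspect what was stored
    r = s.get("https://example.com/page-data.json")  # cookies are reused on this call
    print(r.status_code)
```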
@Bobbias 2 years ago
Not every site relies on JS for loading data from the back end. Sometimes your only option is to scrape the front end. Being able to directly query an API is always going to be the better solution, but sometimes you're stuck doing things the hard way.
@JohnWatsonRooney 2 years ago
Absolutely! If it’s only html, then use that!
@Bobbias 2 years ago
@@JohnWatsonRooney yeah, I don't generally do web scraping, but I ended up doing some for my most recent project, and unfortunately not only was there no API to pull from, just navigating through the html was a pain due to a lack of Ids/classes on some relatively deeply nested tables. It resulted in some pretty nasty code to work my way to the relevant data :/
@ToughdataTiktok a year ago
I did a similar scraping project on an ancient website with no html id/classes. It was a pain, but regexes did help
@agentnull5242 a year ago
@@ToughdataTiktok How did they help? I've done this before and had to quit.
@ToughdataTiktok a year ago
@@agentnull5242 Regex helped to extract data where there were patterns in the HTML
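As an illustration of that regex approach (a proper HTML parser is usually the safer choice), assuming the values sit in a repeated pattern such as table cells; the markup here is invented:

```python
# Non-greedy capture of each cell; re.S lets '.' span newlines in real pages.
import re

html = "<tr><td>Acme Corp</td><td>120.5</td></tr><tr><td>Globex</td><td>98.1</td></tr>"

rows = re.findall(r"<tr>\s*<td>(.*?)</td>\s*<td>(.*?)</td>\s*</tr>", html, re.S)
print(rows)  # [('Acme Corp', '120.5'), ('Globex', '98.1')]
```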
@aliacosta7073 11 months ago
Great tutorial, thanks a lot!!!!
@ViktorRzh 2 years ago
Good job! 👏
@twiincentral8780 2 years ago
Thank you for all the great content and specifically this video! Going to try this with Walmart to see if it will work!
@austinrutledge6484 a year ago
I'm using Walmart as my web scraping test too lol. They have really good bot detection.
@stephenwilson0386 a year ago
Really glad I found your channel, I'm hoping to learn enough web scraping to get some extra income on the side (or maybe even full-time). Very off-topic, but what window manager are you using?
@JohnWatsonRooney a year ago
Thanks, it's a good skill to learn. It also teaches you about how websites work, APIs and handling data. I use i3wm; here it's skinned using Regolith, though I don't use that anymore, just a much more basic i3 setup.
@user-eu6vy8sz4w 6 months ago
Informative!
@Julian-sn9vc a year ago
Great tutorial! I have a question though: What do you have to pay attention to when you log in to the site in order to scrape the data in relation to the cookies, and not be blocked? I'm wondering if there are any specific precautions or best practices we should follow when our requests are connected to an account. Thanks!
@David-mj9st 2 years ago
I used to copy the cookie directly from the browser; after this video I should review my code and make it a little better.
@oyuncubirlii a year ago
amazing video!!!!
@wangdanny178 2 years ago
another API tutorial with the help of cookies. Thanks John.
@san24dec a year ago
Really nice explanation. Will this method also work where we interact with web elements to download a file from the front end, like clicking a button and then downloading a CSV file?
@vtrandal a year ago
Excellent. I think I got the main points. Thank you! But... I do have multiple problems recreating your results. First, the Forbes website implementation seems to have changed, but I found a fetch request by filtering larger-than:1M and used it in Insomnia as you did. However, I do not follow how you decide which cookie to use in your code. Your hard-coded [3] for "INDEX LIST" baffles me. In fact any non-empty cookie seems to work (perhaps only for a while).
@jithin.johnson 2 years ago
Good one!
@JohnWatsonRooney 2 years ago
Thanks!
@kanikakushwaha2009 a year ago
thank you
@defiler90 11 months ago
Scraping the backend you'll encounter honeypots. If you're scraping sites with strong anti-scraping, going from the front end is probably the only way, especially if they're determining whether you're a bot based on behaviour.
@markblake77889900 2 years ago
Pretty decent methodology, but a few inaccuracies. CORS does not have an effect on backend API security; it is a client-based security model. It instigates a "pre-flight" check in order to stop untrusted websites from consuming state-changing APIs. You're not using a browser, so you'll never hit a CORS limitation. To consume the API you just need the application OAuth2 token, or whatever is being used for API auth. You may also need to set your "Origin" header, which is just an idiosyncratic server behaviour.
@JohnWatsonRooney 2 years ago
Great, thank you - I appreciate the corrections
@markblake77889900 2 years ago
@@kesbetik I meant authentication, but authorisation would still make sense. Was it too ambiguous? Have I confused you? Any authenticated API needs some sort of session information. This is often an API token using the OAuth2 mechanism, but it absolutely doesn't have to be. Any suitably random string is good enough, and it can be in the POST body or the header. There are just security concerns for each, such as leaving tokens in the DOM making them accessible to JavaScript etc.
@soniablanche5672 a year ago
Yeah, CORS is to protect legitimate users from attackers rather than protect the server from attackers.
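A small sketch of the point made in this thread: outside a browser, CORS never blocks you, so reproducing an API call usually comes down to sending the right headers; the URL, token and header values are placeholders for whatever the real site requires:

```python
# Reproduce the API call a browser would make, supplying auth and origin headers.
import requests

headers = {
    "Authorization": "Bearer YOUR_TOKEN_HERE",   # whatever auth scheme the API uses
    "Origin": "https://example.com",             # some servers insist on a matching Origin
    "Referer": "https://example.com/list",
    "User-Agent": "Mozilla/5.0",
}

r = requests.get("https://example.com/api/items", headers=headers, timeout=30)
print(r.status_code)
```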
@humansaremortal3803 2 years ago
Hello, have you tried scraping sockets at any time?
@statsnow3354 2 years ago
Can you do asynchronous scraping with Playwright?
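Playwright does ship an asyncio API, so yes; a minimal sketch (not from the video) fetching two placeholder pages concurrently:

```python
# Fetch several pages at once using Playwright's async API and asyncio.gather.
import asyncio
from playwright.async_api import async_playwright

async def fetch_title(browser, url):
    page = await browser.new_page()
    await page.goto(url)
    title = await page.title()
    await page.close()
    return title

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        titles = await asyncio.gather(
            fetch_title(browser, "https://example.com/"),
            fetch_title(browser, "https://example.org/"),
        )
        print(titles)
        await browser.close()

asyncio.run(main())
```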
@coolcat4805 2 years ago
Great as always, can you share code as you mentioned?
@fischerdev a year ago
I already did complex web scraping using C#, retrieving more than 1 million records, but I am finding these videos interesting. I would have to learn Python. Two questions: a) What is this inspect tool? Is it called Insomnia? b) How do you handle Google's "I am not a robot"? Do you have a video about it?
@Victor_Marius 10 months ago
What if you need to scrape a website that has multiple iframes embedded into each other? Do you get the contents of the iframes with the main page?
@justusmzb7441 5 months ago
How do you handle backends that use some weird, very dynamic security methods? I think it's reCAPTCHA v3 in my case (the JavaScript call goes through a function whose name suggests as much). I was desperately trying to crack the search endpoint of the Al Jazeera search function for a research project... and simply hijacking the cookies still resulted in a 403, even if they were freshly stolen from a Selenium session just milliseconds before.
@gugurlqk a year ago
Hey John, your work is awesome, but for some reason I get a JSONDecodeError. I tried executing the Insomnia code in a separate .py file and it returned the JSON data. However, when I try to execute your code I hit this nasty JSONDecodeError. Do you have any advice on how to fix it?
@Kaganerkan 5 months ago
Going to ask how to submit a form like this, or is it possible? I'm trying to fill a blank text field then press submit to send.
@johnme60 a year ago
I have a website. It downloads some JavaScript and calculates a token. This token is then sent to the API as a Bearer header, and it looks like the token is only calculated while the page is loaded in the browser. It's been a week and the token still hasn't expired, but I really want to make it dynamic. Should I use a browser to load the page and grab the token?
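One possible approach to this (not shown in the video): load the page in Playwright and listen to the requests its JavaScript makes, capturing the Authorization header it attaches; the URL and wait time are placeholders:

```python
# Listen to outgoing requests from the page and keep any Bearer token seen.
from playwright.sync_api import sync_playwright

captured = {}

def on_request(request):
    auth = request.headers.get("authorization")
    if auth and "bearer" in auth.lower():
        captured["token"] = auth          # keep the freshest Bearer token seen

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.on("request", on_request)
    page.goto("https://example.com/app")  # the page whose JS computes the token
    page.wait_for_timeout(5000)           # give the app time to call its API
    browser.close()

print(captured.get("token"))
```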
@lulu_dotes 2 years ago
Thank you for the very informative and interesting tutorial. May I ask what browser you are using?
@lulu_dotes 2 years ago
what browser are you using for inspecting?
@JohnWatsonRooney 2 years ago
Sure, it’s Firefox
@lulu_dotes 2 years ago
@@JohnWatsonRooney thank you so much, I'm learning so much from your tutorials. I am new to web scraping and I'm watching your playlist; can you recommend any of your playlists to start web scraping with?
@anthonyrodz7189 4 months ago
Newbie question, but is Google ending third-party cookies in Chrome going to change this?
@axedexango 2 years ago
Great videos, thanks for helping us get better. What can I do if I can't find the API? I clicked on all the XHR requests but none of them has the values.
@JohnWatsonRooney 2 years ago
Thanks! Try going to different links and other parts of the site and see if you can find any
@axedexango 2 years ago
@@JohnWatsonRooney I tried but I can't find any API; they have a websocket I can connect to but I also don't get any values
@Christian-mn8dh a year ago
@@axedexango I have the same issue
@Kfq1111 2 years ago
Can you use the same method on an access-controlled website? There's a website that does some questionnaires and I am trying to get the weighting of each question.
@agrobots 2 years ago
Same question here. I’m trying to scrape sites that require PIV card authentication.
@MrMehrd 10 months ago
Genius
@__STOIC__ a year ago
This should be the first video that pops up when you're new to making software.
@stewart5136 a year ago
Thanks John. The site has changed and now it looks like you can just grab '......page-data/index/page-data.json' - but your video really helped to always inspect and see what's happening. Instead of Playwright could we do the same with requests_html render()?
@JohnWatsonRooney a year ago
Unfortunately that happens! Site updates means you have to stay on top of everything. You probably could use render() but it hasn’t worked well for me recently so I stopped using it
@killianrak a year ago
Hey! The API that I'm scraping is giving me the response after some delay. Does anybody know why this delay occurs and how to bypass it? Thanks
@wangdanny178 2 years ago
Incognito mode shows the same web elements. And on my side, there is only 'accept' and no 'accept all' for cookies, and I have to scroll down to the bottom to click 'cookie preferences', which is totally different from the video.
@kafran a year ago
Why use Playwright to get the cookie instead of requests' session?
@KhalilYasser 2 years ago
Thank you very much for your amazing tutorials. I used Insomnia to mimic the request as you did; when unchecking cookies and clicking Send, I still get a response. I tried deleting the cookie entirely but received the same response. I created a new request and changed the settings of the request before doing anything, but I still got a response. I need to get a blank response when I uncheck the cookie, as you did in the video.
@JohnWatsonRooney 2 years ago
Hey! Are you doing it on the same site? If it's a different website, some will let you in without the cookie
@KhalilYasser 2 years ago
@@JohnWatsonRooney Yes, I am trying on the same website you explained in the video
@RyuDouro a year ago
will this work on all browsers?
@Rapid1898 2 years ago
Hello - where can I find the code from this video? (Can't find any GitHub link or anything else in the description.)
@JohnWatsonRooney 2 years ago
yes sorry added now to desc!
@Rapid1898 2 years ago
@@JohnWatsonRooney Thanks a lot!
@tobiewaldeck7105 2 years ago
Hi, I have 2 questions. 1. How do I know the context.cookies index? 2. What scraping method should I use on a website where the next button doesn't change the page number but only updates the data dynamically? A Chrome extension or even UiPath can do this for me, but on a site I'm practising on, the JSON data I'm getting is irrelevant.
@JohnWatsonRooney 2 years ago
Hey! The headers returned will be a list or dictionary- if you print out the whole thing you can work out how to index or reference it. For the second question you want to try to find the Ajax request that’s being made when the next button is clicked, check out the video on my channel called best scraping method
@tobiewaldeck7105 2 years ago
@@JohnWatsonRooney Thank you for replying. I was hoping I would not have to go so deep into web scraping with coding, because I just learned some VBA and Python for this purpose, but unfortunately there isn't a one-size-fits-all website, and creating my own scrapers just takes fewer PC resources.
@gugurlqk a year ago
@@JohnWatsonRooney can you explain in more detail the index part?
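To expand on the indexing question: context.cookies() returns a list of dictionaries, so rather than relying on a hard-coded position like [3], you can print the list and pick the cookie you need by name; the URL and cookie name below are placeholders:

```python
# Print every cookie the site set, then look one up by name instead of by position.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    page = context.new_page()
    page.goto("https://example.com/")
    cookies = context.cookies()

    for c in cookies:
        print(c["name"], c["value"][:40])

    # Safer than cookies[3]: find the cookie by its name.
    wanted = next((c for c in cookies if c["name"] == "session_id"), None)
    print(wanted)
    browser.close()
```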
@TrevCole 11 months ago
If the site only serves the JSON data via a POST instead of a GET request, do you have to use front-end scraping?
@JohnWatsonRooney 11 months ago
You should be able to replicate the POST request in the same way and get the results back
@ajitha7768 a year ago
You're the best, man! I'm waiting for the new version
@graczew 2 years ago
Great content as always. I'm curious how you store all the scraped data? I'm on a data analysis path and do some small projects where data is gathered daily. Have you worked on something like that? I have an SQL database to hold all the raw data, then use pandas to clean and analyse it.
@JohnWatsonRooney 2 years ago
Thanks! Small projects I use SQLite, anything bigger I use Postgres. I have used mongodb before too which I may do a video on
@jeroenvermunt3372 2 years ago
@@JohnWatsonRooney Could you do a video (or maybe you already have) on efficiently updating your Postgres database? I am currently designing some kind of pipeline myself which stores batches of insert operations and batches of update operations to apply in one transaction. I think it would be a useful video for developers trying to implement scrapers in production. Love the content as always!
@renancatan 2 years ago
Guys, what I do is use the SQLAlchemy library to work with SQL Server/Postgres/MySQL. There are options like append and replace: I load the scraped data with replace for temporary tables, or tables that get dropped and recreated automatically every day, and in the same script below I use append, so every time the script runs it inserts more rows into SQL. There you can also define the table's column types like int, varchar, boolean etc.
@graczew 2 years ago
I just started learning SQLAlchemy, but for small projects it looks like too much. What I like about it is the modeling and data validation. There is one thing which confuses me: relationships. I wish to see someone build a whole project - scrape, clean and validate, plus DB design.
@JohnWatsonRooney 2 years ago
@@graczew Hey, sure, I get that. I've got some projects lined up that I can tailor more to this side of scrape/validate/load. The best way to understand the basic relationships is to learn to build some basic web apps with something like Flask; it helped me a lot
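A short sketch of the pandas + SQLAlchemy pattern described in this thread, where 'replace' rebuilds a table on every run and 'append' keeps adding rows; the database URL and table names are placeholders:

```python
# Write scraped rows to a database via pandas; swap the SQLite URL for Postgres/MySQL.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///scraped.db")

df = pd.DataFrame([{"name": "Acme", "price": 12.5}, {"name": "Globex", "price": 9.9}])

df.to_sql("latest_snapshot", engine, if_exists="replace", index=False)  # rebuilt each run
df.to_sql("price_history", engine, if_exists="append", index=False)    # grows over time
```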
@yakobadamovich1813 a year ago
I wish all websites were exposing their APIs like Forbes is doing... Sometimes Selenium is a necessity
@zackplauche 11 months ago
Is there a tl;dr of this vid?
@wangdanny178 2 years ago
A follow-up question: is the Network layout the same in the Chrome browser?
@JohnWatsonRooney 2 years ago
Same functionality yeah, it just looks a little different
@wangdanny178 2 years ago
I do not know what is wrong, but there is no "button.trustarc-agree-btn". This video is only 2 days old; is it possible they changed the web elements already?
@JohnWatsonRooney 2 years ago
Open the url in incognito mode and it should be there
@hamdimohamed8913 a year ago
what is the web browser that you are using ?
@JohnWatsonRooney a year ago
I use Firefox mostly but sometimes chrome for demonstrating
@fhkdhkdyidyhfufufh9011 a year ago
Are there sites that do not support web scraping?
@sandeshpandit5540 a year ago
Please tell me, is web scraping a good career option? Do companies hire you for web scraping?
@reverendbluejeans1748 a year ago
Selenium is used for website testing. You could be an automation QA tester.
@reverendbluejeans1748 a year ago
My inspector on Chrome looks completely different. I don't know what to do.
@kamizandesh2750 2 years ago
Thanks for your helpful tutorial. Is it possible to explain web scraping on some websites which have dynamic content, like Yahoo Finance? There are many samples but none of them work properly or show the price live. Best regards
@Data_Creator a year ago
wow...
@soniablanche5672 2 years ago
LMAO 5 MB of JSON. Are they sending the entire database ?
@drac.96 2 years ago
Can you please share the code? It's not in the description.
@JohnWatsonRooney 2 years ago
yes sorry added now to desc!
@drac.96 2 years ago
@@JohnWatsonRooney Much appreciated!
@vetrivelr5708 2 years ago
Great info. But I do have a quick question. On my company's website I was able to successfully log in via requests. Then I tried to find the links to the different pages, but I couldn't find any anchor tags or links for the clickable buttons which lead to other pages. Therefore I used explicit URLs for the corresponding pages, but it always returns only the home page HTML. I used a context manager here. I don't understand why. Any suggestions would be appreciated. Thanks
@JohnWatsonRooney 2 years ago
If you log in using requests you’ll need to use a session so it saves your logged in status and you can then visit other pages
@vetrivelr5708 2 years ago
@@JohnWatsonRooney Thank you. Of course I have used a session, as I learned from you. No matter what page URL I use, it simply always returns only the home page HTML. That's what I couldn't figure out.
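For reference, a minimal sketch of the session-based login John describes, with placeholder URLs and form fields; if a site keeps returning the home page, a common cause is a missing CSRF token or an extra header the real login form sends:

```python
# Log in once with a Session, then reuse the same session for protected pages.
import requests

LOGIN_URL = "https://example.com/login"
TARGET_URL = "https://example.com/reports"

with requests.Session() as s:
    payload = {"username": "me@example.com", "password": "secret"}  # field names vary per site
    resp = s.post(LOGIN_URL, data=payload)
    resp.raise_for_status()

    # The same session now carries the login cookies to other pages.
    page = s.get(TARGET_URL)
    print(page.url)          # if this redirects back to the home page, the login did not stick
    print(page.status_code)
```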
@Rapid1898 2 years ago
Hello - I tried to follow your description, but when I do exactly the same thing as you (on Windows) and reload the page I get 244 requests. Why do you only have 7 when showing this in the video? I was also not able to find the page-data.json file. I also saw that you have a "file" column in your inspect window, but I don't have a "file" column available when right-clicking on the column headers. How can I find this JSON file response you showed in the video?
@JohnWatsonRooney 2 years ago
Are you using chrome? I find that the inspect element tool looks different on chrome (this is Firefox) they will be there in chrome too - sometimes you need to click different pages to see them
@Rapid1898 2 years ago
@@JohnWatsonRooney Yes, it's Chrome - but I will give it a try with Firefox
@Rapid1898 2 years ago
Tried it now with Firefox and it works as I see it in your video. Another question, by the way - is it possible to get the code you used in your video somewhere (can't find any GitHub link in the video description)?
@wangdanny178 2 years ago
John, another question is about Insomnia. My requests are always timing out. Do you have any idea why?
@kusunagi1 2 years ago
What are cookies actually? Just part of the request?
@JohnWatsonRooney 2 years ago
It’s a small bit of data the server puts on the client computer to help identify it
@mehmetkamildemirkent4319 2 years ago
Been there, done that.
@kiddo8714 a year ago
Want to learn web scraping. Where do I begin on your channel?
@JohnWatsonRooney a year ago
Good question, I must reorganize my playlists! maybe this one ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GyB43hudfQw.html
@LoneWolF45236 4 months ago
Didn't know that Robert Downey Jr. aka "Iron Man" mastered web scraping. 😆
@return_1101 2 years ago
What is wrong with Selenium?...
@JohnWatsonRooney 2 years ago
Nothing, it’s a great tool for testing websites. But I think many people lean on it for scraping data when they don’t need to
@return_1101 2 years ago
@@JohnWatsonRooney Thank you very much. This video is very informative. But I only know Selenium and BS4... (requests).
@oopss794 a year ago
a headless browser won't be able to get an httpOnly cookie for example
@theTrigant a year ago
Why, of all the pages on the web, did you choose a frickin site about the most sinister humans on earth?
@coddude3284 11 months ago
Allow saving videos, why are you disabling it?!
@kikiryki 2 years ago
Hi John! Would you please explain in a new video how to automate a Google search of a list of queries (company names as keywords in column A of an XLSX or CSV file), and save some output of the Google search (top 3 results - URL, title, address, business ID) to a new results CSV or XLSX file? Thank you!
@moomoocow526 2 years ago
Automating Google searches is almost impossible given their bot detection. Try using an alternative search engine
@kikiryki 2 years ago
@@moomoocow526 Not necessarily Google, but I want to learn to iterate a list in a search box (with a dropdown list to choose from) and save some results to CSV.
@anasouardini 8 months ago
Biggest mistake one: using Selenium LOL
@rons96 2 years ago
"Hey guys, selenium is wrong, let's use playwright instead, that works the same way" hurdur.
@gabrielkcgamox a year ago
Hi, I want to know how to identify the CSS selector that you write in page.click().