
How to store data with Python and SQLite3 

John Watson Rooney
82K subscribers · 44K views

If you're not storing your data in a database yet and aren't sure where to start, let me help you: use SQLite. In Python we have easy access to SQLite databases, and I will show you how to easily create, connect to, and add data to your new database, including a project-based example where we don't want to add data that already exists.
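The workflow the video describes - create, connect, insert, and skip rows that already exist - can be sketched like this (the table name and columns are illustrative, not taken from the video):

```python
import sqlite3

# Connecting creates the database file if it doesn't already exist
con = sqlite3.connect("products.db")
cur = con.cursor()

# A UNIQUE constraint on url lets SQLite reject duplicate rows for us
cur.execute("""
    CREATE TABLE IF NOT EXISTS products (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        price REAL,
        url TEXT UNIQUE
    )
""")

items = [
    ("Widget", 9.99, "https://example.com/widget"),
    ("Gadget", 4.50, "https://example.com/gadget"),
    ("Widget", 9.99, "https://example.com/widget"),  # duplicate - silently skipped
]

# INSERT OR IGNORE skips any row that would violate the UNIQUE constraint
cur.executemany(
    "INSERT OR IGNORE INTO products (name, price, url) VALUES (?, ?, ?)", items
)
con.commit()

cur.execute("SELECT COUNT(*) FROM products")
print(cur.fetchone()[0])  # 2 - the duplicate was not re-inserted
con.close()
```

The `?` placeholders are worth copying: they let SQLite handle quoting and protect against SQL injection when the data comes from a scraper.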
Support Me:
Patreon: / johnwatsonrooney (NEW)
Amazon UK: amzn.to/2OYuMwo
Hosting: Digital Ocean: m.do.co/c/c7c9...
Gear Used: jhnwr.com/gear/ (NEW)
-------------------------------------
Disclaimer: These are affiliate links and as an Amazon Associate I earn from qualifying purchases
-------------------------------------

Published: 7 Sep 2024

Comments: 65
@tubelessHuma · 3 years ago
SQLite is very easy compared to other databases. A short but very useful video.👌👍

@lubomirivanov1741 · 3 years ago
An absolute pillar of society.

@thebuggser2752 · 9 months ago
Very nice overview of creating and viewing a database.

@eugenepierce8883 · 3 years ago
Thank you, John! You're doing an amazing job by sharing your knowledge with others in such a great form. Wish you all the best!

@JohnWatsonRooney · 3 years ago
Thank you!

@deeperblue77 · 3 years ago
Really helpful and great stuff. Thanks John.

@CaribouDataScience · 11 months ago
Thanks
@mrklopp1029 · 3 years ago
Thank you for this! Keep up the great work.

@JohnWatsonRooney · 3 years ago
Thank you, lots more coming!

@camp3854 · 3 years ago
Would be cool to see this as part of a Scrapy project, e.g. in a pipeline.

@JohnWatsonRooney · 3 years ago
That video is done and will be released in a week or so!

@camp3854 · 3 years ago
@JohnWatsonRooney Nice! Thanks for all your hard work, your channel is amazing!
@ammadkhan4687 · 1 year ago
Could you please make a video about using the Microsoft Graph API to access Outlook or SharePoint, with the steps to register the app?

@jonathanfriz4410 · 3 years ago
Another great one John!

@rugvedz · 3 years ago
Thank you for the videos. I've learnt a lot from you. Can you please make a video about handling captchas without using Selenium?

@AlexRodriguez-do9jx · 2 years ago
As an extension to this tutorial it would be really cool to see a way of hosting this sqlite3 database instance in a Docker instance w/ Redis or something. Nevertheless excellent video, super practical. Love your content.

@w33k3nd5 · 3 years ago
Hey, that was wonderful. Could you make a video on when Amazon blocks you from scraping or shows a captcha? The way you explain and teach makes things really easy to cope with. Thanks

@00flydragon00 · 1 year ago
wow this vid is so clean

@chizzlemo3094 · 2 years ago
Cool thanks

@amarAK47khan · 10 months ago
great practical stuff!
@Daalen03 · 3 years ago
Thanks John! Really helpful again. A while back you mentioned a video on deploying to Heroku, is that still in the works?

@JohnWatsonRooney · 3 years ago
It is!

@asmuchican490 · 3 years ago
John, I think MongoDB is better than SQLite for crawling with multiple spiders. With SQLite we have to write more code, and it gives unnecessary errors related to pipelines and the database connection. For complex and large projects MongoDB is better.

@JohnWatsonRooney · 3 years ago
Sure, SQLite has its downsides. I like Mongo and have used it in some of my personal projects. The point I wanted to make was that if you aren't familiar with databases then use SQLite now and start getting used to it. Good points though

@serageibraheem2386 · 3 years ago
The best of the best

@Thomas_Grusz · 2 years ago
Thanks for this!

@KhalilYasser · 3 years ago
Thank you very much.

@shoebshaikh6310 · 3 years ago
Great work.

@businessman6269 · 3 years ago
Amazing video! Could you do a video on web scraping using a VPN as a proxy? For example, using ProtonVPN or NordVPN for scraping data from Amazon? Thanks!

@JohnWatsonRooney · 3 years ago
Great suggestion! I'll add it to my video notes!

@trungnguyenduc2443 · 3 years ago
Hi, can you guide us on how to create a new sheet in a workbook every day for data updates?

@proxy7448 · 2 years ago
Late comment, but how about checking if the data is already there? I could loop through, but depending on the data that'd be slow.
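One answer to the existence-check question above: let the database do the lookup with an indexed query instead of looping in Python. A minimal sketch, assuming a `seen` table keyed on URL (names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE seen (url TEXT PRIMARY KEY)")
con.execute("INSERT INTO seen VALUES ('https://example.com/a')")

def already_seen(con, url):
    # The PRIMARY KEY index makes this a single fast lookup - no Python loop
    row = con.execute("SELECT 1 FROM seen WHERE url = ? LIMIT 1", (url,)).fetchone()
    return row is not None

print(already_seen(con, "https://example.com/a"))  # True
print(already_seen(con, "https://example.com/b"))  # False
```

Because the column is indexed, this stays fast even with millions of rows, whereas fetching everything and looping gets slower as the table grows.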
@nurlansalkinbayev3890 · 3 years ago
Hello John. Thanks for your work. Can you make a video on how to send an email once a day automatically?

@JohnWatsonRooney · 3 years ago
Yes I can - I will add it to my notes!

@bisratgetachew8373 · 3 years ago
Great video once again. Can you please finish the FastAPI video? Thanks again

@JohnWatsonRooney · 3 years ago
Sure - it's on my list, I've got a lot going on but will get there

@FabioRBelotto · 2 years ago
Great video. Can you talk a bit about using SQL's "CREATE TABLE AS" in Python?
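On the CREATE TABLE AS question above: SQLite supports it directly, through the same `execute()` call as any other statement (table and data below are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT, price REAL)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [("Widget", 9.99), ("Gadget", 4.50), ("Doodad", 19.99)])

# CREATE TABLE ... AS SELECT copies the result of a query into a new table
con.execute("""
    CREATE TABLE expensive AS
    SELECT name, price FROM products WHERE price > 5
""")

rows = con.execute("SELECT name FROM expensive ORDER BY name").fetchall()
print(rows)  # [('Doodad',), ('Widget',)]
```

Note the new table gets its column names and data from the SELECT, but not constraints or indexes - those would need to be added separately.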
@gentrithoxha7797 · 2 years ago
Hello, great content. I wanted to ask a question I've searched Google for a long time without finding an answer to. Is it possible to use Selenium in headless mode and then, when I get a 200 response, open that same request in GUI mode? I would appreciate it if you responded to this.

@JohnWatsonRooney · 2 years ago
I'm sure you could - open it in headless, get a 200, close the browser and then reopen with headless=False. I've not tried it but I believe it would work

@DM-py7pj · 3 years ago
At no point do you close any of the connections. Does this mean you have a load of open connections in the background, or is there some sort of (out of scope?) clean-up?

@JohnWatsonRooney · 3 years ago
Sure, you don't need to worry about closing the connection, just the transactions with execute(). There's generally no issue leaving it to close itself
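If you'd rather make the clean-up explicit than rely on the connection closing itself, `sqlite3` composes nicely with `contextlib.closing`. One subtlety: `with con:` manages only the transaction (commit on success, rollback on an exception); it does not close the connection, which is what `closing()` adds. A small sketch:

```python
import sqlite3
from contextlib import closing

# closing() guarantees the connection is closed when the block exits;
# the inner `with con:` wraps the statements in a transaction
with closing(sqlite3.connect(":memory:")) as con:
    with con:
        con.execute("CREATE TABLE t (x INTEGER)")
        con.execute("INSERT INTO t VALUES (1)")
    print(con.execute("SELECT x FROM t").fetchone()[0])  # 1
```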
@ronanamelin · 2 years ago
What if the "price REAL" value changes? How would you update the entry then?
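One common answer to the price-change question above is an upsert: insert the row if it's new, update the price if it already exists. SQLite supports this via the ON CONFLICT clause (SQLite 3.24+, which any recent Python bundles; the table layout is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (url TEXT PRIMARY KEY, name TEXT, price REAL)")

def upsert(con, url, name, price):
    # New url -> insert; existing url -> update the stored price
    con.execute("""
        INSERT INTO products (url, name, price) VALUES (?, ?, ?)
        ON CONFLICT(url) DO UPDATE SET price = excluded.price
    """, (url, name, price))

upsert(con, "https://example.com/widget", "Widget", 9.99)
upsert(con, "https://example.com/widget", "Widget", 12.49)  # price changed

print(con.execute("SELECT price FROM products").fetchone()[0])  # 12.49
```

`excluded.price` refers to the value the failed INSERT tried to write, which is exactly the new price.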
@tyc00n · 3 years ago
It would be great if you went from scraped API JSON to NoSQL without duplicates - deduping, so to speak.

@JohnWatsonRooney · 3 years ago
Sure, I'm working on MongoDB videos for next month

@sheikhakbar2067 · 2 years ago
I need your advice regarding SQL. Is there any advantage to learning it if I am pretty comfortable with pandas? So basically the question is: what do I get out of learning SQL, and what are its advantages over pandas? (I keep saying SQL because I see a lot of Python programmers say SQLite, and I wonder how that differs from SQL.) The thing that pushed me to want to learn it was seeing pandas unable to deal with large data frames. How does SQL do in that regard? pandas is great, but once the pickle file exceeds 2 GB, handling data becomes extremely difficult. Is it the same with SQL?

@sheikhakbar2067 · 2 years ago
The background to my question is that I once scraped a website and the output was huge (around 15 GB), so I had to devise a plan to scrape and save the output to around 30 smaller JSON files so I could process the data in pandas.

@finkyfreak8515 · 1 year ago
You should probably use something well established for that kind of data. Try SQLite first - as you can see, it's quite easy. You probably already have a solution for it after 10 months; would you mind sharing your experience?
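On the large-data question above: unlike a DataFrame, SQLite doesn't need the whole dataset in memory. A cursor streams rows from disk, so you can aggregate a 15 GB table in fixed memory by reading it in chunks. A small sketch (the table and numbers are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in practice a file, e.g. "scrape.db"
con.execute("CREATE TABLE pages (url TEXT, size INTEGER)")
con.executemany("INSERT INTO pages VALUES (?, ?)",
                ((f"https://example.com/{i}", i) for i in range(10_000)))

# The cursor streams results; memory use stays flat regardless of table size
total = 0
cur = con.execute("SELECT size FROM pages WHERE size % 2 = 0")
while True:
    chunk = cur.fetchmany(1000)  # process 1,000 rows at a time
    if not chunk:
        break
    total += sum(size for (size,) in chunk)

print(total)
```

And the two tools combine well: store everything in SQLite, then use `pandas.read_sql()` with a WHERE clause to pull just the slice you want into a DataFrame for analysis.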
@b.1851 · 3 years ago
Let's go!!! First comment. Keep up the work John

@JohnWatsonRooney · 3 years ago
Thanks!

@patrykdabrowski8333 · 3 years ago
Hi John, could you please share the VS Code settings you were using in previous videos? The theme looks pretty good, and the terminal too!

@JohnWatsonRooney · 3 years ago
Hi - the VS Code theme is Gruvbox Material, and the terminal is ZSH installed into WSL2. I don't remember the specific settings though, sorry!

@dzeykop · 3 years ago
Hello John, thank you for an amazing, really amazing video again. Your lessons are awesome and you make it look so easy. 😊😊😊👏👏👏 P.S. Please, please make a course on Udemy about all the Python stuff and I'll buy it immediately 😎😎

@JohnWatsonRooney · 3 years ago
Thanks! I'd love to make a course, but I would want it to be worth it to people. I do have a rough plan down

@dzeykop · 3 years ago
@JohnWatsonRooney All your YouTube videos have big value for everyone learning Python. And your teaching style is really pleasant. 👍👍👍👏👏👏 Thank you

@augischadiegils.5109 · 3 years ago
@asmitapatel547 · 3 years ago
Can you scrape the Udemy website?

@stuartwalker9659 · 2 years ago
So I did this using PyCharm and Jupyter and neither created my .db database, and I got an error alert with the name I gave my .db: NameError: (name) is not defined. Did you define your example.db off video?

@magicmedia7950 · 2 years ago
Moving too fast

@Pylogicx · 1 year ago
Please work on your speaking style. I often come to your channel and try to learn something from you. But the way you speak destroys the learning passion. 😢

@noahdoesrandom · 1 year ago
Keep getting this error, can anyone help? Traceback (most recent call last): File "/Users/noah/Documents/Database/main.py", line 7, in cur.execute('''CREATE TABLE IF NOT EXISTS patientnameandage sqlite3.OperationalError: near "in": syntax error
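A `sqlite3.OperationalError: near "...": syntax error` like the one above usually means the SQL string itself is malformed - an unclosed quote, a missing column list, or a stray keyword inside the statement. For comparison, a well-formed version of that CREATE TABLE (the table and column names are guesses based on the comment's `patientnameandage`):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # the commenter would use a file like "example.db"
cur = con.cursor()

# The column list must be complete and the triple-quoted string closed;
# anything SQLite can't parse raises OperationalError with a "near ..." hint
cur.execute("""
    CREATE TABLE IF NOT EXISTS patients (
        name TEXT,
        age INTEGER
    )
""")
cur.execute("INSERT INTO patients VALUES (?, ?)", ("Ada", 36))
print(cur.execute("SELECT name, age FROM patients").fetchone())  # ('Ada', 36)
```

When debugging, print the exact SQL string being executed - the word quoted in the error message points at where the parser gave up.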
@airbnbguest6370 · 3 years ago
How does this compare to storing the data in a pandas DataFrame and then exporting the DataFrame to SQL? Is this just a better way to save the scraped data into a db in real time, line by line? I need to get better at extract-transform-load processes for the scrapers I want to run consistently to build a time-series picture. I'd be interested to see some videos on simple ways to set up a scraper to run, say, once per week and push data to a SQL database on AWS that can then be queried via a GraphQL API using something like hasura.io, and then how to monetize that dataset on RapidAPI or make your own site for it with something like blobr.io.

@JohnWatsonRooney · 3 years ago
Yes, skip pandas if your end goal is just storing the data. Put it all into a database like this and then pull out the bits you want to analyse into a pandas DataFrame. I've done some videos on cron jobs before, but I do prefer the project approach - I'm working on a series now that takes scraped data and saves it to a database; I could adapt it to run each week and then work on a front end to display it

@mcooler1 · 3 years ago
@JohnWatsonRooney This is exactly what I would like to see. Please consider doing a video about how to automatically (periodically) run a scraper + save to a database + show it in a front end. Many thanks.
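The "pull out the bits you want to analyse" step from the reply above can be sketched with plain sqlite3 (the table and data here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in a real pipeline, the file the scraper writes to
con.execute("CREATE TABLE prices (item TEXT, week INTEGER, price REAL)")
con.executemany("INSERT INTO prices VALUES (?, ?, ?)", [
    ("widget", 1, 9.99), ("widget", 2, 10.49), ("gadget", 1, 4.50),
])

# Pull only the slice you want; sqlite3.Row gives dict-style column access
con.row_factory = sqlite3.Row
rows = con.execute("SELECT week, price FROM prices WHERE item = ? ORDER BY week",
                   ("widget",)).fetchall()
series = {row["week"]: row["price"] for row in rows}
print(series)  # {1: 9.99, 2: 10.49}
```

For heavier analysis, `pandas.read_sql("SELECT ...", con)` would return the same query result as a DataFrame directly.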