
Read Giant Datasets Fast - 3 Tips For Better Data Science Skills 

Python Simplified
236K subscribers
49K views

We've learned how to work with data. But what about massive amounts of data? As in: files with millions of rows, tens of gigabytes in size, and ages spent staring at your computer waiting for everything to load?
Luckily, in this tutorial, I will show you how to work with a gigantic dataset of Amazon Best Seller Products that has over 2 million rows and takes up 11GB on disk 😱😱😱
A huge shoutout to Bright Data for supplying it and helping this video come to life!
⭐ you can get a free sample of this dataset here:
get.brightdata.com/pythonsimp...
Additionally, I will demonstrate that slight improvements to your code make a huge impact on the processing speed - regardless of how strong and powerful your computer is!!
For this, we will compare the performance across 2 different systems:
🖥️ my custom build new-gen PC
💻 my poor old laptop (yes, the one that is held together with scotch tape and is barely operational 😅)
You will see that well-written code can even make my old laptop run like a supercomputer! 💪💪💪 #python #datasets #brightdata #data #ecommerce #datascience #pandas #pythonprogramming
📽️ RELATED TUTORIALS 📽️
----------------------------------------------
⭐ Anaconda Guide For Beginners (Install Jupyter Notebook):
• Anaconda Beginners Gui...
⭐ Pandas Guide For Beginners:
• Basic Guide to Pandas!...
⭐ For Loop For Beginners:
• Python For Loops - Pro...
⏰ TIME STAMPS ⏰
----------------------------------------------
00:00 - intro
01:05 - intro to working with professional data platforms
03:38 - complexity of loading very large datasets
06:43 - focus on relevant data ⭐
09:09 - load data in small chunks ⭐
10:25 - access and change data chunks values
12:19 - save modified data into a new csv file ⭐
14:49 - Thanks for watching! 😀
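⭐ the three ⭐-marked tips above as a minimal code sketch (file and column names follow the pinned comment - adapt them to your own dataset):

import pandas as pd

cols = ["final_price", "image_url", "title", "url", "categories"]

# Tip 1: focus on relevant data - only load the columns you actually need
data = pd.read_csv("bd_amazon.csv", usecols=cols)

# Tip 2: load the data in small chunks instead of all at once
for chunk in pd.read_csv("bd_amazon.csv", usecols=cols, chunksize=50000):
    pass  # process each chunk here

# Tip 3: save the trimmed data into a new, much smaller CSV file
data.to_csv("modified_data.csv", index=False, encoding="utf-8")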
🤝 Connect with me 🤝
----------------------------------------------
🔗 Github:
github.com/mariyasha
🔗 Discord:
/ discord
🔗 LinkedIn:
/ mariyasha888
🔗 Twitter:
/ mariyasha888
🔗 Blog:
www.pythonsimplified.org
💳 Credits 💳
----------------------------------------------
⭐ Beautiful titles, transitions, sound FX, and music:
mixkit.co
⭐ Beautiful icons:
flaticon.com
⭐ Beautiful graphics:
freepik.com

Science

Published: Jul 30, 2024

Comments: 185
@xr1140 a year ago
It would have been complete if you had shown how to save to a new file when loading in chunks.
@PythonSimplified a year ago
That actually was the original plan!!! I had it filmed and eventually edited it out! 😀😀😀 The reason: after experimenting on 3 different computers, I noticed it was a major waste of time! Saving the data in chunks is much, much slower! You also need to make sure to set header=False, otherwise each chunk inside the CSV will begin with the column names. The way around it is to store the column names in advance and then use the for loop to add each individual chunk without its headers. If you're still interested - try the following:

cols = pd.DataFrame(columns=["final_price", "image_url", "title", "url", "categories"])
cols.to_csv("modified_data.csv", mode="a", encoding="utf-8", index=False)

data = pd.read_csv("bd_amazon.csv", chunksize=50000, usecols=["final_price", "image_url", "title", "url", "categories"])
for idx, chunk in enumerate(data):
    chunk.to_csv("modified_data.csv", mode="a", encoding="utf-8", index=False, header=False)

This should do the trick (but much slower, in more lines of code, and with much less elegant syntax 😉). I hope it helps! 😀
@xr1140 a year ago
@@PythonSimplified Thank you for the explanation. I'm sure a lot of people interested in the subject will appreciate it.
@hasanzurqa911 a year ago
@@PythonSimplified Can I draw you? I am a painter and I am sure you will like it.
@Dampferfrosch a year ago
@@hasanzurqa911 Python is beautiful!!
@zwollekira8202 a year ago
@@PythonSimplified This guy is amazing! Thank you PythonSimplified!
@phsopher a year ago
You can time things in a Jupyter notebook using a built-in magic command. If you wanna time a single-line statement, prefix it with %time. If you wanna time a whole cell, put %%time at the start. Similarly, there are the magic commands %timeit and %%timeit, which will run your code multiple times and report the fastest time.
@PythonSimplified a year ago
WHAT SORCERY IS THIS, PHSOPHER??? 🤩🤩🤩 Just tried it and it is absolutely incredible!!! I had no idea we could do that, and I've been using Jupyter since my very first print statement in Python!!! 😱 Folks, you must try the following:

%time data = pd.read_csv("data.csv", usecols=["final_price", "image_url", "url", "title", "categories"])

Nothing short of magic!! I see you as Gandalf now 🧙‍♂️🧙‍♂️🧙‍♂️
@vasylpavuk391 a year ago
How about testing Pandas vs Polars?
@vishaldas6346 a year ago
@@PythonSimplified Do we have a similar sort of magic command in VS Code? 😂🤔
@franky12 a year ago
@@vishaldas6346 You can also run Jupyter notebooks in VS Code.
@bicycleninja1685 a year ago
Good tip. Just tried it on Carnets Plus on iPad, and it worked.
@shanesteven4578 a year ago
Your enthusiasm shines through as usual. I know there have been some difficult times since the introduction of OpenAI etc., but you must not stop doing what you're doing, because thousands of people are relying on you, your wonderful teaching skills, and your Python (amongst other) knowledge. Thank you.
@PythonSimplified a year ago
Thank you so much Shane!!! 😃 I definitely spent way too much time going over the comments of my ChatGPT vlog, and my only regret is that I only focused on the job aspect rather than the whole doomsday package 🤪 hahahaha The reason why I was off the radar was not related to the vlog though - I was finishing up a rough semester in university and just landed in the Middle East to visit my family. I'll be gone for a bit longer, but will come back with a brand new AI Simplified series! 🥳🥳🥳 The first video is all about the question "can computers think?" and it's a huge tribute to Alan Turing! So please stay tuned 😉
@faisalee a year ago
Beautifully done! Please never stop making these videos :)
@mtmanalyst a year ago
Great presentation - your explanations are the best!!
@jorgevector8153 a year ago
Your videos are extremely didactic and easy to understand; they are the most beautiful and elegant projects on YouTube! Congratulations.
@marcq1588 a year ago
I am just learning Python and found some of your videos. You are very good and very clear. I had issues installing Anaconda on my Windows 11 computer - it was very slow and crashed most of the time. I have good hardware, so that was not the issue. I might try it again, as this Jupyter looks good. Thanks for your time and effort making these videos.
@HadiLePanda a year ago
Very clearly explained and with your usual enthusiasm - keep it up! :)
@danielschwan3298 a year ago
I need to get more into Data Science and Machine Learning processes, and such videos help me a lot. Thanks for that.
@PythonSimplified a year ago
Absolutely! The chunking solution goes hand in hand with Machine Learning batching 😉 When we load data into a neural network (or other types of models) we load it in batches rather than all at once! If you'd like to see a specific example in the realm of ML, I have a special beginner-friendly tutorial covering it: ⭐ Machine Learning Databases and How to Access Them: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8z2oLfK2sIc.html This gives you a really nice introduction to the PyTorch framework as well 🙂 It's part of a nice AI and ML Simplified playlist which you may find helpful on your exciting new journey: ru-vid.com/group/PLqXS1b2lRpYTpUIEu3oxfhhTuBXmMPppA (I'm actually working on a new AI Simplified series, starting very soon on this channel, so definitely stay tuned! 😀)
@fredoh2768 a year ago
Wow! Very helpful video. I was dealing with this problem. Love your videos, thanks!
@georgevillac a year ago
Thanks for sharing this content - I learn a lot from you, keep going! 💪
@ciscodea a year ago
This was beautiful, just in time for my work. Keep it up!
@jeuxmathinfo9578 a year ago
Great video!!! As usual! 🏆🏆🏆
@rayoh2011 a year ago
Great information and knowledge! And love your energy!
@PythonSimplified a year ago
Thank you so much :)
@paulocoelho558 a year ago
Hi Mariya, I always wanted to know more about giant datasets and Python. Thank you. I am looking forward to more simplified Python. 😉😉
@raimonvibe a year ago
Great video! If I'm going to apply these skills I'll just look up the syntax 😄.
@eevlos a year ago
Thank you for such an amazing video!
@user-xs4pv6em6k a year ago
A delightful presentation, thanks.
@d-rey1758 a year ago
Great vid! Are you going to do any vids on Natural Language Processing (NLP), with tools like spaCy, NLTK, Gensim, or CoreNLP?
@modatheralawad2983 a year ago
Thank you a lot Mariya, you are doing a great job - may God bless you and your efforts. We need a clear map to data science and machine learning in long-form videos, if you will.
@JojiThomas7431 9 months ago
Beautifully explained
@dimitriosdesmos4699 a year ago
I love your clarity.
@PythonSimplified a year ago
Super happy to hear, Dimitrios! Thank you! 😀
@martella13 a year ago
I would wear a t-shirt that says "the chunk comes as-is" :D haha. Another great video! Thanks for your hard work!
@higiniofuentes2551 a month ago
Thank you for this very useful video!
@marcinzale a year ago
Great video! As usual. Thanks!
@PythonSimplified a year ago
Thank you so much Marcin!! 😃 😃 😃
@OPlutarch a year ago
I loved the new intro so much!
@hughielow563 a year ago
Would you mind doing a follow-up on this where we do decide to iterate through the batches? (i.e., with the Amazon example app, how would we best implement batch processing in this context?) Thanks. Your vids are great.
@ssbrunocode a year ago
The most beautiful voice on YouTube - thank you for the well-narrated and produced content ;)
@mellowbeatz93 5 months ago
For me personally, the best channel out there to learn Python!!! I am not kidding! Thank you so much! ❤
@PythonSimplified 5 months ago
Thank you so much for the incredible feedback!!! Super happy you like my tutorials! 😁😁😁
@pd2871 a year ago
Saving in pickle or feather format instead of CSV will be much faster and consume less memory.
@PythonSimplified a year ago
Absolutely! Thank you for the awesome tip Prakash! 😀 Pickled data is fantastic in terms of reading speed! It's a bit limited in terms of readability (and as a result - security! You wouldn't necessarily notice any shenanigans inside what seems to be normal data 😉). It's a great solution if you don't mind a slight learning curve, and since we are the ones who pickle our files - security is not a problem 😀 If anyone is curious about pickling, here's the documentation: docs.python.org/3/library/pickle.html And please let me know in the replies below if you'd like to see a simplified tutorial about the pickle module 🙂
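In the meantime, here's a minimal sketch of both formats (the file names are made up, and feather needs the pyarrow package installed):

import pandas as pd

df = pd.read_csv("bd_amazon_small.csv")

# pickle: fast binary snapshot, Python-only - only unpickle files you trust
df.to_pickle("data.pkl")
df = pd.read_pickle("data.pkl")

# feather: fast, language-agnostic columnar format
df.to_feather("data.feather")
df = pd.read_feather("data.feather")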
@diwakar_tsn a year ago
The channel is active again after a long time ❤️🙂
@barrykruyssen a year ago
Thank you, another great tutorial. Once we have the dataset, can we search (SQL-type search, any indexing?) inside the data? Maybe a follow-on tutorial?
@visualish a year ago
Thank you for your great video. But perhaps for 15 GB of data it's better to use Polars instead of Pandas. It has a similar syntax to Pandas, so you don't find yourself on a different planet, and it uses Rust code for faster execution. It is particularly suitable for processing large datasets, as it has built-in support for multi-threaded and multi-core processing.
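For example, roughly (a sketch - the file and column names are assumed from the video):

import polars as pl

# lazy scan: nothing is read until .collect(), so Polars can
# prune unused columns and parallelize the work
lf = pl.scan_csv("bd_amazon.csv")
df = lf.select(["final_price", "title", "categories"]).collect()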
@RsD1968 a year ago
Very interesting. Please do more videos on handling Pandas DataFrames + Tkinter: logical operations, comparisons, filtering, unique values, etc., and how to include the result in a frame on the window.
@nikluz3807 a year ago
The new intro is nice :)
@nishantpanigrahi5326 a year ago
Can this be done using JSON as input as well? Actually, I saw your SQLite video too. The problem I'm really trying to crack is that I have a dataset where each record (300 million records in total) is a nested JSON object. I have the option to convert it into an SQLite3 DB, but I am not sure how we can store something like the 'Categories' field in your dataset, which has an indefinite number of array elements within it. The end goal is to write an app which can send SQL queries to get filtered results from the database. Perhaps you can advise.
@guitarready a year ago
Thank you for this informative video. Do you think you can make a video on machine learning where we take a dataset and train and test a model to predict some event? It will help many of us a lot. I really enjoy the way you explain these concepts. Keep up the good work!
@PythonSimplified a year ago
I'm working on a brand new AI series as we speak 😉 In the meanwhile you can check out my old AI Simplified playlist: ru-vid.com/group/PLqXS1b2lRpYTpUIEu3oxfhhTuBXmMPppA I have 2 videos showcasing the entire training + neural network making process (N-gram modelling): ⭐ Build a Neural Network with Pytorch - Storyteller PART 1: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mzbJd0NhW2A.html ⭐ Train a Neural Network - StoryTeller PART 2: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GTyTG3XzPq8.html I must warn you though - it's a very stupid neural network, as I haven't optimized it 😅 These were some of the first tutorials I ever filmed, so I wasn't particularly good at explaining. The new series will be much, much better 😃
@guitarready a year ago
@@PythonSimplified Thank you so much!
@davidtindell950 a year ago
TUVM = Very Timely and Helpful!
@PythonSimplified a year ago
Thank you David! Super happy to hear! 🙂
@vishaldas6346 a year ago
What if my project requires loading the data as-is into an Oracle database? I have done these tasks of loading 7 to 10 million records into the database using chunks, or you could say in batches. I am not sure if there is any convenient way to reduce the time.
@saqqara6361 a year ago
Is there no code completion and code inspection/parameter preview available for Jupyter? (sorry, I'm a Python beginner ;-) )
@muhammadsaqib453 a year ago
I like your knowledge and tutorials
@PythonSimplified a year ago
Thank you so much! Super happy to hear! 😀😀😀
@sagejpc1175 a year ago
Wanted to ask if you are gonna do another hackathon this summer - I really enjoyed the one from last year!
@AlbayLuis93 a year ago
Hi! Love your videos! I learn so much, and in an easier way! Question: which laptop would you recommend for an entry-level programmer/data analyst - Mac, Windows, or Linux? I don't mind the Ubuntu GUI. Thank you!
@ronaypeter a year ago
Super thanks
@aonbrostin3579 a year ago
First - that was an awesome walkthrough, Mar. Second - it would be nice to see more clustering workloads, like Koalas instead of Pandas. Working on a single node is acceptable/tolerable, but to future-proof our skills we'll have to work with parallelization, aka Spark/Dask/Ray. I reckon you can use a free account to run your Jupyter notebooks from a company-we-all-know-that-uses-Spark-as-its-core-engine... something to do with bricks...
@123arskas a year ago
Awesome. I wonder if loading the data in chunks and appending them into a single DataFrame leads to a faster FULL DataFrame that we can work with altogether.
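Something like this, I imagine (untested sketch - file and column names assumed from the video):

import pandas as pd

chunks = pd.read_csv("bd_amazon.csv", chunksize=50000,
                     usecols=["final_price", "title", "categories"])
df = pd.concat(chunks, ignore_index=True)  # stitch the chunks into one DataFrame

(Though I'd guess peak memory ends up similar to a plain read_csv, since the full DataFrame still has to fit in RAM.)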
@CaribouDataScience a year ago
My personal record is the Citibike data: 50 CSV files containing 300+ million rows. I imported it into Excel using Power Query. Yep, it did take a few minutes.
@OhMyPy 11 months ago
@Python Simplified Can you apply this Python code inside SharePoint to overcome the 5K record limit?
@IronBadger87 a year ago
Hi Mariya, I have a question regarding programming logic, algorithms, and data structures: what do you recommend for a beginner who wants to learn them? I am totally confused about what I should be doing and what the best resources are... I know Java syntax, but problem solving is a showstopper for me, and I don't know what to study or which resources to use to improve... Please help! I am a desperate beginner. Thank you! (I am learning Java atm, but I love your videos so I follow you! :D)
@normandrioux2529 3 months ago
Great video, very helpful! But how would you handle 6 to 10 GB of data spread across 11,000+ XML files?
@benwilde1768 a year ago
Hoping one of the following videos will be about processing giant datasets with vectorization, using apply() instead of for loops, etc. 🤞
@PythonSimplified a year ago
Haven't had a chance to explore it yet - thank you so much for suggesting! 😃 My guess is: call .apply() on each individual chunk using my code from minute 10:25; it should speed things up quite a bit 😉
@rickeyestes a year ago
Thanks!
@PythonSimplified a year ago
Thank you so much Rickey! I really appreciate it! 😃 😃 😃
@Shawn-cr8ep a year ago
I use Parquet because it's so much faster, takes up less disk space, AND saves data types - which is really useful if you're optimizing dtypes for speed/storage. And it's compatible with R/Power BI/etc. When I'm starting from CSV, though, this is an awesome tip, thanks! 🙏 I'm shocked at the speed increase!
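Roughly what my round trip looks like (a sketch - file and column names borrowed from the video; needs pyarrow or fastparquet installed):

import pandas as pd

df = pd.read_csv("bd_amazon.csv", usecols=["final_price", "title", "categories"])
df["title"] = df["title"].astype("string")  # pin down dtypes once...
df.to_parquet("bd_amazon.parquet")          # ...and Parquet stores them

df = pd.read_parquet("bd_amazon.parquet")   # dtypes survive the round trip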
@k3agan a year ago
Parquet with Polars ;)
@hercion a year ago
You should have a look at the Python lib datatable. It was specially designed for huge datasets and is orders of magnitude faster than pandas, and it can memory-map data to process datasets that don't fit into RAM.
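A minimal sketch to get started (the file name is made up):

import datatable as dt

frame = dt.fread("bd_amazon.csv")  # multi-threaded CSV reader
df = frame.to_pandas()             # convert only when you need pandas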
@mr.electron5295 a year ago
Hey ma'am, I am a beginner in the field of programming and I have a doubt: I am really bad at maths, so if I want to do AI or ML in the future, do I need to learn maths?
@skroyeducation2166 a year ago
Can we combine Kivy with Flask to use a Kivy program as a web app?
@yeahjustlikethat a year ago
Awesome! We can also use Dask or worker-based distributed approaches. Perhaps a follow-up for you?
@LostPlaceChroniken a year ago
I was really surprised how fast Python loaded that huge dataset!
@PythonSimplified a year ago
Me too! In contrast to Excel, my new PC was actually able to load it as-is! Excel, on the other hand, collapses almost immediately, regardless of the computer system 🙃
@Bojan456 a year ago
I'd love to see a version of this video for geospatial data - manipulating such data is often complicated by the geospatial aspect. Anyway, great video!
@PythonSimplified a year ago
Thanks for the suggestion and for the lovely comment! 🙂 I can't say that I'm an expert in mapping and coordinates... but if I stumble upon a nice geospatial dataset, I'll definitely explore it and see if I understand it well enough to film a tutorial about it 😉
@theaxisofinsight a year ago
What font are you using in your terminal? (the part where you activate the environment in Anaconda)
@PythonSimplified a year ago
Just the default font that Anaconda comes with 😃
@familytrap2849 a year ago
Welcome! I have an off-topic question: what language was Civilization VI developed in? Please answer as soon as possible, thank you.
@justinmoore4946 a year ago
I was wondering if you were going to do a GUI app series on PySide6 or Qt Designer? Love the work!!
@PythonSimplified a year ago
Next on the menu - a brand new AI Simplified series 😉 I'll post a few GUI projects in between, but my main focus at the moment is there 🙂
@justinmoore4946 a year ago
@@PythonSimplified I love it! Can't wait to watch. I am currently using your Qt5 tutorial to help with my chatbot with GPT-3.5, where the user can select a character and the bot will emulate chat with that character. It's super fun!
@katrinabryce a year ago
My approach when downloading very large CSV files is to use:

data = requests.get(url, stream=True).iter_lines()

That returns an iterable over the data but doesn't start downloading it at this stage. The first row will be the headings, so get that with something like:

headings = next(data).decode("utf-8").split(",")

Then loop over the body of the data with a for loop, list comprehension, or multiprocessing.Pool().map(), dump each line into a database, and run queries on the database to analyse it. Or, if it isn't quite so big, put it in a NumPy array and work on it from there.
@PythonSimplified a year ago
Thank you for the awesome tips, Katrina!! You're a rockstar!! 🤩🤩🤩 This approach reminds me of loading data with C++! It looks like it gives you full control over your data and doesn't leave any grey areas for libraries like Pandas to fill in. I LOVE IT!!! Do you find that multiprocessing works better than multithreading when it comes to loading/storing data? I've heard from online folks that multiprocessing is time-costly, but it's highly recommended for CPU-intensive tasks... which I find a bit confusing! hahaha I wonder what's your take on that 🙂 (sorry, I haven't had a chance to follow up on our ChatGPT conversation. The comment section there got absolutely insane after a short while. It got substantially more comments than any one of my videos... I wasn't expecting that at all! 😅)
@katrinabryce a year ago
@@PythonSimplified I guess it depends. If all your data is strings, it isn't particularly CPU-intensive. I am usually downloading files where most of the columns need to be converted to ints and floats, and generally at least one of them needs to be converted to a date, and it needs more than one CPU thread to keep up with the maximum download speed I can get. @functools.lru_cache can sometimes help as well, but not always - it depends on the data.
@katrinabryce a year ago
Another thing I've found: multiprocessing can be really slow to get up and running on Windows, but it is much faster on FreeBSD and Linux. If you are using a Windows computer, put your Python/Jupyter in WSL 2, then amend your Jupyter config file to run a Windows instance of the browser.
@PauloEffects a year ago
Any tutorial on Python, Kivy, and Bluetooth?
@alexandrohdez3982 a year ago
Hi, you are the best 👏👏👏🌻🌻🌻
@PythonSimplified a year ago
Thank you so much Alexandro! 😃
@Mr-Casko a year ago
You're way smarter than me... God bless 🤙
@jawadmansoor6064 a year ago
What did you do? What sorcery is this? How does saving it under a different name improve memory and processing speed? Please explain.
@PythonSimplified a year ago
We saved it after disposing of 35 columns, so the dataset was already much smaller before we re-saved it 😉 In addition, I believe Pandas optimizes the data type of each column before exporting your DataFrame, so there should be a boost of efficiency there as well 🙂
@jawadmansoor6064 a year ago
@@PythonSimplified Ah, I expected the dropping of unnecessary columns, but I did not know that pandas can optimize data on export. (I think this feature is new and will be included in pandas 2.0, or so I heard.) Excited for the new pandas though.
@christiaan3315 a year ago
Decoding problem with the utf-8 codec - I had to add encoding='latin-1'.
@pieterbosch87 a year ago
I would really suggest using Polars instead of Pandas for big files... it can be 7 times faster - use timeit to measure the difference. When handling that amount of data, every minute counts. I love Pandas, but Polars is WAY faster. Cheers - nice tutorial! Though, did you buy the full set? How did you get access to the full dataset?
@yuhgdhg2768 a year ago
You are just amazing - brains with beauty 😍.
@PythonSimplified a year ago
Thank you so much Jk!!! 😀😀😀
@mschon a year ago
Could you talk about CustomTkinter?
@srs241 a year ago
Can you explain why, after saving the data and opening it as a new file, it took so little time to load instead of taking long?
@PythonSimplified a year ago
The reason is that we're dealing with a much, much smaller file! We reduced it from 11GB to 600MB, so it's not nearly as challenging to load as the previous one 😃
@tastonic30 a year ago
Please, Mariya, this is awesome! Can you do something on Python data structures and algorithms? 🙏
@roros2512 a year ago
Have you considered a voice career, like singer or actress? Your voice is deep and clear - I like it very much. Thank you for all your work; you do a lot for Python learners like myself.
@thunde7226 a year ago
Wow, you are making magic! Just keep doing these videos 🎉❤ ;) bye
@yekhtiari 8 months ago
Could you please make a video on how to quickly add those CSV files to a SQL table?
@PythonSimplified 8 months ago
I have a bunch of these already 😉 ⭐️ SQLite Basics: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Ohj-CqALrwk.html Please don't forget to add connection.commit() to insert all values into the table - it's in the description but was omitted in the video. ⭐️ Webscraping Databases: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-MkGQmZoMuRM.htmlsi=Q_Z3jFfoLnF54Bxb You'll find exactly what you're looking for in this video - CSV to SQL - just skip the webscraping part 😃 Cheers!
@yekhtiari 8 months ago
Many thanks. I'll watch them now.
@sepehr_moghani a year ago
It's so refreshing seeing code that runs smoothly right after you write it and hit run. My code almost always gets an error after I type it.
@JBMJaworski a year ago
Thank you for your help Mariya! ☺️ On the occasion of Women's Day, I would like to wish you all the best, noblest, and most beautiful! :) Please keep developing such great content on your YouTube channel! ❤️ 🙏
@kachiimo2355 a year ago
Do you have, like, a Python crash course tutorial series?
@diwakar_tsn a year ago
What happened with the custom GPT???
@Tobs_ a year ago
Good data crunching - now we just need quantum computers so we can work on data that hasn't been created in this universe yet.
@PythonSimplified a year ago
or even better - created in a parallel universe 😉
@fullthrottlevishal 3 months ago
Not working with a 50 GB dataset - any other alternative? I'm trying to import the file in Kaggle and it keeps on crashing.
@PythonSimplified 3 months ago
If you have a CUDA-compatible GPU - try opening it with cuDF pandas. I have a tutorial on how to set it up, and you can use a regular read_csv() command to read via the GPU rather than the CPU (if the dataset is compatible, of course): ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-9KsJRyZJ0vo.htmlsi=hnHA2gW4GzDBykDH I hope it helps! Otherwise - try a library called Polars or other Pandas alternatives :)
@fullthrottlevishal 3 months ago
@@PythonSimplified I am using Kaggle on a MacBook with 8GB of RAM. I'll try this one and will get back to you. Thank you for the reply - you're inspiring many.
@tomislam a year ago
How do you know the names of the columns before creating the DataFrame?
@PythonSimplified a year ago
Hi Tom!! 😃 I checked with data.columns (somewhere around the middle of the tutorial, I believe). Otherwise I wouldn't have the slightest clue, as this dataset won't open in Excel (and probably its Mac equivalent too 😉). The software completely collapses, and my only way of accessing the contents of this file was via Python 🙃
@tomislam a year ago
@Python Simplified Correct me if I'm wrong, but `df.columns` comes AFTER you have created the df, and creating that df is resource-consuming because of the size of the dataset. My approach to getting to know my data would be to create a much smaller df using `nrows`. Something like:

dummy_df = pandas.read_csv('large_data.csv', nrows=50)

This will contain only 50 rows, but I'm more interested in the column names - dummy_df.columns would now give me the column names that I can use in chunks. 😀
@willi1978 9 months ago
Will it be about DuckDB?
@WebWise_Wallet a year ago
I'm still waiting for your first English course
@rogerbraintree9552 a year ago
I wonder how to use AI to build an app that models a sim living on a plot of land in a specific geographical location. You'd enter the geographical location, the size of the plot, and the current state that land is in, and the app tells you what the yield of a combination of crops and plants will be. As the app develops, it will be able to model whole communities living off-grid. This would be a very interesting project to work on - plenty of scope for continual development as more and more data is fed into the app. It would have possibilities for graphics and animations to really simulate the meteorological conditions of a given location and compute all practical aspects required to live in that region. It would be a big project, but I'm sure one that many programmers would want to work on.
@emilie1977 a year ago
I have 6 million rows of health data and Jupyter crashes. Thank you for a solution with chunks!
@utkarshgaikwad2476 a year ago
Save it to Parquet files - it's the fastest method to save and store large datasets 😎
@gusolive a year ago
Mega!
@hermano511 a year ago
Good afternoon! When will you come here to Brazil to visit us? I work in an Arab restaurant - when you come, let us know. You are very charismatic. Thanks for the videos; we learn a lot from you.
@PythonSimplified a year ago
Definitely a fantastic incentive! I've just landed in the Middle East to visit family, but I'd love to go on an actual vacation to a place I've never been before - Brazil is certainly on my to-go list 😉 Thank you so much for the lovely comment, and if I'm ever in Brazil, you can count me in for a big shawarma plate!! 😃
@hermano511 a year ago
@@PythonSimplified It will be a pleasure to receive your visit when you are here in Brazil - I can't wait.
@techbyalby5572 a year ago
(Present and) Future challenges concern big data.
@PythonSimplified a year ago
The biggest challenge from my perspective is the lack of privacy. We don't notice how much of our data we voluntarily provide to all kinds of services/software. This data is not always used in our favour, and it's very often sold to other services/software of which we are unaware 😉
@edpach 11 months ago
Would love to see the same dataset comparison but using Polars and its lazy operations
@PythonSimplified 11 months ago
Thanks for the request! Will look into it 😉
@marklagana2769 a year ago
You should check out timeit:

from timeit import timeit

n = 10  # number of runs
result = timeit(stmt="main()", globals=globals(), number=n)
print(f"Execution time is {result/n} seconds")
@PlamenAtanasov a year ago
Hello, I really like your tutorials. I have a LARGE JSON file (22GB) and I cannot open it with pandas read_json. I would be really thankful if you made a similar tutorial for JSON files.
@PythonSimplified a year ago
The exact same techniques will work with read_json as well 😃 You can combine the usecols and chunksize properties to load the dataset bit by bit - no need for a special tutorial, as it's not really different from read_csv 😉
@PlamenAtanasov a year ago
@@PythonSimplified Thank you for your response, but I am getting this error: TypeError: read_json() got an unexpected keyword argument 'usecols', and I cannot see usecols in the documentation for read_json.
@PythonSimplified a year ago
Aha! You're right, usecols is not a property of read_json()! 😱 The problem with JSON files is that each one is structured differently and requires a great level of customization. My suggestion: use chunksize to have a look inside the individual items of your file, and try combining it with orient='columns' to get a table-like structure for each chunk. From there, you can call the .drop() method on each chunk to dispose of unnecessary columns and then save the much smaller chunks into a new CSV file (using the code example I shared in the pinned comment up top 😉). I hope it helps! It's hard to tell without seeing the actual structure of your JSON file, and that's something you can only find out after successfully loading it 🙃
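One more note for anyone trying this: pandas only accepts chunksize on read_json together with lines=True (JSON Lines format). A minimal sketch, with made-up file and column names:

import pandas as pd

reader = pd.read_json("large_data.jsonl", lines=True, chunksize=50000)
for chunk in reader:
    trimmed = chunk.drop(columns=["unwanted_column"])  # keep only what you need
    trimmed.to_csv("modified_data.csv", mode="a", index=False, header=False)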
@PlamenAtanasov a year ago
@@PythonSimplified Thank you, I will try it.
@Hoardofcoinslexminingfun a year ago
A pleasure to watch - I've gone through half the channel. That smile is a killer))))
@candrayudhatama3397 a year ago
I'm working with >54,000,000 records. The ETL process from Oracle to SQL Server takes 8 hours... 😅
@DroisKargva a year ago
WHERE IS THIS JUMP SCARE SOUND! I GOT SCARED JESUS 0:03
@MarceloNegreiros7 a year ago
Where are you?
@siamahmed8287 a year ago
Not into data science, tbh - just watching because of you hehe. Btw, how have you been?
@siamahmed8287 a year ago
@@MountMatze translate it?
@PythonSimplified a year ago
Google Translate got it as "Open the window"... but I don't know what it means hahahaha 🙃
@siamahmed8287 a year ago
@@PythonSimplified I was about to use that, but then I fell asleep lol
@user-qk7sd4xm5c a year ago
@@PythonSimplified It means "come down", but I don't understand the context :P
@PythonSimplified a year ago
hahahaha I think the three of us are sitting here confused, looking for a logical explanation, while @@MountMatze may have already known that we wouldn't be able to find it 🤣🤣🤣
@marksonson260 a year ago
Use f-strings instead of concatenating your strings!
@PythonSimplified a year ago
One is not better than the other, it's just a matter of personal preference 🙃