Arpit Agrawal (Elastiq.AI)
Comments
@houssem25000
@houssem25000 1 month ago
So I don't have to care about performance when I make projects?!
@loufoua7640
@loufoua7640 1 month ago
Thank you very much, very well explained.
@yolemmein
@yolemmein 1 month ago
Very informative. Thanks.
@vikasbohra3253
@vikasbohra3253 1 month ago
Very nice 👌🏻👌🏻👌🏻👌🏻👏🏻👏🏻👏🏻
@mulshiwaters5312
@mulshiwaters5312 2 months ago
Is there any mechanism to move files from GCP (GCS storage) to another cloud provider like Azure?
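A minimal sketch of one way to do this, assuming the google-cloud-storage and azure-storage-blob Python packages; the bucket, container, and connection string are hypothetical placeholders:

```python
# Hedged sketch: copy objects from a GCS bucket to Azure Blob Storage.
# Bucket/container names and the connection string are placeholders.
from azure.storage.blob import BlobServiceClient
from google.cloud import storage

gcs = storage.Client()  # uses default GCP credentials
azure = BlobServiceClient.from_connection_string("<azure-connection-string>")
container = azure.get_container_client("my-container")  # hypothetical

for blob in gcs.list_blobs("my-gcs-bucket"):  # hypothetical bucket
    # Fine for small objects; large transfers should stream or use a
    # managed tool instead of pulling everything through one machine.
    data = blob.download_as_bytes()
    container.upload_blob(name=blob.name, data=data, overwrite=True)
```

For bulk moves, a managed route such as azcopy or Azure Data Factory (both of which can read from GCS) is likely a better fit than hand-rolled copies.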
@Mju98
@Mju98 4 months ago
Hello sir. I tried to import 400k rows into the BigQuery sandbox, but ended up with errors. Is it possible to import that much data? Please, anyone, help me; it's urgent (interview assignment).
@13990
@13990 4 months ago
Nice video, but when narrating it would be better to expand the screen rather than use side-by-side videos.
@vijaysoni6517
@vijaysoni6517 5 months ago
Amazing 👏
@samuelbarros4005
@samuelbarros4005 6 months ago
And how can I do that to predict how many sales I really have, using sales and refunds?
@Hrzzz1
@Hrzzz1 8 months ago
Can we download this database to do some tests? A nice idea for the next video would be to compare this same situation with a NoSQL database.
@jiyasingh664
@jiyasingh664 8 months ago
Amazing explanation for beginners, thanks a lot for this informative video!
@vijaybhaskar5333
@vijaybhaskar5333 9 months ago
Good discussion. Informative. You guys deserve more views.
@FlowGPTOfficial
@FlowGPTOfficial 9 months ago
Visit www.flowgpt.com and join the world's largest prompt community!
@DanThe1Man
@DanThe1Man 10 months ago
Nice video. I see this asked a lot, too.
@DanThe1Man
@DanThe1Man 10 months ago
Great talk.
@nfacundot
@nfacundot 10 months ago
Hello, can I connect to it from PHP?
@davidlean8674
@davidlean8674 11 months ago
This is nice but not that impressive. Obviously, the table is stored using columnstore compression techniques, so you only need to read the columns in the select list, and those are typically grouped in blocks of 1M rows or more. The block header pages keep rowcount values, so you are not reading every row, just the block headers of a single column. If your query forced a scan of all rows in the block, say by combining the column with other fields in the same row or in other tables before filtering, you would no longer be in the columnstore sweet spot, and the difference in query speed would be more striking. Still good though, as that is a common use case.
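A rough way to see this effect yourself, assuming a BigQuery table and the google-cloud-bigquery package; the project, dataset, table, and column names below are hypothetical:

```python
# Hedged sketch: compare bytes processed for a column-pruned query vs.
# a query that must materialize whole rows before filtering.
from google.cloud import bigquery

client = bigquery.Client()

# Touches a single column: only that column's blocks are read.
narrow = client.query(
    "SELECT COUNT(DISTINCT user_id) FROM `my_project.my_dataset.events`"
)
narrow.result()

# Drags every column through the filter: far more bytes scanned.
wide = client.query(
    "SELECT * FROM `my_project.my_dataset.events` WHERE payload LIKE '%x%'"
)
wide.result()

print(narrow.total_bytes_processed, "vs", wide.total_bytes_processed)
```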
@ayanroy08
@ayanroy08 1 year ago
Very well explained
@swapnali_20
@swapnali_20 1 year ago
Data privacy is an important concern when it comes to interacting with AI systems. Nicely explained 🎉
@user-we2yu8ko4q
@user-we2yu8ko4q 1 year ago
Awesome content.. thanks for such videos!
@toxiclife
@toxiclife 1 year ago
What should I do when I want to overwrite 100 million rows into a new table, in minutes? I'm using df.write.mode("overwrite").saveAsTable("FINAL"). If you could please help with this?
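One hedged PySpark sketch for a fast bulk overwrite; the source table, partition count, and partition column are assumptions to tune for your cluster:

```python
# Hedged sketch: overwrite ~100M rows into a new table. The source
# table name, partition count, and partition column are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bulk-overwrite").getOrCreate()

df = spark.table("STAGING")  # hypothetical ~100M-row source

(
    df.repartition(200)              # size partitions to your executors
      .write.mode("overwrite")
      .format("parquet")             # columnar files speed later reads
      .partitionBy("event_date")     # hypothetical partition column
      .saveAsTable("FINAL")
)
```

If it is still slow, the bottleneck is usually shuffle or small-file overhead, so the partition count is the first knob to try.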
@elastiq-ai
@elastiq-ai 1 year ago
Watch the full podcast here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-UfBRIEcRAmg.html
@swapnali_20
@swapnali_20 1 year ago
Great👍
@TheElementFive
@TheElementFive 1 year ago
The first question you should always ask when working with a 100 billion row database: “Why do I have a 100 billion row database?”
@davidlean8674
@davidlean8674 11 months ago
And the answer would be "because I work with a multinational enterprise customer". If you have a large market share in China (1 billion people), India (1 billion people), Europe (0.75 billion), and the USA (350M people), it doesn't take long to get to 100 billion transactions. And if you want to do financial year-on-year comparisons, you need to keep at least 24 months of data, usually 36.
@mathteacher5670
@mathteacher5670 1 year ago
Excellent, sir, thank you so much. Highly motivational for a passionate person.
@gbohra
@gbohra 1 year ago
Guys, this is as clear and concise as possible, while still providing enough information to be helpful.
@JaiGurujishukranaGuruji947
@JaiGurujishukranaGuruji947 1 year ago
Excellent
@YashPatil-pm2xc
@YashPatil-pm2xc 1 year ago
Very cool
@abhijeet6785
@abhijeet6785 1 year ago
This is really very informative!!
@vishnuiyengar8555
@vishnuiyengar8555 1 year ago
Great video. Thank you so much for such a clear and clean explanation. One question: I am planning to use a notebook environment for my senior design project in college. Our group plans to use it to implement GAN and YOLO models to detect defects in 3D printing. Since we plan to use at least 1000 images in the training dataset for each type of defect, we wanted to know which notebook is good. Do you think Google Colab would be a good fit for such a project?
@muzahmad2104
@muzahmad2104 1 year ago
Great content, keep it up. But please sort out the echo.
@nileshmchavda
@nileshmchavda 1 year ago
Fantastic!
@ChuckWilliams-gt2yb
@ChuckWilliams-gt2yb 1 year ago
Very, very nice!!!
@ShivansuSoni
@ShivansuSoni 1 year ago
Amazing
@vikasbohra3253
@vikasbohra3253 1 year ago
Very innovative. Thanks for explaining it in a very precise manner. 👌🏻👌🏻👍🏻👍🏻
@MDDM03
@MDDM03 1 year ago
A marketer of Google Cloud.. nothing states what to improve.
@appstuff6565
@appstuff6565 1 year ago
Very well Explained. Thanks!
@appstuff6565
@appstuff6565 1 year ago
and subbed. :D
@dattatreyakomakula1650
@dattatreyakomakula1650 1 year ago
Excellent sir.
@kaijunhe3339
@kaijunhe3339 1 year ago
Great video, super helpful.
@christinamariehicks1078
@christinamariehicks1078 1 year ago
Only one lady. I'm 65 years old; my name is Christina Smith Hicks..
@karan7843
@karan7843 1 year ago
Brilliant content. Would love it if more videos in the same series came out more often.
@aminremiiii
@aminremiiii 1 year ago
Please, for 50 days I have been looking for this. I want to create 2000 users in MySQL and set the phone number as the username. Can you tell me how I can create that many users with a default password?
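A hedged sketch of one way to script this, assuming the mysql-connector-python package and an admin account; the host, credentials, and phone list are placeholders:

```python
# Hedged sketch: bulk-create MySQL users named after phone numbers,
# all with one default password. Credentials and the phone list are
# placeholders; the password should come from a secret in real use.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="root", password="<admin-password>"
)
cur = conn.cursor()

phones = ["15550000001", "15550000002"]  # in practice, your ~2000 numbers

for phone in phones:
    # Usernames are identifiers and cannot be bound as parameters,
    # so validate them before interpolating into the statement.
    if not phone.isdigit():
        continue
    cur.execute(
        f"CREATE USER IF NOT EXISTS '{phone}'@'%' "
        "IDENTIFIED BY 'default_password'"
    )

conn.commit()
cur.close()
conn.close()
```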
@ilducedimas
@ilducedimas 1 year ago
Thank you my good sir.
@muhamadridwan4766
@muhamadridwan4766 1 year ago
wow!
@arthurrodrigues5382
@arthurrodrigues5382 1 year ago
Amazing!
@PradeepMishra-qs2hz
@PradeepMishra-qs2hz 1 year ago
Awesome. Keep it up.
@Helloimtheshiieet
@Helloimtheshiieet 1 year ago
I'm confused, were these indexes?
@elastiq-ai
@elastiq-ai 1 year ago
BigQuery doesn't have indexes. It has partitions and clustering.
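A hedged illustration of that, using the google-cloud-bigquery package; all project, dataset, table, and column names are hypothetical:

```python
# Hedged sketch: create a BigQuery table with partitioning + clustering,
# which play the role indexes play elsewhere. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE TABLE `my_project.my_dataset.events`
(
  event_ts   TIMESTAMP,
  user_id    STRING,
  event_type STRING
)
PARTITION BY DATE(event_ts)      -- lets queries skip whole partitions
CLUSTER BY user_id, event_type   -- co-locates rows for cheap filtering
"""
client.query(ddl).result()
```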
@jayantmathur6810
@jayantmathur6810 1 year ago
Hi sir, I want to ingest data from a third-party vendor's API into GCP. The data comes once a month. Can you suggest a way to ingest the data into GCP, which services I should use, and a scheduler or trigger that runs automatically every 30 days?
@elastiq-ai
@elastiq-ai 1 year ago
Apologies for the late reply. You can check out Cloud Run to run your API calls, and you can schedule them using something like Cloud Scheduler or Cloud Composer on GCP.
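A hedged sketch of that pattern: a small HTTP service, deployable to Cloud Run and invoked monthly by Cloud Scheduler, that pulls from the vendor API and lands the payload in GCS. The vendor URL, bucket name, and Flask wiring are assumptions:

```python
# Hedged sketch: Cloud Run-style ingestion endpoint. Vendor URL and
# bucket are placeholders; Cloud Scheduler would POST to /ingest monthly.
import datetime

import requests
from flask import Flask
from google.cloud import storage

app = Flask(__name__)

@app.route("/ingest", methods=["POST"])
def ingest():
    resp = requests.get("https://vendor.example.com/api/export")  # placeholder
    resp.raise_for_status()

    blob_name = f"vendor/{datetime.date.today():%Y-%m}/export.json"
    bucket = storage.Client().bucket("my-ingest-bucket")  # placeholder
    bucket.blob(blob_name).upload_from_string(resp.text)
    return f"wrote gs://my-ingest-bucket/{blob_name}", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```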
@anandakumarsanthinathan4740
@anandakumarsanthinathan4740 2 years ago
Very helpful video. I just have a question. These techniques look like they are more for data upload, not really ingestion. When we talk about "ingestion", we usually mean data pipelines: Dataflow, Dataproc, Cloud SQL, BigQuery, etc. And as far as streaming ingestion is concerned, Pub/Sub is probably the first thing that comes to mind. I fully agree that there are no hard and fast rules, and it's perfectly fine to call copying data to Cloud Storage 'ingestion'.
@selvapalani9727
@selvapalani9727 1 year ago
He is referring to 1:1 data ingestion.. if any transformation is necessary, then we use a data pipeline built with DF/DP/BQ procedures.. or any combination of them..
@anandakumarsanthinathan4740
@selvapalani9727 You are right. Given that S3 buckets can indeed act as data lakes, and that Athena, BigQuery, Redshift, and other such services can query data directly from such object storage, there is no harm in calling the process of pushing data into them 'ingestion'. They are not 'files' any more; they are 'data' from which insights can be derived.
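To make that concrete, a hedged sketch of turning an object in GCS into queryable BigQuery data, with hypothetical bucket, dataset, and table names (google-cloud-bigquery assumed):

```python
# Hedged sketch: load a CSV already sitting in GCS into BigQuery.
# The URI and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

job = client.load_table_from_uri(
    "gs://my-ingest-bucket/vendor/2024-01/export.csv",  # placeholder path
    "my_project.my_dataset.vendor_export",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # header row
        autodetect=True,      # infer the schema from the file
    ),
)
job.result()  # once loaded, it is data you can query, not just a file
```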
@ashamalathi
@ashamalathi 2 years ago
Nice and clean explaining. Great work. 👍