MANISH KUMAR
Hello everyone,
My name is Manish Kumar and I am currently working as a Data Engineer @Jio.

If you want to connect with me, reach out at:
topmate.io/manish_kumar25

On this channel, I upload videos related to data engineering. I have uploaded a few podcasts too.

If you are looking for a data engineering roadmap, watch my video titled "How I bagged 12 offers". I have explained my strategy in that video.

I hope I am adding some value to your data engineering career through these videos.
mysql installation on windows 11 | Lec-26
8:12
2 months ago
AES encryption in python | Lec-25
22:58
2 months ago
error handling in python | Lec-23
29:13
2 months ago
*args and **kwargs in python | Lec-22
33:21
2 months ago
function in python | Lec-21
23:03
2 months ago
.join() method in python | Lec-20
24:04
2 months ago
string in python | Lec-17
34:47
3 months ago
set in python | Lec-17
34:53
3 months ago
tuple in python | Lec-16
31:13
3 months ago
dictionary comprehension in python | Lec-15
31:34
3 months ago
dictionary in python | Lec-14
33:55
3 months ago
list comprehension in python | Lec-13
15:26
4 months ago
while loop in python | Lec-12
20:10
4 months ago
for loop in python part-2 | Lec-11
38:12
4 months ago
for loop in python part-1 | Lec-10
46:38
4 months ago
list in python part-2 | Lec-9
38:19
5 months ago
list in python | Lec-8
39:38
5 months ago
if else in python | Lec-7
35:59
5 months ago
python assignment1
5:01
5 months ago
variable and data type in python | Lec-4
43:35
5 months ago
how to install vs code editor in windows 11
5:34
5 months ago
Comments
@AbhishekSharma-ue8sn 15 hours ago
Can anyone provide me the PDF of all these classes?
@satyamkumarjha4185 15 hours ago
Sets are mutable.
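For reference, a minimal sketch (not from the video) showing that a Python set can be changed in place, even though its elements must be hashable:

my_set = {1, 2, 3}
my_set.add(4)        # mutates the set in place
my_set.discard(1)    # removes an element; no error if it is absent
print(my_set)        # {2, 3, 4}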
@sachinkrshaw74 17 hours ago
person_df.join(address_df, address_df["personid"] == person_df["personid"], "left") \
    .select(col("firstName"), col("lastName"), col("city"), col("state")) \
    .show()
@raaj6869 22 hours ago
Bhaiya, why are no new videos coming? Please put up a community post. Thank you, bhaiya, for your Python and Spark series. ❤❤
@vaibhavshanbhag5016 22 hours ago
@manish_kumar_1 Sir, what great content you have made; I thoroughly enjoyed it. Thank you!
@bolisettisaisatwik2198 23 hours ago
Wow, Manish, you are such an amazing human. Your ability to teach complex things so easily is not common. Please continue the playlist; I have learned so much from your videos, and your lectures helped me secure a job. I am posting this only to let you know that your work isn't going to waste. A lot of people like me are learning from it and applying it in real life.
@meetpatel2720 1 day ago
Didn't they ask any DSA questions?
@HarshKumar-adi 1 day ago
Very good.
@mdtasauvar 1 day ago
count = 0
for item in data.get('MAINDATA'):
    # print(item)
    for next_item in item.get('HeaderFields'):
        print(next_item)
        for key in next_item.keys():
            if key == 'FieldTypeName':
                count += 1
print(count)
@AKSHAY28ful 1 day ago
Hi Manish! Are you going to continue this series, or was this the last video of the course? Please let me know.
@manish_kumar_1 1 day ago
For now you can consider this the last video.
@VivekKhare-z1m 1 day ago
count = 0
for word in data["MAINDATA"]:
    for headerFile in word["HeaderFields"]:
        if "FieldTypeName" in headerFile:
            count += 1
print(count)
# Ans: count = 38
@VivekKhare-z1m 1 day ago
labour_with_cost = {"Mahesh": 500, "Mithilesh": 400, "Ramesh": 400, "sumesh": 300, "jagmohan": 1000, "Rampyare": 800}
total_working_days = 50
absent_day = {"Mahesh": 3, "jagmohan": 7}
for labour, cost_per_day in labour_with_cost.items():
    if labour in absent_day:
        absent_count = absent_day[labour]
        present_days = total_working_days - absent_count
        total_cost = present_days * cost_per_day
    else:
        total_cost = total_working_days * cost_per_day
    logger.info(f"Total cost for {labour} is ₹{total_cost}")
@mohdhamza3591 1 day ago
The staging layer is not part of the data warehouse. It is intermediate storage used for data processing before the data is loaded into the warehouse. Otherwise, there would be no difference between a data warehouse and a data lake.
@prachideokar7639 2 days ago
What are the prerequisites for this whole project? Please reply.
@manish_kumar_1 2 days ago
Yes
@Sahil1001 2 days ago
Solution:
df.filter(col("refer_id").isNull()).select("name").union(
    df.filter(col("refer_id") != 2).select("name")
).show()
@koushlendrasinghrajput6040 2 days ago
Please give the data set so we can practice on that.
@taukeerahmad9328 2 days ago
I have recommended this channel to hundreds of people. This is by far the best channel for those who want to become a data engineer.
@BishalKarki-pe8hs 3 days ago
More videos on SCD, Manish bhai.
@deepaksharma-xr1ih 3 days ago
Good job, man.
@payalbhatia6927 3 days ago
Where can we see high CPU usage in the Spark UI when data comes from disk into memory and gets deserialized?
@lakshyagupta5688 3 days ago
Hi Manish, I am using the same code and getting 4 jobs:
flight_data = spark.read.format("csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load("/FileStore/tables/2010_summary.csv")
flight_data_repartition = flight_data.repartition(3)
us_flight_data = flight_data.filter("DEST_COUNTRY_NAME == 'United States'")
us_india_data = us_flight_data.filter((col("ORIGIN_COUNTRY_NAME") == 'India') | (col("ORIGIN_COUNTRY_NAME") == 'Singapore'))
total_flight_ind_sing = us_india_data.groupby("DEST_COUNTRY_NAME").sum("count")
total_flight_ind_sing.show()
Can you explain the reason for the same?
@__oo__._._._._._._._.___00007 3 days ago
Thank you
@manishgound1091 4 days ago
Bhai, you explain things really well.
@satyamkumarjha4185 4 days ago
count = 0
for i in range(len(data["MAINDATA"])):
    for j in range(len(data["MAINDATA"][i]["HeaderFields"])):
        for k in data["MAINDATA"][i]["HeaderFields"][j]:
            if k == "FieldTypeName":
                count = count + 1
logger.info(count)
@shreyakeshari951 4 days ago
Hi, thank you for such informative videos. I am not able to find Lecture 6 in the Spark Fundamentals series; please guide me on where I can watch it.
@poojapatil7193 4 days ago
Did you miss uploading a video? After Lecture 5 you taught Spark architecture, but didn't cover the part about CSV files or storing corrupted data.
@tejasnareshsuvarna7948 4 days ago
Thank you so much for this. I now have a crystal-clear understanding of window functions!
@saurabhkatkar 4 days ago
Answer for Q1:
sales_df = sales_df.withColumn('sales_date', from_unixtime(unix_timestamp('sales_date', 'dd-MM-yyyy')))
grouped_df = sales_df.groupBy('product_name', year('sales_date').alias('year'), month('sales_date').alias('month')) \
    .agg(sum('sales').alias('total_sales_monthly'))
sum_window = Window.partitionBy('product_name', 'year')
grouped_df.withColumn('total_sales', sum('total_sales_monthly').over(sum_window)) \
    .withColumn('percent_month_sales_wrt_total', round(100 * col('total_sales_monthly') / col('total_sales'), 2)) \
    .show()
@sandeep7077 5 days ago
Very informative video. Like it, love it.
@aasthagupta9381 5 days ago
Python 3.10.7 and Spark 3.5.1 worked in my case
@SandeshMotoVlogs 5 days ago
Sorry, the SCD0 explanation is wrong. In SCD0, once the data is written to the table, we can't alter it, no matter whether the source data changes.
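For context, a minimal PySpark-style sketch of SCD Type 0 as described in this comment; dim_df and incoming_df are hypothetical DataFrames and a SparkSession is assumed. Existing rows stay frozen; only brand-new keys are appended.

# SCD Type 0: never update existing rows; changed source attributes are ignored
new_rows = incoming_df.join(dim_df, on="customer_id", how="left_anti")  # keys not yet in the dimension
dim_df = dim_df.unionByName(new_rows)  # append only the new keys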
@kartikjaiswal8923 5 days ago
Nice explanation.
@voice6905 5 days ago
Hi Manish sir, can you start a playlist on how to work with streaming data using Apache Kafka? Basically, I'm expecting a playlist on streaming-data analysis and processing. Thank you.
@__oo__._._._._._._._.___00007 6 days ago
I had done an inner join to filter the matching records of the right table against the left table, although left_semi is the one-shot solution 😅
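For anyone comparing the two approaches, a minimal sketch with hypothetical left_df/right_df names: a left-semi join keeps only the left-side rows that have a match, without pulling in any right-side columns.

# one-shot alternative to inner join + dropping the right table's columns
matched = left_df.join(right_df, on="id", how="left_semi")
matched.show()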
@imamhussain7544 6 days ago
Hi bro, how are the work pressure and work-life balance at Jio?
@rajkumardubey5486 6 days ago
Count would also be an action, right? That means 3 jobs were created.
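On that point, count() is indeed an action and triggers its own job. A minimal sketch below assumes an existing SparkSession and a hypothetical file path; the exact job count also depends on options such as inferSchema.

df = spark.read.option("header", "true").csv("/FileStore/tables/sample.csv")  # reading the header can itself trigger a job
filtered = df.where(df["count"] > 10)  # transformation: no job yet
filtered.count()                       # action: triggers a job
filtered.show()                        # another action: another job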
@Rajeshkumbhkar-x6v 6 days ago
Amazing explanation, sir ji.
@JokesFunMasti 7 days ago
l_w_cost = {"mahesh": 500, "Ramesh": 400, "Mithilesh": 400, "Jagmohan": 1000, "Rampyare": 800}
total_cost = 0
for i in range(0, 50):
    for j in l_w_cost:
        total_cost = total_cost + l_w_cost[j]
print(total_cost)
# subtract Jagmohan for 7 days
for i in range(0, 7):
    total_cost = total_cost - l_w_cost["Jagmohan"]
print(total_cost)
# subtract mahesh for 3 days
for i in range(0, 3):
    total_cost = total_cost - l_w_cost["mahesh"]
print(total_cost)
# Output: 155000, 148000, 146500
@agarwalankita504 7 days ago
Your videos are very useful, but you talk a lot, which makes the videos long and boring.
@aniketraut6864 8 days ago
Thank you, Manish bhai, for the awesome videos, and thanks for sharing the script.
@QuaidKhan1 8 days ago
Love from Pakistan
@QuaidKhan1 8 days ago
Superb teaching skills. A lot of love from Pakistan 😘
@QuaidKhan1 8 days ago
Superb teaching skills.
@QuaidKhan1 8 days ago
Love from Pakistan ❤
@QuaidKhan1 8 days ago
Proud of you, sir. Love from Pakistan.
@QuaidKhan1 8 days ago
Real teacher 🥰
@__oo__._._._._._._._.___00007 8 days ago
Give me your blessings, guru ji, so that I can complete your entire course 🙏
@manish_kumar_1 6 days ago
All the best
@__oo__._._._._._._._.___00007 6 days ago
🙏
@BeingSam7 8 days ago
w = Window.partitionBy("Product_ID")
Total_df = df.withColumn("Total", sum(col("Sales")).over(w))
Total_df.withColumn("Percen_Sales", round((col("Sales") * 100) / col("Total"), 2)).show()
@PrashantKumar-gp4cv 8 days ago
Hello Manish sir, first of all, thank you so much for the knowledgeable playlist; we learned a lot from it. Can you make a video on "How to read data from a database table using PySpark in Databricks?" Thanks.
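Until such a video exists, a minimal sketch of the standard JDBC read in PySpark, assuming an existing SparkSession; every connection value below is a placeholder:

jdbc_df = (spark.read.format("jdbc")
    .option("url", "jdbc:mysql://<host>:3306/<database>")  # placeholder connection string
    .option("dbtable", "<schema.table>")                   # table (or subquery) to read
    .option("user", "<username>")
    .option("password", "<password>")
    .option("driver", "com.mysql.cj.jdbc.Driver")          # driver must be available on the cluster
    .load())
jdbc_df.show()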
@DsSarangi23 8 days ago
I get an AttributeError in this video when I write df.write.format.
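A common cause of that AttributeError is calling .write on something that is no longer a DataFrame (for example, the return value of .show(), which is None). A minimal working write for comparison, assuming df is a DataFrame; the output path is a placeholder:

(df.write.format("csv")
    .option("header", "true")
    .mode("overwrite")                     # or "append", "ignore", "errorifexists"
    .save("/FileStore/tables/output_dir")) # placeholder path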