Hi, Dhiraj Gupta here. Welcome to my channel for learning Data Engineering. On this channel we cover interview questions on various topics like DSA, PySpark, SQL, and Pandas. Stay tuned for more interview problems for the Data Engineer, Data Analyst, and Data Science roles.
Hi, when I try to run the sample code, I get the error below. Please help.

C:\Users ushi\Pyspark_Demo\venv\Scripts\python.exe C:\Users ushi\Pyspark_Demo\Test.py
Traceback (most recent call last):
  File "C:\Users ushi\Pyspark_Demo\Test.py", line 1, in <module>
    from pyspark.sql import SparkSession
  File "C:\spark\spark-3.4.2-bin-hadoop3\python\pyspark\__init__.py", line 58, in <module>
    from pyspark.conf import SparkConf
  File "C:\spark\spark-3.4.2-bin-hadoop3\python\pyspark\conf.py", line 23, in <module>
    from py4j.java_gateway import JVMView, JavaObject
ModuleNotFoundError: No module named 'py4j'
Process finished with exit code 1
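The traceback means the venv's interpreter cannot see py4j. A minimal diagnostic sketch (one of several possible fixes, assuming a standard venv setup): check which interpreter is active and whether py4j is importable. Installing pyspark inside the venv (`pip install pyspark`) normally pulls py4j in as a dependency; alternatively, the py4j zip under %SPARK_HOME%\python\lib can be added to PYTHONPATH.

```python
import importlib.util
import sys

def has_module(name: str) -> bool:
    # True if the active interpreter can import `name`, without running it.
    return importlib.util.find_spec(name) is not None

# Run this with the same python.exe shown in the traceback: it will print
# False for "py4j" until pyspark/py4j is installed into that venv.
print(sys.executable)
print(has_module("py4j"))
```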
Thanks Dhiraj. I am trying to run it via a notebook, and when I execute the code I get a Py4JJavaError. Also, how can I get a PySpark kernel to show up in the notebook? Do you have any idea about that?
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Create a window ordered by row_id, spanning from the first row up to the current row
window_spec = Window.orderBy("row_id").rowsBetween(Window.unboundedPreceding, 0)

# Fill null values with the last non-null value in the window
job_skills_df = job_skills_df.withColumn(
    "job_role", F.last("job_role", ignorenulls=True).over(window_spec)
)
with cte as (
    select year(created_at) year,
           month(created_at) month,
           sum(valuess) total_sales
    from amazon_monthly_rev
    group by year(created_at), month(created_at)
)
select concat(year, '-', '0', month) year_month,
       -- LAG should be ordered by (year, month), not month alone,
       -- or rows from different years interleave
       format(100 * cast(total_sales - lag(total_sales) over (order by year, month) as decimal)
              / lag(total_sales) over (order by year, month), '##.##') pct
from cte
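The same month-over-month percentage logic can be sketched in plain Python. The revenue figures below are assumed sample values, not data from the actual amazon_monthly_rev table:

```python
# Assumed sample (year-month, total_sales) rows, already sorted by year and month.
monthly = [("2020-01", 100.0), ("2020-02", 125.0), ("2020-03", 100.0)]

# Each month's percentage change versus the previous month, like LAG(total_sales).
pct = []
for i in range(1, len(monthly)):
    prev, curr = monthly[i - 1][1], monthly[i][1]
    pct.append((monthly[i][0], round(100 * (curr - prev) / prev, 2)))

print(pct)  # [('2020-02', 25.0), ('2020-03', -20.0)]
```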
Hi Dhiraj! I know you are incredibly busy and you probably get a lot of questions, so this should take you 5 mins max. Could you please help me see why my code is wrong? Your answer is going to make my day, if not my week. My code:
------------------------------------------------------------
WITH maintable AS (
    SELECT user_id, action, CAST(timestamp AS date) AS date, timestamp
    FROM facebook_web_log
    WHERE action IN ('page_load', 'page_exit')
),
page_load AS (
    SELECT user_id, date, timestamp,
           ROW_NUMBER() OVER (PARTITION BY user_id, date ORDER BY timestamp DESC) AS rnk_load
    FROM maintable
    WHERE action = 'page_load'
),
page_exit AS (
    SELECT user_id, date, timestamp,
           ROW_NUMBER() OVER (PARTITION BY user_id, date ORDER BY timestamp) AS rnk_exit
    FROM maintable
    WHERE action = 'page_exit'
)
SELECT *, e.timestamp - l.timestamp
FROM page_load l
INNER JOIN page_exit e
    ON l.user_id = e.user_id
   AND l.date = e.date
   AND l.rnk_load = 1
   AND e.rnk_exit = 1
------------------------------------------------------------
When calculating the differences in time, it seems like StrataScratch miscalculated 'only' for user 0. Could you please tell me why that is the case? I don't see flaws in my logic.
with credit AS (
    select user_id, user_name,
           isnull((credit + amount), credit) as tot_credit
    from users u
    left join Transactions t on u.user_id = t.paid_to
)
select user_id, user_name,
       isnull((tot_credit - amount), tot_credit) as final_credit,
       case when isnull((tot_credit - amount), tot_credit) > 0 then 'No' else 'Yes' end as credit_limit_breached
from credit u
left join Transactions t on u.user_id = t.paid_by
with cte as (
    select *, lag(log_date, 1, log_date) over (partition by emp_id order by log_date) prev_date
    from t
    where flag = 'Y'
),
cte2 as (
    select *, case when log_date - prev_date < 2 then 0 else 1 end gp
    from cte
),
cte3 as (
    select *, sum(gp) over (partition by emp_id order by log_date) as gps
    from cte2
)
select emp_id, min(log_date) start_date, max(log_date) end_date, count(*) consecutive_days
from cte3
group by emp_id, gps
having count(*) >= 2
another approach:

with success as (
    select *,
           DATEADD(day, -1 * ROW_NUMBER() over (order by success_date), success_date) s_bucket,
           ROW_NUMBER() over (order by success_date) S_RN
    from succeeded
    where year(success_date) != 2018
),
s_final as (
    select min(success_date) start_date,
           max(success_date) end_date,
           ROW_NUMBER() over (order by (select null)) RN
    from success
    group by s_bucket
),
fail as (
    select *,
           DATEADD(day, -1 * ROW_NUMBER() over (order by fail_date), fail_date) f_bucket,
           ROW_NUMBER() over (order by fail_date) F_RN
    from failed
    where year(fail_date) != 2018
),
f_final as (
    select min(fail_date) start_date,
           max(fail_date) end_date,
           ROW_NUMBER() over (order by (select null)) RN
    from fail
    group by f_bucket
)
select 'Succeeded' as period_date,
       format(start_date, 'dd-MM-yyyy') as start_date,
       format(end_date, 'dd-MM-yyyy') as end_date
from s_final
union all
select 'Failed' as period_date,
       format(start_date, 'dd-MM-yyyy') as start_date,
       format(end_date, 'dd-MM-yyyy') as end_date
from f_final
order by start_date
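The "date minus row_number" bucket trick this solution uses (gaps and islands) can be sketched in Python. The dates below are assumed sample values, not the real succeeded/failed tables:

```python
from datetime import date, timedelta

# Assumed sample success dates; three consecutive days, then a gap.
dates = [date(2019, 1, 1), date(2019, 1, 2), date(2019, 1, 3), date(2019, 1, 6)]

# Consecutive dates land in the same bucket once the row number is subtracted,
# mirroring DATEADD(day, -ROW_NUMBER() OVER (...), success_date).
buckets = {}
for rn, d in enumerate(sorted(dates), start=1):
    buckets.setdefault(d - timedelta(days=rn), []).append(d)

# Each bucket becomes one (start_date, end_date) island.
islands = [(min(v).isoformat(), max(v).isoformat()) for v in buckets.values()]
print(islands)  # [('2019-01-01', '2019-01-03'), ('2019-01-06', '2019-01-06')]
```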
another approach using SQL Server:

with U1 as (
    select User1, count(*) cnt from facebook group by User1
),
U2 as (
    select User2, count(*) cnt from facebook group by User2
),
final as (
    select coalesce(User1, User2) as Users,
           (isnull(U1.cnt, 0) + isnull(U2.cnt, 0)) as friend_cnt
    from U1
    full join U2 on U1.User1 = U2.User2
)
select Users,
       -- divide each user's friend count by the total number of users
       round(cast(friend_cnt as decimal) * 100 / count(*) over (), 3) as Popularity_Percentage
from final
with cte AS (
    select question_id,
           isnull(sum(case when action = 'show' then 1 end), 0) as show_count,
           isnull(sum(case when action = 'answer' then 1 end), 0) as answer_count
    from survey_log
    group by question_id
),
ctf as (
    select question_id,
           -- 1.0 * forces decimal division; answer_count / show_count alone
           -- would truncate to an integer in SQL Server
           (1.0 * answer_count / show_count) as answer_rate,
           row_number() over (order by (1.0 * answer_count / show_count) desc, question_id asc) as rn
    from cte
)
select question_id from ctf where rn = 1
Hello, I think you might have missed this scenario: what if the IDs are some random integers or strings? Please review my solution and let me know if my understanding is correct. Thanks.

with cte as (
    select *, row_number() over () rn from swap_id
)
select name, id,
       case when rn % 2 = 0 then lag(id) over ()
            when rn % 2 <> 0 then coalesce(lead(id) over (), id)
            else id
       end swapped
from cte
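The same row-number based swap can be sketched in Python. The names and (deliberately non-sequential) ids below are assumed sample data:

```python
# Assumed sample rows: ids need not be sequential for the swap to work.
names = ["a", "b", "c", "d", "e"]
ids = [101, 7, 42, 9, 300]

# Swap the ids of each adjacent pair of rows; a trailing unpaired row keeps
# its own id, matching the COALESCE(LEAD(id) OVER (), id) fallback.
swapped = ids[:]
for i in range(0, len(ids) - 1, 2):
    swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]

print(list(zip(names, swapped)))  # [('a', 7), ('b', 101), ('c', 9), ('d', 42), ('e', 300)]
```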
with cte as (
    select question_id,
           sum(case when action = 'answer' then 1 else 0 end)
           / sum(case when action = 'show' then 1 else 0 end) as answer_rate
    from survey_log
    group by question_id
),
cte2 as (
    select question_id,
           row_number() over (order by question_id) as id,
           dense_rank() over (order by answer_rate desc) as rate
    from cte
)
select question_id from cte2 where id = 1 and rate = 1
with cte as (
    select t2name, STRING_AGG(topping_name, ',') as pizza, sum(cost) as total_cost
    from (
        select t1.*, t2.topping_name as t2name, t2.cost as t2cost
        from toppings t1
        cross join toppings t2
        where t1.topping_name <> t2.topping_name
    ) a
    group by t2name
)
select pizza, total_cost from cte order by total_cost;
with cte AS (
    select *, row_number() over (order by topping_name) as rn
    from toppings
),
ctf AS (
    select t.topping_name as topping_name1,
           g.topping_name as topping_name2,
           t.cost as cost
    from cte t, cte g
    where 1 = 1 and t.rn <> g.rn
)
select string_agg(topping_name1, ',') as pizza, sum(cost) as total_cost
from ctf
group by topping_name2
select concat(z.topping_name, ',', y.topping_name, ',', x.topping_name) as pizza,
       x.cost + y.cost + z.cost as total_cost
from toppings x
inner join toppings y on x.topping_name > y.topping_name
inner join toppings z on y.topping_name > z.topping_name
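The `>` comparisons in this self-join enumerate each 3-topping combination exactly once, which is what `itertools.combinations` does in Python. The toppings and costs below are assumed sample data:

```python
from itertools import combinations

# Assumed sample toppings table (topping_name -> cost).
toppings = {"Capsicum": 2.0, "Corn": 2.5, "Onion": 1.0, "Tomato": 1.5}

# Every 3-topping pizza with its total cost, one row per combination.
pizzas = [(",".join(combo), round(sum(toppings[t] for t in combo), 2))
          for combo in combinations(sorted(toppings), 3)]
print(pizzas)
```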
@nikunjmistry373 Sir, have you solved the same kind of Python questions from LeetCode or StrataScratch? Is there any website like this for Python questions that you've made? Thank you for the SQL website. It's helpful.
@@cretive549 I would suggest something: don't just do what everyone else is doing (MERN and full stack). For DE, write ETL scripts, play with Docker and Apache Spark, and handle data in Apache Superset... step by step. For any fresher, the first things to learn are SQL and Python; the rest as needed.
my solution:

from pyspark.sql.functions import udf, col, length

@udf
def removeSpaceandVowels(r):
    # Keep only characters that are not vowels or spaces
    s = ""
    for i in r:
        if i not in "aeiou ":
            s += i
    return s

dfr = (df.withColumn("consonants", removeSpaceandVowels(col("message")))
         .drop("message")
         .withColumn("consonantCount", length(col("consonants"))))
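The UDF's core logic is plain Python, so it can be unit-tested without a Spark session. A minimal sketch (like the UDF above, it only strips lowercase vowels):

```python
def remove_space_and_vowels(text: str) -> str:
    # Same logic as the UDF body: drop lowercase vowels and spaces.
    return "".join(ch for ch in text if ch not in "aeiou ")

sample = "hello world"  # assumed sample message
consonants = remove_space_and_vowels(sample)
print(consonants, len(consonants))  # hllwrld 7
```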
l = [1, 2, 3, [4, 5], [6, 7], [7, 8, 9]]
m = []
for i in l:
    if type(i) == int:
        m.append(i)
    elif type(i) == list:
        for j in range(len(i)):
            m.append(i[j])
print(m)
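The loop above flattens one level of nesting. A recursive sketch that also handles deeper nesting (an extension, not part of the original solution):

```python
def flatten(items):
    # Recursively flatten arbitrarily nested lists.
    out = []
    for x in items:
        if isinstance(x, list):
            out.extend(flatten(x))
        else:
            out.append(x)
    return out

print(flatten([1, 2, 3, [4, 5], [6, [7, 8]], 9]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```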
with cte as (
    select *,
           extract(day from P1_date) - row_number() over (order by (select null)) as diff2
    from (
        select *,
               case when extract(day from ld) - extract(day from P1_date) = 1 then 1 end as diff
        from (
            select P1.emp_id as P1_emp_id,
                   P1.log_date as P1_date,
                   P1.flag as P1_flag,
                   lead(P1.log_date) over (partition by P1.emp_id order by P1.log_date) as ld
            from pwc_attandance_log P1
            where P1.flag = 'Y'
        ) a
        where case when extract(day from ld) - extract(day from P1_date) = 1 then 1 end is not null
    ) b
)
select p1_emp_id, min(p1_date), max(ld), count(diff2) + 1 as no_of_days
from cte
group by p1_emp_id, diff2
mysql

with per_month as (
    select year(created_at) as yr,
           month(created_at) as mon,
           sum(value) total_per_mon
    from amazon_monthly_rev
    group by year(created_at), month(created_at)
),
prev_revenue as (
    -- ORDER BY inside OVER() makes the LAG deterministic
    select *, lag(total_per_mon, 1, 0) over (order by yr, mon) pre
    from per_month
)
select concat(yr, '-', mon) as yearMonth,
       round((total_per_mon - pre) / pre * 100, 2) as rev_diff
from prev_revenue;
with percentage as (
    select student_id, round(sum(marks) / count(1), 0) as percentage
    from marks
    group by student_id
)
select s.student_id, s.name, p.percentage,
       case when p.percentage >= 70 then 'Distinction'
            when p.percentage between 60 and 69 then 'First Class'
            when p.percentage between 50 and 59 then 'Second Class'
            when p.percentage between 40 and 49 then 'Third Class'
            when p.percentage <= 39 then 'Fail'
       end as Result
from student s
join percentage p on s.student_id = p.student_id;
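The same grading bands can be sketched as a Python function. Using `>=` thresholds (a small design choice, not in the original CASE expression) also covers fractional values like 69.5 that `BETWEEN 60 AND 69` would miss:

```python
def grade(percentage: float) -> str:
    # Same bands as the SQL CASE expression, expressed as ordered thresholds.
    if percentage >= 70:
        return "Distinction"
    if percentage >= 60:
        return "First Class"
    if percentage >= 50:
        return "Second Class"
    if percentage >= 40:
        return "Third Class"
    return "Fail"

print(grade(72), grade(55), grade(35))  # Distinction Second Class Fail
```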
with first_joindate as (
    select *, min(join_date) over (partition by user_id) as first
    from user_data
),
new_user_count as (
    select join_date,
           sum(case when join_date = first then 1 else 0 end) new_user,
           count(1) total_user
    from first_joindate
    group by join_date
)
select join_date, new_user,
       case when new_user > 0 then round((new_user / total_user) * 100, 0) else 0 end as percentage_new_user
from new_user_count;
with cte as (
    select *,
           -- note: this LAG should also ORDER BY the period column inside OVER()
           (total_sales_revenue - lag(total_sales_revenue, 1, 0) over (partition by product_id)) diff
    from salesq
)
select *
from products
where product_id in (
    select product_id from cte group by product_id having min(diff) > 0
)
create table pop (user1 int, user2 int);
insert into pop values
(1,5), (1,3), (1,6),
(2,1), (2,6), (3,9),
(4,1), (7,2), (8,3);

with all_pairs as (
    select user1, user2 from pop
    union
    select user2, user1 from pop
)
select user1 as user,
       round(count(user1) / (select count(distinct user1) from all_pairs) * 100, 2) as per
from all_pairs
group by user1;
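The same popularity-percentage calculation can be sketched in Python, using the exact pairs from the INSERT statements above:

```python
from collections import Counter

# The same friendship pairs as the INSERT statements.
pairs = [(1, 5), (1, 3), (1, 6), (2, 1), (2, 6), (3, 9), (4, 1), (7, 2), (8, 3)]

# Symmetrize the pairs (the UNION of both directions, duplicates removed).
edges = {(a, b) for a, b in pairs} | {(b, a) for a, b in pairs}

total_users = len({a for a, _ in edges})
friend_count = Counter(a for a, _ in edges)

# Each user's friend count as a percentage of the total user count.
popularity = {u: round(c / total_users * 100, 2) for u, c in friend_count.items()}
print(total_users, popularity[1])  # 9 55.56
```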