Learning Journal
Learn more at www.scholarnest.com/

Best place to learn Data engineering, Bigdata, Apache Spark, Databricks, Apache Kafka, Confluent Cloud, AWS Cloud Computing, Azure Cloud, Google Cloud - Self-paced, Instructor-led, Certification courses, and practice tests.

SPARK
www.scholarnest.com/courses/spark-programming-in-python-for-beginners/
www.scholarnest.com/courses/spark-streaming-for-python-programmers/
www.scholarnest.com/courses/spark-programming-in-scala-for-beginners/

KAFKA
www.scholarnest.com/courses/apache-kafka-for-beginners/
www.scholarnest.com/courses/kafka-streams-master-class/

Find us on Udemy
Visit the link below for our Udemy courses
www.learningjournal.guru/courses/

Find us on O'Reilly
www.oreilly.com/library/view/apache-kafka-for/9781800202054/
www.oreilly.com/videos/apache-kafka/9781800209343/
www.oreilly.com/videos/kafka-streams-with/9781801811422/
11 - Azure Databricks Platform Architecture
11:11
2 months ago
10 - Introduction to Databricks Workspace
14:07
3 months ago
9 - Creating Databricks Workspace Service
16:49
3 months ago
08 - Azure Portal Overview
5:20
3 months ago
07 - Creating Azure Account
10:04
3 months ago
06 - What will you learn in this section
2:39
3 months ago
05 - Introduction to Databricks Platform
8:33
4 months ago
03 - Introduction to Data Engineering
14:20
4 months ago
02 - Course Prerequisites
2:28
4 months ago
01 - About The Course
5:58
4 months ago
Spark Installation Prerequisites
5:26
8 months ago
06 - Batch processing to stream processing
39:59
8 months ago
05 - Working in Databricks Workspace
28:24
8 months ago
03 - Spark Development Environments
6:35
8 months ago
Spark Development Environment
1:45
8 months ago
Setup and test your IDE
4:27
8 months ago
Install and run Apache Kafka
10:25
8 months ago
Introduction to Stream Processing
9:17
8 months ago
Streaming Sources Sinks and Output Mode
12:57
8 months ago
Comments
@ayeshakhan8726 4 days ago
Easy explanation👍
@PANKAJKUMAR-fe8zn 8 days ago
Wonderful explanation. I was studying Data Cloud in Salesforce and they were mentioning this data format multiple times. I was clueless, but I got clarity from your video. Thank you, sir.
@rjrmatias 10 days ago
Excellent video, thank you Master
@nisarirshad8366 11 days ago
I have completed this course on Udemy and highly recommend it. It's very well explained and easy to understand.
@abhilashvasanth700 14 days ago
Hello, is this page updated? Can we rely on it by becoming a member and stay updated? If not, where do all your courses get updated? I took your PySpark course on Udemy. Though the beginning was really good, the later part of the course did not have a continuous flow. How do I enroll in your batch course?
@srikanthk-yp4wj 14 days ago
To watch all the videos in the Databricks course playlist, should we subscribe at 199/- or 399/-?
@AhsanTirmiziVlogs 17 days ago
Such engaging content, you don't lose me for a second... amazing explanation... bless you, brother... in my language, "SAADAA KHUSHBU"
@sonurohini6764 17 days ago
Great. But a follow-up question to this from the interviewer is: how do we take 4x memory per executor?
@amlansharma5429 13 days ago
Spark reserved memory is 300 MB in size, and executor memory should be at least 1.5x the Spark reserved memory, i.e. 450 MB, which is why we are taking executor memory per core as 4x; that sums up to 512 MB per executor per core.
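For readers following this sizing thread, here is a minimal sketch of the arithmetic being discussed, in Python. The 128 MB partition size, the 4x per-core headroom, and the 300 MB reserved memory are the figures quoted in the comments; the 4 cores per executor is an assumed example value, not a fixed Spark rule.

# Sketch of the executor-sizing arithmetic from the thread above.
# Assumptions taken from the comments, not hard Spark rules:
#   - default input partition size of ~128 MB
#   - 4x memory headroom per core  -> 512 MB per core
#   - ~300 MB of the executor heap is Spark's internal reserved memory

partition_size_mb = 128
memory_per_core_mb = 4 * partition_size_mb      # 512 MB per core
cores_per_executor = 4                          # assumed example value
reserved_memory_mb = 300                        # carved out of the heap by Spark itself

executor_memory_mb = cores_per_executor * memory_per_core_mb
print(f"spark.executor.cores  = {cores_per_executor}")
print(f"spark.executor.memory = {executor_memory_mb}m "
      f"(of which ~{reserved_memory_mb} MB is Spark's reserved memory)")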
@justinmurray8313 19 days ago
Is there still a coupon to get this course for free?
@ABQ... 21 days ago
Please provide prerequisites
@AmitCodes 22 days ago
How to certify
@saisivamadhav8338 22 days ago
Awesome sir
@anuragjaiswal1399 23 days ago
Thanks man it worked.
@ongn1611 23 days ago
Very simple and precise. Thank you
@nwanebunkemjika7822 28 days ago
THANKS
@deevjitsaha3168 29 days ago
Is this course suitable for Scala users, or do we need to have Python knowledge?
@tridipdas9930 1 month ago
What if the cluster size is fixed? Also, shouldn't we take into account the per-node constraint? For example, what if the number of cores in a node is 4?
@AatlaAawaz 1 month ago
very very good and valuable course.
@veerendrashukla 1 month ago
In the last step you did kinit, which pulled the TGT, and then the dev user could list the files. At what point in time did the client interact with the TGS using this TGT?
@user-dx9pj6bp3w 1 month ago
The course is very well organized
@vvsekhar1 1 month ago
Thank you so much. Well explained about Root user.
@federico325 1 month ago
incredible, thanks
@vaibhavtyagi9885 1 month ago
In the last question, each and every value you took was the default (128 MB, 4, 512 MB, 5 cores). So let's say the question is for 50 GB of data; would 3 GB still be the answer?
@HIMANSHUMISHRA-yg8dc 1 month ago
Getting ModuleNotFoundError: No module named 'pyspark.streaming.kafka' when using the command spark-submit --packages org.apache.spark:spark-streaming-kafka-0-10_2.13:3.5.1 live_processing.py. Can you help, please?
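The pyspark.streaming.kafka module in this comment belongs to the old DStream API, which is no longer shipped with PySpark in Spark 3.x, so the import fails regardless of the --packages option (spark-streaming-kafka-0-10 only exposes a Scala/Java API). Below is a minimal Structured Streaming sketch for the same kind of job, assuming a hypothetical topic name, a local broker, and a Spark 3.5.1 build on Scala 2.12; the connector package must match your own Spark and Scala versions.

# Minimal Structured Streaming read from Kafka on Spark 3.x (a sketch, not the course code).
# Example submit command (assumes a Scala 2.12 build of Spark 3.5.1):
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1 live_processing.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("live-processing-sketch").getOrCreate()

# "my-topic" and localhost:9092 are placeholder values.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "my-topic")
    .load()
)

# Kafka delivers key and value as binary; cast to strings before processing.
messages = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

query = (
    messages.writeStream
    .format("console")
    .option("checkpointLocation", "chk-point-dir")
    .start()
)
query.awaitTermination()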
@Amarjeet-fb3lk 1 month ago
If the number of cores is 5 per executor, then at shuffle time Spark by default creates 200 partitions. How will those 200 partitions be created if the number of cores is less, given that 1 partition will be stored on 1 core? Suppose my config is 2 executors, each with 5 cores. Now how will it create 200 partitions if I do a group-by operation? There are 10 cores, and 200 partitions are required to store them, right? How is that possible?
@navdeepjha2739 1 month ago
You can set the number of partitions equal to the number of cores for maximum parallelism. Of course, you cannot create 200 partitions in this case.
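Some context on this exchange, as a sketch rather than the course's answer: the 200 shuffle partitions are just tasks, so 10 cores work through them in several waves rather than all at once, and spark.sql.shuffle.partitions can be lowered to match the core count as the reply suggests. A minimal example, assuming the hypothetical 2-executor x 5-core setup from the question:

from pyspark.sql import SparkSession

# Hypothetical cluster from the question above: 2 executors x 5 cores = 10 task slots.
spark = (
    SparkSession.builder
    .appName("shuffle-partitions-sketch")
    .config("spark.sql.shuffle.partitions", "10")  # default is 200; here matched to total cores
    .getOrCreate()
)

df = spark.range(1_000_000)

# With this setting, the groupBy shuffle produces 10 partitions, so all of them
# can run in a single wave. With the default 200, the job still works: the
# 10 cores simply process the 200 shuffle tasks in multiple waves.
counts = df.groupBy((df.id % 100).alias("bucket")).count()
counts.show(5)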
@NandhaKumar1712 1 month ago
Hi, thanks for the explanation. It really helps. In the above example, let's say that in the right stream we are getting impressionId=4 and we didn't get matching events for id=4 on the left stream for a long time. Is it possible to get this record inside the foreachBatch() function before it gets dropped by Spark?
@prasannakumar7097 1 month ago
Very well explained
@robertakid727 1 month ago
That is an extraordinary explanation, thank you.
@oleg20century 1 month ago
Best video about these three abstractions
@Mado44555 2 months ago
Thank you for explaining. I was looking for a starter example to understand what it is, but other videos were explaining as if to experts. I figured out how to follow your steps, but after running the code and the ncat command I am getting errors, and the first one is about "chk-point-dir". Any help?
@learningacademy1989 2 months ago
C:\kafka\bin\windows>kafka-console-producer.bat --topic test2 --broker-list localhost:9092 < ..\data\sample1.csv gives "The system cannot find the path specified." How do I fix this error?
@rajibinus 2 months ago
Insightful explanation. Thanks for the video.
@rajat_ComedyCorner 2 months ago
Great job, Sir
@boseashish 2 months ago
beautifully explained...
@safkaify7875 2 months ago
Well spoken, nicely explained.
@udaypratapsingh2245 2 months ago
Sir you are best❤
@user-is4nl2fu7i 2 months ago
I have something clear in my head now. Thank you very much
@Bryan-zj7nr 2 months ago
I'm using macOS, how do I set all this up?
@aminullahyousufi8142 2 months ago
Thank you for your lecture. I am following all of your courses related to big data.
@Manisood001 2 months ago
Any chance this course will come to Udemy?
@ScholarNest 2 months ago
No. This course is exclusively on the ScholarNest platform.
@sreenivasanpalaniappan3640 2 months ago
Where can I enroll in or buy this Snowflake course? Thank you.
@williamhaque6183 2 months ago
Wonderful. Cleared a lot of doubts.
@mayurnagdev5545 2 months ago
Kafka doesn't allow more than 2 consumers to read from the same partition, to avoid the same message being read multiple times. Isn't this the case when 2 consumers listen to the same partition?
@nagamanickam6604 2 months ago
Thank you
@oleg20century 2 months ago
Thanks, it is a great guide and simple at the same time.
@DeepaakashGupta 2 months ago
When will more videos come? Can you create a playlist as well?