Great video - there's so much more to learn from this than from regular system design videos haha. I have some front-end questions: which player are you using - custom-built, or open source like Shaka/Video.js? Also, any plans to shift to HLS/DASH in the future?
This was awesome. One question: why not cut down the complexity of maintaining SQS, launching EC2 instances, downscaling them, etc. by using AWS Lambda instead? The flow would be: content uploaded to the source bucket -> triggers a Lambda -> the Lambda runs ffmpeg, does the conversion, and copies the result to the destination S3 bucket -> users request videos from the destination bucket. With Lambda you pay only for what you use, so I think it's cheap and scales very well. The only possible issue is that a Lambda can run for at most 15 minutes, so converting big video files might be problematic - but even for that, we could use AWS Batch, which is similar to Lambda but doesn't have the 15-minute limit. Either way, I think it would cut down the complexity of the current architecture.
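A minimal sketch of the Lambda flow this comment describes. The bucket name, transcode settings, and output key are assumptions for illustration, not from the video; a real handler would also need an ffmpeg binary bundled in a Lambda layer or container image.

```python
import os
import subprocess

DEST_BUCKET = "processed-videos"  # assumed destination bucket name


def build_ffmpeg_cmd(src_path: str, dst_path: str) -> list[str]:
    """Illustrative 720p H.264 transcode; settings are placeholders."""
    return [
        "ffmpeg", "-i", src_path,
        "-vf", "scale=-2:720",       # scale to 720p, keep aspect ratio
        "-c:v", "libx264", "-crf", "23",
        "-c:a", "aac",
        dst_path,
    ]


def handler(event, context):
    """Triggered by an S3 put event on the source bucket."""
    import boto3  # available by default in the Lambda runtime
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    # Lambda only allows writes under /tmp (512 MB by default)
    src = f"/tmp/{os.path.basename(key)}"
    dst = src + ".720p.mp4"
    s3.download_file(bucket, key, src)
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True)
    s3.upload_file(dst, DEST_BUCKET, key + ".720p.mp4")
```

Note the /tmp size limit is a second constraint besides the 15-minute cap: a raw multi-GB upload may not even fit on the Lambda's disk, which is another argument for AWS Batch or EC2 for large files.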
Damn good! Keep this architecture series alive... I am really curious about one particular thing: why did you choose EC2 and not Fargate? According to some rough calculations, Fargate seems to be the better option for this scenario.
I would love to see this expanded or "split" to show handling a live stream as well. This design would be great for VODs, but for a live stream there would have to be differences, since the file is constantly in transit (RTSP?) and you'd almost need live transcoding for the viewers' sake. Then, after the stream ends, the same process as above would run to store the VOD.
Oh, really nice and detailed HLD explanation of an application like YT 🔥 It seems more complex than we think, but you made it crystal clear for an HLD newbie like me. One request: a video on what design patterns a React developer (or a full-stack dev moving to Next.js) should know to be a better developer. 👍
Hi Mehul, thanks for the video, amazing explanation!! Could you make a practical video on how this works front to back with a MERN application? There are no such videos out there covering scalable apps, so it would be much more helpful!
Good video. I'm trying to gather info to understand what I need to know to build my first project, which is a fake-money poker site. Something similar needs to happen there, since every time a table is created there probably needs to be an EC2 instance that runs the table, or something like that.
Can you explain how video resume / "continue watching" works - how the timestamp is captured and saved with the user profile, and how playback restarts from whatever is left to watch?
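One common way the "continue watching" feature is built (an assumption, not something covered in the video): the player periodically reports its current position, and the backend stores the latest position per (user, video). A minimal in-memory sketch:

```python
# (user_id, video_id) -> seconds watched; a real app would use a DB
# (e.g. Redis or DynamoDB) keyed the same way, updated every few seconds
# by the player via a heartbeat API call.
progress: dict[tuple[str, str], float] = {}


def save_progress(user_id: str, video_id: str, seconds: float) -> None:
    """Called periodically by the player while the video plays."""
    progress[(user_id, video_id)] = seconds


def resume_position(user_id: str, video_id: str) -> float:
    """Called when the player loads: seek to this offset, or 0 if new."""
    return progress.get((user_id, video_id), 0.0)
```

On page load the front end calls `resume_position` and seeks the `<video>` element to that offset before playing.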
Thanks a lot for this informative video... but I have one question. As you said, anybody can upload a .txt instead of a video, and it then keeps failing multiple times until it is shifted to the DLQ. Since an EC2 instance is spun up for every processing attempt, failing multiple times causes multiple EC2 instances to be launched. To avoid this, can't we validate the file before it even lands in the first S3 bucket? What I'm trying to say is: what if we set up an error/exception check before uploading the video to S3 at all - maybe on a server that sits in front of the buckets?
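A small sketch of the pre-upload check this comment proposes: before issuing an upload URL, inspect the file's first bytes ("magic numbers") so a .txt renamed to .mp4 is rejected up front. The signatures below are real container magic bytes, but the helper name and the set of formats covered are illustrative assumptions.

```python
# Magic-byte prefixes for a few common video containers.
VIDEO_SIGNATURES = {
    b"\x1a\x45\xdf\xa3": "webm/mkv",  # EBML header
    b"RIFF": "avi",                   # RIFF container (AVI among others)
}


def looks_like_video(header: bytes) -> bool:
    """Cheap sanity check on the first ~12 bytes of an upload.

    MP4/MOV files put the ASCII tag "ftyp" at byte offset 4, after a
    4-byte box length, so we check that offset rather than byte 0.
    """
    if header[4:8] == b"ftyp":
        return True
    return any(header.startswith(sig) for sig in VIDEO_SIGNATURES)
```

This only filters out obvious junk; a corrupt-but-correctly-tagged file would still reach the queue, so the DLQ path is still needed as the backstop.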
You forgot to include the storage fees for the S3 buckets. In this architecture you are paying for storage twice: once to store the raw videos in the first S3 bucket, and again to store the processed videos. It's quite low, but it's still double.
Hey, I had a doubt: why use the first S3 bucket at all? You could set up a proxy server and, while the video is being uploaded, send it as a passthrough stream to a backend server. When one server maxes out, spin up another and add it to the proxy via a runtime API; when processing finishes, the server can upload directly to the production S3 bucket.
What is the role of ffmpeg and EC2 here? When uploading, we could upload directly to S3 with, say, Multer, store the link in MongoDB, and when the user wants the video we just send the link to the front end and show it with a video tag. Can anyone explain why that won't work?
It would work, but raw video files are rarely something you want to ship to users directly. For instance, a 4K 60 FPS video shot on an iPhone can be 4 GB for only 10 minutes of footage. When you upload it to YouTube, it gets re-encoded at the same resolution and FPS but at a much smaller file size, so that everyone can watch it.
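Rough arithmetic behind that file-size claim (the 18 Mbps "served" bitrate is an assumed ballpark for 4K streaming, not a figure from the video; GB is taken as decimal):

```python
raw_gb, minutes = 4, 10

# Source bitrate implied by a 4 GB / 10 min recording.
raw_mbps = raw_gb * 8 * 1000 / (minutes * 60)      # ~53.3 Mbps

# Assumed bitrate a streaming service might serve 4K at.
served_mbps = 18

# Same duration at the served bitrate -> much smaller file.
served_gb = raw_gb * served_mbps / raw_mbps        # ~1.35 GB
```

So even before generating lower resolutions (1080p, 720p, ...), re-encoding alone cuts the file to roughly a third, which is exactly the job the ffmpeg/EC2 stage does.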
Thanks for sharing this, Mehul. I'm curious about one thing: you use an ASG to start/stop EC2 instances for processing, as you mentioned. Did you consider using Docker + Kubernetes for this task? If yes, what factors made you go with this architecture over the Docker + Kubernetes one?
You don't. Instead, while the EC2 instance is processing the video, it keeps extending the message's visibility timeout so the message stays "invisible" in the queue. If the EC2 instance crashes or dies for some reason, those extensions stop, and SQS automatically makes the message visible again after a few minutes.
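The heartbeat pattern described above can be sketched like this. The SQS client is passed in, and the queue URL and receipt handle are placeholders; the boto3 call used (`change_message_visibility`) is the real API for extending a message's visibility timeout.

```python
import threading


def start_heartbeat(sqs, queue_url, receipt_handle,
                    interval=60, timeout=120):
    """Keep an in-flight SQS message invisible while we work on it.

    Every `interval` seconds, push the message's visibility timeout out
    another `timeout` seconds. If this process dies, the extensions stop
    and SQS redelivers the message to another consumer automatically.
    Returns a function that stops the heartbeat (call it when done).
    """
    stop = threading.Event()

    def beat():
        # Event.wait returns False on timeout, True once stop is set.
        while not stop.wait(interval):
            sqs.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=receipt_handle,
                VisibilityTimeout=timeout,
            )

    threading.Thread(target=beat, daemon=True).start()
    return stop.set
```

Typical usage: call `start_heartbeat(...)` right after receiving the message, run the ffmpeg job, then call the returned stop function and delete the message from the queue.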