Is it true that S3 files would still have to get loaded over the network for something like AWS Athena? That seems to be a data warehousing strategy that relies on querying S3 natively rather than loading all of it across the network
@@jordanhasnolife5163 Sorry, didn't mean to use you like Google :P. I researched it after I asked, and it's quite cool. Basically a serverless query engine that's direct-lined into S3
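For anyone curious, here's roughly what that looks like from code, a minimal sketch using boto3 (the bucket, database, and table names are made up for illustration):

```python
import time
import boto3

# Athena runs the SQL serverlessly against files sitting in S3; the client
# just submits the query and polls for the result. All names here are hypothetical.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT server_id, AVG(cpu_pct) FROM metrics GROUP BY server_id",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```

So yes, the data still moves over the network, but only Athena's workers read the S3 objects; you never load the raw files yourself.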
My interview is in 9 hours. I hear your voice in my sleep. I have filled a notebook with diagrams and concepts. And I am taking a poopy at this very second. We just prevail
@@DavidWoodMusic oh damn. That's a shame. I'm kinda struggling with a similar decision right now. I passed all the interview stages, but even at the offer stage I'm still learning new key pieces of info about the position that no one told me about before... But hey, you beat the system design interview! That's an amazing win, and now you know you can do it 😉
I'm assuming you mean event consumers, not producers. Yeah, this is one of those things that's kinda built into whichever stream-processing consumer you use, so under the hood I assume we'll be using long polling. I don't see the case for websockets here since we don't need bidirectional communication. Server-sent events may also not be great, because the client will try to re-establish connections automatically, which may not be what we want if we rebalance our Kafka partitions.
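To make the "under the hood" part concrete, here's a rough sketch of the long-poll loop a Kafka consumer runs, using confluent-kafka (the topic and group names are made up):

```python
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    # Stand-in for the real aggregation logic.
    print(payload)

# poll() blocks for up to the timeout waiting for new records, which is why
# there's no need for websockets or SSE here. Names are hypothetical.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "metrics-aggregators",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["server-metrics"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # long poll: returns None if nothing arrived
        if msg is None:
            continue
        if msg.error():
            continue  # real code would log/handle the error
        process(msg.value())
finally:
    consumer.close()
```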
Thanks a lot for your videos. Currently looking for a new job, brushing up/learning a lot about system design, watched lots of your videos recently. Appreciate your work. Keep it up!
Hey Jordan, what is the data source in the last diagram here? Is it the VM pushing logs / serialized Java objects etc. to Kafka? You mean when the application logs a statement, that statement gets pushed to Kafka? Then what should the partition key of this Kafka cluster be? Should it be server ID, or a combination of server ID + app name, or how should we structure this partition key?
Yes, the application is pushing to Kafka. I think you should probably use the app/microservice name as the Kafka topic, and then within that partition by server ID in Kafka.
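Something like this, roughly (confluent-kafka here; the service/server names are just made up):

```python
from confluent_kafka import Producer

# Topic = the microservice name, partition key = server ID, so all logs from
# one server land on the same partition and stay ordered. Names are hypothetical.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def push_log(service_name: str, server_id: str, log_line: str) -> None:
    producer.produce(
        service_name,              # e.g. "checkout-service" as the topic
        value=log_line.encode(),
        key=server_id.encode(),    # Kafka hashes the key to pick a partition
    )

push_log("checkout-service", "server-42", "2024-01-01T00:00:00 ERROR payment timed out")
producer.flush()  # block until the message is actually delivered
```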
Hi Jordan, I am trying to cover infrastructure-based system design questions like this one first. Can you please clarify if I need to watch videos 11, 12, and 13 to understand this? Any prerequisites? (I have covered Concepts 2.0.) Is it the same for videos 17, 18, and 19 as well?
Great video as always! Why do you store the files on S3 as well as in a data warehouse? Why not just load the Parquet files into the data warehouse directly? Is it that we need a Spark consumer to transform the S3 files before putting the data into the data warehouse?
Depends on the format of the S3 data. If it's unstructured, then we'd likely need some additional ETL job to format it and load it into a data warehouse.
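A minimal sketch of what that ETL job could look like in PySpark, assuming the raw data is semi-structured JSON logs (all paths and column names here are made up):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Batch ETL: read raw logs out of S3, impose a schema, and write them back
# as Parquet that a warehouse can ingest. Assumes s3a:// access is configured.
spark = SparkSession.builder.appName("log-etl").getOrCreate()

raw = spark.read.json("s3a://raw-logs-bucket/2024/01/01/")

structured = (
    raw.withColumn("ts", F.to_timestamp("timestamp"))
       .select("ts", "server_id", "service", "level", "message")
       .filter(F.col("level").isNotNull())
)

# Partitioning keeps warehouse loads and time-range scans cheap.
structured.write.mode("overwrite").partitionBy("service").parquet(
    "s3a://curated-logs-bucket/parquet/"
)
```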
Hi Jordan, regarding the post-processing of unstructured data: can we do the batch processing in Flink itself, since it does support that, or is it not suitable for large-scale data? What size of data can be handled by Flink itself, after which we might need to use HDFS/Spark? PS: Thanks for the amazing content, you're the best resource I've found to date for system design content :)
Flink isn't bounded in the amount of data it can handle; you can always add more nodes. The difference is that Flink is for stream processing. Feel free to watch the Flink concepts video, it may give you a better sense of what I mean here.
Hey Jordan, great video! Does this require any sort of API design? Given that we need to read through the data metrics, does it make sense to also describe the API structure? Let me know your thoughts, thanks.
Sure. You need an endpoint to read your metrics by time range (perhaps taking in a list of servers), and it probably returns paginated results. Anything else you're looking for?
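A rough sketch of what that endpoint could look like (FastAPI here; the route, parameter names, and query_time_series helper are all made up for illustration):

```python
from fastapi import FastAPI, Query

app = FastAPI()

def query_time_series(start, end, servers, page_token, page_size):
    # Stand-in for the actual time-series DB query.
    return [], None

# Hypothetical read endpoint: metrics by time range, optionally filtered
# to a list of servers, with a cursor token for pagination.
@app.get("/metrics")
def read_metrics(
    start: int,                               # epoch millis, inclusive
    end: int,                                 # epoch millis, exclusive
    servers: list[str] | None = Query(None),  # ?servers=a&servers=b
    page_token: str | None = None,
    page_size: int = 1000,
):
    rows, next_token = query_time_series(start, end, servers, page_token, page_size)
    return {"metrics": rows, "next_page_token": next_token}
```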
@@jordanhasnolife5163 Right. Also, for the Elasticsearch results you're gonna need an API, unless you wanna combine it with the metrics one, which I don't think is a good idea.
Also a request for a video on tracking autonomous cars + collecting other metrics from sensors/etc. Thanks man, your work is gold and I love the depth of it.
Very informative video as always. I was just wondering how the metrics are pulled by Prometheus (which will eventually store them in the DB). How is responsibility for the different client pods assigned to the aggregator pods so that each metric is pulled exactly once from each client pod?
In this video you are using the push method, by having hosts connect to Kafka directly. This could be deemed too perturbing to the millions of hosts, so instead they can expose a /metrics endpoint that a consumer can use to fetch their current data. To answer the question above, we need to do some sort of consistent hashing to assign the millions of hosts to consumer instances, and then put the data in Kafka (we can create multiple messages, one for each metric).

In the push method, we push the data directly to Kafka from each EC2 host, where it is buffered before being consumed by our Spark Streaming instance that updates our DBs.
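A toy sketch of that consistent hashing assignment (all names are made up; a real system would use a proper library):

```python
import bisect
import hashlib

# Minimal consistent-hash ring: each aggregator owns arcs of the ring, and a
# host maps to the first aggregator clockwise from its hash. Adding or removing
# an aggregator only reassigns the hosts on its arcs, so each host's /metrics
# endpoint is scraped by exactly one instance at a time.
def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, aggregators, vnodes=100):
        # Virtual nodes smooth the load across aggregators.
        self._ring = sorted(
            (_hash(f"{agg}#{i}"), agg) for agg in aggregators for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def owner(self, host_id: str) -> str:
        idx = bisect.bisect(self._keys, _hash(host_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["aggregator-1", "aggregator-2", "aggregator-3"])
print(ring.owner("ec2-host-12345"))  # exactly one aggregator scrapes this host
```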