How can I ingest streaming content from my application's user media to YouTube with WebRTC? Is there any client available for that? I haven't found any; please enlighten me if anyone knows.
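For what it's worth, YouTube does not accept WebRTC ingest directly; the usual pattern is browser → WebRTC → media server, which then re-pushes the stream to YouTube over RTMP. A minimal sketch of that last hop, assuming your media server can expose the WebRTC feed as something ffmpeg can read (the SDP filename and stream key here are placeholders, not real values):

```python
# Sketch: once a media server has bridged the WebRTC stream out to
# something ffmpeg can consume, ffmpeg can re-push it to YouTube's
# RTMP ingest endpoint. The stream key is a placeholder you would
# get from YouTube Studio.

def youtube_rtmp_command(input_url: str, stream_key: str) -> list[str]:
    """Build an ffmpeg argument list that transcodes an input stream
    (e.g. an RTP/SDP feed bridged out of WebRTC) to H.264/AAC and
    pushes it to YouTube Live over RTMP."""
    return [
        "ffmpeg",
        "-i", input_url,                 # e.g. "webrtc_bridge.sdp" (hypothetical)
        "-c:v", "libx264",               # YouTube Live expects H.264 video
        "-preset", "veryfast",
        "-c:a", "aac", "-b:a", "128k",   # and AAC audio
        "-f", "flv",                     # RTMP carries FLV
        f"rtmp://a.rtmp.youtube.com/live2/{stream_key}",
    ]

cmd = youtube_rtmp_command("webrtc_bridge.sdp", "xxxx-xxxx-xxxx-xxxx")
print(" ".join(cmd))
```

The hard part is the WebRTC-to-ffmpeg bridge itself, which is exactly why media servers (Janus, mediasoup, Pion, etc.) exist for this job.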
Regarding the question at the end of the talk about re-streaming to millions of clients over WebRTC instead of transcoding to HLS or DASH and delivering over a CDN: that would require more compute servers instead of cheap CDN edge delivery. In addition, without an ingester, how would you play back the stream after it ends? ABR transcoding a live stream is the primary method of chunking up and persisting it as a VOD, which means you would need to build that service regardless. In practice, offering ingest and transcode as a service is the best option here.
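To put rough numbers on the "more compute servers" point, here is a back-of-envelope sketch; the bitrate, NIC capacity, and utilization figures are illustrative assumptions, not measured values:

```python
import math

# Back-of-envelope: servers needed to fan a live stream out over WebRTC
# yourself, versus handing segments to a CDN. All numbers are assumptions.

def sfu_servers_needed(viewers: int, bitrate_mbps: float,
                       nic_capacity_gbps: float = 10.0,
                       utilization: float = 0.7) -> int:
    """Each relay server forwards the stream once per viewer, so total
    egress is viewers * bitrate. Divide by usable NIC capacity per box."""
    egress_gbps = viewers * bitrate_mbps / 1000
    usable_per_server = nic_capacity_gbps * utilization
    return math.ceil(egress_gbps / usable_per_server)

# 1M viewers at 3 Mbps is ~3 Tbps of egress you now own:
print(sfu_servers_needed(1_000_000, 3.0))  # 429 servers, before any redundancy
```

A CDN absorbs that same 3 Tbps at its edges for pennies per GB, which is why HLS/DASH delivery wins at this scale even though WebRTC wins on latency.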
soooo, how on earth do I learn how to build a video live streaming service? I can't find any good courses/books beyond 1-1 video chat apps. Please advise me.
You learn software engineering and figure it out by working through the problem yourself, the same as with any type of service or any type of software. It's not for the faint of heart, though, and it requires a huge amount of effort and expertise. Video technology is of course one giant rabbit hole of domain knowledge, and it's only one piece of the puzzle. Please do your own research if you want to be rewarded.
Usually you cannot get more than 3-4 simultaneous connections in a peer-to-peer mesh scenario - i.e. 4 callers who can all see/hear each other. Peloton has a much different architecture. They haven't published their architecture that I have seen, but I can speculate that they use WebRTC between their instructor(s) and a media server that feeds the stream into a broadcast network. I am not sure if they use WebRTC in that broadcast network (unlikely), but in any case they make use of some kind of media server to assist. See my video here for more info on WebRTC server types: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Y1mx7cx6ckI.html
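The 3-4 participant ceiling falls straight out of the arithmetic: in a full mesh, every peer encodes and uploads a separate copy of its stream to every other peer. A quick sketch, with the per-stream bitrate being an assumed typical value:

```python
# Why peer-to-peer mesh calls cap out around 4 participants:
# each peer uploads one copy of its stream to every other peer.

def mesh_upstream_mbps(participants: int, bitrate_mbps: float = 1.5) -> float:
    """Upstream bandwidth each peer needs in a full mesh."""
    return (participants - 1) * bitrate_mbps

def mesh_total_streams(participants: int) -> int:
    """Total unidirectional streams flowing in the mesh: n * (n - 1)."""
    return participants * (participants - 1)

print(mesh_upstream_mbps(4))   # 4.5 Mbps up per peer -- already taxing on home links
print(mesh_total_streams(8))   # 56 streams total at 8 peers
```

An SFU (the media-server topology speculated about above) cuts each peer's uploads back to a single stream, which is what makes larger calls and one-to-many broadcast feasible.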
@@chizzhach Thanks for your reply. So it seems Peloton is doing a one-way stream to all its clients, and the clients can't communicate back to the instructors. To me it seems like they didn't even need to use WebRTC and could have just used some streaming service.
@Kenji Miwa WebRTC is an easy way to encode video, and it introduces minimal latency. From what I have seen with Peloton, the instructors can comment on individual riders' activity, even if they don't have their video feed. Minimal latency is important for riders to hear those instructor comments as soon as possible after they actually happen. RU-vid explains how they use WebRTC in their live streaming service here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-htN-gIPOkP0.html. There are some other ones on the WebRTC Boston channel from Red5 and Comcast on this topic too.
What is the glass-to-glass latency of using WebRTC on an embedded device? E.g. in the case of a moving camera, what is the delay between what the camera currently sees and the frames being displayed? If a use case involves 30-60 fps, is the latency at least close to 100-150 ms?
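One way to reason about this is to budget each stage of the pipeline. Every figure below is an assumption for illustration (real numbers depend heavily on the hardware encoder, network path, and WebRTC's adaptive jitter buffer), but it shows how 100-150 ms is a plausible floor at 30 fps:

```python
# Rough glass-to-glass latency budget for WebRTC on an embedded device.
# All stage values are illustrative assumptions, not measurements.

def glass_to_glass_ms(fps: int = 30) -> dict[str, float]:
    frame_ms = 1000 / fps
    budget = {
        "sensor_capture": frame_ms,      # up to one frame interval
        "encode": frame_ms,              # hardware encoder, ~1 frame of delay
        "network_one_way": 30.0,         # assumed near-region path
        "jitter_buffer": 40.0,           # WebRTC sizes this adaptively
        "decode_render": frame_ms / 2,
    }
    budget["total"] = sum(budget.values())
    return budget

print(round(glass_to_glass_ms(30)["total"]))  # ~153 ms under these assumptions
print(round(glass_to_glass_ms(60)["total"]))  # higher fps shrinks the frame-bound terms
```

Note the network and jitter-buffer terms dominate once frame rates rise, so going from 30 to 60 fps helps but does not halve the total.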
"Web R tuh C" "Web artisy" -- the middle letter is T and it sounds like "tee" and should rhyme with the last letter "C" which sounds like "see". Web Are Tee See
What is the expected behavior if an FEC packet is lost? Also, consider a hypothetical scenario where an intermediate node deems the FEC packet invalid and drops it every time it is received. Would QUIC be forced to fall back?
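Not an authoritative answer, but with the common single-parity XOR scheme (the idea behind flexfec-style recovery), losing the FEC packet itself is benign: the media packets stand on their own, and the receiver simply has no repair data for that group, falling back to retransmission or loss concealment just as if FEC were disabled. A minimal sketch of why, using hypothetical fixed-length packets:

```python
# Sketch of single-parity XOR FEC: one FEC packet is the XOR of a group
# of media packets, so any ONE loss in the group is recoverable.
# Losing the FEC packet costs nothing but the repair capability.

def make_fec(packets: list[bytes]) -> bytes:
    """XOR all packets together (equal lengths assumed for simplicity)."""
    fec = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            fec[i] ^= b
    return bytes(fec)

def recover(received: list[bytes], fec: bytes) -> bytes:
    """Rebuild the one missing packet: XOR the survivors with the FEC packet."""
    return make_fec(received + [fec])

group = [b"pkt1", b"pkt2", b"pkt3"]
fec = make_fec(group)

# Case 1: media packet b"pkt2" lost in transit -> recoverable:
print(recover([group[0], group[2]], fec))  # b'pkt2'
# Case 2: the FEC packet itself is lost -> media still intact, no repair data.
```

A middlebox persistently dropping the FEC packets would therefore degrade the stream to plain unprotected delivery rather than breaking it outright.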