An overview of how to use read and write streams in Node when you want to read in a very large CSV file and process it quickly.
🔔 Newsletter: eepurl.com/hnderP
💬 Discord: / discord
📁 GitHub: github.com/codyseibert/youtube
Hello Sir, I have a question; I'd appreciate it if you could address my doubt. Do the chunks arrive in sequence? In other words, does the read stream wait for the current chunk's processing to finish before it emits the next "data" event? You have a potentially long-running iteration there, and more importantly, the chunks are interdependent: the last if statement assigns the final incomplete row to the "unprocessed" variable for the next chunk to use.
There is a core module called readline that works with streams to read one line at a time, so there's no need to write that logic yourself. Otherwise, great explanation and video. Cheers
Also, no sub: your variable is named overWatermark, yet you await the event emitter when it is *not* over the watermark. For that to make sense, the function must return true when the buffer is not over the high-water mark, so the variable should really be named underWatermark. As written, the name says the opposite of what the value means, and the explanation just rambles around it. Bye.
Hi, I really enjoy your videos. You're one of the most productive publishers on YouTube. But in this video the name "overWatermark" is highly misleading; I struggled to understand the whole structure only because of that naming. It would be better to use "canWrite" instead: canWrite is true when we can write and false when the buffer is full, so we stop when !canWrite.

const canWrite = writeStream.write(`${i}, `);
if (!canWrite) {
  await new Promise((resolve) => writeStream.once('drain', resolve));
}

I might be wrong, of course. Other than that, great job again. I'll continue watching your content 🙂