This channel is dedicated to the intersection of Space, Data Viz, Data Analytics, and machine learning topics. My current main effort is a computer vision algorithm to detect craters on Mars. It's my first foray into computer vision, so it's definitely taking a while. Follow along as I figure out my next steps!
I hop onto YouTube Studio about every two weeks as I prepare uploads, so check for responses to your comments during those times. I'm also open to feedback on what you want to see in future videos. Please engage in the comments!
Where can we get the image set? You should flip the RGB channels before grayscaling. Also, do edge detection first and then the oval approximation. Otherwise, interesting video.
I got the images from www.kaggle.com/datasets/muratkokludataset/rice-image-dataset. Can you clarify what you mean by flipping the RGB before converting to grayscale? Also, the code performs edge detection before building an oval approximation (line 32 is edge detection: github.com/davidmvermillion/riceorientation/blob/main/rice_orientation_classifier.py). I may not have made that clear in the video. Was there a different approach you had in mind?
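For anyone following along, the edge-detection-then-ellipse order discussed here matches the usual scikit-image pattern. Below is a hedged sketch on a synthetic grain-like blob, not the repo's actual code; it assumes `skimage.feature.canny` and `skimage.measure.EllipseModel` from scikit-image:

```python
import numpy as np
from skimage import feature, measure

# Synthetic binary image with one filled ellipse standing in for a rice grain.
rr, cc = np.mgrid[0:100, 0:100]
img = (((rr - 50) / 30) ** 2 + ((cc - 50) / 15) ** 2 <= 1).astype(float)

# 1) Edge detection on the grayscale image.
edges = feature.canny(img)

# 2) Fit an ellipse to the (x, y) coordinates of the edge pixels.
ys, xs = np.nonzero(edges)
ellipse = measure.EllipseModel()
ellipse.estimate(np.column_stack([xs, ys]))
xc, yc, a, b, theta = ellipse.params  # center, semi-axes, rotation
```

The fitted center should land near (50, 50) for this synthetic blob, and `theta` is the orientation angle the video's classifier is after.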
@spacedataguy Ah, OK, so I looked at the code and you are not using OpenCV. OpenCV wants you to take your RGB image, convert it to BGR, and then do your image processing. I'm not sure whether skimage wants you to do that as well; I haven't used it and don't know the documentation. I'm going to make my own implementation of this, since it seems like a fun thing to do, but with OpenCV instead.
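For reference, a minimal sketch of the channel-order point being made here (NumPy only; in OpenCV itself, `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` performs the same swap):

```python
import numpy as np

# OpenCV's imread returns pixels in BGR order; most other libraries
# (including skimage.io) use RGB. A pure-red pixel as OpenCV stores it:
img_bgr = np.zeros((1, 1, 3), dtype=np.uint8)
img_bgr[0, 0] = (0, 0, 255)  # B=0, G=0, R=255

# Reversing the channel axis converts BGR to RGB.
img_rgb = img_bgr[..., ::-1]
```

This is why mixing OpenCV arrays with RGB-order libraries without a swap produces images with red and blue exchanged.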
@spacedataguy YouTube deleted my comment :C but I tried to post my GitHub; it's aquabouncer, and the project is image processing. I got a better fit but didn't really understand the point of the direction thing. I was mostly interested in the image set, playing with it, and drawing an ellipse around the rice; I did the direction as "extra credit". I have used OpenCV, tkinter, and pytesseract to make a manga translation tool. Never finished it, maybe 60% done. I got it to detect speech bubbles in a Korean webtoon, extract the text, translate it, and place the translated text back on the bubble in English. Most of the remaining work is just interface stuff with tkinter and better translation tools. But you can give it an image and it can find the bubble in most cases; if it can't, you just tweak the settings and it will.
Neat! Yeah, the direction piece was the main reason I created this project. Hypothetically, grain orientation *could* influence processing speed through steps like a sieve to remove chaff. That's the idea. I chose an ellipse because it's the simplest shape close to a rice grain's edge outline. Ellipse orientation is also easy to calculate with simple trigonometry based on the major and minor axes.
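The trigonometry is just an arctangent once you know where the major axis points. A hypothetical helper (not the project's code) to illustrate:

```python
import math

def ellipse_orientation_deg(x0, y0, x1, y1):
    """Orientation of an ellipse's major axis, given its two endpoints,
    in degrees measured from the +x axis. Hypothetical illustration only."""
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

# A grain whose major axis runs from (0, 0) to (1, 1) is tilted 45 degrees.
angle = ellipse_orientation_deg(0, 0, 1, 1)
```

`atan2` handles vertical axes and sign conventions automatically, which is why it's preferred over a raw `tan` ratio here.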
For me, one of the biggest differences is that an RMarkdown file rendered from R has access to all variables of the calling R session, whereas Quarto does not, so you have to either do all the calculations in Quarto or save the outputs of the calculations to .rds files and load them in Quarto.
That's an interesting point. Usually, static reports are written to ensure repeatability. What would be a sample use case where you would want to access session variables not explicitly stated in the document?
Could you do a video on twinx? I have a CSV file, and I'd like to add its columns to a single plot on the fly, i.e. the columns all share the same x-axis but have different scales/ranges. When I try to programmatically generate the axes, I get an error because I haven't declared an array of axes to hold the return of twinx():

import matplotlib.pyplot as plt
# Single canvas and axes in which all the data will be placed
fig, ax = plt.subplots(1, 1)
# Generate additional y-axes and save into an array for easy access
for i in range(5):
    axIdx[i] = ax.twinx()  # ERROR

How can I create an array of axes so that I can easily access and configure a particular y-series (moving its legend, setting its color, etc.)?
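The undeclared-array error in the question above can be avoided by collecting the `twinx()` return values in a list. A minimal sketch, assuming standard matplotlib (not an answer from the video):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)

# A list comprehension gathers the Axes returned by twinx(),
# so there is no need to pre-declare an array.
extra_axes = [ax.twinx() for _ in range(5)]

# Each element is a full Axes object and can be styled individually,
# e.g. coloring one secondary axis's ticks.
extra_axes[0].tick_params(colors="tab:red")
```

Python lists grow dynamically, so this sidesteps the "declare an array first" pattern entirely; `extra_axes[i]` then addresses each secondary y-axis for legends, colors, and limits.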
Thank you for the very quick overview. I did not know about Coolors. Personally, I try not to overcomplicate things or get lost in colors, so in 9 out of 10 cases I choose scale_color_colorblind() from the ggthemes package for a discrete palette. Not beautiful, but very usable. Personal side note: I see how and why you try to keep your videos short and to the point. For my personal taste they could go a little deeper and take some more time, but that is just one voice.
Thank you for the short, to-the-point video. Obviously we all have varying demands here, and things could be said about the details. For many, good interaction with figure captions and Zotero, or the availability of good templates, could be more important than a mix of languages. When I decided to make the transition from Rmd to Quarto, I encountered problems with my RStudio installation and Quarto which thwarted my excitement. This has been fixed now, so I transitioned halfway, mostly using Quarto as if it were RMarkdown. Maybe your video will be the opportunity for me to dive deeper into the differences and make more use of the advantages that come with Quarto. Have you got any inclination to make a video about good templates?
Good point about Zotero. The RStudio integration is excellent. A bit challenging in VS Code. I haven't looked into templates. Any specific ones you have in mind?
@spacedataguy Not really. I found my Quarto output not numbering figures and tables - obviously, that can be switched on - and I have seen that Frank Harrell has his template downloadable on his page. So I wondered: a) is there a world of fantastic templates that I am not aware of? b) how do I make Harrell's template my standard in RStudio? and c) is there something comparable to tufte for Quarto? That kind of thing.
@@spacedataguy I liked it a lot more. Seeing it from start to end is nice. I think there were definitely some bits that could have been cut to make the run time more manageable, but that all comes with the territory. Balancing the two is the hard part of YouTubing.
Man I wish I had this in college. Regardless, would be cool to see you actually writing the code and debugging it as you go, since you might encounter similar issues to your viewers. Unless of course, you write bug-less code… in which case I salute your wizardly ways.
My code is always the most flawlessly perfect thing created in the history of humanity the first time I write it. In all seriousness, the debugging process takes a while and would likely require multiple recording sessions. It is an interesting idea!