Unfortunately they don't deal with small channels, and some channels are extremely biased. I do not trust anyone who does this as a full-time job or does rebranding. Thankfully I have a local store that will let me borrow gear to review.
Does shooting 15-sec subs not clip the blacks? Also, I mostly shoot gain 0 and longer subs (whatever time gives a perfect histogram of the ROI). Every time I bump up the gain, I feel I lose too much dynamic range.
Good points! Not shooting long enough can make it very hard to bring out the faint stuff, and you can end up clipping the darks! Your strategy of keeping gain low and shooting longer is a good one, I think. Of course, every setup is going to be a bit different, so some people might want shorter exposures depending on their aperture, focal length, mount, etc. :)
Thanks for the informative video. How would you gather the mean ADU value in PixInsight for a selected area? I know how to collect the mean value for the entire frame, but I need help finding how to do it for a designated area. Dynamic Crop, maybe?
Much to learn, good job, lots of patience. Quick question: what do the stars look like in PHD2 with your OAG? I'm trying to get mine to work with the right spacing...
@@deepskydetail I followed ZWO's recommended OAG spacers, but my stars are bananas or seagulls, so PHD2 can't lock on a star. I gave up and put it all back in the box this spring, but I really want to get rid of the top guider. My telescope is an SF 90mm f/5.5, 500mm FL. Any hints would be welcome, or even a picture of your image train from the camera to the focuser tube would help. I can give you my email.
You can email me at deepskydetail at gmail. It could be that the prism is too far on the edge of the imaging circle. The OAG should allow you to adjust how far the prism goes into the tube. I would try putting the prism closer to the center of the tube and seeing how that goes. Also, changing the PHD2 star detection settings might help too (the minimum FWHM and HFD values). They may be too large. A guide scope's stars are usually bigger than an OAG's. It won't lock on because it wants a bigger star and is afraid it's locking onto a hot pixel.
Here it is :) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-v6I7nTu6VIc.html&lc=UgyyNb0yQCUuRLSyoUF4AaABAg I thought it shows up as a link in the video, but maybe whether it shows up depends on what device people view it on 🤷♂️
I just discovered your channel today, and looking at the play lists I see lots of videos I want to watch! As others have said below, I appreciate your scientific approach as well as inclusion of actual images.😊😊😊
I'm really enjoying your videos! When you are finished explaining color maybe you could do a video where you explain which end of space is "up" so I don't make the mistake of posting upside-down photos of deep-sky objects again. (kidding -- but I really did get negative comments about a photo of M31 that was rotated 180 degrees from a more popular view)
I know I'm just watching a short on my phone... but the gain seemed to make a big difference, at least visually. What's the difference between the SNR on the shots with the same gain?
To really see the benefit of really long exposures, you need dark skies and a target with dust or IFN. If you're using NB filters, then a target with some faint Ha or OIII will show why and when you want long exposures.
Hard to believe this topic is debated so heavily; it's extremely easy to see in final results that longer is always better. AP is just full of myths and legends… Thank you for putting in the work…
For a camera with HCG, is the gain at which HCG kicks in more important than unity gain? For your camera, it looks like the HCG mode trigger is at unity gain, but that isn't the case with all cameras (e.g., the ZWO ASI224). For example, if HCG kicks in at 60 and unity gain is 135, and the DR is a full stop higher at gain 60 compared to 135 (with lower read noise and higher full well depth), would this alter your conclusions about the effects of using lower than unity gain?
That's a good question! I'm not exactly sure, to be honest. I would say that if HCG is at 60 and you have more dynamic range, then the question becomes: how much less read noise is there at 135 than at 60? If the difference is really small, then I'd say go with 60. But you could run some tests to check!
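That tradeoff is easy to sketch numerically. A minimal Python sketch, using made-up electron counts and read-noise values (not measured from any real camera), shows why the read-noise difference matters less once the sky background dominates:

```python
import math

def sub_snr(signal_e, sky_e, read_noise_e):
    """Single-sub SNR for a target signal (electrons) on top of a sky
    background, using the standard shot-noise + read-noise model."""
    return signal_e / math.sqrt(signal_e + sky_e + read_noise_e**2)

# Hypothetical numbers: same photon counts at two gain settings with
# different read noise (e.g. below the HCG trigger vs. above it).
signal, sky = 50.0, 400.0
snr_low_gain  = sub_snr(signal, sky, read_noise_e=3.0)  # more DR, more read noise
snr_high_gain = sub_snr(signal, sky, read_noise_e=1.5)  # HCG: less read noise

print(f"low gain:  SNR = {snr_low_gain:.3f}")
print(f"high gain: SNR = {snr_high_gain:.3f}")
```

With these (invented) numbers the per-sub SNRs come out nearly identical, which is why the extra stop of dynamic range at the lower gain can be worth taking when the sky glow already swamps the read noise.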
Thank you very much for this app! I did not know about it. A good deal of effort went into it. To help with the good work, I'll allow myself to give you some feedback. I see that by default you are assuming a 16-bit sensor. My camera (an ASI533MM-PRO) is 14-bit, though, so I guess this should have an impact on the total exposure time for different sensors. I get insane exposure times from your app for the OIII filter (100+ hours of 5-minute subs) to reach a desired SNR of 30 (Lion Nebula, which has a good OIII signal). I actually shot this nebula and got a total exposure time of 2 hours and 5 minutes for this filter alone. Although more time (maybe 7 hours or so) would have given a much better result, I do not think that 100+ hours are necessary to get a much better OIII signal. Just imagine how many months someone would need to capture the OIII nebula close to the Andromeda galaxy. I should say, for completeness, that the data I am using as an example was shot from a Bortle 7 sky with the Optolong Ha (7.0 nm) and Optolong OIII (6.5 nm) filters. I used the same sub-exposure time for both filters: 300 s. Cheers and thank you so much for sharing!!!
Thanks for the feedback! The app should work with a 16 or 14 bit sensor. Do you mind sharing the numbers you put into the app? Also, was your image stretched or linear when you got the numbers (should be linear)?
@@deepskydetail Hey! Thanks for your prompt reply. Sure, the figures are as follows (for the Lion Nebula):
DSO+Sky Glow Signal: 3655
Sky Glow Background: 3620
Dark Signal: 1966
Bias Signal: 1909
Filter: OIII (Optolong, 6.5 nm)
Camera Gain (e-/ADU): 0.9
Camera's Read Noise (e-): 1.4
Desired SNR: 30
Exposure Duration (s): 300
The calculator gave me the following result: Number of Subs: 1428, Duration in Hours: 119. The hint, I believe, is the small amount of signal (3655-3620=35). Since I got these figures from PixInsight, I checked in Siril, but they are not "better": DSO+Sky Glow Signal: 3648, Sky Glow Background: 3616, signal ~32. All this was performed, of course, using individual subframes in the linear state (not stacked, not even slightly processed, just the raw data). So my question remains. Though I am aware that the OIII I was gathering was only slightly above the noise, I was able to pull out a good amount of OIII signal during processing, and with only 2 h 5 min total integration time with this filter. P.S. The values of a stacked, color-combined, linearly processed image (without stretching, blue channel only) are lower: DSO+SGS=289.122, SGB=280.812, a difference of only 8.31 (=signal?). But then again, I can see the whole nebula well resolved, with plenty of Ha signal in it and a fair amount of OIII signal. Hope this helps shed some light on this issue. Cheers!
It looks like your dark, bias and Camera Gain (e-/ADU) are the same as the default values in the app. Is that really the case? I suspect that your actual bias values should probably be higher, which would result in a lower total duration. The bias number is actually really important for getting accurate SNR estimates. Could you double check that? Thanks!
@@deepskydetail You were right. I had left the Dark and Bias signals that are loaded by default in my previous test. Here are the values for the new test:
DSO+SGS = 3648
SGB = 3619
Dark Signal = 3196
Bias Signal = 3194
Filter = OIII
Camera Gain (e-/ADU) = 0.9
Read Noise (electrons) = 1.4
Desired SNR = 30
Exposure Duration (s) = 300
Number of Subs = 543
Hours = 45.25
Sounds more reasonable, but still a lot. Now the question is: can an SNR of 30 in OIII be achieved for all DSOs? I don't think so. Just as a sanity check, let's review the H-alpha data. The only parameters changed were the following values (the darks and bias used were the same):
DSO+SGS = 3620
SGB = 3560
...
Filter = Ha
...
Number of Subs = 119
Hours = 9.92
So yes, this sounds more realistic. But the OIII, though, 45 hours... man! Still seems like too much. Cheers!
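For what it's worth, the 543-sub figure can be reproduced from those inputs with the standard CCD SNR equation (this is a guess at the app's internals, not its actual code):

```python
import math

# Rough sanity check of the sub-count estimate, assuming the app uses the
# standard CCD SNR equation: SNR = S / sqrt(S + B + RN^2), all in electrons.
e_per_adu  = 0.9
read_noise = 1.4                      # electrons
target_snr = 30.0

signal_e = (3648 - 3619) * e_per_adu  # DSO signal above sky glow
noise_e  = (3619 - 3194) * e_per_adu  # sky glow + dark current above bias

snr_per_sub = signal_e / math.sqrt(signal_e + noise_e + read_noise**2)
n_subs = math.ceil((target_snr / snr_per_sub) ** 2)

print(n_subs, "subs =", n_subs * 300 / 3600, "hours")  # 543 subs = 45.25 hours
```

It lands on exactly 543 subs (45.25 hours), so the app appears to be applying this model to the corrected bias/dark values.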
Field curvature is similar to coma in that off-axis light is focused before it gets to the sensor, as if you'd need a curved sensor for the edges to stay in focus. What you showed instead is spherical aberration, where light that went through the edge of the lens is focused before it gets to the sensor.
Yes, you're right! Thanks for pointing that out. The visual is misleading. For the field curvature demonstration, I should have shown incoming rays that are non-parallel to the axis being focused.
Thanks! Great stuff: I like numbers and I like comparisons. Subscribed and I will be watching many more of your videos. Thanks for all the work you put into this one!
Hey friend, it's me again. What would you say if I told you I have the original last version of that spreadsheet 100% working, plus all of Craig's important articles from Stark Labs here? Just say hi.
I could tell this right after one frame is captured. If nothing is seen in the initial frame, but there should be something there, then the integration for this filter is 16+ hours.
@@deepskydetail Thanks, handy tool for sure. My input numbers for my Player One Poseidon-M and subs (measured with PixInsight's Statistics process) were just so much lower that I wasn't 100% sure I was using it properly (Ha filter, DSO signal: 267, skyglow: 151, dark signal: 161, bias: 159). But after putting everything in, along with my measured sensor info from a SharpCap analysis, it does seem to add up fairly close to the SNRs I end up with from the number of frames I have.
This is terrific! Thanks so much for all the hard work you must have put into this. If I'm understanding correctly, I will need to have taken some subframes of a target prior to using this calculator, correct? I wonder (but I have no idea how!) if it's possible to make the app entirely predictive, i.e., predictions about sub exposures before ever imaging a target, by using an actual skyglow value for a particular site as measured by an SQM meter or a light pollution map. Thank you once again for this calculator! Much appreciated.
Thank you! I wonder too if it could be automated. Thinking about it though it would be a pretty big task. You'd need to know aperture size, f-ratio, pixel size, brightness of the target... If I had enough data though with those variables, maybe I could do something like a statistical model to get a rough estimate! :)
Tried out the app, but something doesn't make sense. Just to try it out, I used your default values with a 15s exposure. The result was 1429 subs for 5.95 hours. Then I changed it to 30s and got 1429 subs (the same) for 11.91 hours! All other inputs remained the same.
If you change the exposure time to 30s, the app will think your sub exposures are 30s long and will adjust things accordingly (5.95 x 2 = 11.9). To test a different sub-exposure time, you need to either input data from another sub frame that is 30 seconds long and use a comma (15, 30 in the sub exposure length box), or you can try the theoretical estimate using the slider called "Mult. exposure by __ times:"
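That behavior can be sketched with the standard CCD SNR equation and made-up signal/background values (my numbers, not the app's defaults): relabeling the same measurements just multiplies the total time, while a genuinely longer sub collects more photons and needs fewer subs.

```python
import math

def n_subs(signal_e, noise_e, read_noise, target_snr):
    """Subs needed to reach target_snr, standard shot-noise + read-noise model."""
    snr = signal_e / math.sqrt(signal_e + noise_e + read_noise**2)
    return math.ceil((target_snr / snr) ** 2)

# Same per-sub measurements, only the exposure label changes:
# the sub count stays put, so the total time simply doubles.
n15 = n_subs(10.0, 100.0, 1.4, 30.0)
print(f"{n15} subs: {n15 * 15 / 3600:.2f} h at 15 s, {n15 * 30 / 3600:.2f} h labelled 30 s")

# A genuine 30 s sub collects ~2x the signal and sky, so fewer subs are needed.
n30 = n_subs(20.0, 200.0, 1.4, 30.0)
print(f"{n30} subs of 30 s = {n30 * 30 / 3600:.2f} h")
```

With these sky-limited toy numbers the real 30 s subs roughly halve the sub count while the total integration time barely changes, which is the whole point of measuring an actual longer sub rather than relabeling a short one.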
What if you have something like a SeeStar S50, which doesn’t take some of those types of calibration frames? Where would we find this kind of information?
Actually, before you revealed it, I immediately noticed that the first of four images was the best, and I was convinced that must be the one with the longest subexp and gain. And I was right.
Very cool! I wasn't sure how things would look after YT compression, and I thought the three best all looked really similar. But it's good you could see the difference!
Just discovered your channel from a link to your Github repository in a PentaxForums post. Thanks for all you do! I always like this quote about statistics: "Statistics are like bikinis: What they show you is revealing. What they hide is VITAL." LOL
The data I've seen for my setup is that I need 1 hour 6 minutes of short (8-second) exposures to match 1 hour of long (3-minute) exposures. To me that is inconsequential. It's also a good thing, since it makes my astrophotography possible: I'm using a GoTo Dob with no guiding.
Nice work, and something I've looked into for over 20 years now. I have done sub-exposures up to one hour using a Sky90 refractor at f#4.5 with an M26C OSC CCD camera. The shortest sub-exposures were typically 5 minutes. The result? The image quality continued to improve with increasing sub-exposure time; I did not hit a "flattening off" region where an increase in sub-exposure time did not give an improvement in image quality. I have worked over the 20 years with Noel Carboni, an image processing expert based in Florida, and together we came up with a very simple formula for getting the best image quality: get as many subs at the longest sub-exposure you can take from your site. It's that simple. I'm not talking about imaging from sites with terrible light pollution - you have to use different approaches there. My site is about Bortle 4, and if you are lucky enough to work at a site with a lower Bortle number than that, then you can really go to town! I should also point out that f# is a critical parameter in considering your sub-exposure time. I also run a Hyperstar rig at f#2 and an ASI 2600MC Pro CMOS camera. For a start, the camera won't take subs more than half an hour long, so I can't do 1-hour sub-exposures on the Hyperstar - BUT - as the Hyperstar is 5 times faster than the Sky90, I only need to take a 12-minute sub on the Hyperstar to get the same image quality as a 1-hour sub on the Sky90! So clearly, as you can get 5x as many subs in the SAME total imaging time on the Hyperstar as on the Sky90, it is a no-brainer that you will end up with a higher quality image (for the same total imaging time) in a low f# optical system. So what I'm saying is: you have done a very nice experiment there, but there's a whole new can of worms to open up if you consider f# as well :)
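The "5 times faster" figure checks out from the f-numbers alone. A quick sanity check, ignoring pixel-size and QE differences between the two cameras:

```python
# Photon flux per unit sensor area scales as (f1/f2)^2 for the same target,
# so an f/2 system vs. an f/4.5 system:
speed_factor = (4.5 / 2.0) ** 2
print(f"speed factor: {speed_factor}")        # 5.0625, i.e. ~5x faster

# Minutes at f/2 needed to match a 1-hour (60-minute) sub at f/4.5:
equiv_sub = 60 / speed_factor
print(f"equivalent sub: {equiv_sub:.1f} min")  # ~11.9 minutes
```

That lines up with the ~12-minute Hyperstar sub quoted in the comment, though in practice differing cameras, pixel scales, and QE curves will shift the exact number.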
This was very informative! Does it make sense to keep sub exposure length to below the point where “pixel” saturation starts occurring or is it okay to saturate some pixels?
You generally want to keep the sub-exposure length low enough so that your stars don't start saturating. I think it's probably ok for some pixels to be saturated sometimes, and hot pixels will always be saturated. But generally I'd keep sub length low enough so they don't saturate. Of course, you could blend in different sub-exposure lengths. Some people do it with good effect to help get fainter detail without blowing out the stars.
Thanks. That makes sense. I noticed recently on M57 with the EdgeHD 8 at F10 and the 2600 MC DUO at gain 100, saturation started happening at about 25 seconds to my surprise.
I don't think most people can achieve even 30 SNR. I tried the calculator in SkyTools and with my f2.8 scope from a dark sky, it is almost impossible to exceed 25 in one night for most popular targets
@@deepskydetail Don't know, I never learned the math. But the guy makes lots of videos explaining his software; take a look at the exposure calculator video.
@deepskydetail give me a target, scope and camera combination, sky magnitude, temperature and altitude and I will run the skytools calculator and send you a screen shot
I've done tons of 15-second exposures using a noisy DSLR (Canon T2i). Yes, I got an image, but every image was faint, with the full amount of camera noise. So I figured doing 3-minute exposures should give me way more signal, but still only one image's worth of noise - and not even that much noise, because over the course of 3 minutes a lot of pixels that would have been noise were hit by photons, giving me less noise. I can even push the contrast of a single 3-minute image and get it to almost look like the desired result. That is impossible with 15-second images (too much noise, too little signal).
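That intuition matches the math. A toy Python example with made-up photon rates and read noise (not measured from a T2i): a 12x longer sub gains 12x the signal but only ~sqrt(12)x the photon noise, while the fixed per-sub read noise gets swamped.

```python
import math

# Hypothetical per-second rates and a DSLR-ish read noise (invented numbers).
rate_sig, rate_sky, read_noise = 0.5, 2.0, 8.0  # e-/s, e-/s, e-

def snr(t):
    """Single-sub SNR after t seconds: shot noise + fixed read noise."""
    sig = rate_sig * t
    return sig / math.sqrt(sig + rate_sky * t + read_noise**2)

print(f" 15 s sub: SNR = {snr(15):.2f}")
print(f"180 s sub: SNR = {snr(180):.2f}")
```

With these numbers the single 180 s sub comes out more than 5x better than the 15 s sub, worse than the ideal sqrt(12) = 3.46x scaling would suggest - precisely because the read noise dominates the short sub, which is the commenter's point.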
Many of us are less interested in the "perfect" picture than in the "best reasonable" one that lets us go image other stuff - a law of diminishing returns. So while total exposure time might be king, a rule that could establish "getting the job done" would be cool. I wonder at what point an image has so much noise that processing artefacts are inevitable. I am always amazed at what software tools can do for short (<60 min total) exposures!
I like your data-driven approach, even though my statistics skills are abysmal. In the field, I usually use 180-300 seconds for DSOs. This helps mitigate the flub factor of losing 10 minutes of integration time after bumping into your tripod. The other reason is that I really don't know how much well depth I use up with a given exposure, so I try to be conservative. Thanks for your video. It got me thinking!
@chrislee8886 Great comment! I wonder that too! I'm sure there are calculators out there that can help out with that. Tim's presentation gets into a lot of it. The "best reasonable" seems like a great video idea!
@ekalbkr thanks for the comment! I use around that range too. It feels really bad to throw out a long sub, doesn't it? It just goes to show you that there is so much that goes into finding the perfect sub length!