Questions for V20.11.26
« on: November 18, 2020, 12:55:44 PM »
I noticed that in the past few weeks a new QHYCCD SDK has been released (V20.11.26).  I believe you may have mislabelled the release notes for this new SDK: the first release notes listed appear to be for V20.11.26, but they are labelled as if they are for V20.08.26.  You may want to correct this.

One line of these release notes states "294Pro-M/C/485 support and firmware update".  Can you please explain what this means?  I own a QHY294C...does this SDK include a firmware update for my QHY294C?  Will this upgrade my QHY294C to a QHY294C-Pro?  What improvements are contained in this firmware update?

Thank you



Re: Questions for V20.11.26
« Reply #1 on: November 18, 2020, 07:33:56 PM »
Thank you very much for the typo correction!

About the 294C and the 294M/C-Pro:
Basically, the "Pro" designation is for the mono sensor. It could apply to a color sensor too, but not for now. The Pro model has a pixel-expanded mode switch, which is pretty much never used on a color camera (it would produce a weird Bayer filter layout).
That could mislead other people; maybe I should fix that too.

Anyway, thanks again for the typo report.

Re: Questions for V20.11.26
« Reply #2 on: November 19, 2020, 08:49:46 AM »
Thank you for your reply,

To clarify...does this SDK include any firmware update for the QHY294C?

I have been following the QHY294M/C-Pro posts on this forum, and I thank you for confirming that the pixel-expanded mode switch is of little use for the color Pro version of this camera, because it would require custom debayer software that does not exist at present.

My research into my QHY294C camera tells me that the High Dynamic Range (HDR) mode for this color sensor could be very useful.  Are there any plans to investigate the HDR mode for the QHY294C-Pro or QHY294C?

« Last Edit: November 19, 2020, 08:51:29 AM by pmwolsley »


Re: Questions for V20.11.26
« Reply #3 on: November 19, 2020, 08:06:14 PM »
There are a few changes in the 294C firmware, but they are mostly for troubleshooting, so you can ignore them.

About the gain, you can check this page.
The 294C uses HDR when you set the gain higher than 1600.
The 294Pro is different and will be updated; I think that page will be updated as well (I believe that in normal mode it is the same, and that pixel-expanded mode uses 200 instead of 1600).
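To make the thresholds above concrete, here is a tiny sketch. The numbers (1600 for the 294C, 200 in the 294Pro's pixel-expanded mode) are the forum-reported values from this post, not official QHYCCD documentation, and `readout_mode` is a hypothetical helper, not an SDK function:

```python
# Hedged sketch: which readout mode a 294-series camera would use for a
# given gain setting, assuming the thresholds stated in this post.
def readout_mode(gain, pixel_expanded=False):
    # 294C normal mode switches to HDR above gain 1600;
    # the 294Pro's pixel-expanded mode reportedly switches above 200.
    threshold = 200 if pixel_expanded else 1600
    return "HDR" if gain > threshold else "normal"

print(readout_mode(1800))        # gain above 1600 -> HDR
print(readout_mode(100, True))   # gain below 200 in expanded mode -> normal
```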

Re: Questions for V20.11.26
« Reply #4 on: November 20, 2020, 04:04:36 PM »
I think we are talking about different features of the 294C... 

When you discussed gain, you were describing the LGC/HGC switch point, which occurs at gain = 1600.

I have been doing some research on the IMX294CJK sensor, which is the imaging sensor in the 294C.  It has the LGC/HGC operating modes, which QHYCCD has made available to users.  It also has a High Dynamic Range (HDR) mode which is useful for both the color and mono versions of the 294.  This sensor has the ability to take two images simultaneously.  Each image is at the full 4164*2796 size, and I suspect each image has 13-bit resolution.  Sony has designed the sensor to have two virtual cameras that can be independently controlled via software.

The High Dynamic Range capability is achieved because these two virtual cameras can take simultaneous images with different exposure lengths.  As an example for astrophotography, this means that the camera can take one 300-second image while simultaneously taking a 60-second image.  These two images can then be combined to create a High Dynamic Range image.  This would be like having a full 16-bit camera.  It would be my wish that QHYCCD look into supporting this HDR mode for the 294C.



Re: Questions for V20.11.26
« Reply #5 on: November 22, 2020, 09:11:36 PM »
Thanks for your correction.
I checked the manual; the sensor does support HDR, and the SDK can handle it too, with some firmware and SDK updates.
I think the main thing to consider is which software the SDK should work with and what kind of image data the SDK should produce, because HDR mode is not very common in astrophotography scenarios.

Re: Questions for V20.11.26
« Reply #6 on: November 24, 2020, 03:38:13 PM »
The main issue with the HDR mode of the 294C sensor is that it creates two images.  Available image acquisition software expects only one photo, so there are two options.

a) Create a double-wide image that appears to be two images oriented side by side.  These two images would be the long-exposure and short-exposure images.  This would be processed by the image acquisition software as one 8328*2796-pixel RGGB (RAW) image.  This would give the user the ability to choose the ratio between the long and short exposure times.  A separate program would be used to separate the 8328*2796 image into two 4164*2796 images.  The user could then calibrate and combine the acquired images from an imaging session into stacked long and short exposures.  These two stacked images could be digitally developed separately and finally combined into one HDR image.
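The "separate program" in option a) could be sketched in a few lines of numpy.  This is a minimal sketch: the side-by-side layout with the long exposure on the left is my assumption, not a QHY specification, and `split_hdr_frame` is a hypothetical helper.

```python
import numpy as np

# Split the double-wide 8328*2796 RAW frame into its long- and
# short-exposure halves (layout assumed: long on the left).
def split_hdr_frame(frame):
    h, w = frame.shape
    assert w % 2 == 0, "expected a double-wide frame"
    long_exp = frame[:, : w // 2]
    short_exp = frame[:, w // 2 :]
    return long_exp, short_exp

# Example with a dummy frame at the dimensions stated above:
frame = np.zeros((2796, 8328), dtype=np.uint16)
long_exp, short_exp = split_hdr_frame(frame)
print(long_exp.shape, short_exp.shape)  # (2796, 4164) (2796, 4164)
```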

b) The user would specify the long exposure, and the QHY firmware would set the short exposure to 1/8th of the long exposure.  Example: if the user specified a long exposure of 8 minutes, the QHY firmware would use a short exposure of one minute.  Once the exposure was complete, the QHY firmware would combine the two images into one HDR image as indicated in the attached graphic.  The least significant 8 bits of the long exposure would be mapped to the least significant 8 bits of the HDR image.  The most significant 8 bits of the short exposure would be mapped to the most significant 8 bits of the HDR image.  Existing image acquisition programs would accept the HDR image as a 4164*2796 16-bit image.  I have arbitrarily chosen to take 8 bits from one image and 8 bits from the other image.  It may turn out that taking fewer bits from one image and more bits from the other improves the image quality.  Perhaps this could be adjustable by the user.
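The bit-mapping in option b) can be made concrete with a small sketch (a later post in this thread reports that this simple mapping did not work out in practice, so this is only an illustration of the proposal; `combine_bits` is a hypothetical helper):

```python
import numpy as np

# Option b) as described: low 8 bits of the long exposure become the low
# 8 bits of the HDR pixel; high 8 bits of the short exposure become the
# high 8 bits.
def combine_bits(long_exp, short_exp):
    low = long_exp.astype(np.uint16) & 0x00FF    # LS 8 bits of long
    high = short_exp.astype(np.uint16) & 0xFF00  # MS 8 bits of short
    return high | low

long_exp = np.array([[0x1234]], dtype=np.uint16)
short_exp = np.array([[0xABCD]], dtype=np.uint16)
print(hex(combine_bits(long_exp, short_exp)[0, 0]))  # 0xab34
```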

I am not a High Dynamic Range imaging expert. I just think that this HDR mode could be very useful for astrophotography. My vote is to implement approach b)

« Last Edit: November 24, 2020, 03:44:55 PM by pmwolsley »


Re: Questions for V20.11.26
« Reply #7 on: December 01, 2020, 02:50:07 AM »
After some discussion, we believe that we can support this in the SDK, but it could be a long-term consideration.

Re: Questions for V20.11.26
« Reply #8 on: December 01, 2020, 01:50:40 PM »
Thank you for replying,
It is good news that QHY does have the ability to access the HDR mode of this camera.  I have been experimenting with this concept using two LIGHT frames of the Orion Nebula, one taken at a 200-second exposure and the other at a 30-second exposure.  In my previous post I mentioned two possible methods for creating an HDR image.  Solution b), which simply mapped bits from the two exposures, does not work.  I have been able to create a 16-bit HDR LIGHT frame from these two LIGHT frames, but it involves more mathematics than is reasonable to implement in the camera firmware.

I would suggest that solution a) is the best approach, because it allows more sophisticated software to be used, which either QHY or a 3rd party could develop.

I also had an idea that may be very useful.  If the camera uses the same exposure length for both HDR images, a unique situation occurs.  Each image is created by totally different pixels, but they share the same optical path.  It is as if the user has two identical astrophotography telescope/camera systems that are simultaneously taking photos.  A 3-hour imaging session would yield 6 hours of photos.

This unique situation also occurs with the QHY294C Pro camera operating in its "unlocked" mode.  When the 294C Pro camera takes a photo in "unlocked" mode, this photo contains 4 simultaneous photos.  There is no software available to parse out these four images, but this would be straightforward to do.  It would be possible for this software to offer 3 different formats:
1 image   14-bit A/D  4164*2796
2 images  13-bit A/D  4164*2796
4 images  12-bit A/D  4164*2796

Depending on the user's needs, a 3-hour imaging session could yield 3 hrs, 6 hrs or 12 hrs of images.  All of these choices would be available after the imaging session is completed.  This situation only arises because of the QUAD Bayer pixel structure: each pixel has its own unique noise content.
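The parsing step described above could be sketched as follows.  This is a hedged numpy sketch: in a QUAD Bayer layout each 2x2 cell shares one color, so taking one pixel per cell should yield four half-resolution sub-images, each with an ordinary Bayer pattern.  The exact pixel phasing of the real sensor may differ, and `split_quad_bayer` is a hypothetical helper.

```python
import numpy as np

# Split an "unlocked" full-resolution QUAD Bayer frame into four
# half-resolution sub-images by taking one pixel from each 2x2 cell.
def split_quad_bayer(frame):
    return [frame[r::2, c::2] for r in (0, 1) for c in (0, 1)]

# Small demo frame standing in for the full-array readout:
frame = np.arange(8 * 8, dtype=np.uint16).reshape(8, 8)
subs = split_quad_bayer(frame)
print([s.shape for s in subs])  # four (4, 4) sub-images
```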


P.S. The 294M Pro camera in "unlocked" mode could also use this same 3rd-party software to achieve the same 3 hrs, 6 hrs or 12 hrs of monochrome photos (4164*2796), even though it does not have a QUAD Bayer pixel structure.
« Last Edit: December 01, 2020, 02:04:10 PM by pmwolsley »

Re: Questions for V20.11.26
« Reply #9 on: December 02, 2020, 01:50:08 PM »
I was tempted to delete my previous post...My Bad!

I did the math behind my "unlocked QUAD bayer pixel potentially yielding longer total integration times" scheme and was able to prove this idea has no merit.

Description              Mean   SD     SNR
8340*5644 Full Array     1000   29.16  34.30
Full Array binned 2x2    4000   58.31  68.60
HDR Single Image         2000   41.23  48.50
HDR Two Images Combined  4000   58.31  68.60

I created an experiment where I "invented" a BIAS frame for the "unlocked" 294C camera that had a mean of 1000 and a standard deviation of 29.16.  I decided that if there was any benefit, it would show up as a better signal-to-noise ratio (SNR).  The main benefit of increasing the total integration time is that it increases the SNR.

The Full Array binned 2x2 results are equivalent to the statistics obtained using a "locked" 294C.

The HDR Single Image results show that the mean value is halved.  The SD is lower, but the resulting SNR is also lower.  This is because the effective pixel area is 1/2 that of the Full Array binned 2x2 result.

The HDR Two Images Combined results are identical to the Full Array binned 2x2 results.  This means that when these two HDR images are stacked, they don't improve the SNR any more than the Full Array binned 2x2 results, so we are no further ahead.

I didn't bother to determine the values for creating 4 images...the math seems obvious.
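The table above can also be checked numerically.  Here is a small simulation, assuming independent Gaussian pixel noise with the invented mean/SD from this post (not real 294C measurements):

```python
import numpy as np

# Simulate a bias-like frame: per-pixel mean 1000, SD 29.16.
rng = np.random.default_rng(0)
frame = rng.normal(1000, 29.16, size=(1000, 1000))

# 2x2 binning: sum each 2x2 cell -> mean ~4000, SD ~58.31 (= 29.16 * 2)
binned = (frame[0::2, 0::2] + frame[0::2, 1::2]
          + frame[1::2, 0::2] + frame[1::2, 1::2])

# Two half-area sub-images (diagonal pixel pairs), then combined
img_a = frame[0::2, 0::2] + frame[1::2, 1::2]  # mean ~2000, SD ~41.23
img_b = frame[0::2, 1::2] + frame[1::2, 0::2]
combined = img_a + img_b                       # same pixels as `binned`

print(round(binned.mean() / binned.std(), 1))      # ~68.6
print(round(img_a.mean() / img_a.std(), 1))        # ~48.5
print(round(combined.mean() / combined.std(), 1))  # ~68.6
```

As the post concludes, the combined half-area images land exactly where plain 2x2 binning does, so there is no SNR gain.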

The analogy of having two identical astrophotography systems simultaneously taking photos is still valid, and it does increase the total integration time.  The problem with my scheme is that both cameras would need to have smaller pixels, so each camera would collect 50% fewer electrons.  Perhaps the moral of the story is that while total integration time is very important...the ultimate goal is to collect as many electrons as possible.


« Last Edit: December 02, 2020, 01:52:50 PM by pmwolsley »


Re: Questions for V20.11.26
« Reply #10 on: December 04, 2020, 01:27:48 AM »
I asked my supervisor. He said that the main limitation is that it is 10-bit data.
He also believes that we don't have enough programmers to develop and test this for now; they are currently focused on some platform support issues. This feature has already been added to the SDK to-do list.