Theory on why Night Sight wasn't available at launch.

On the contrary, the reason a month wouldn't have mattered is that there's a lot going into Night Sight.

To achieve their results they resorted to stacking, a de facto standard technique among astrophotographers, because a single long exposure has multiple problems: motion blur from both camera and subject, high sensor noise if you turn up the gain, and over-saturation of any bright spots. See https://photographingspace.com/stacking-vs-single/.
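
To make the noise argument concrete, here's a tiny simulation (my own illustration, nothing to do with Google's pipeline) showing that averaging N short exposures cuts the noise by roughly sqrt(N) compared to a single frame at the same per-frame exposure, without accumulating the blur a long exposure would. The scene brightness, read noise, and burst length are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

true_scene = np.full((100, 100), 20.0)   # dim scene, ~20 photoelectrons/pixel (assumed)
read_noise = 3.0                          # sensor read noise in electrons (assumed)
n_frames = 15                             # burst length (assumed)

def capture(scene):
    """Simulate one short exposure: photon shot noise plus read noise."""
    shot = rng.poisson(scene).astype(float)
    return shot + rng.normal(0.0, read_noise, scene.shape)

single = capture(true_scene)
stacked = np.mean([capture(true_scene) for _ in range(n_frames)], axis=0)

print("single-frame noise (std):", np.std(single - true_scene))
print("stacked noise (std):     ", np.std(stacked - true_scene))
# The stacked estimate comes out roughly sqrt(15) ~ 3.9x less noisy.
```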

Stacking, however, presents substantial challenges when it comes to merging many exposures taken on a handheld camera. Image stabilization is great, but there's a lot of motion over, say, one second of handheld shooting, far more than the typical IS algorithm is designed to handle.
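
As a rough sketch of the alignment problem, here's one way you might register a handheld burst with OpenCV's phase correlation before averaging. This assumes a simple global translation (which real handheld motion isn't limited to) and grayscale frames; it's an illustration, not the Pixel's actual method, and it ignores rotation, rolling shutter, and moving subjects:

```python
import numpy as np
import cv2

def align_and_merge(frames):
    """frames: list of grayscale images; returns a simple aligned average."""
    ref = frames[0].astype(np.float32)
    accum = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float32)
        # Estimate this frame's global shift relative to the reference.
        (dx, dy), _response = cv2.phaseCorrelate(ref, f)
        h, w = f.shape
        # Translate the frame back by the estimated shift before accumulating.
        warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(f, warp, (w, h))
        accum += aligned
    return accum / len(frames)
```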

The techniques they are using are non-trivial: http://graphics.stanford.edu/talks/seeinthedark-public-15sep16.key.pdf

There's a lot going on to accomplish this. It starts with the ability to do high-speed burst reads of raw data from the sensor (so that individual frames don't get motion blurred, and raw so you can process the data before losing any fidelity to RGB conversion), and it requires a lot of computational horsepower to perform alignment and merging. I don't know what the Pixel's algorithms are, but merging many images under handheld camera motion benefits from state-of-the-art work on applying CNNs to the problem, at least judging from some of the results out of Vladlen Koltun's group at Intel (which I'd put at the forefront of this, along with Marc Levoy's group at Google): http://vladlen.info/publications/learning-see-dark/
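
For a sense of what "merging" can mean beyond a straight average, here's a hypothetical robust-merge sketch: after alignment, each frame's per-pixel contribution is weighted down wherever it still disagrees with the reference, so residual motion or moving subjects don't ghost into the result. The Gaussian weighting and the sigma value are my assumptions, not Google's algorithm:

```python
import numpy as np

def robust_merge(aligned_frames, sigma=8.0):
    """aligned_frames: list of float32 images already registered to aligned_frames[0]."""
    ref = aligned_frames[0]
    weighted_sum = ref.copy()
    weight_sum = np.ones_like(ref)
    for frame in aligned_frames[1:]:
        diff = frame - ref
        # Small differences get weight ~1, large differences fall off toward 0.
        w = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
        weighted_sum += w * frame
        weight_sum += w
    return weighted_sum / weight_sum
```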

My point is, so much effort went into Night Sight from the get-go that a month's work wasn't going to make or break it.
