This project is amazing. There's a video about halfway down the project page that gives a good summary.
They demo dynamic adjustment of frame rate by choosing the time interval for summation of each output pixel. But for me, the really interesting part is when they start summing over more complex trajectories. For example, they know a falling object in the video has a certain initial velocity and acceleration, so they can completely remove its motion blur.
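The trajectory trick can be sketched in a few lines. This is my own toy illustration, not the project's actual pipeline: I assume the sensor gives you timestamped photon events `(x, y, t)`, and that the object's initial velocity `v0` and acceleration `g` are known, so each event can be shifted back to where the object was at the start of the exposure before accumulating.

```python
import numpy as np

def integrate_along_trajectory(events, v0, g, t0, t1, height, width):
    """Accumulate photon counts in the falling object's co-moving frame.

    events: iterable of (x, y, t) single-photon detections
    v0, g:  assumed initial velocity and acceleration (pixels/s, pixels/s^2)
    """
    img = np.zeros((height, width))
    for x, y, t in events:
        if not (t0 <= t < t1):
            continue
        dt = t - t0
        # Undo the object's known motion: shift the event back to its t0 position.
        y_corrected = y - (v0 * dt + 0.5 * g * dt**2)
        yi = int(round(y_corrected))
        if 0 <= yi < height and 0 <= x < width:
            img[yi, x] += 1  # one count per detected photon
    return img
```

All photons emitted by the falling object pile up at a single (sharp) location instead of smearing down the frame, while a fixed-interval summation of the same events would show a blur streak.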
It would not surprise me if next-gen ultra-high-speed cameras start using this tech.
CCD is a technology: a strategy or approach to a specific problem, if you wish.
Diodes are semiconductors with a positively and a negatively doped region (optionally an intrinsic region in between).
A CCD also relies on the semiconductor bandgap of a diode to liberate charge carriers when a photon with energy exceeding the bandgap strikes.
What is your actual question?
Is it "Why don't we use SPADs in our cell phones and webcams?"?
That's like saying "I don't know what it is, but I want it too".
You can't use these in brightly lit scenes; the lighting needs to be dim enough to register electrical pulses from individual photons. Instead of producing a single electron-hole pair, SPADs are reverse biased beyond the "breakdown voltage" so that the released charge carrier will in turn release others (the "A"valanche in SPAD). The large reverse bias (together with the dark current) means the sensor heats up, so it would need significant cooling and energy consumption (higher current per photon at higher breakdown voltage) at normal daytime light intensities. And the sheer rate of photons would not allow individual photon pulses to be discriminated, as their pulses would overlap... The reason it's not in your phone is that it would be expensive, power hungry (shorter battery life) and require cooling.
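A quick back-of-envelope on the pulse-overlap point (my own illustrative numbers, not from any datasheet): if photon arrivals are Poisson at rate r and the detector needs a window tau to resolve one pulse, the chance a second photon lands inside that window is 1 - exp(-r*tau).

```python
import math

def pileup_probability(rate_hz, window_s):
    """Probability that another Poisson arrival falls within `window_s`
    of a given detection, i.e. that two pulses overlap."""
    return 1.0 - math.exp(-rate_hz * window_s)

# Assumed ~10 ns resolving window; dim indoor scene vs bright daylight
# photon rates per pixel (rough orders of magnitude, for illustration only).
dim = pileup_probability(1e5, 10e-9)     # ~0.1%: pulses easily separated
bright = pileup_probability(1e9, 10e-9)  # ~100%: individual pulses indistinguishable
```

So at daylight rates essentially every pulse overlaps with another, which is exactly the discrimination problem described above.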
This explanation isn't consistent with my understanding. The key reasons are that SPADs have lower quantum efficiency than conventional CCDs and typical SPAD arrays have fewer pixels. Both are improving, but still not competitive for most use cases. Heating is an issue for data transfer, but as the video shows this can be addressed by on-chip processing.
Quantum efficiency is defined as the number of electron-hole pairs per photon.
Without the avalanche effect, quantum-cutting phosphors, etc., the quantum efficiency for visible-light photons cannot exceed 100%: a photon of suitable energy (i.e. higher than the bandgap) liberates a single electron-hole pair, with the excess energy lost as heat. Some electron-hole pairs may recombine before completing a loop around the external circuit, lowering the total quantum efficiency.
A SPAD uses the avalanche effect: the applied reverse bias is so strong that the electron of the initial electron-hole pair is accelerated to enough energy to knock out and liberate further electron-hole pairs, so quantum efficiencies can exceed 100%. As a function of the applied reverse bias there is an extra mean multiplier describing how many electron-hole pairs will be liberated by a single photon striking.
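To make the multiplier concrete, here's a toy calculation (the absorption probability and gain are assumed numbers, purely for illustration): the mean carriers collected per incident photon is the chance the photon is absorbed times the avalanche gain.

```python
def carriers_per_photon(absorption_prob, gain=1.0):
    """Mean electron-hole pairs collected per incident photon.

    absorption_prob: chance a photon creates the initial pair (<= 1)
    gain: mean avalanche multiplier M (1.0 means no avalanche)
    """
    return absorption_prob * gain

conventional = carriers_per_photon(0.8)        # capped below 1 pair/photon
spad = carriers_per_photon(0.8, gain=1e5)      # avalanche: ~8e4 pairs/photon
```

Without gain the figure can never pass 1 pair per photon; with a gain of, say, 10^5 the same photon statistics produce a macroscopically detectable pulse, which is what lets a SPAD register single photons at all.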
>This explanation isn't consistent with my understanding.
It's probably because you misremember: the quantum efficiency of SPADs is higher, not lower, than that of your average CCD sensor.
What is this answer? I'm very familiar with the devices in question but I can't understand why you gave this response.
CCDs count photons (in large numbers), SPADs work hard to register single photons. You can make a camera based on either one, depending on your requirements.