I'm the author of this note. While digging through the ALMA (Atacama Large Millimeter/submillimeter Array) Cycle 7 technical documentation, I found a logic flowchart (Fig 1 in the linked PDF) that explicitly instructs the pipeline to "renormalize" signals that exceed the expected dynamic range.
Essentially, if a spectral line is "too bright" relative to the off-source calibration, the correlator software treats it as a gain error and mathematically suppresses it.
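To make the failure mode concrete, here's a minimal sketch of what a hard dynamic-range cap of that kind would do to a bright line. This is plain NumPy with hypothetical function and parameter names (renormalize_spectrum, max_ratio), not the actual ALMA/CASA pipeline code, just an illustration of the logic the flowchart describes:

```python
import numpy as np

def renormalize_spectrum(on_source, off_source, max_ratio=5.0):
    """Illustrative dynamic-range cap (NOT the real ALMA pipeline).

    Channels whose on/off ratio exceeds `max_ratio` are treated as gain
    errors and scaled back to the cap, which would silently flatten any
    genuinely bright spectral line.
    """
    ratio = on_source / off_source            # per-channel ratio vs. calibration
    flagged = ratio > max_ratio               # channels deemed "too bright"
    corrected = on_source.copy()
    corrected[flagged] = off_source[flagged] * max_ratio  # suppress to the cap
    return corrected, flagged

# Toy example: a bright line in channel 50 gets clipped to 5x the off-source level
off = np.ones(128)
on = np.ones(128)
on[50] = 40.0
spec, mask = renormalize_spectrum(on, off)
print(spec[50], mask.sum())   # -> 5.0 1
```

If the real logic is anything like this, the "correction" is indistinguishable in the output from the line simply never having been that bright.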
This seems to be happening systematically at 344 GHz (Band 7). I also found corroborating logs from the Submillimeter Array (SMA) in Hawaii, where engineers patched the code to handle "huge amplitudes" in the same frequency band that were breaking their 16-bit integer file format.
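For the file-format side, here's a two-line sketch of why a "huge amplitude" breaks a 16-bit integer store (illustrative numbers, not SMA data):

```python
import numpy as np

# Anything above 32767 counts can't be represented in int16: it either
# wraps or saturates, so the recorded spectrum silently loses the peak.
true_amplitude = np.array([1200.0, 9000.0, 80000.0])   # made-up counts
saturated = np.clip(true_amplitude, -32768, 32767).astype(np.int16)
print(saturated)   # -> [ 1200  9000 32767]  (peak capped at the format limit)
```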
It looks like we might be filtering out a genuine physical anomaly (super-radiance) because it violates the noise model of the sensors. Curious if any DSP / RF engineers here have seen similar "hard-coded normalization" logic in other sensor arrays?