Rather than longer times, what about short times? I did some work on fast fading, and you can see rapid swings in fade over <5s. That is hard for automated systems to react to, so you normally compensate by increasing the link margin. If you could predict this, you could reduce the margin needed. That could potentially be very valuable.
Spot on. We categorize that <5s window as tactical fade mitigation.
Our current 3-5m window is for topology/routing, but the sub-5s window is for Dynamic Link Margin (DLM). If we can predict fast-fading signatures, like tropospheric scintillation or edge-of-cloud diffraction, we can move from reactive to proactive ACM (adaptive coding and modulation).
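To illustrate the payoff, here's a minimal sketch of a proactive ACM loop driven by a short-horizon fade forecast. The MODCOD ladder, margin values, and the predicted-fade input are all illustrative assumptions, not our production logic.

```python
# Sketch: proactive ACM driven by a sub-5s fade forecast.
# The MODCOD thresholds and margin values below are placeholders.

MODCODS = [
    # (name, required SNR in dB); higher SNR supports higher throughput
    ("QPSK 1/2", 1.0),
    ("8PSK 3/4", 7.9),
    ("16APSK 3/4", 10.2),
    ("32APSK 4/5", 13.6),
]

STATIC_MARGIN_DB = 6.0     # margin carried to absorb *unpredicted* fast fades
RESIDUAL_MARGIN_DB = 1.5   # smaller margin left when the forecast is trusted

def select_modcod(snr_db: float, margin_db: float) -> str:
    """Pick the highest-order MODCOD that still closes the link."""
    usable = snr_db - margin_db
    best = MODCODS[0][0]
    for name, required_db in MODCODS:
        if usable >= required_db:
            best = name
    return best

def proactive_acm(current_snr_db: float, predicted_fade_db: float) -> str:
    # Subtract the forecast fade, then carry only a small residual margin;
    # reactive ACM must instead carry the full static margin at all times.
    return select_modcod(current_snr_db - predicted_fade_db, RESIDUAL_MARGIN_DB)

# Same 14 dB link: reactive ACM gives up throughput, proactive recovers it.
print(select_modcod(14.0, STATIC_MARGIN_DB))       # -> 8PSK 3/4
print(proactive_acm(14.0, predicted_fade_db=0.5))  # -> 16APSK 3/4
```

The recovered margin goes straight into higher-order MODCODs on clear-sky seconds, which is where the value comes from.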
Pretty intriguing demo video. How do you ensure your telemetry ingestion works operationally? That will be a daunting task. Your output will only be as good as your telemetry; with any delay or break in the data, everything is bound to break.
Great point; telemetry reliability is the biggest hurdle for any mission-critical system. We address the "garbage in, garbage out" risk by prioritizing freshness: our pipeline treats latency as a failure. We use three mechanisms (sketched in code below):

- A "leaky" buffer strategy: if data is too old to be actionable for a 3-minute forecast, we drop it so the models aren't lagging behind the physical reality of the link.
- Graceful degradation: when telemetry is delayed or broken, the system automatically falls back to physics-only models, i.e. orbital propagation and ITU standards.
- Edge validation: we validate and normalize data at the ingestion point; if a stream becomes corrupted or "noisy," the system flags that specific sensor as unreliable and adjusts the prediction confidence scores in real time.
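To make that concrete, here's a rough sketch of how those three mechanisms could compose. It's a minimal illustration assuming a Python pipeline; the names, thresholds, and forecast stubs are placeholders rather than our actual system.

```python
# Illustrative sketch of freshness-first ingestion: leaky buffer,
# edge validation, and graceful degradation. All values are placeholders.
import time
from dataclasses import dataclass

MAX_AGE_S = 20.0        # samples older than this can't inform a 3-min forecast
MIN_CONFIDENCE = 0.2    # floor so the fallback still emits a usable score

@dataclass
class Sample:
    sensor_id: str
    value: float        # e.g. measured fade, dB
    timestamp: float    # unix seconds

def ingest(sample: Sample, sensor_trust: dict[str, float]) -> Sample | None:
    """Leaky buffer: drop stale data instead of queueing it."""
    if time.time() - sample.timestamp > MAX_AGE_S:
        return None  # too old to be actionable -> leak it
    # Edge validation: reject implausible readings and dock sensor trust.
    if not (-50.0 <= sample.value <= 50.0):
        sensor_trust[sample.sensor_id] = max(
            MIN_CONFIDENCE, sensor_trust.get(sample.sensor_id, 1.0) * 0.5)
        return None
    return sample

def physics_only_forecast() -> float:
    return 0.0  # stub: orbital propagation + ITU-R attenuation estimate

def ml_forecast(samples: list[Sample]) -> float:
    return sum(s.value for s in samples) / len(samples)  # stub model

def forecast(samples: list[Sample],
             sensor_trust: dict[str, float]) -> tuple[float, float]:
    """Graceful degradation: fall back to physics-only when telemetry dies."""
    if not samples:
        return physics_only_forecast(), MIN_CONFIDENCE
    # Confidence tracks the least-trusted sensor feeding the prediction.
    conf = min(sensor_trust.get(s.sensor_id, 1.0) for s in samples)
    return ml_forecast(samples), conf
```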
Are you raising?
Not currently; we're planning to open up our seed round in 4 weeks. Feel free to shoot us a note at hello@constellation-io.com if you're interested in learning more.
Very cool company! Are y’all hiring?
Not right now, but we will be soon! Send over your resume to hello@constellation-io.com if you're interested in joining.
Do you plan to work on orbital weapon systems like Golden Dome?
We're big believers in American Dynamism.