A couple of questions I expect here (based on similar discussions in other channels):
1) What about memory - is it shared too?
CPU is shared dynamically. Memory is still hard-allocated as a guaranteed limit per workload. That's intentional: unlike CPU, memory oversubscription is much harder to mitigate safely at PaaS scale without introducing unpredictable latency and OOM kills. So: CPU = elastic, RAM = guaranteed / stable.
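In Kubernetes terms, the split looks roughly like this. A minimal sketch using the official `kubernetes` Python client - all names and numbers are illustrative, not our actual config:

```python
# Sketch of the CPU-elastic / RAM-guaranteed split as a pod resource spec.
# Values are illustrative only.
from kubernetes import client

resources = client.V1ResourceRequirements(
    # Memory request == memory limit: the kubelet reserves it up front,
    # so RAM is a hard guarantee and never oversubscribed.
    requests={"memory": "512Mi", "cpu": "100m"},
    # The CPU limit is just a ceiling; the scheduler can raise or lower
    # it at runtime as idle capacity moves around the resource plan.
    limits={"memory": "512Mi", "cpu": "300m"},
)
```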
2) Is isolation compromised by this approach?
No - apps don’t run on the same container host. Every app runs on its own Kubernetes node (physical or VM). The Fair Scheduler coordinates CPU fairness across nodes under a single user resource plan, which eliminates noisy neighbors and keeps the blast radius contained at the app level.
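For context, one generic way to enforce "every app gets its own node" in Kubernetes is a required pod anti-affinity on the hostname topology key. This is a sketch of the pattern (the `app` label key is hypothetical), not necessarily the exact mechanism we use for node assignment:

```python
# Generic one-app-per-node pattern: refuse to schedule an app pod onto
# any node that already hosts another app pod.
from kubernetes import client

one_app_per_node = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                # Repel any pod carrying an app label (hypothetical key),
                # i.e. never co-locate two apps on the same node.
                label_selector=client.V1LabelSelector(
                    match_expressions=[
                        client.V1LabelSelectorRequirement(
                            key="app", operator="Exists"
                        )
                    ]
                ),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)
```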
Here’s a bit more detail on how the scheduler works under the hood:
1. Each application still runs on its own Kubernetes node to guarantee isolation (so noisy-neighbor issues are eliminated).
2. We track CPU usage in real time across all workloads and maintain a global usage map.
3. Idle CPU from any app/node becomes available for reallocation to other workloads in the same resource plan (sketched right after this list).
4. CPU limits can be adjusted on the fly without restarts, enabling real-time response to changing load.
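To make steps 2-3 concrete, here's a toy version of the reallocation idea: pool the headroom of quiet apps and hand it to the busy ones. Purely illustrative - the real scheduler uses smarter thresholds and smoothing:

```python
# Toy fair-share rebalance: apps well under their fair share keep a small
# buffer over current usage; the CPU they aren't using is pooled and split
# among the apps pushing against their share.

def rebalance(usage: dict[str, float], plan_total: float,
              buffer: float = 0.05) -> dict[str, float]:
    """usage: app -> recent CPU cores consumed. Returns app -> new CPU limit."""
    baseline = plan_total / len(usage)  # each app's fair share of the plan
    quiet = {a: u for a, u in usage.items() if u + buffer < baseline}
    pooled = sum(baseline - (u + buffer) for u in quiet.values())
    busy = [a for a in usage if a not in quiet]
    bonus = pooled / len(busy) if busy else 0.0
    return {a: (u + buffer if a in quiet else baseline + bonus)
            for a, u in usage.items()}

# Three apps on a 1.5-CPU plan: the busy worker borrows the idle CPU from
# the other two, and the limits still sum to exactly plan_total.
print(rebalance({"api": 0.10, "worker": 0.90, "cron": 0.05}, plan_total=1.5))
# -> {'api': ~0.15, 'worker': ~1.25, 'cron': ~0.10}  (sums to 1.5)
```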
If anyone wants to dive into topics like threshold algorithms, node assignment heuristics, or Kubernetes API interactions - I'm happy to dig into that.
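To give a taste of the Kubernetes API side, here's roughly what step 4 looks like as an in-place CPU limit change. Simplified sketch: it assumes Kubernetes' in-place pod resize (KEP-1287, behind the InPlacePodVerticalScaling gate since v1.27; on newer clusters the same body goes to the pod's `resize` subresource), and the pod/container names are made up:

```python
# Raise a container's CPU ceiling without restarting the pod.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

new_limits = {
    "spec": {
        "containers": [{
            "name": "app",                             # hypothetical container
            "resources": {"limits": {"cpu": "800m"}},  # raised elastic ceiling
        }]
    }
}
# No restart: the kubelet resizes the container's cgroup in place.
v1.patch_namespaced_pod(name="web-1", namespace="default", body=new_limits)
```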
A quick example of why this matters for devs & startups: imagine you’ve got 5 small apps, each using ~0.3 CPU most of the time. On most PaaS you’d pay for 5 separate instances. On Miget you pay for one resource plan and the apps share the CPU dynamically - roughly a 75% cost reduction (quick math below).
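Back-of-envelope math behind that figure, assuming each standalone instance would be provisioned at a full vCPU (a typical minimum); the exact percentage depends on how you size the plan and on pricing:

```python
# 5 separately provisioned instances vs one pooled plan sized near
# aggregate usage. Numbers are illustrative, not a pricing quote.
apps, usage_per_app = 5, 0.3
standalone_cpu = apps * 1.0            # 5 vCPUs bought as separate instances
shared_plan_cpu = apps * usage_per_app  # ~1.5 vCPUs in one shared plan
print(f"{1 - shared_plan_cpu / standalone_cpu:.0%}")
# -> 70%, i.e. in the ballpark of the ~75% quoted above
```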
If you’re curious about how this stacks up against platforms like Heroku, Render or Railway, I can post a cost-comparison table.