I took a cursory look and I like what I see – the service maps are really good, and I love the level of detail. One thing I'm looking for with this kind of software, to maximise value, is structured logging support, and from what I could see, each log line currently just has the raw payload. Is that something you have on your roadmap?
It would be great to use VictoriaLogs as the storage for structured logs in Coroot, since it is optimized for structured logs with arbitrary sets of labels. See https://docs.victoriametrics.com/victorialogs/keyconcepts/
In addition to raw logs, Coroot can extract recurring patterns to generate log-based metrics [1].
We also plan to convert structured logs into OpenTelemetry attributes [2].
[1] https://demo.coroot.com/p/tbuzvelk/applications/default:Depl... [2] https://github.com/coroot/coroot/issues/490
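To give an idea of how pattern extraction works conceptually, here's a deliberately tiny sketch in Go (not Coroot's actual algorithm; the regex and placeholder token are purely illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Tokens that vary between otherwise identical lines: numbers and
    // quoted strings. (Illustrative; real pattern miners are smarter.)
    var variableToken = regexp.MustCompile(`\d+|"[^"]*"`)

    // patternOf reduces a raw log line to a stable pattern key.
    func patternOf(line string) string {
        return variableToken.ReplaceAllString(line, "<*>")
    }

    func main() {
        logs := []string{
            `request 42 failed after 1500ms`,
            `request 43 failed after 900ms`,
            `user "alice" logged in`,
            `user "bob" logged in`,
        }
        // Counting occurrences per pattern turns a raw log stream into
        // a log-based metric: 2x `request <*> failed after <*>ms`,
        // 2x `user <*> logged in`.
        counts := map[string]int{}
        for _, l := range logs {
            counts[patternOf(l)]++
        }
        for pattern, n := range counts {
            fmt.Printf("%d  %s\n", n, pattern)
        }
    }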
Great work! It's nice seeing another observability tool. Demo is neat and easy to navigate.
Couple of questions:
What's the overhead of tracing + logging observed by users? I see many tools being built on top of the OpenTelemetry eBPF tracer, which is nice to see.
The OpenTelemetry eBPF tracer uses sampling to capture traces. Do other types of telemetry in the tool use sampling as well (e.g., HTTP traces)?
When finding SLO violations, can this tool find the bug if the latency spikes don't happen frequently (i.e., a latency spike every 5 minutes to 1 hour)? I'm curious whether the team has experienced such events, and whether those p-max latencies even matter to customers, since they may not happen frequently.
I see that the flamegraph is a CPU flamegraph - does off-CPU sampling matter (disk/network waits, etc.)? Or does the CPU flamegraph provide enough for developers to solve the issue?
1. Regarding overhead — we ran a benchmark focused on performance impact rather than raw overhead [1]. TL;DR: we didn’t observe any noticeable impact at 10K RPS. CPU usage stayed around 200 millicores (about 20% of a single core).
2. Coroot’s agent captures pseudo-traces (individual spans) and sends them to a collector via OTLP. This stream can be sampled at the collector level. In high-load environments, you can disable span capturing entirely and rely solely on eBPF-based metrics for analysis.
3. We’ve built automated root cause analysis to help users explain even the slightest anomalies, whether or not SLOs are violated. Under the hood, it traverses the service dependency graph and correlates metrics — for example, linking increased service latency to CPU delay or to the network latency of a database [2]. (A toy sketch of the correlation step follows the footnotes below.)
4. Currently, Coroot doesn’t support off-CPU profiling. The profiler we use under the hood is based on Grafana Pyroscope’s eBPF implementation, which focuses on CPU time.
[1]: https://docs.coroot.com/installation/performance-impact [2]: https://demo.coroot.com/p/tbuzvelk/anomalies/default:Deploym...
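To make point 3 slightly more concrete, here's a toy sketch of the correlation step (not our actual implementation; the signal names and the use of plain Pearson correlation are just for illustration):

    package main

    import (
        "fmt"
        "math"
    )

    // pearson computes the Pearson correlation coefficient of two
    // equally sized time series.
    func pearson(x, y []float64) float64 {
        n := float64(len(x))
        var sx, sy, sxx, syy, sxy float64
        for i := range x {
            sx += x[i]
            sy += y[i]
            sxx += x[i] * x[i]
            syy += y[i] * y[i]
            sxy += x[i] * y[i]
        }
        return (sxy - sx*sy/n) / math.Sqrt((sxx-sx*sx/n)*(syy-sy*sy/n))
    }

    func main() {
        // Latency of the service under investigation (one point per minute).
        latency := []float64{12, 11, 13, 48, 52, 47, 12, 13}

        // Candidate explanations gathered by walking the dependency graph:
        // metrics of the service itself and of its dependencies.
        candidates := map[string][]float64{
            "app CPU delay":   {1, 1, 1, 9, 10, 9, 1, 1}, // tracks the spike
            "db network RTT":  {5, 5, 6, 5, 5, 6, 5, 5},  // flat
            "cache evictions": {0, 2, 1, 1, 0, 2, 1, 0},  // noise
        }

        best, bestR := "", 0.0
        for name, series := range candidates {
            if r := pearson(latency, series); r > bestR {
                best, bestR = name, r
            }
        }
        fmt.Printf("most correlated signal: %s (r=%.2f)\n", best, bestR)
    }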
I looked into eBPF-based observability tools for k8s some time ago and found at least four tools that look incredibly similar: Pixie, Parca, Coroot, and Odigos. There are probably others I missed too. Do you have any thoughts about this?
From a user perspective, having several tools that overlap heavily but differ in subtle ways makes evaluation and adoption harder. It feels like if any two of these projects consolidated, they’d have a good shot at becoming the "default" eBPF observability solution.
From a user’s perspective, it doesn’t really matter how the data is collected. What actually matters is whether the tool helps you answer questions about your system and figure out what’s going wrong.
At Coroot, we use eBPF for a couple of reasons:
1. To get the data we actually need, not just whatever happens to be exposed by the app or OS.
2. To make integration fast and automatic for users.
And let’s be real, if all the right data were already available, we wouldn’t be writing all this complicated eBPF code in the first place :)
Speaking for Odigos (disclosure: I’m the creator), here are two significant differences between us and the other mentioned players:
- Accurate distributed traces with eBPF, including context propagation. Without going into other tools, I highly recommend trying to generate distributed traces using any other eBPF solution and observing the results firsthand (the snippet below shows what propagation involves).
- We are agent-only. Our data is produced in OpenTelemetry format, allowing you to integrate it seamlessly with your existing observability system.
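To show what context propagation actually involves: every outgoing request must carry a traceparent header so the next hop can attach its spans to the caller's trace. Here is the in-process version using the standard OpenTelemetry Go SDK (the IDs are the W3C trace-context spec examples); doing the equivalent from outside the process, via eBPF, is the hard part:

    package main

    import (
        "context"
        "fmt"
        "net/http"

        "go.opentelemetry.io/otel/propagation"
        "go.opentelemetry.io/otel/trace"
    )

    func main() {
        // Build a span context by hand to stand in for "the current span".
        tid, _ := trace.TraceIDFromHex("4bf92f3577b34da6a3ce929d0e0e4736")
        sid, _ := trace.SpanIDFromHex("00f067aa0ba902b7")
        sc := trace.NewSpanContext(trace.SpanContextConfig{
            TraceID:    tid,
            SpanID:     sid,
            TraceFlags: trace.FlagsSampled,
        })
        ctx := trace.ContextWithSpanContext(context.Background(), sc)

        // The step an instrumentation SDK performs on every outgoing call:
        // inject a `traceparent` header for the downstream service.
        req, _ := http.NewRequest("GET", "http://downstream.example/api", nil)
        propagation.TraceContext{}.Inject(ctx, propagation.HeaderCarrier(req.Header))

        fmt.Println(req.Header.Get("traceparent"))
        // 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
    }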
I hope this clarifies the differences.
I wonder if anyone tried to integrate Odigos with Coroot - looks like it could be really powerful!
Can it parse Zeek logs to identify long-running TCP connections and/or user attempts to access a DNS-blocked domain?
We could totally add that, but no one's asked for it so far.
Can this also be used in a non-cloud environment? Or even, say, in a Proxmox-based setup locally?
It only requires a modern Linux kernel. Note: The agent does not support Docker-in-Docker environments, such as KinD or Minikube (D-in-D plugin).
I already have OpenTelemetry traces and logs going to ClickHouse with the ClickHouse OTel exporter.
Can I use Coroot to show my existing data, without it taking control of my DDL?
Initially, we relied on the ClickHouse OTEL exporter and its schema, but for performance optimization, we decided to modify our ClickHouse schema, and they are no longer compatible :(
Bummer, it'd be awesome if I could point it at data I already have, even if that meant a reduced feature set.
How are you using this data right now? If you plan to use Coroot for visualization, why not convert it to the more efficient format Coroot uses?
This is somewhat off topic, but are there any common uses for eBPF outside of observability/monitoring? Or is that kind of its whole thing?
Also commonly used for high-performance networking and security use cases, for example https://isovalent.com/blog/post/cilium-netkit-a-new-containe....
Basically anywhere you'd previously need to write a kernel module but now can have user space run arbitrary kernel code that's secure and won't crash the kernel.
You can also now write custom schedulers in eBPF with sched_ext.
Yes, one example: network bandwidth isolation can be done more efficiently using eBPF: https://netdevconf.info/0x14/pub/papers/55/0x14-paper55-talk...
Thanks for sharing! If the connections are TLS-enabled, can Coroot still display the associated telemetry?
Yes, it captures traffic before encryption and after decryption using eBPF uprobes on OpenSSL and Go’s TLS library calls.
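Here's a minimal sketch of the mechanism using the cilium/ebpf library (the object file and program names are placeholders, not our actual code):

    package main

    import (
        "log"

        "github.com/cilium/ebpf"
        "github.com/cilium/ebpf/link"
    )

    func main() {
        // Load a pre-compiled BPF object containing the probe programs.
        // ("tls_probes.o" and the program names below are placeholders.)
        coll, err := ebpf.LoadCollection("tls_probes.o")
        if err != nil {
            log.Fatal(err)
        }
        defer coll.Close()

        // Open the shared library we want to instrument.
        ssl, err := link.OpenExecutable("/usr/lib/x86_64-linux-gnu/libssl.so.3")
        if err != nil {
            log.Fatal(err)
        }

        // SSL_write sees the plaintext buffer *before* OpenSSL encrypts it,
        // so a uprobe here observes the request without touching TLS itself.
        up, err := ssl.Uprobe("SSL_write", coll.Programs["probe_ssl_write"], nil)
        if err != nil {
            log.Fatal(err)
        }
        defer up.Close()

        // SSL_read returns the decrypted response, so a uretprobe fires
        // after decryption has already happened.
        uret, err := ssl.Uretprobe("SSL_read", coll.Programs["probe_ssl_read"], nil)
        if err != nil {
            log.Fatal(err)
        }
        defer uret.Close()

        select {} // keep the probes attached
    }

For Go binaries there's no OpenSSL to hook, so the equivalent probes go on the target binary's crypto/tls functions instead.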
I like what I see. What are the differences between the enterprise and community editions?
Enterprise Edition = Community Edition + Support + AI-based Root Cause Analysis + SSO + RBAC
Thank you!
We're on Sentry today, but have been waiting for a fully OSS solution like this.
(I'm a co-founder). At Coroot, we're strong believers in open source, especially when it comes to observability. Agents often require significant privileges, and the cost of switching solutions is high, so being open source is the only way to provide real guarantees for businesses.
What's the data transformation story for ML on metrics?
Coroot builds a model of each system, allowing it to traverse the dependency graph and identify correlations between metrics. On top of that, we're experimenting with LLMs for summarization — here are a few examples: https://oopsdb.coroot.com/failures/cpu-noisy-neighbor/
That looks like a built-in feature. I'm asking about extensibility. How do we use custom metrics transformations (libraries), for example?
Currently, you can define custom SLIs (Service Level Indicators, such as service latency or error rate) for each service using PromQL queries. In the future, you'll be able to define custom metrics for each application, including explanations of their meaning, so they can be leveraged in Root Cause Analysis.
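For example, a latency SLI is typically a quantile over a duration histogram. Here's what such a query looks like, run via the standard Prometheus Go client (the Prometheus address, metric name, and labels are illustrative):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/prometheus/client_golang/api"
        promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
    )

    func main() {
        // Illustrative address; point this at your Prometheus instance.
        client, err := api.NewClient(api.Config{Address: "http://prometheus:9090"})
        if err != nil {
            log.Fatal(err)
        }
        v1api := promv1.NewAPI(client)

        // A latency SLI: p99 over the last 5 minutes, computed from a
        // histogram. The metric and label names are illustrative.
        query := `histogram_quantile(0.99,
          sum(rate(http_request_duration_seconds_bucket{service="checkout"}[5m])) by (le))`

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        result, warnings, err := v1api.Query(ctx, query, time.Now())
        if err != nil {
            log.Fatal(err)
        }
        if len(warnings) > 0 {
            log.Println("warnings:", warnings)
        }
        fmt.Println(result)
    }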