Hi HN,
I built* Spliff, a high-performance L7 sniffing and correlation engine in pure C23. The goal is a fully working, Linux-native EDR that isn't a resource-hogging black box.
The core innovation – "Golden Thread" correlation:
Most eBPF sniffers capture SSL data OR packets. Spliff correlates both:
This maps raw decrypted TLS data back to the exact TCP flow, PID, and process—something commercial EDRs struggle with.
Technical highlights:
• XDP + sock_ops + Uprobes – Three BPF program types working together via shared maps
• Lock-free threading – Dispatcher/Worker model with Concurrency Kit SPSC queues
• Full HTTP/2 – HPACK decompression, stream multiplexing, request-response correlation
• No MITM – Hooks OpenSSL, GnuTLS, NSS, WolfSSL, BoringSSL directly via uprobes
• Static binary fingerprinting – Build ID matching for stripped binaries (Chrome)
• BPF-level filtering – AF_UNIX IPC filtered in kernel, not userspace
Current status: Working L7 visibility engine. Captures and correlates HTTPS traffic in real-time.
What's next: Process behavior tracking, file/network anomaly detection, event streaming (NATS/Kafka), threat intel integration.
Linux-only – Requires kernel 5.x+ with BTF, XDP, libbpf.
---
The project is GPL-3.0 and we're inviting anyone interested to contribute—whether it's code, architecture feedback, security research, or ideas for EDR features that actually matter (not compliance theater).
GitHub: https://github.com/NoFear0411/spliff
*Note: The codebase was written with Claude Opus. I provide the research, architecture decisions, and review every line.
This is super cool, I always wanted a system to peek at app packets before encryption gets applied.
Give it a test and let me know if you encounter any issues. The exception is Chrome/Chromium static binaries, which ship BoringSSL inside. The entire SSL/TLS code flow there is motherfucking spaghetti built for acceleration and fast page loads. They even offload some TLS parts to the system OpenSSL lib, and even with debug symbols (no thanks, Google, which doesn't include them in the repo) it's a headache to trace.
Just came here to say it's awesome to see more folks doing novel stuff with XDP!
After reading the Loophole Labs post [0] a few months ago, I was hoping someone would cook on this for security research.
[0] https://loopholelabs.io/blog/xdp-for-egress-traffic
I think (not 100% sure) Cilium [0][1] kinda already does this. This approach is good for packet processing/routing, and even for introducing XDP-based ACLs to bypass any ip/nf tables and get that almost-wire-speed benefit. I use Cilium with these features for custom-made k8s clusters with Talos OS, without any kube-proxy.
[0] https://docs.cilium.io/en/stable/operations/performance/tuni...
[1] https://isovalent.com/blog/post/cilium-netkit-a-new-containe...
Does this do flow offloading? From https://westurner.github.io/hnlog/#comment-45755142 re: awesome-ebpf:
> "eBPF/XDP hardware offload to SmartNICs",
Also this, re any eBPF FWIU: https://news.ycombinator.com/item?id=46412107 :
> So eBPF for a WAF isn't worth it?
Here are answers to both your questions:
The code has the infrastructure for XDP hardware offload:
- XDP_MODE_OFFLOAD enum exists in bpf_loader.h:61
- XDP_FLAGS_HW_MODE flag mapping in bpf_loader.c:789
But it's not usable in practice because:
1. No CLI option – There's no way to enable offload mode; it defaults to native with SKB fallback
2. BPF program isn't offload-compatible – The XDP program uses:
- Complex BPF maps (LRU hash, ring buffers)
- Helper functions not supported by most SmartNIC JITs
- The flow_cookie_map shared with sock_ops (can't be offloaded)
3. SmartNIC limitations – Hardware offload typically supports only simple packet filtering/forwarding, not the stateful flow tracking spliff does
What would be needed for SmartNIC support:
- Split XDP program into offloadable (simple classification) and non-offloadable (stateful) parts
- Use SmartNIC-specific toolchains (Memory-1, Netronome SDK, etc.)
- Me having a device with a SmartNIC and full driver support to play with. I've done all my testing on Fedora 43 on my own device
For now this could be a future roadmap item, but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Here is a sample debug output when you run spliff -d and it tries to detect all your NICs:
---
[DEBUG] Loaded BPF program from build-release/spliff.bpf.o
[XDP] Found program: xdp_flow_tracker
[XDP] Found required maps: flow_states, session_registry, xdp_events
[XDP] Found optional map: cookie_to_ssl
[XDP] Found map: flow_cookie_map (for cookie caching)
[XDP] Found optional map: xdp_stats_map
[XDP] Initialization complete
[XDP] Discovered interface: enp0s20f0u2u4u2 (idx=2, mtu=1500, UP, physical)
[XDP] Discovered interface: wlp0s20f3 (idx=4, mtu=1500, UP, physical)
[XDP] Discovered interface: enp0s31f6 (idx=3, mtu=1500, UP, physical)
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s20f0u2u4u2, falling back to SKB mode
[XDP] Attached to enp0s20f0u2u4u2 (idx=2) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on wlp0s20f3, falling back to SKB mode
[XDP] Attached to wlp0s20f3 (idx=4) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s31f6, falling back to SKB mode
[XDP] Attached to enp0s31f6 (idx=3) in skb mode
[XDP] Attached to 3 of 3 discovered interfaces
XDP attached to 3 interfaces
[SOCKOPS] Using cgroup: /sys/fs/cgroup
[SOCKOPS] Attached socket cookie caching program
sock_ops attached for cookie caching
[XDP] Warm-up: Seeded 5 existing TCP connections
[DEBUG] Warmed up 5 existing connections
---
edit: formatting is hard on my phone
> Me having a device with SmartNIC and full driver support to play with
Same. I have a Pi Pico with PIO, though
> but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Hard limit, I guess.
(If you indent all lines of a block of text with two spaces (including blank newlines), HN will format it as monospace text and preserve line breaks.)
I've updated the Architecture diagrams to include everything: https://github.com/NoFear0411/spliff/blob/main/README.md#arc...
Thanks for the format tip.
So I went looking for TLS accelerator cards again:
/? TLS accelerators open: https://www.google.com/search?q=TLS+accelerators+open :
- "AsyncGBP+: Bridging SSL/TLS and Heterogeneous Computing with GPU-Based Providers" https://ieeexplore.ieee.org/document/10713226 .. https://news.ycombinator.com/item?id=46664295
/? XDP hardware offload to GPU: https://www.google.com/search?q=XDP+hardware+offload+to+a+GP... :
- eunomia-bpf/XDP-on-GPU: https://github.com/eunomia-bpf/XDP-on-GPU
Perhaps AsyncGBP+ plus XDP-on-GPU would solve it.
The AsyncGBP+ article mentions support for PQ on GPU.
But then process isolation on GPUs. And they removed support for vGPU unlock.
That is a rabbit hole that I don't wanna go down again.