Can you add “made with AI” to the GitHub repo?
It’s time to make this mandatory.
Nothing against AI - just to inform people about the quality, maintainability and future of this library. No human has a mental model of the code, so don’t waste your time creating one - the original author didn’t either.
A comment on libxml, not on your work: Funny how so many companies use this library in production and not one steps in to maintain this project and patch the issues. What a sad state of affairs we are in.
Yeah I agree, maintaining OSS projects has been a weird thing for a long time.
I know a few companies have programs where engineers can designate specific projects as important and give them funds. But it doesn't happen enough to support all the projects that currently need work; maybe AI coding tools will lower the cost of maintenance enough to improve this.
I do think there are two possible approaches that policy makers could consider.
1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.
2) Many governments have tried to create cyber reserve corps; I bet they could designate people as maintainers of key projects they rely on, to maintain both the projects and a pool of people skilled with the tools they deem important.
There should be public works grants to maintain them, or else a foundation specifically to maintain them funded with donations, grants, etc.
The alternative is another XZ backdoor.
We need a tax on companies using or selling anything OSS, with the funds going back into OSS. The wealth it has generated is insane, and it's nearly all built on donations from experts.
That's a bit unclear on the concept. It's not open source if you have to pay for it. How about charging money for your code instead?
There is nothing in the open source licenses that prevents charging money; in fact, non-commercial clauses are seen as incompatible with the Debian Free Software Guidelines.
And there are a lot of companies out there that make their money based on open source software; Red Hat is maybe the biggest and most well known.
Feels like tragedy of the commons.
Feels more like you don’t understand the concept of the tragedy of the commons.
EDIT: Sorry, I’ve had a shitty day and that wasn’t a helpful comment at all. I should’ve said that as I understand it TOTC primarily relates to finite resources, so I don’t think it applies here. Sorry again for being a dick.
Yes, you can rip off any sucker who published a test suite when the AI is trained on existing code as well. Congratulations, you will be showered with praise and AI mafia money.
Amazing work! I'd love to hear more details about your workflow with Claude Code.
As a side note, and this isn't a knock on your project specifically: I think the community needs to normalize disclaimers for "vibe-coded" packages. Consumers really need to understand the potential risks of relying on agent-generated code upfront.
Even more interesting is how much the effort cost.
Unlike development work of old (pre-2025), work with high-end models incurs a very direct monetary cost: you burn tokens that cost money, and you can't run something that powerful locally (even if you happened to have a Mac Pro Ultra with RAM maxed out).
Some of my friends burned through hundreds of dollars a day while doing large amounts of (allegedly efficient) work with Claude Code.
Yeah, it's a fair point. I wondered if it might be irresponsible to publish the package because it was made this way, but I suspect I'm not the first person to try to develop a package with Claude Code, so I think the best I can do is be honest about it.
As for the workflow, I think the best advice I can give is to set up as many guardrails and tools as possible, so Claude can do as many iterations as possible before needing any intervention. So in this case I set up pre-commit hooks for linting and formatting, gave it access to the full testing suite, and let it rip. The majority of the work was done in a single thinking loop that lasted ~3 hours, where Claude was able to run the tests, see what failed, and iterate until they all passed. From there, there were still lots of iterations to add features, clean up, test, and improve performance - but allowing Claude to iterate quickly on its own without my involvement was crucial.
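For readers unfamiliar with the setup being described, a guardrail config of this general shape (hypothetical, not the project's actual file; it assumes the standard `cargo fmt`/`cargo clippy` tools run as local pre-commit hooks) might look like:

```yaml
# Hypothetical .pre-commit-config.yaml sketch: formatting and linting
# run on every commit, so an agent gets immediate feedback without
# human intervention.
repos:
  - repo: local
    hooks:
      - id: cargo-fmt
        name: cargo fmt
        entry: cargo fmt --all -- --check   # fail the commit on unformatted code
        language: system
        types: [rust]
        pass_filenames: false
      - id: cargo-clippy
        name: cargo clippy
        entry: cargo clippy --all-targets -- -D warnings  # treat lints as errors
        language: system
        types: [rust]
        pass_filenames: false
```

The point of `-D warnings` in a loop like this is that the agent sees a hard failure rather than a warning it might ignore.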
> I do think there is something interesting to think about here in how coding agents like Claude Code can quickly iterate given a test suite.
This is a point I've tried to advocate for a while, especially to empower non-coders and make them see that we CAN approach automation with control.
Some aspects will be the classic unit or integration tests for validation. Others will be AI Evals [1], which to me could be the common language for product design across different families/disciplines that don't quite understand how to collaborate with each other.
The amount of progress in a short time is amazing to see.
- [1] https://ai-evals.io/
Please stop spreading this "AI evals" terminology. "evals" is what providers like OpenAI and Anthropic do with their models. If you wrote a test for a feature that uses an LLM, it's just a test, there's no need to say "evals." Having a separate term only further confuses people who already have no idea what that actually means.
> arena-based tree with zero unsafe in the public API
Why "in the public API"? Does this imply it's using unsafe behind the hood? If so, what for?
I agree the wording is a bit strange, but a quick grep of the repo shows that it doesn't imply that.
The only usages of unsafe are in src/ffi, which is only compiled when the ffi feature is enabled. ffi is fundamentally unsafe ("unsafe" meaning "the compiler can't automatically verify this code won't result in undefined behavior") so using it there is reasonable, and the rest of the crate is properly free of unsafe.
It provides a libxml2-compatible C API that accepts pointers; that would seem to necessitate unsafe, at least there.
Yeah, I'm a bit confused, because you can have an entirely unsafe code base with just the public interface marked as safe. No unsafe in the interface isn't a measure of safety at all.
It is a measure of the intended level of care that the users of your interface have to take. If there's no unsafe in the interface, then that implies that the library has only provided safe interfaces, even if it uses unsafe internally, and that the interface exposed enforces all necessary invariants.
It is absolutely a useful distinction on whether your users need to deal with unsafe themselves or not.
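The distinction being argued here can be made concrete with a sketch (toy types, not xmloxide's actual API): a safe public method validates the invariant that an internal `unsafe` block relies on, so callers never need `unsafe` themselves.

```rust
// Sketch: a safe interface over unsafe internals. The public `get`
// enforces the bounds invariant before the unchecked access, which is
// what makes "no unsafe in the public API" meaningful.
pub struct Arena {
    nodes: Vec<String>,
}

impl Arena {
    pub fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    // Returns a handle (index) into the arena.
    pub fn push(&mut self, s: &str) -> usize {
        self.nodes.push(s.to_string());
        self.nodes.len() - 1
    }

    // Safe interface: the bounds check here upholds the invariant the
    // unsafe block depends on, so no caller can trigger UB through it.
    pub fn get(&self, id: usize) -> Option<&str> {
        if id < self.nodes.len() {
            // SAFETY: `id` was bounds-checked above.
            Some(unsafe { self.nodes.get_unchecked(id).as_str() })
        } else {
            None
        }
    }
}

fn main() {
    let mut arena = Arena::new();
    let id = arena.push("root");
    assert_eq!(arena.get(id), Some("root"));
    assert_eq!(arena.get(999), None); // out of bounds: clean None, no UB
    println!("ok");
}
```

Whether the `unsafe` block inside is itself correct is a separate question, which is exactly the point being debated above.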
I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum? I find it difficult to believe the rust community would accept using a library where the API requires unsafe.
>I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum
I have some experience and yes, unless you're putting out a library for specifically low-level behavior like manual memory management or FFI. Trivia about the unsafe fn keyword missed the point of my comment entirely.
Sure, it's a useful distinction for whether users need to care about safety but not whether the underlying code is safe itself, which is what I wrote about.
Little to no unsafe internal code, with what remains carefully verified, is the bar for many Rust reimplementations. It's also what keeps the code memory safe.
Intriguing work! Does it panic on any bad inputs? That's better than memory unsafety of libxml2, but still a DoS concern for some servers.
Genuine question about the verification story here: the test suite tells you the happy paths work, but for something as gnarly as XML parsing (billion laughs, deeply nested entities, encoding edge cases), how confident are you that an agent-generated parser handles the adversarial inputs correctly? Passing W3C conformance is necessary but probably not sufficient for a library that might replace one with known CVEs in security-sensitive contexts. Did you run any fuzzing against it, or is that the next step?
Yes, in testing I did add four fuzzing targets to the repo:
1. fuzz_xml_parse: throws arbitrary bytes at the XML parser in both strict and recovery mode
2. fuzz_html_parse: throws arbitrary bytes at the HTML parser
3. fuzz_xpath: throws arbitrary XPath expressions at the evaluator
4. fuzz_roundtrip: parse → serialize → re-parse, checking that the pipeline never panics
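As a sketch of the property the fourth target checks (stand-in `parse`/`serialize` functions here, not xmloxide's real API), the never-panic check looks roughly like:

```rust
use std::panic;

// Toy stand-ins for illustration only: a "parser" that accepts valid
// UTF-8 and rejects everything else, and a trivial serializer.
fn parse(input: &[u8]) -> Option<String> {
    std::str::from_utf8(input).ok().map(|s| s.trim().to_string())
}

fn serialize(doc: &str) -> Vec<u8> {
    doc.as_bytes().to_vec()
}

// The roundtrip property: parse -> serialize -> re-parse must never
// panic, whatever bytes come in. A fuzzer feeds this arbitrary inputs;
// catch_unwind turns any panic into a test failure.
fn roundtrip_never_panics(input: &[u8]) -> bool {
    panic::catch_unwind(|| {
        if let Some(doc) = parse(input) {
            let bytes = serialize(&doc);
            let _ = parse(&bytes); // re-parse may fail, but must not panic
        }
    })
    .is_ok()
}

fn main() {
    // A few adversarial-ish inputs: empty, invalid UTF-8, unbalanced markup.
    let cases: &[&[u8]] = &[b"", b"\xff\xfe", b"<a><b></b></a>", b"<<<<"];
    for case in cases {
        assert!(roundtrip_never_panics(case));
    }
    println!("all cases survived");
}
```

A real fuzz target would hand `roundtrip_never_panics` millions of mutated inputs instead of a fixed list.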
Because this project uses memory-safe Rust, there isn't really a need to find the memory bugs that made up the majority of libxml2's CVEs.
There is a valid point about logic bugs or infinite loops, which I suppose could be present in any software package, and which I'm not sure of a way to totally rule out here.
How does it compare to the original in terms of source code size (number of lines of code)?
It's significantly smaller. Because Rust doesn't require header files or manual memory management code, xmloxide is ~40k lines while libxml2 is ~150k lines.
Does it fix the security flaws that caused the original project to be shut down?
Because libxml2 was written in C, its CVE history has been dominated by use-after-free, buffer overflows, double frees, and type confusion. xmloxide is written in pure Rust, so these entire vulnerability classes are eliminated at compile time.
Only if it doesn’t use any unsafe code, which I don’t think is the case here.
Is that true? I thought if you compiled a Rust crate with `#![deny(unsafe_code)]`, there would not be any issues. xmloxide has unsafe usage only in the C FFI layer, so the rest of the system should be fine.
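For what it's worth, the lint in question is a crate-level attribute. A minimal sketch (hypothetical module, not xmloxide's actual layout) of denying unsafe everywhere except a feature-gated FFI module:

```rust
// Crate-wide: any `unsafe` block or fn outside an explicitly
// allowed scope becomes a compile error.
#![deny(unsafe_code)]

// Hypothetical FFI module: only compiled when the `ffi` feature is
// enabled, and the one place `unsafe` is permitted. C-compatible entry
// points that dereference raw pointers would live here.
#[cfg(feature = "ffi")]
#[allow(unsafe_code)]
mod ffi {
    // unsafe extern "C" fn ... would go here
}

// Everything else must be safe Rust; this compiles fine...
fn safe_len(s: &str) -> usize {
    s.len()
}

// ...while something like the following would be rejected by the lint:
// fn bad(p: *const u8) -> u8 { unsafe { *p } }  // error: usage of an `unsafe` block

fn main() {
    assert_eq!(safe_len("xml"), 3);
    println!("safe code only");
}
```

Note this guarantees the absence of `unsafe` syntax in the gated code, not that the `unsafe` code inside the allowed module is correct.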
https://gitlab.gnome.org/GNOME/libxml2/-/commit/0704f52ea4cd...
Doesn't seem to have shut down or even be unmaintained. Perhaps it was briefly, and has now been resurrected?
If by flaws you mean the security researchers spamming libxml2 with low effort stuff demanding a CVE for each one so they can brag about it – no, I don’t think anybody can fix that.
Based on context, I kind of imagine they are thinking more of the issues surrounding libxslt.
It would be interesting to try this approach out with mQuickJS, QuickJS or micropython. They could potentially run circles around the ones that were first coded in Rust, such as Boa or RustPython.