Honest question: why would this code clamp the reported round-trip time? By default, min = 0.05 ms and max = 800 ms [1].
if (rtt < config.min_rtt)
    rtt = config.min_rtt;
else if (rtt > config.max_rtt)
    rtt = config.max_rtt;
Wouldn't this hide bugs in the code or network anomalies? Replies from localhost typically seem to arrive in less than 50 µs.
Comments in an earlier version [2] make no sense to me:
/* Use standard timersub for more accurate results */
if (rtt < 0)
    rtt = 0;
/* Cap at reasonable maximum to handle outliers */
if (rtt > 1000)
    rtt = 1000;
[1] https://github.com/davidesantangelo/fastrace/blob/5b843a197b...
[2] https://github.com/davidesantangelo/fastrace/commit/79d92744...
It has now been changed to:
https://github.com/davidesantangelo/fastrace/blob/e8b19407a4...
And the update message has a reference to "50µs localhost responses", indicating the comment calling the code out was directly fed into a prompt:
"Fixed Removed artificial RTT clamping that was hiding legitimate network measurements Previously clamped RTT between 0.05ms and 800ms Now reports actual values including sub-50µs localhost responses and >800ms satellite/long-distance links Added sanity check for negative RTT to detect clock issues without corrupting data This fix restores full diagnostic capability for detecting network anomalies like bufferbloat and measuring true round-trip times across all network types."
It shouldn't be legal to vibe this hard, honestly. If convicted in court, you should face punishment of, say, XXX hours doing something actually useful to society with your own two hands.
Thanks for pointing this out. If someone reports a bug in my software, I usually acknowledge the reporter in the commit message. Seems like vibe coding also allows one to get rid of this pesky obligation.
Because it's AI generated.
It used to be that if someone released a tool that's 700 lines of C, they probably had an actual need and a problem they solved by writing that code, because debugging even that amount of C tends to be non-trivial.
Today all bets are off. Does the tool do anything anybody needed? Does it work? Who knows. It might just be 700 lines of convincing-looking C churned out by a model.
Well, I was surprised that it actually makes use of "-Wall -Wextra", which is good practice.
However, both PVS-Studio and clang-tidy have a few complaints about the code. Since it is a single file, it is rather easy to try out on Compiler Explorer:
https://godbolt.org/z/n4M1vGccq
As for your remark, most folks seem not to have followed that C's authors also created lint in 1979, that Dennis Ritchie proposed fat pointers to WG14, that Plan 9 was going to use Alef, which failed but whose ideas were reused for Limbo on Inferno, and that they were also involved with Go.
Finally, Rust's borrow-checker ideas stem from AT&T research with Cyclone, a way to create a safe C.
As such, the real question is why anyone would still use C in new projects, when even the language's authors have moved beyond it, or at least reduced their use of it in userspace applications.
We live in a world where using "-Wall -Wextra" is now a positive outlier. :D God damn. I have ALWAYS used these options, along with "-pedantic", "-std=c99" and so forth.
I picked up C for fun last year and these are exactly the flags I have always used by default. Can't remember where I picked that up, but glad to hear I'm doing it right.
Yes you are.
I always use "-std=c99 -Wall -Wextra -Wpedantic -Werror". You could replace "-Wpedantic" with "-pedantic" though (it is more widely supported). You may omit "-Werror".
Sometimes I also use "-D_XOPEN_SOURCE=700" and "-D_FORTIFY_SOURCE=2" along with "-fstack-protector-strong".
For debug builds you want "-O0 -g" at the very least.
I also have a make target that uses "scan-build", "cppcheck", and "clang-tidy".
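To make the flags discussion concrete, here is a toy example of mine (nothing to do with fastrace) that compiles silently without these flags but is rejected with them, since -Wextra enables -Wsign-compare for C and -Werror turns the warning into an error:
/* gcc -std=c99 -Wall -Wextra -Wpedantic -Werror demo.c */
#include <stdio.h>

int main(void)
{
    unsigned int len = 10;
    for (int i = 0; i < len; i++)  /* comparison of integer expressions
                                      of different signedness */
        printf("%d\n", i);
    return 0;
}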
While we are at it, here are some more useful warning flags I have used: https://github.com/cpp-best-practices/cppbestpractices/blob/.... Some are C++-only, though, some are a bit opinionated (like -Wsign-conversion), and some useful C-only flags might be missing.
A few C-specific references I found just now but haven't tried myself yet:
https://github.com/systemd/systemd/blob/0885e4a6e7ca93d3aef8... https://github.com/airbus-seclab/c-compiler-security
It is also a good idea to regularly run the program with sanitizers; using them in tests is a good way to do that, I think. Why not during development as well, if the performance is acceptable for that specific program?
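As an example of what that looks like in practice, here is a toy one-past-the-end write (my code, not fastrace's) that often runs "fine" in a normal build but aborts immediately with a full report under AddressSanitizer:
/* gcc -g -fsanitize=address,undefined demo.c && ./a.out */
#include <stdlib.h>

int main(void)
{
    int *buf = malloc(8 * sizeof *buf);
    buf[8] = 42;   /* heap-buffer-overflow: one element past the end */
    free(buf);
    return 0;
}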
I've looked at a couple of these complaints from clang-tidy, and as is unfortunately often the case, all of them were false positives and overzealous nitpicks. Take all the complaints about memcpy and memset, for example: clang-tidy could easily be improved to see that these are just fine, and being dogmatic about using the "new and right way" to do things is not helpful.
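The flagged pattern is presumably something like the following (hypothetical code, not a quote from the project): the clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling check recommends memcpy_s even when the copy is already bounded by the destination size, and memcpy_s (C11 Annex K) is optional and not provided by glibc in the first place:
#include <string.h>

struct probe { char payload[64]; };

/* clang-tidy flags this memcpy, although the length is clamped to the
   destination buffer just above, so the call cannot overflow. */
static void fill_probe(struct probe *p, const char *data, size_t len)
{
    if (len > sizeof p->payload)
        len = sizeof p->payload;
    memcpy(p->payload, data, len);
}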
In practice I've found -Wall with GCC to offer a good warning level, and clang-tidy to not offer a lot of constructive feedback (besides being very slow). For more ambitious projects, it's possible to fine-tune GCC warnings.
You can also, you know, just _use_ a program and see if there are any anomalies when running it. With some discipline in code structure, many problems get hit on the first run, and extensive testing can come a lot closer to static verification than you would think. For non-real-time-constrained stuff there is also valgrind and other run-time instrumentation.
Instead of ranting, you should have realized that that is the default output without a configuration file, which isn't that easy to provide in Compiler Explorer without going through the trouble of a project template.
Naturally, on a real project there would be a heavily customised static analysis setup that would only allow a build to succeed after feedback from the SecDevOps team, alongside a feedback loop from pentesters.
We have seen how far "just _use_ the program" has gotten us in tracking down C security issues over the last 37 years, starting with the Morris worm.
And to quote Dennis Ritchie,
> To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.
-- https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist...
Instead of ranting and showing a huge warning output to make a point that fits your agenda, you could have just disabled the false positives yourself (like I did, by the way), and you would have seen that doing so vastly reduces the warnings.
Oh, and to disprove your other claim, here is a link to the godbolt with the added clang-tidy flag: https://godbolt.org/z/G31Ws8aa1. This has the clang-tidy invocation changed to disable a single warning category: --checks='-clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling'. Running with that, only a single warning remains, which is probably a false positive as well.
If there are real concerns about this code, show them. I'm not saying there can't be any. But it doesn't help your credibility if you keep arguing your claims with evidence that is easily disproved. I have nothing against tooling that actually improves the situation. Btw. that `lint` from almost 50 years ago that you're referencing is probably easily covered by `-Wall` or `-Wextra` alone. I was also mentioning valgrind.
Bottom line, you're vastly exaggerating the gravity of the memory bugs inflicted upon us by memory-unsafe languages, compared to other bugs which exist too. (Maybe I like the term memory-dangerous better).
Were they false positives though?
How come "which isn't that easy to provide in Compiler Explorer" suddenly becomes me saying that it was impossible?
If you enjoy typing an endless list of clang-tidy flags into a tiny text box, well, the fun is on the user.
The first concern is that some parts of the industry keep reaching for C when there are better alternatives, even though the language's authors have moved on to creating better languages.
> Were they false positives though?
I've looked at them -- not all of them, but all that I've looked at were of the same kind, recommending memcpy_s instead of memcpy, and were ridiculously easy to classify as false positives. So yes.
> How come "which isn't that easy to provide in Compiler Explorer" suddenly becomes me saying that it was impossible?
You claimed it wasn't easy but it is easy. VERY easy. It's one flag to disprove your point. Be honest.
> The first concern is that some parts of the industry keep reaching for C when there are better alternatives, even though the language's authors have moved on to creating better languages.
Many still enjoy it, are productive, and are creating infrastructure for billions of people to use. Let's keep things in perspective.
I always assume that anyone who says that something is a false positive without providing any rigorous proof has confirmation bias and is sadly deluding themselves about their ability and the correctness of their code.
I said "probably" because the other messages from clang-tidy were such low-quality, obvious false positives too, a waste of time. I've already given the remaining warning a look, and the code seemed fine to me. I didn't follow the whole massive linter output. Did you?
It could be a vehicle to learn about traceroute / ICMP.
For whom? What is the creator going to learn from pasting code they've never read? What are readers going to learn from reading code the "author" themselves didn't read, let alone write? If you want to learn about something, reading an LLM-generated repo seems to be about the worst possible way to do it. That's not even to say LLMs are useless for learning; you could ask directly about concepts without having it write all the code for you, but this is the lowest effort application of the tool and is more of a vehicle for anti-learning than anything.
I meant that IFF the author wrote it themself. IFF.
I just built and ran it.
Unlike traceroute and mtr, this utility must be run as root.
$ fastrace 1.1.1.1
fastrace 0.2.1
Tracing route to 1.1.1.1 (1.1.1.1)
Maximum hops: 30, Probes per hop: 3, Protocol: UDP
TTL │ IP Address (RTT ms) Hostname
────┼───────────────────────────────────────────
Error creating ICMP socket. Are you running as root?: Operation not permitted
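For reference, that error comes from raw-socket creation, which Linux restricts to processes with the CAP_NET_RAW capability. A minimal reproduction (my code, not fastrace's):
/* gcc demo.c && ./a.out
   Fails with EPERM as a normal user; works under sudo, or after
   granting the binary the capability: sudo setcap cap_net_raw+ep ./a.out */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (fd < 0)
        perror("socket(AF_INET, SOCK_RAW, IPPROTO_ICMP)");
    return 0;
}
Classic traceroute typically avoids this by being installed setuid or with file capabilities (and mtr delegates to its mtr-packet helper), which is presumably why they do not need sudo.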
I take issue with the Author section. You’re the only one listed. Shouldn’t you give ChatGPT credit, or even further afield, all the developers who wrote the code and answers that ChatGPT trained on to produce this, as far as I can tell, meaningless tool?
ChatGPT isn't an author, so it shouldn't be listed. Instead, every single piece of human creation that's been sloshed and slurried to produce this drab drivel should be put as authors. That would be fair.
If the FSF trained a net on all the code that has copyright assigned to the FSF, could it be used to ethically vibe-code free software retaining the same copyright and license? Perhaps even pointing to a file on fsf.org with all the authors' names?
This only seems fair.
Is this vibe coded or is it just the readme that's AI-generated?
The commit messages (with dozens of semi-related and unrelated changes in each commit) suggest so.
https://github.com/davidesantangelo/fastrace/commit/79d92744...
(For me, this does not necessarily say anything about code quality. However, if a whole project is AI-generated, the author has no enforceable copyright IMHO, and thus, the 2-clause BSD license is void.)
I was skeptical at first, but this does stink of auto-generated code. I want to believe it is not.
I wish ChatGPT could have told the author about the existence of mtr before starting this :-)
Even if it's simple or AI-made, projects like this still help people learn and explore; every start matters.