I recently noticed an issue where Tailscale was placing its iptables rules above my system firewall (`ts-input all -- anywhere anywhere` before `nixos-fw`). The Tailscale input chain looked like this:
```
Chain ts-input (1 references)
target     prot opt source           destination
ACCEPT     all  --  pan              anywhere
RETURN     all  --  100.115.92.0/23  anywhere
DROP       all  --  100.64.0.0/10    anywhere
ACCEPT     all  --  anywhere         anywhere
```
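For context, the ordering in question looked roughly like this (an illustrative reconstruction, not verbatim output; exact columns vary): the INPUT chain jumps to ts-input before it ever reaches nixos-fw.

```
Chain INPUT (policy ACCEPT)
target     prot opt source           destination
ts-input   all  --  anywhere         anywhere
nixos-fw   all  --  anywhere         anywhere
```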
That seemed bad, but I'm not super familiar with iptables, so I asked Claude Sonnet 4.5 through Kagi. It gave me:
> No, nixos-fw IS run on most packets. Let me explain the iptables chain flow:
>
> How the Chain Processing Works
>
> In iptables, when a user-defined chain (like ts-input) is called:
>
> - ACCEPT: Accepts the packet within that chain and returns to the calling chain (INPUT), where processing continues with the next rule
> - RETURN: Simply returns to the calling chain and continues processing
> - DROP: Drops the packet entirely (no further processing anywhere)
Which is very very wrong. In a user-defined chain, ACCEPT is terminal: the packet is accepted then and there and never returns to INPUT, so anything matched by ts-input's final ACCEPT skips nixos-fw entirely. I highlighted the similarity between ACCEPT and RETURN in its explanation, not even with that much conviction or forcefulness, and it turned on a dime.
> You're absolutely right to question that - I made an error! Let me correct it:
>
> ...
>
> Most packets DON'T reach nixos-fw because:
>
> ...
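For what it's worth, the corrected behaviour is easy to confirm empirically (a sketch, assuming the chain names above): watch the per-rule packet counters. If the counter on ts-input's final ACCEPT climbs while the nixos-fw counters stay flat, those packets are being accepted before nixos-fw ever sees them.

```sh
# -v shows per-rule packet/byte counters (and interface matches),
# -n skips DNS lookups so the listing is fast and stable
iptables -v -n -L ts-input
iptables -v -n -L nixos-fw

# Optionally zero the counters first, generate some traffic, then re-list:
iptables -Z ts-input && iptables -Z nixos-fw
```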
I also had another incident where it looked over a backup script and gave it an OK, but failed to alert me that `gpg -as --encrypt-to <ID>` requires `-e` to actually do the encryption. By that point I was wiser and had o3-pro look over the script, which caught it right away.
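To spell out the gpg trap (a minimal sketch; `<ID>` and the filename are placeholders): `--encrypt-to` only adds an extra recipient when encryption has already been requested, so without `-e` it is silently ignored.

```sh
# -a: ASCII armor, -s: sign. There is no -e, so --encrypt-to is a no-op:
# the output is signed but NOT encrypted (the plaintext is still readable).
gpg -as --encrypt-to <ID> backup.tar

# -e (--encrypt) is what actually enables encryption; -r names a recipient:
gpg -ase -r <ID> backup.tar
```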
I'm not sure why AI is so completely trash at security. In fairness, the average software dev is also worse at security than at writing code, and the answer to many Stack Overflow questions is "add --insecure --no-check --bypass-tls", but I'm still a little shocked at how bad AI is.
> Was that Tailscale firewall rule intended, a bug or a security issue?

It was intended. Since my original comment, I have learned that the output of `iptables -L` is incomplete when using `iptables-nft`. Specifically, it hides that the rule

```
ACCEPT     all  --  anywhere         anywhere
```

is configured to only match on the interface `tailscale0`. The folks at security@tailscale.com promptly set me straight when I reported it, and I greatly appreciate that.
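For anyone checking their own rules: the hidden interface match shows up once you ask for it (a sketch; the final output line is illustrative).

```sh
# Plain -L omits the in/out interface columns entirely:
iptables -L ts-input

# -v adds them, revealing that the final ACCEPT only matches tailscale0:
iptables -v -n -L ts-input

# -S prints the rules in iptables-save syntax, interface match included:
iptables -S ts-input
# e.g. (illustrative): -A ts-input -i tailscale0 -j ACCEPT
```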
I would say most technical people are by now aware that this software (LLMs) makes stuff up. If someone wasn't sure, and to find the answer asked the LLM in a manner analogous to yours, and just ran with it, then the problem here is the people.
Headline is wrong.

If you click through to the study, it actually says that they did a survey and asked 'Has your organization ever identified a security vulnerability introduced by AI-generated code?'
20% of respondents answered 'Yes, a serious incident'. Another 49% responded with 'Yes, a minor issue'.
Bottom line: before relying on AI-generated code, you should ask yourself one question: "Do you feel lucky?"
If your workflow requires any real accuracy, consistency, or security, LLMs are a liability.