Awk is still one of my favorite tools because its power is underestimated by nearly everyone I see using it.
ls -l | awk '{print $3}'

That’s typical usage of Awk, where you use it in place of cut because you can’t be bothered to remember the right flags for cut.

But… Awk, by itself, can often replace entire pipelines. Reduce your pipeline to a single Awk invocation! The only drawback is that very few people know Awk well enough to do this, and this means that if you write non-trivial Awk code, nobody on your team will be able to read it.
Every once in a while, I write some tool in Awk or figure out how to rewrite some pipeline as Awk. It’s an enrichment activity for me, like those toys they put in animal habitats at the zoo.
>To Perl connoisseurs, this feature may be known as Autovivification. In general, AWK is quite unequivocally a prototype of Perl. You can even say that Perl is a kind of AWK overgrowth on steroids…
Before I learned Perl, I used to write non-trivial awk programs. Associative arrays, and other features are indeed very powerful. I'm no longer fluent, but I think I could still read a sophisticated awk script.
Even sed can be used for some fancy processing (i.e scripts), if one knows regex well.
> this means that if you write non-trivial Awk code, nobody on your team will be able to read it.
Sort of! A lot of AWK is easy to read even if you don't remember how to write it. There are a few quirks like how gsub modifies its target in-place (and how its default target is $0), and of course understanding the overall pattern-action layout. But I think most reasonable (not too clever, not too complicated) AWK scripts would also be readable to a typical programmer even if they don't know AWK specifically.
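The gsub quirk mentioned above is easy to demonstrate; a minimal sketch:

```shell
# gsub(regex, replacement) returns the number of substitutions made;
# with no third argument, it rewrites $0 in place.
echo 'a-b-c' | awk '{ n = gsub(/-/, "_"); print n, $0 }'
# → 2 a_b_c
```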
I wrote a BASIC renumberer and compactor in bash, using every bashism I could so that it called no externals and didn't even use backticks to call child bashes, just pure bash itself (but late version and use every available feature for convenience and compactness).
I then re-wrote it in awk out of curiosity and it looked almost the same.
Crazy bash expansion syntax and commandline parser abuse was replaced by actual proper functions, but the whole thing when done was almost a line by line in-place replacement, so almost the same loc and structure.
Both versions share most of the same advantages over something like Python. Both are single-binary interpreters that are always already installed. Both versions will run on basically any system, any platform, any version (going forward at least) without needing to install anything, let alone anything as gobsmackingly ridiculous as pip or venv.(1)
But the awk version is actually readable.
And unlike bash, awk pretty much stopped changing decades ago, so not only is it forward compatible, it's pretty backwards compatible too.
Not that that is generally a thing you have to worry about. We don't make new machines that are older than some code we wrote 5 years ago. Old bash or awk code always works on the next new machine, and that's all you ever need.(2)
There is gnu vs bsd vs posix vs mawk/nawk but that's not much of a problem and it's not a constantly breaking new-version problem but the same gnu vs posix differences for the last 30 years. You have to knowingly go out of your way to use mawk etc.
(1) With bash you still have, for example, the fact that everything is on bash 5 or at worst 4, except that a brand-new Mac today still ships with bash 3, so you can actually run into backwards compatibility problems in bash.
(2) and bash does actually have plugins & extensions and they do vary from system to system so you do have things you either need to avoid using or run into exactly the same breakage as python or ruby or whatever.
For writing a program vs gluing other programs together, awk really should be the GOAT.
>and so you can actually run into backwards compatibility in bash.
let's have a bash and bash that backwards compatibility in bash.
I feel the same about using Awk, it is just fun to use. I like that variables have defined initial values so they don't need to be declared. And the most common bits of control flow needed to process an input file are implicit. Some fun things I've written with awk
Plain text accounting program in awk https://github.com/benjaminogles/ledger.bash
Literate programming/static site generator in awk https://github.com/benjaminogles/lit
Although the latter just uses awk as a weird shell and maintains a couple child processes for converting md to html and executing code blocks with output piped into the document
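The zero-default variables and implicit per-line loop mentioned above are what make classic awk one-liners so compact; a small sketch:

```shell
# No declarations needed: sum starts at 0, the { } action runs
# once per input line, and END runs after the last line.
printf '3\n4\n5\n' | awk '{ sum += $1 } END { print sum }'
# → 12
```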
AWK, rc, and mk are the 3 big tools in my shell toolkit. It's great
Why mk instead of any of the other builders?
> That’s typical usage of Awk, where you use it in place of cut because you can’t be bothered to remember the right flags for cut.
Even if you remember the flags, cut(1) will not be able to handle ls -l, or the output of any command that uses spaces to align text into fixed-width columns.
Unlike awk(1), cut(1) only works with delimiters that are a single character. Meaning, a run of spaces will be treated as several empty fields. And, depending on factors you don't control, every line will have a different number of fields, and the data you need to extract will be in a different field.
You can either switch to awk(1), because its default field separator treats runs of spaces as one, or squeeze them with tr(1) first:
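Both approaches, sketched on a sample ls -l style line:

```shell
line='drwxr-xr-x  5 alice staff'
# awk: the default FS treats any run of blanks as one separator.
printf '%s\n' "$line" | awk '{ print $3 }'
# → alice
# cut: first squeeze repeated spaces into one with tr -s.
printf '%s\n' "$line" | tr -s ' ' | cut -d' ' -f3
# → alice
```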
Love awk. In the early days of my career, I used to write ETL pipelines and awk helped me condense a lot of stuff into a small number of LOC. I particularly prided myself in writing terse one-liners (some probably undecipherable, ha!); but did occasionally write scripts. Now I mostly reach for Python.
awk is so much better than sed to learn given its ability. The only unix tools it doesn't replace are tr and tail; other than that, you can use it instead of grep, cut, sed, and head.
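A rough sketch of those substitutions:

```shell
printf 'one\ntwo\nthree\n' | awk '/t/'           # grep t
printf 'one\ntwo\nthree\n' | awk 'NR <= 2'       # head -n 2
printf 'a b\nc d\n'        | awk '{ print $2 }'  # cut -d' ' -f2
printf 'foo\n' | awk '{ sub(/o/, "0"); print }'  # sed 's/o/0/'
```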
one of the best word-wrapping implementations I've seen (handles color codes and emojis just fine!) is written in pure mawk
very fast, highly underrated language
I'm not sure how good it would be for pipelines, if a step should fail, or if a step should need to resume, etc.
This sounds interesting. Could you give an example where you rewrote a pipeline in awk?
Somebody wanted to set breakpoints in their C code by marking them with a comment (note “d” for “debugger”):
You can get a list of them with a single Awk line. You can even create a GDB script pretty easily. (IMO, it's easier still to configure your editor to support breakpoints, but I'm not the one who chose to do it this way.)
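A sketch of that idea (the "//d" marker comes from the thread; the file name and exact comment format are assumptions for illustration):

```shell
# Emit a gdb 'break file:line' command for every //d comment.
awk '/\/\/d[ \t]/ { printf "break %s:%d\n", FILENAME, FNR }' main.c
```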
Why are you using the locale-specific [:space:] on source code? In your C source code, are you using spaces other than ASCII 0x20?
Would you have //d<0xA0>rest of comment?
Or some fancy Unicode space made using several UTF-8 bytes?
Tab characters can also be found in source code.
Since you control the //d format, why would you allow/support anything but a space as a separator? That's just to distinguish it from a comment like "//delete empty nodes" that is not the //d debug notation.
If tabs are supported,

[ \t]

is still shorter than

[[:space:]]

and if we include all the "isspace" characters from ASCII (vertical tab, form feed, embedded carriage return) except for the line feed that would never occur due to separating lines, we just break even on pure character count:

[ \t\v\f\r]

TVFR all fall under the left hand, backslash under the right, and nothing requires Shift.

The resulting character class does exactly the same thing under any locale.
There's also [:blank:], which is just space and tab. Both I think are perfectly readable and reasonable options that communicate intent nicely.
ISO C99 says, of the isblank function (to which [:blank:] is related):
The isblank function tests for any character that is a standard blank character or is one of a locale-specific set of characters for which isspace is true and that is used to separate words within a line of text. The standard blank characters are the following: space (’ ’), and horizontal tab (’\t’). In the "C" locale, isblank returns true only for the standard blank characters.
[:blank:] is only the same thing as [\t ] (tab, space) if you run your scripts and Awk and everything else in the "C" locale.
Not the OP, but here is an example:

TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secrets -n kube-system | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t' | tr -d " ")

This pipeline can be significantly reduced by replacing the cut invocations with awk, folding the grep into awk, and using awk's gsub in place of tr.
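A sketch of the final stages (the sample input stands in for `kubectl describe secret` output; field layout is assumed):

```shell
# Stand-in for the real kubectl output.
describe_secret() { printf 'ca.crt:  1025 bytes\ntoken:   abc123\n'; }

# grep -E '^token' | cut -f2 -d':' | tr -d '\t' | tr -d ' '
# collapses into one awk program:
TOKEN=$(describe_secret | awk -F':' '/^token/ { gsub(/[ \t]/, "", $2); print $2 }')
echo "$TOKEN"
# → abc123
```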
Example of replacing grep+cut with a single awk invocation:
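One way to sketch that replacement (file name and pattern are illustrative):

```shell
printf 'default-token  kubernetes.io\nother-token    opaque\n' > secrets.txt
# grep default secrets.txt | cut -f1 -d' '
# becomes:
awk '/default/ { print $1 }' secrets.txt
# → default-token
```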
Conditions don't have to be regular expressions. For example:

Stop using awk, use a real programming language+shell instead, with structured data instead of bytestream wrangling:
You don't need to memorize bad tools' quirks. You can just use good tools.

https://nushell.sh - try Nushell now! It's like PowerShell, if it was good.
PowerShell is open source and available on Linux today for those who enjoy an OO terminal.
MIT licensed.
https://learn.microsoft.com/en-us/powershell/scripting/insta...
Also, the nushell code is self-explanatory. Who knows what $3 refers to?
While your recommendation is sound: this is not only a rudely-worded take, but also missing the point of the parent comment.
> try Nushell now!
So, I'm curious. What's the Nushell reimplementation of the 'crash-dump.awk' script at the end of the "Awk in 20 Minutes" article on ferd.ca ? Do note that "I simply won't deal with weirdly-structured data." isn't an option.
Once you get TSV and CSV related tools, nushell and psh are like toys.
https://www.nushell.sh/commands/docs/from_csv.html
For TSV, use the --separator flag.
Current AWK (One True AWK, under OpenBSD in base) got CSV support, you can read the man page for it.
This was a great read, as was the previous post in the series. I see a lot of very convincing arguments here (https://maximullaris.com/awk.html#why), but for me one of the biggest points in favour of Python (and I say this as someone who, for learning, will always just reach for C++ because of my muscle memory) is its eminent readability. If I'm writing a script, quite a lot of the time it's meant not just for myself but for my peers to use with some degree of regularity. I feel pretty confident that there would be much more operational overhead and a lot of time spent explaining internals with awk than with Python.
I used to be scared of Awk, and then I read through the appendix / chapters of "More Programming Pearls" (https://www.amazon.com/More-Programming-Pearls-Confessions-C...) and it became a much easier to reason about language.
The structure can be a bit confusing if you've only seen one liners because it has a lot of defaults that kick in when not specified.
The pleasant surprise from learning to use awk was that bpftrace suddenly became much more understandable and easier to write as well, because it's partially inspired by awk.
I learned the basics of AWK in a few minutes from here: https://learnxinyminutes.com/awk/ — and I agree with you, it was worth it!
The portability hit me. I was working a closed corp net that at the time didn't have python and shell was so inconsistent. But awk just worked. Sed was also a really strong tool.
Nice link to the canonical book on Awk within the first linked page in the article: https://ia903404.us.archive.org/0/items/pdfy-MgN0H1joIoDVoIC...
Everyone should read "The AWK Programming Language". It's so short, with both the first and second editions floating around 200 pages in an A5 (could be off on that page size) form factor.
Aside from AWK being a handy language to know, understanding the ideas behind it from a language design and use case perspective can help open your eyes to new constructs and ideas.
There’s a second edition: https://www.awk.dev/
Good for extensibility is a claim I've never heard before; I've always found awk's "everything is global scope" a huge limitation. But if it's scripting, then I suppose you could just... isolate each script's namespace and take the global namespace as the exported namespace, and since everything is static, that really simplifies things further. Lua is still better, of course, but if you don't need that much power, I suppose awk would be even smaller.
Fun read. I always thought a quick calculation like echo $((1+100)) was just a shell feature. Perhaps it has roots in awk as well.
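For the record, $(( )) is POSIX shell arithmetic expansion, but awk offers the same convenience:

```shell
echo $((1 + 100))              # shell arithmetic expansion
awk 'BEGIN { print 1 + 100 }'  # awk equivalent; BEGIN runs before any input
# both print 101
```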
FreeCell written in AWK:
https://git.luxferre.top/nnfc/
AWK goodies (git clone --recursive) :
https://git.luxferre.top/awk-gold-collection
I am not a programmer, but I have used awk since the 1980's. And normally I would read this type of info or really many things about typical unix tools. I've done a small amount of helpful things with awk (again dating to the 1980's). (Wrote an estimating system using awk and flat txt files as an example).
However, given what I've been able to accomplish with Claude Code, I no longer find it necessary to know any details, tips, or tricks, or to really learn anything more (at least for the types of projects I am involved in for my own benefit).
Update: Would love to know why this was downvoted...
HN is all about content that gratifies one’s intellectual curiosity, so if you are admitting you have lost the desire to learn, then that could be triggering the backlash.
Obviously, you won't understand or agree with the reason once explained, so really what's the point?
The reason is (yes I will be so bold as to speak for all on this one) both using ai to do your thinking for you, and essentially advocating to any readers to do the same simply by writing how well it works for you. Some people find this actively bad, of negative value, and some find it merely utterly uninteresting, of no value, and both responses produce downvotes.
But it's automatic that you cannot see this. If you recognized any problem, you would not be doing it, or at the very least would not describe it as anything but an embarrassing admission, like talking about a guilty pleasure vs a wholesome good thing.
So don't bother asking "What's wrong with using this tool that works vs any other tool that works?" If you have to ask... There are several things wrong, not just one.
Or for some it could just be that "I used to use awk but now I just use ___" just doesn't add anything to a discussion about awk. "I used to use awk a lot but now I just use ruby". Ok? So what? Some people go as far as to downvote for that.
Also, now that you whined about downvotes, I wouldn't be surprised if that isn't the cause of some itself, because it absolutely does deserve it.
There might possibly also be at least some just from "I'm not a programmer but here's my thoughts on this programming topic", though that isn't very wrong in my own opinion. You even say you've actually used awk a lot, so as far as I'm concerned you can absolutely talk about awk and probably don't need to be so humble as to deny yourself as a programmer. It's admirable to avoid making claims about yourself, but I bet a bystander would call you at least a programmer, even if we'll leave the actual level of sophistication unspecified.
Since I wrote this comment, I did not up or downvote myself. But for the record, I would have downvoted for the ai.