There are two kinds of bugs: the rare, tricky race conditions and the everyday “oh shucks” ones. The rare ones show up maybe 1% of the time—they demand a debugger, careful tracing, and detective work. The “oh shucks” kind, where I'm half sure what it is just from the shape of the exception message across the room, is all the rest of the time. A simple print statement usually does the trick for this kind.
By definition, a rare case will rarely show up in my dev environment, if it shows up at all, so the only way to find it is to add logging and look at the logs the next time someone reports that same bug after the logging was added.
Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?
Rare 1% bugs practically require print debugging because they are only going to appear 6 times if you run the test 600 times. So you just run the test 600 times all at once, look at the logs of the 6 failed tests, and fix the bug. You don’t want to run the debugger 600 times in sequence.
Record-and-replay debuggers like rr and UndoDB are designed for exactly this scenario. In fact it's way better than logging; with logging, in practice, you usually don't have the logs you need the first time, so you have to iterate "add logs, rerun 600 times" several times. With rr and UndoDB you just have to reproduce once and then you'll be able to figure it out.
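For anyone who hasn't tried it, the basic flow looks roughly like this (the test name is made up; rr also has a chaos mode to make rare races more likely):

    rr record ./flaky_test      # repeat until you capture one failing run
    rr replay                   # re-executes that exact run, deterministically, under gdb
    (rr) reverse-continue       # you can even run backwards from the failure

Once you have that one recording, you can inspect it as many times as you like without ever reproducing the bug again.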
I used to agree with this, but then I realized that you can use trace points (aka non-suspending breakpoints) in a debugger. These cover all the use cases of print statements with a few extra advantages:
- You can add new traces, or modify/disable existing ones at runtime without having to recompile and rerun your program.
- Once you've fixed the bug, you don't have to clean up all the prints that you left around the codebase.
I know that there is a good reason for debugging with prints: the debugging experience in many languages sucks. In that case I always use prints. But if I'm lucky enough to use a language with good debugging tooling (e.g. Java/Kotlin + IntelliJ IDEA), there is zero chance I'll ever print for debugging.
The tricky race conditions are the ones you often don't see in the debugger, because stopping one thread makes the behavior deterministic.
But that aside, for webapps I feel it's way easier to just set a breakpoint and stop to see a var's value instead of adding a print statement for it (just to find out that you also need to see the value of another var). So given you just always start in debugging mode, there's no downside if you have a good IDE.
Often you can also just use conditional breakpoints, which surprisingly few people know about (to be clear, it's still a breakpoint, but your application just auto-continues if the condition is false). It's usually available via a right click on the same spot where you'd click to set the breakpoint.
I've had far better luck print debugging tricky race conditions than using a debugger.
The only language where I've found a debugger particularly useful for race condition debugging is go, where it's a lot easier to synthetically trigger race conditions in my experience.
Well, if you have a race condition, the debugger is likely to change the timing and alter the race, possibly hiding it altogether. Race conditions are where print is often more useful than the debugger.
Log to a memory ring buffer (if you need extreme precision, prefetch everything and write binary fixed size "log entries"), flush asynchronously at some point when you don't care about timing anymore. Really helpful in kernel debugging.
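A minimal sketch of the userspace version of that idea (names and sizes are made up; a kernel version would use a cheaper timestamp and per-CPU buffers):

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    struct LogEntry { uint64_t t; uint32_t id; uint32_t value; };   // fixed-size binary entry

    constexpr size_t kEntries = 1 << 14;                 // power of two -> cheap index masking
    static LogEntry g_ring[kEntries];
    static std::atomic<uint64_t> g_head{0};

    // Hot path: no formatting, just a couple of stores.
    inline void trace(uint32_t id, uint32_t value) {
        uint64_t slot = g_head.fetch_add(1, std::memory_order_relaxed);
        uint64_t t = std::chrono::steady_clock::now().time_since_epoch().count();
        g_ring[slot & (kEntries - 1)] = { t, id, value };
    }

    // Cold path: called later, once timing no longer matters, to format everything.
    void flush() {
        uint64_t n = g_head.load();
        for (uint64_t i = n > kEntries ? n - kEntries : 0; i < n; ++i) {
            const LogEntry& e = g_ring[i & (kEntries - 1)];
            std::printf("%llu id=%u value=%u\n", (unsigned long long)e.t, e.id, e.value);
        }
    }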
No, wrong. Totally wrong. You're changing the very conditions you're trying to measure; accurate measurement requires observing without modification. This is where you use proper tools like an In-Circuit Emulator (ICE) or its equivalent.
What I've found is that as you chew through surface level issues, at one point all that's left is messy and tricky bugs.
Still have a vivid memory of moving a JS frontend to TS and just overnight losing all the "oh shucks" frontend bugs, being left with race conditions and friends.
Not to say you can't do print debugging with that (tracing is fancy print debugging!), but I've found that a project with a lot of easy-to-debug issues tends to be at a certain level of maturity, and as time goes on you start ripping your hair out way more.
Absolutely. My current role involves literally chasing down all these integration point issues - and they keep changing! Not everything has the luxury of being built on a stable, well tested base.
I'm having the most fun I've had in ages. It's like being Sherlock Holmes, and construction worker all at once.
Print statements, debuggers, memory analyzers, power meters, tracers, tcpdump - everything has a place, and the problem space helps dictate what and when.
The easy-to-debug issues are there because I just wrote some new code, haven't even committed it, and am right now writing some unit tests for it. That's extremely common, and print debugging is alright here.
Unit and integration testing gives you long-term maintainable code that's easy and quick to prove still works, not print debugging your way through laborious, untouchable, untestable garbage.
Even print debugging is easier in a good debugger.
Print debugging in frontend JS/TS is literally just writing the statement "debugger;" and saving the file. JS, unlike supposedly better designed languages, is designed to support hot reloading so often times just saving the file will launch me into the debugger at the line of code in question.
I used to write C++, and setting up print statements, while easier than using LLDB, is still harder than that.
I still use print debugging, but only when the debugger fails me. It's still easier to write a series of console.log()s than to set up logging breakpoints. If only there was an equivalent to "debugger;" that supported log and continue.
no it's not lol. hmr is an outrageous hack of the language. however, the fact JS can accommodate such shenanigans is really what you mean.
sorry I don't mean to be a pedantic ass. i just think it's fascinating how languages that are "poorly" designed can end up being so damn useful in the future. i think that says something about design.
ESM makes hot module reloading workable because imports are live bindings: when you import a symbol you get a handle to that binding rather than a plain value, so if the module changes, the imported symbol changes too.
> the rare, tricky race conditions [...]. The rare ones show up maybe 1% of the time—they demand a debugger,
Interesting. I usually find those harder to debug with a debugger. Debuggers change the timing when stepping through, making the bug disappear. Do you have a cool trick for that? (Or a mundane trick, I'm not picky.)
If I find myself using a debugger it’s usually one of two things:
- freshly written low level assembly code that isn’t working
- basic userspace app crash (in C) where whipping out gdb is faster than adding prints and recompiling.
I've never even needed a debugger for complex kernel drivers — just prints.
Indeed, depends on deployment and type of application.
If the customer has their own deployment of the app (on their own server or computer), then all you have to go with, when they report a problem, are logs. Of course, you also have to have a way to obtain those logs. In such cases, it's way better for the developers to also never use debugger, because they are then forced to ensure during development that logs do contain sufficient information to pinpoint a problem.
Using a debugger also already means that you can reproduce the problem yourself, which is already half of the solution :)
One from work: another team is willing to support exactly two build modes in their projects: release mode, or full debug info for everything. Loading the full debug info into a debugger takes 30m+ and will fail if the computer goes to sleep midway through.
I just debug release mode instead, where print debug is usually nicer than a debugger without symbols. I could fix the situation other ways, but a non-reversible debugger doesn't justify the effort for me.
Exactly. At work for example I use the dev tools debugger all the time, but lldb for c++ only when running unit tests (because our server harness is too large and debug builds are too large and slow). I’ve never really used an IDE for python.
When using Xcode the debugger is right there and so it is in qt creator. I’ve tried making it work in vim many times and just gave up at some point.
No shade, this was my perspective until recently as well, but I disagree now.
The tipping point for me was the realisation that if I'm printing code out for debugging, I must be executing that code, and if I'm executing that code anyway, it's faster for me to click a debug point in an IDE than it is to type out a print statement.
Not only that, but the thing that I forgot to include in my log line doesn't require adding it in and re-spinning, I can just look it up when the debug point is hit.
I don't know why it took me so long to change the habit but one day it miraculously happened overnight.
> it's faster for me to click a debug point in an IDE than it is to type out a print statement
Interesting. I always viewed the interface to a debugger as its greatest flaw—who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?
When the print statements cause a change in asynchronous data hazards that leads to the issue disappearing, then what's the plan since you appear to "know it all" already? Perhaps you don't know as much as you profess, professor.
I don't see any evidence that the 1% of bugs can be reduced so easily. A debugger is unsuitable just as often as print debugging is. There is no inherent edge it gives to the sort of reasoning demanded. It is just a flathead rather than a phillips. The only thing that distinguishes this sort of bug from the rest is pain.
I’ll give you an example of a plain vanilla ass bug that I dealt with today.
A teammate was trying to use PortAudio with ALSA on one of our cloud Linux machines for CI tests. PortAudio was failing to initialize with an error saying it failed to find the host API.
Why did it fail? Where did it look? What actual operation failed? Who the fuck knows! With a debugger this would take approximately 30 seconds to understand exactly why it failed. Without a debugger you need to spend a whole bunch of time figuring out how a random third party library works to figure out where the fuck to even put a printf.
Printf debugging is great if it’s within systems you already know inside and out. If you deal with code that isn’t yours, then a debugger is more than an order of magnitude faster and more efficient.
It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.
Every engineer should understand how to use a debugger and a time profiler (one that gives a call tree). Knowing how to do memory profiling is incredibly valuable too.
So many problems can be solved with these.
And then there's some more specialized tooling depending on what you're doing that can be a huge help.
For SQL, the query planner and index hit/miss / full table scan.
And things like valgrind or similar for cache hit/miss.
Proper observability (spans/ traces) for APIs...
Knowing that the tools exist and how to use them can be the difference between software and great software.
Though system design / architecture is very important as well.
So, uh, everything is important, and every engineer must know everything then?
I mean, don't get me wrong, I do agree engineers should at least be aware of the existence of debuggers & profilers and what problems they can solve. It's just that not all the stuff you've said belongs in the "must know" category.
I don't think you'll need valgrind or query planning in web frontend tasks. Knowing them won't hurt though.
I can tell you for a fact a lot of budding web developers don't even know a Javascript debugger exists, let alone something as complex/powerful as Valgrind.
All of these are useful skills in your toolkit that give you a way of reasoning about programs. Sure you can plop console.logs everywhere to figure out control/program flow but when you have a much more powerful tool specifically built for this purpose, wouldn't you, as an engineer, attempt to optimize your troubleshooting process?
Yeah, it's quite sad, considering it's already built-in on all major browsers. And it's not even hard to open, it's just a click away on the devtools tab.
But I think promoting profilers is much more important than debuggers. Far too many people I know are too eager to jump on "optimization" just because some API is too slow without profiling it first.
With native languages you'll almost always be using a compiler that can output debug symbols, and you can use the output of any compiler with (mostly) any debugger you want.
For JS in the browser, there's often a chain of transformations - TypeScript, Babel, template compilation, a bundler, a minifier - and each of these makes the browser debugger work worse -- and it's not that great to begin with, even on plain JS.
Add that to the fact that console.log actually prints objects in a structured form that you can click through and can call functions on them from the console, and you start to see why console.log() is the default choice.
I work on maintaining a 3D rendering engine written completely in Typescript, along with using a custom, stripped down version of three.js that I rely on for primitives; and no amount of console.logging will help when you're trying to figure out exactly what's going wrong in a large rendering pipeline.
I do use console.logs heavily in my work, but the debugger and profiler are instrumental in providing seamless devex.
> TypeScript, Babel, template compilation, a bundler, a minifier
During development you have access to source maps, devtools will bind breakpoints, show original typescript code and remap call stacks across bundlers.
All modern browsers support mapped debugging, also wrt profiling it can also be symbol mapped to the original sources which makes minified builds diagnosable if you ship proper source maps, which during development you ideally should.
-=-
edit: additional info;
I would also like to say console.log and debugging/profiling are not in a competition. both are useful in different contexts.
for example I will always console.log a response from an API because I like having a nice nested representation that I can click through, I'll console.log objects, classes and everything to explore them in an easier way. this is also great for devex.
I'll use the debugger when I want to pause execution at an intermediate step; for example see the result of my renderer before the postprocessing step kicks in, stop it and inspect shader code before its executed. it's pretty useful.
As mentioned originally; these are TOOLS in your toolkit, you don't have to do an either/or between them.
Well. React and SSR do break the debugger a lot, but that’s one case. Other web frameworks are much better citizens, and the debugger there is much nicer and faster than console logs.
Understanding how to use these tools properly does not take very long. If you've never used them, spending an afternoon with each on real problems will probably change how you think.
If you don't already know which tool to use / how to diagnose the problem, then instead of banging your head against the wall, you'll think "how do I figure out this thing - what is the right tool for this job?" And then you'll probably find it, and use it, because people are awesome and build incredibly useful free / open source software.
"try stuff until it works" is so common, and the experience needed to understand how to go about solving the problem is within reach.
Like especially with llms, "what's the right tool to use to solve problem x i'm having? this is what's going on. i'm on linux/macos, using python" or w/e
It may sound obvious to folks who already use a debugger, but in my experience a decent chunk of people don't use them because they just don't know about them.
Depending on the language or setup debuggers can be really crappy. I think people here would just flee away and go find a better fitting stack, but for more pragmatic workers they'll just learn to debug with the other tools (REPL, structured logging, APMs etc.)
I had a think about where I first learned to use a debugger. The combo of M$ making it easy for .NET and VB6 and working professionally and learning from others was key. Surprised it is less popular. Tests have made it less necessary perhaps BUT debugging a unit test is a killer move. You quickly get to the breakpoint and can tweak the scenario.
> I had a think about where I first learned to use a debugger
Is this not taught anymore? I started on borland C (the blue one, dos interface) and debugging was in the curriculum, 25+ years ago. Then moving to visual studio felt natural with the same concepts, even the same shortcuts mostly.
These days I'll just dump all relevant code into an LLM and have it explained to me instantly.
Being able to ask questions about the parts that are unclear (or just plain wrong) is so much easier than trying to cram the entire thing into my brain RAM.
In my experience it actually helps me learn faster too, since I rarely get stumped on random gotchas anymore.
With VSCode it's often a 10 minute job to set up. We are spoiled! Back in the VS days using a Microsoft stack it was just there. Click to add breakpoint then F5.
Author missed one of the best features: easy access to hardware breakpoints. Breaking on a memory read or write, either a raw address or via a symbol, is one of the most time saving debugging tools I know.
windbg used to offer scripting capabilities that teams could use to trigger validation of any number of internal data structures essentially at every breakpoint or watchpoint trigger. it was a tremendous way to detect subtle state corruption. and sharing scripts across teams was also a way to share knowledge of a complex binary that was often not encoded in asserts or other aspects of the codebase.
thanks for the pointers glad to hear it’s all still there
i haven’t seen this type of capability used in too many companies tbh and it seems like a lot of opportunity to improve stability and debugging speed and even code exploration/learning (did i break something ?)
From the same toolbox: expression watch. Set a watch on the invariant being violated (say "bufpos < buflen") and get a breakpoint the moment it changes.
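In gdb, using the names from that example, that's literally:

    (gdb) watch bufpos >= buflen
    (gdb) continue

gdb then stops on the exact store that flips the expression, with the culprit at the top of the backtrace. (If the expression can't be mapped onto hardware watchpoints it falls back to slow single-stepping, but for a plain variable it's essentially free.)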
Oh my god, same. This literally catches bugs with a smoking gun in their hand in a way that's completely impossible with printf. I'd upvote this 100 times if I could.
Not printf exactly, but I've found bugs with a combination of mprotect, userfaultfd and backtrace_symbols when I couldn't use HW breakpoints.
Basically, mark a set of pages as non-writable so that any writes trigger a pagefault, then register yourself as a pagefault handler for those and see who is doing the write, apply the write and move on. You can do this with LD_PRELOAD without even recompiling the debugee.
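A rough sketch of the compiled-in SIGSEGV flavor of that trick (the userfaultfd version is similar in spirit but more involved; this is a debugging hack, not production code, and the handler is not async-signal-safe):

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>
    #include <execinfo.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char* watched;    // the page we want to catch writes to (illustrative)
    static long  pagesz;

    static void on_fault(int, siginfo_t* info, void*) {
        char* addr = static_cast<char*>(info->si_addr);
        if (addr < watched || addr >= watched + pagesz) _exit(1);   // not our page

        // Print who is doing the write.
        void* frames[32];
        int n = backtrace(frames, 32);
        backtrace_symbols_fd(frames, n, 2 /* stderr */);

        // Make the page writable again so the faulting write retries and succeeds.
        // (Re-protecting it afterwards needs more machinery, e.g. single-stepping.)
        mprotect(watched, pagesz, PROT_READ | PROT_WRITE);
    }

    int main() {
        pagesz  = sysconf(_SC_PAGESIZE);
        watched = static_cast<char*>(mmap(nullptr, pagesz, PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));

        struct sigaction sa{};
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, nullptr);

        mprotect(watched, pagesz, PROT_READ);          // writes to the page now fault
        *reinterpret_cast<int*>(watched) = 42;         // caught, backtraced, then applied
        std::printf("value: %d\n", *reinterpret_cast<int*>(watched));
    }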
Very roughly, hardware watchpoints are memory addresses you ask the processor to issue an "event" for when they're read from, written to, or executed. This event is processed by the kernel, and passed through to the debugger, which breaks execution of the program on the instruction that issued the read/write/exec.
A concrete use case for this is catching memory corruption. If your program corrupts a known piece of memory, just set a hardware watchpoint on that memory address and BOOM, the debugger breaks execution on exactly the line that's responsible for the corruption. It's a fucking godsend sometimes.
While a debugger is of high value, having access to a REPL also covers the major use cases.
In particular, REPL tools will work in a remote session, on pre-production servers, etc. If the code base is organized in a somewhat modular way, it can be more pleasant than a debugger at times.
Makes me wonder if the state of debugging improved in PHP land. It was mostly unusable for batch process debugging, or when the server memory wasn't infinite, which is kinda the case most of the time for us mere mortals.
It's not a silver bullet, but Visual Studio is leaps and bounds ahead of gdb et al. for debugging C/C++ code. "Attach to process" and being able to just click a window is so easy when debugging a large Windows app.
lol, agree to disagree here. While the interface to gdb is annoying, there are many gui frontend alternatives.
VS, on the other hand, gets worse with every release. It is intolerably slow and buggy at this point. It used to be a fantastic piece of software, and is now a fantastic pile of shit.
Any recommendations on gdb frontends? Have tried with emacs, but I just really enjoy the point and click stuff, emacs keybinds don't work for me there.
IME console-based debuggers work great for single-threaded code without a lot of console output. They don't work that well otherwise. GUI-based debuggers can probably fix both of those issues. I just haven't really tried them as much.
Thinking back, the issue I had with multi-threaded code was two-fold:
- Things like "continue", "step" are no longer a faithful reproduction of what the program does in real time, so it's more difficult to understand the program's behavior. Some timing-related bugs simplify disappear under a debugger.
- There's usually some background thread that's logging things to console, which reduces to problem 2 in my comment.
I haven't used Go that much. I imagine since goroutines are such a cornerstone of the language, the go debugger must have some nifty features to support multi-(green)-threaded debugging?
Print debugging is historical / offline debugging, just ad-hoc instead of systemic.
The ”debug” package on npm is something in between, as it requires inserting debug statements but they are hidden from output unless an envvar like DEBUG=scope.subscope.*,otherscope is used.
I've loved working with rr! Unfortunately the most recent project I've been contributing to breaks it (honestly it might just be Ubuntu, as it works on my arch install, but doesn't work when deployed where I need to test it).
Most languages let you print the stack, so you can easily see the stack using print debugging.
Anecdotally, dynamic expressions are impossibly slow in the cases I’ve tried them.
As the author mentions, there are also a number of cases where debuggers don’t work. Personally, I’m going to reach for the tool that always works vs. sometimes works.
> I’m going to reach for the tool that always works vs. sometimes works.
This is only logical if you're limited to one tool. Would you never buy a power tool because sometimes the power goes out and a hand tool is your only choice?
This is something that does not require a debugger per se; it can be implemented by a "smart" log. Beside the log entry there might be a button to see the trace + state at those points. You could even allow log() to have an option for this.
1. stop the program
2. edit it to add the new log
3. rebuild the program
4. run it
5. get the program to the same state to trigger the log
3. can take quite a while on some projects, and 5. can take quite a while too for long-running programs.
And then you see the result of what you printed, figure out you need something else as well, and repeat. Instead you can just trigger a breakpoint and inspect the entire program's state.
...yes? You just print in the relevant stack frame.
There is an inherent tradeoff between interaction and reproducibility. I think the whole conversation of debugger vs print debugging is dumb. Just do whatever makes you the most productive. Often times it is immediately obvious which makes more sense.
> Some would’ve also heard about time travel debuggers (TTD) which let you step back in time. But most languages do not have a mature TTD implementation. So I am not writing about that.
Shame as that's likely the only option with significant universal UX advantage vs. sprinkling prints...
Printing is never the appropriate tool. You can make your debugger print something when that line of code is reached anyway, and automatically continue if you want. So what's the point of printf? It's just less information and fewer features.
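In gdb, for example (the file and variable names here are made up):

    (gdb) dprintf parser.c:120, "state=%d tok=%s\n", state, tok
    (gdb) run

or the same thing with an ordinary breakpoint plus commands / silent / printf / continue / end.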
Let me enumerate. Printf survives debugger restarts, shows up in git diff, usually messes less with the timing, can be exchanged with coworkers or deployed to users and be toggled with logging rules, has the full power of the programming language, the output is easier to run "diff" on to compare runs, works in CI containers, has no problems with mixed language environments...
As far as I'm concerned, breakpoints and backtraces, especially of crashes, are the superpower of debuggers. Where they are not immediately applicable, I usually don't bother.
This is refreshing. I get triggered by people writing "I don't use a debugger because I'm too smart to need one".
Some other things I'd add:
Some debuggers allow you to add actions. For example logging at the breakpoint is great if I can't modify the source, plus there's nothing to revert afterward. This just scratches the surface. Some debuggers allow you to see entire GPU workloads, view textures etc.
Debuggers are extremely useful for exploring and helping edit code. I can't be the only person that sprinkles breakpoints during development which helps me visualise code flow and quickly jump between source locations.
Maybe someone can give me an idea of how to debug this particular Rust app, which is extremely annoying. It's a fork of RustDesk.
It won't run if I compile with debug info. I think it's due to a 3rd party proprietary library. So, to run the app I have to use the release profile, with debug info stripped.
So, when I fire up gdb, I can't see any function information or anything, and it has so many system calls it's really difficult to follow through blindly.
I'd investigate why it won't run with debug info in the first place. That feels like the core problem here, because it prevents you from using some debug tools.
Of course that may require digging down pretty low, which is difficult in itself.
Edit: also there's split-debuginfo which puts debug info in separate file. It could help if the reason you can't run it is the debug info itself. Which feels unlikely, but :shrug:.
Not related to OP, but debugging is often about finding where an invariant is broken, so it feels like using LLM to navigate a debugging loop may be useful as it's not a complicated but repetitive task. However in the morning I struggle to imagine how to do that.
Two of the benefits listed (call stack and catch exceptions at the source) are available in logging as well. A good logging framework lets you add the method name, source file and line number for the logging call; after a few debugging sessions you can reconstruct the call stack quite easily. And C# at least lets you print the exception call stack from where it was thrown.
I agree that adhoc dynamic expression evaluation at run time is very useful and can only be done in a debugger.
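In C or C++ the same call-site information is available even to a hand-rolled macro, which is roughly what such frameworks give you out of the box (an illustrative sketch, not any particular library):

    #include <cstdio>

    // ##__VA_ARGS__ is a GNU/MSVC extension; C++20 has __VA_OPT__ for the same thing.
    #define LOG(fmt, ...) \
        std::fprintf(stderr, "%s:%d %s(): " fmt "\n", \
                     __FILE__, __LINE__, __func__, ##__VA_ARGS__)

    void parse(int token) {
        LOG("token=%d", token);   // prints source file, line and function with the message
    }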
Don’t tell Primeagen. Although he’s right about debugging sprawling systems in Prod. I’d argue the stateful architecture of these apps is the root cause.
things I can do with print statements but not a debugger: trace the flow of several values across a program, seeing their values at several different times and execution points on a single screen.
I had to avoid doing that inside other macros, or inside Struct or Class definitions, enums, etc. But it wasn't hard, and it was a pretty sizeable codebase.
The DEBUGVIKINGCODER macro, or whatever I called it, was a no-op in release. But in Debug or testing builds, it would do something like:
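(an illustrative sketch of the idea, not the original macro: a scope guard that writes CALL/RET lines, tagged with a GUID, to a per-thread file, and compiles away in release)

    #ifdef NDEBUG
      #define DEBUGVIKINGCODER(guid) ((void)0)
    #else
      #include <cstdio>
      #include <sstream>
      #include <thread>

      struct ScopeTrace {
          const char* guid;
          explicit ScopeTrace(const char* g) : guid(g) {
              std::fprintf(log(), "CALL %s\n", guid);
          }
          ~ScopeTrace() { std::fprintf(log(), "RET  %s\n", guid); }

          static std::FILE* log() {
              // One output file per thread.
              thread_local std::FILE* f = [] {
                  std::ostringstream name;
                  name << "trace-" << std::this_thread::get_id() << ".log";
                  return std::fopen(name.str().c_str(), "w");
              }();
              return f;
          }
      };

      #define DEBUGVIKINGCODER(guid) ScopeTrace scope_trace_here(guid)
    #endif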
So when I'd run the program, I'd get a directory full of files, one per thread.
Then I wrote another program that would read those all up, and would also read the code, and learn the File Name, Line Number of every GUID...
And, in Visual Studio, this tool program would print to the Output window, the File Name and Line Number, of every call and return.
And, in Visual Studio, you can step forward AND BACK in this Output window, and if you format it correctly, it'll open the file at that point, too.
So I could step forwards and backwards, through the code, to see who called where, etc. I could search in this Output window to jump to the function call I was looking for, and then walk backwards...
Then I added some code that would compare one run to another, and argued we could use that to figure out which of our automated tests formed a "basis set" to execute all of our code...
And to recommend which automated tests we should run, based on past analysis.
In addition to being able to time calls to functions, of course.
So then I added printing out some variables... And printing out lines in the middle of functions, when I wanted to time a section...
And if people respected the GUIDs, making a new one when they forked code, and leaving it alone if they moved code, we could have tracked how unit tests and other automation changed over time.
That got me really wishing that every new call scope really did have a GUID, in all the code we write... And I wished that it was essentially hidden from the developers, because who wants to see that? But, wow, it'd be nice if it was there.
I know there are debuggers that can go backwards and forwards in time... But I feel like being able to compare runs, over weeks and months, as the code is changing, is an under-appreciated objective.
Honestly, I feel like the print vs. debugger debate isn't about the tool, it's about the mindset. Print statements feel like you're just trying to patch a leak, while the debugger is about understanding the plumbing. I’m starting to think relying only on print is a symptom of not truly wanting to understand the system you're working in.
Call stacks and reading code give very different views of the codebase. The debugger tells you what's happening, reading tells you what can happen in many situations at once. You can generalize or focus, respectively, but their strengths and weaknesses remain.
Readable code, though, is written with the reading view in mind.
Interesting POV. I see it exactly the opposite: using a debugger most of the time feels like trying to see the current state of things without understanding what set of inputs led to it. Print debugging feels more like trying to understand the actual program logic that got us to this point, based on a few choice clues.
I’m not saying you’re wrong or I’m right, just that we have diametric opposite opinions on this.
I think the obvious benefit of a debugger is the ability to introspect when you have the misfortune of investigating the behavior of a binary rather than source code. In the vast, vast majority other instances, it is more desirable (to me) to encode evidence of investigation in the source itself. This has all the other benefits of source code—you can persist it, share it, let ai play with it, fork it, commit it to source control, use git bisect, etc.
There are a few other instances where the interaction offers notable benefits—bugs in the compiler, debugging assembly, access to registers, a half-completed runtime or standard library that occludes access to state so that you might print it. If you have the misfortune of working with C or C++, you have the benefit of breaking on memory access—but I tend to file this in the "half-completed runtime" category. There are also a few "heisenbugs" that may actually prevent the bug from occurring by using print itself; but I've only run into this I think twice. This is also possible with the debugger, but I've only run into that once. The only way out of that mess is careful reasoning, and i recommend printing the code out and using a pen.
I also strongly suspect that preference for print debugging vs interactive debuggers comes down to internal conception of the runtime and aesthetic preference. I abhor debuggers—especially those in IDEs. I think they tend to reimplement the runtime of a language a second time, except with more bugs and a less intuitive interface. But I have the wherewithal to realize that this is ultimately a preference.
I'm pretty sure in that interview at some point he realized it's because the debugger experience for developers using Linux sucks compared to Windows, where he does most of his work.
A lot of programmers work in a Linux environment.
It seems like windows, ide and languages are all pretty nicely integrated together?
> It seems like windows, ide and languages are all pretty nicely integrated together?
Not only, and not really. After all, for all its warts Visual Studio is still a decent debugger for C/C++. IntelliJ has pretty good debuggers across all of their IDEs for almost all languages (including things automatically downloading and/or decompiling sources when you step into external libraries).
Even browsers ship with built-in debuggers (and Chrome's is really good). I still see a lot of people (including my colleagues) often spend inordinate amounts of time console.log'ing when just stepping through the program would suffice.
I think it's the question of culture: people are so used to using subpar tools, they can't even imagine what a good one may look like. And these tools constantly evolve. Here's RAD Debugger by Ryan Fleury: https://threadreaderapp.com/thread/1920345634026238106.html
I don't really get the hate that debuggers sometimes get from old hands. "Who needs screwdrivers when we've always used knives?" - You can still use your knife, but a screwdriver is a useful tool.
It seems to me that this is one of the many phenomena where people want to judge and belittle their peers over something completely trivial.
Personally, I get the appeal of printing out debugging information, especially if some bug is rare and happens in unpredictable times (such as when you are sleeping). But the amount of info you get this way is necessarily lower than what can be gleaned from a debugger.
I am surprised all the time in this industry how many software engineers still debug with printf. It's entirely baffling how senior / staff folks in FAANG can get there without this essential skill.
I think it would be interesting to view this from a different angle. Perhaps "Lots of people who know of debuggers still use printf debugging, maybe they're not all wrong and there are advantages that aren't so clear."
Good print statements can become future logging entries for when software ships and debugging statements need to be turned on without source code access.
I'm so used to bouncing between environments my code's running in (and which project I'm working on) that I tend to just assume I don't have debugger access, or at least don't have it configured for that environment, even when I do. Like I'm just in the habit of not reaching for it because so often it's not actually there. It rarely matters much anyway (though when it does, yeah, it really does).
No way, sorry. The bug you're trying to squash isn't complicated enough if print statements are as valuable as a debugger. And I get what you're after - this is coming from someone who regularly uses `grep` to answer questions faster than my clients' dopey ETL/DB setups.
Quite seriously, there will be whole categories of bugs you won't catch with a debugger (same way printf or CLI execution etc. have their limitations).
The debugger will never be completely transparent, it also eats resources in parallel to your application, and peeking into the session also introduces timing issues, short of the debugger itself having its own bugs.
I'm saying it would be dumb to dismiss all other tools for the love of debuggers, it's just one tool in the toolbox.
Print debugging is checking a patient's vital signs, eye color, blood pressure, skin inflammation and so on. Using a debugger, however, is like putting the patient through an MRI machine. It can provide very advanced diagnostic information, but it's expensive, time consuming, and requires specialized hardware and education. Like medical doctors, it's easier and more logical to use the basics until the advanced tools are absolutely necessary.
Meh. None of these sway me. I'm a die hard printf() debugger and always will be. But I do use debuggers regularly, for circumstances where printf() isn't quite up to the task. And there really are only two such categories (neither of which appear in the linked article!):
1. Code where the granularity of state change is smaller than a function call. Sometimes you actually have to step through things one instruction at a time, and I'm lucky enough to have such problems to solve. You can't debug your assembly with printf(), basically[1a].
2. State changes that can't be easily isolated. Sometimes you want to log when something changes but can't for the life of you figure out when it's changing. Debuggers have watchpoints.
But... that's really it. If I'm not hitting one of those I'm not reaching for the debugger. Logging is just faster, because you type it in right at the code you're already reading.
[1a] Though there's a caveat: sometimes you need to write assembly and don't even have anything like a printk. Bootstrap code for a new device is a blast. You just try stuff like writing one byte to a UART address or setting one GPIO pin as the first instructions and hope it works, then use that one bit of output to pull the rest up.
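For the curious, the "one byte to a UART" trick is literally a store to a magic address (the address here is made up; the real one comes from the SoC datasheet, and the UART has to be clocked/configured already or left usable by the boot ROM):

    #include <cstdint>

    constexpr uintptr_t kUartTxReg = 0x10013000;   // hypothetical TX data register

    inline void putc_raw(char c) {
        *reinterpret_cast<volatile uint32_t*>(kUartTxReg) = static_cast<uint8_t>(c);
    }

    // First sign of life: if 'A' shows up on the serial console, execution got this far.
    // putc_raw('A');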
Assuming you meant C's printf, why would you subject yourself to the pain of recompilation every time you need to look at a different part of code? Isn't the debugger easier than adding printf and then recompiling?
Do you use snippets or something to help speed this up? Manually typing `printf("longvarname=%s secondvarname=%d\n", longvarname, secondvarname);` adds up over a debugging session, compared to a graphical debugger setup with well-chosen breakpoints, watches etc.
It really doesn't? I mean, sure, typing is slower than clicking (though only marginally; as complexity grows there's a lot of clicking needed to extract the needed state, whereas with printf I only need to express it once and it keeps popping out as I rerun the test).
But I spend far more time reading and thinking than I do typing. Input mechanics just aren't the limiting factor here.
The first thing I always do is define log. It's bonkers to type console.log() everywhere in JS; a simple window.log = console.log does it.
Secondly, in your example, no need to label the names. This is almost always understood by context. So, pretty manageable. e.g. in JS:
log(`${longvarname}, ${secondvarname}`)
There are two kinds of bugs: the rare, tricky race conditions and the everyday “oh shucks” ones. The rare ones show up maybe 1% of the time—they demand a debugger, careful tracing, and detective work. The “oh shucks” kind where I am half sure what it is when I see the shape of the exception message from across the room - that is all the rest of the time. A simple print statement usually does the trick for this kind.
Leave us be. We know what we’re doing.
I see it the exact other way around:
- everyday bugs, just put a breakpoint
- rare cases: add logging
By definition a rare case probably will rarely show up in my dev environment if it shows up at all, so the only way to find them is to add logging and look at the logs next time someone reports that same bug after the logging was added.
Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?
So much this. Also in our embedded environment debugging is hit and miss. Not always possible for software, memory or even hardware reasons.
Then you need better hardware-based debugging tools like an ICE.
Rare 1% bugs practically require prints debugging because they are only going to appear only 6 times if you run the test 600 times. So you just run the test 600 times all at once, look at the logs of the 6 failed tests, and fix the bug. You don’t want to run the debugger 600 times in sequence.
Record-and-replay debuggers like rr and UndoDB are designed for exactly this scenario. In fact it's way better than logging; with logging, in practice, you usually don't have the logs you need the first time, so you have to iterate "add logs, rerun 600 times" several times. With rr and UndoDB you just have to reproduce once and then you'll be able to figure it out.
Trace points do exist.
I used to agree with this, but then I realized that you can use trace points (aka non-suspending break points) in a debugger. These cover all the use cases of print statements with a few extra advantages:
- You can add new traces, or modify/disable existing ones at runtime without having to recompile and rerun your program.
- Once you've fixed the bug, you don't have to cleanup all the prints that you left around the codebase.
I know that there is a good reason for debugging with prints: The debugging experience of many languages suck. In that case I always use prints. But if I'm lucky to use a language with good debugging tooling (e.g Java/Kotlin + IntelliJ IDEA), there is zero chance to ever print for debugging.
The tricky race conditions are the ones you often don't see in the debugger, because stopping one thread makes the behavior deterministic. But that aside, for webapps I feel it's way easier to just set a breakpoint and stop to see a var's value instead of adding a print statement for it (just to find out that you also need to see the value of another var). So given you just always start in debugging mode, there's no downside if you have a good IDE.
Using a debugger isn't a synonymous with single stepping.
Often you can also just use conditional breakpoints, which surprisingly few people know about (to be clear, it's still a breakpoint, but your application just auto continues if false. Is usually usable via right click on the area you're clicking on to set the breakpoint.
I've had far better luck print debugging tricky race conditions than using a debugger.
The only language where I've found a debugger particularly useful for race condition debugging is go, where it's a lot easier to synthetically trigger race conditions in my experience.
Use trace points and feed the telemetry data into the debugger for analysis.
Well, if you have a race condition, the debugger is likely to change the timing and alter the race, possibly hiding it altogether. Race conditions is where print is often more useful than the debugger.
> the debugger is likely to change the timing
And the print will 100% change the timing.
Yes, but often no where as drastic as the debugger. In Android we have huge logs anyways, a few more printf statements aren’t going to hurt.
Log to a memory ring buffer (if you need extreme precision, prefetch everything and write binary fixed size "log entries"), flush asynchronously at some point when you don't care about timing anymore. Really helpful in kernel debugging.
Formatting log still takes considerable computing, especially when working on embedded system, where your cpu is only a few hundreds MHz.
Hence the mention of binary stuff.... We use ftrace in linux and we limit ourselves a lot on what we "print".
No, wrong. Totally wrong. You're changing the conditions that prevent accurate measurement without modification. This is where you use proper tools like an In-Circuit Emulator (ICE) or its equivalent.
The same can be said about prints.
Yes, but to a lesser extent.
> The rare ones show up maybe 1% of the time
Lucky you lol
What I've found is that as you chew through surface level issues, at one point all that's left is messy and tricky bugs.
Still have a vivid memory of moving a JS frontend to TS and just overnight losing all the "oh shucks" frontend bugs, being left with race conditions and friends.
Not to say you can't do print debugging with that (tracing is fancy print debugging!), but I've found that a project that has a lot of easy-to-debug issues tends to be at a certain level of maturity and as times goes on you start ripping your hair out way more
Absolutely. My current role involves literally chasing down all these integration point issues - and they keep changing! Not everything has the luxury of being built on a stable, well tested base.
I'm having the most fun I've had in ages. It's like being Sherlock Holmes, and construction worker all at once.
Print statements, debuggers, memory analyzers, power meters, tracers, tcpump - everything has a place, and the problem space helps dictate what and when.
The easy-to-debug issues are there because I just wrote some new code, didn't even commit the code, and is right now writing some unit tests for the new code. That's extremely common and print debugging is alright here.
Unit and integration testing for long-term maintainable code that's easy and quick to prove it still works, not print debugging with laborious, untouchable, untestable garbage.
Even print debugging is easier in a good debugger.
Print debugging in frontend JS/TS is literally just writing the statement "debugger;" and saving the file. JS, unlike supposedly better designed languages, is designed to support hot reloading so often times just saving the file will launch me into the debugger at the line of code in question.
I used to write C++, and setting up print statements, while easier than using LLDB, is still harder than that.
I still use print debugging, but only when the debugger fails me. It's still easier to write a series of console.log()s than to set up logging breakpoints. If only there was an equivalent to "debugger;" that supported log and continue.
> JS (...) is designed to support hot reloading
no it's not lol. hmr is an outrageous hack of the language. however, the fact JS can accommodate such shenanigans is really what you mean.
sorry I don't mean to be a pedantic ass. i just think it's fascinating how languages that are "poorly" designed can end up being so damn useful in the future. i think that says something about design.
ESM has Hot Module Reloading. When you import a symbol it gives you a handle to that symbol rather than a plain reference, so that if the module changes the symbol will too.
> the rare, tricky race conditions [...]. The rare ones show up maybe 1% of the time—they demand a debugger,
Interesting. I usually find those harder to debug with a debugger. Debuggers change the timing when stepping through, making the bug disappear. Do you have a cool trick for that? (Or a mundane trick, I'm not picky.)
Fully agree.
If I find myself using a debugger it’s usually one two things: - freshly written low level assembly code that isn’t working - basic userspace app crash (in C) where whipping out gdb is faster than adding prints and recompiling.
Even never needed a debugger for complex kernel drivers — just prints.
I guess I struggle to see how it's easier to print debug, if the debugger is right there I find it way faster.
Perhaps the debugging experience in different languages and IDEs is the elephant in the room, and we are all just talking past eachother.
Indeed, depends on deployment and type of application.
If the customer has their own deployment of the app (on their own server or computer), then all you have to go with, when they report a problem, are logs. Of course, you also have to have a way to obtain those logs. In such cases, it's way better for the developers to also never use debugger, because they are then forced to ensure during development that logs do contain sufficient information to pinpoint a problem.
Using a debugger also already means that you can reproduce the problem yourself, which is already half of the solution :)
One from work: another team is willing to support exactly two build modes in their projects: release mode, or full debug info for everything. Loading the full debug info into a debugger takes 30m+ and will fail if the computer goes to sleep midway through.
I just debug release mode instead, where print debug is usually nicer than a debugger without symbols. I could fix the situation other ways, but a non-reversible debugger doesn't justify the effort for me.
Exactly. At work for example I use the dev tools debugger all the time, but lldb for c++ only when running unit tests (because our server harness is too large and debug builds are too large and slow). I’ve never really used an IDE for python.
When using Xcode the debugger is right there and so it is in qt creator. I’ve tried making it work in vim many times and just gave up at some point.
The environment definitely is the main selector.
> Leave us be. We know what we’re doing.
No shade, this was my perspective until recently as well, but I disagree now.
The tipping point for me was the realisation that if I'm printing code out for debugging, I must be executing that code, and if I'm executing that code anyway, it's faster for me to click a debug point in an IDE than it is to type out a print statement.
Not only that, but the thing that I forgot to include in my log line doesn't require adding it in and re-spinning, I can just look it up when the debug point is hit.
I don't know why it took me so long to change the habit but one day it miraculously happened overnight.
> it's faster for me to click a debug point in an IDE than it is to type out a print statement
Interesting. I always viewed the interface to a debugger as its greatest flaw—who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?
It is also much much easier to fix all kinds of all other bugs stepping through code with the debugger.
I am in camp where 1% on the easy side of the curve can be efficiently fixed by print statements.
When the print statements cause a change in asynchronous data hazards that leads to the issue disappearing, then what's the plan since you appear to "know it all" already? Perhaps you don't know as much as you profess, professor.
I don't see any evidence that the 1% of bugs can be reduced so easily. A debugger is unsuitable just as often as print debugging is. There is no inherent edge it gives to the sort of reasoning demanded. It is just a flathead rather than a phillips. The only thing that distinguishes this sort of bug from the rest is pain.
> Leave us be. We know what we’re doing.
No. You’re wrong.
I’ll give you an example a plain vanilla ass bug that I dealt with today.
Teammate was trying to use portaudio with ALDA on one of cloud Linux machines for CI tests. Portaudio was failing to initialize with an error that it failed to find the host api.
Why did it fail? Where did it look? What actual operation failed? Who the fuck knows! With a debugger this would take approximately 30 seconds to understand exactly why it failed. Without a debugger you need to spend a whole bunch of time figuring out how a random third party library works to figure out where the fuck to even put a printf.
Printf debugging is great if it’s within systems you already know inside and out. If you deal with code that isn’t yours then debugger is more then an order of magnitude faster and more efficient.
It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.
Every engineer should understand how to use a debugger and a time profiler (one that gives a call tree). Knowing how to do memory profiling is incredibly valuable too.
So many problems can be solved with these.
And then there's some more specialized tooling depending on what you're doing that can be a huge help.
For SQL, the query planner and index hit/miss / full table scan.
And things like valgrind or similar for cache hit/miss.
Proper observability (spans/ traces) for APIs...
Knowing that the tools exist and how to use them can be the difference between software and great software.
Though system design / architecture is very important as well.
Renderdoc!
So, uh, everything is important, and every engineer must know everything then?
I mean, don't get me wrong, I do agree engineers should at least be aware of the existence of debuggers & profilers and what problems they can solve. It's just that not all the stuff you've said belongs in the "must know" category.
I don't think you'll need valgrind or query planning in web frontend tasks. Knowing them won't hurt though.
I can tell you for a fact a lot of budding web developers don't even know a Javascript debugger exists, let alone something as complex/powerful as Valgrind.
All of these are useful skills in your toolkit that give you a way of reasoning about programs. Sure you can plop console.logs everywhere to figure out control/program flow but when you have a much more powerful tool specifically built for this purpose, wouldn't you, as an engineer, attempt to optimize your troubleshooting process?
Yeah, it's quite sad, considering it's already built-in on all major browsers. And it's not even hard to open it, like a click away on devtools tab.
But I think promoting profilers is much more important than debuggers. Far too many people I know are too eager to jump on "optimization" just because some API is too slow without profiling it first.
With native languages you'll almost always be using a compiler that can output debug symbols, and you can use the output of any compiler with (mostly) any debugger you want.
For JS in the browser, there's a often chain of transformations - TypeScript, Babel, template compilation, a bundler, a minifier - and each of these makes the browser debugger work worse -- and it's not that great to begin with, even on plain JS.
Add that to the fact that console.log actually prints objects in a structured form that you can click through and can call functions on them from the console, and you start to see why console.log() is the default choice.
console.log works great. upto a point
I work on maintaining a 3D rendering engine written completely in Typescript, along with using a custom, stripped down version of three.js that I rely on for primitives; and no amount of console.logging will help when you're trying to figure out exactly what's going wrong in a large rendering pipeline.
I do use console.logs heavily in my work, but the debugger and profiler are instrumental in providing seamless devex.
> TypeScript, Babel, template compilation, a bundler, a minifier
During development you have access to source maps, devtools will bind breakpoints, show original typescript code and remap call stacks across bundlers. All modern browsers support mapped debugging, also wrt profiling it can also be symbol mapped to the original sources which makes minified builds diagnosable if you ship proper source maps, which during development you ideally should.
-=-
edit: additional info;
I would also like to say console.log and debugging/profiling are not in a competition. both are useful in different contexts.
for example I will always console.log a response from an API because I like having a nice nested representation that I can click through, I'll console.log objects, classes and everything to explore them in an easier way. this is also great for devex.
I'll use the debugger when I want to pause execution at an intermediate step; for example see the result of my renderer before the postprocessing step kicks in, stop it and inspect shader code before its executed. it's pretty useful.
As mentioned originally; these are TOOLS in your toolkit, you don't have to do a either/or between them.
Well. React and SSR does break debugger a lot but that’s one case. Other web frameworks are much better citizens and the debugger there is much nicer and faster than console logs.
Understanding how to use these tools properly does not take very long. If you've never used them, spending an afternoon with each on real problems will probably change how you think.
If you don't already know which tool to use / how to diagnose the problem, you'll instead of banging your head against the wall, you'll think - "how do i figure out this thing - what is the right tool for this job"? and then you'll probably find it, and use it, because people are awesome and build incredibly useful free / open source software.
"try stuff until it works" is so common, and the experience needed to understand how to go about solving the problem is within reach.
Like especially with llms, "what's the right tool to use to solve problem x i'm having? this is what's going on. i'm on linux/macos, using python" or w/e
It may sound obvious to folks who already use a debugger, but in my experience a decent chunk of people don't use them because they just don't know about them.
Spread the good word!
Depending on the language or setup, debuggers can be really crappy. I think people here would just flee and go find a better-fitting stack, but more pragmatic workers will just learn to debug with the other tools (REPL, structured logging, APMs etc.)
I had a think about where I first learned to use a debugger. The combo of M$ making it easy for .NET and VB6 and working professionally and learning from others was key. Surprised it is less popular. Tests have made it less necessary perhaps BUT debugging a unit test is a killer move. You quickly get to the breakpoint and can tweak the scenario.
> I had a think about where I first learned to use a debugger
Is this not taught anymore? I started on borland C (the blue one, dos interface) and debugging was in the curriculum, 25+ years ago. Then moving to visual studio felt natural with the same concepts, even the same shortcuts mostly.
Nothing useful I do in my job was taught by another person in a classroom.
Clearly, you have been in the wrong classroom.
Or the wrong jobs!
yeah, tons don't know they exist. But there's also a lot of people - new and veteran - who are just allergic to them, for various reasons.
Setting up a debugger is the very first thing i do when i start working with a new language, and always use it to explore the code on new projects.
These days I'll just dump all relevant code into an LLM and have it explained to me instantly.
Being able to ask questions about the parts that are unclear (or just plain wrong) is so much easier than trying to cram the entire thing into my brain RAM.
In my experience it actually helps me learn faster too, since I rarely get stumped on random gotchas anymore.
With VSCode it's often a 10 minute job to set up. We are spoiled! Back in the VS days using a Microsoft stack it was just there. Click to add breakpoint then F5.
This also applies to testing. So much legacy code out there that's untested.
Author missed one of the best features: easy access to hardware breakpoints. Breaking on a memory read or write, either a raw address or via a symbol, is one of the most time saving debugging tools I know.
windbg used to offer scripting capabilities that teams could use to trigger validation of any number of internal data structures essentially at every breakpoint or watchpoint trigger. it was a tremendous way to detect subtle state corruption. and sharing scripts across teams was also a way to share knowledge of a complex binary that was often not encoded in asserts or other aspects of the codebase.
This still exists? You can also use JavaScript to script/extend and there is a native code API too.
Note: I do work at MSFT, I have used these capabilities but I’m not on the debugger team.
https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
https://www.timdbg.com/posts/whats-the-target-model/
https://github.com/microsoft/WinDbg-Samples/tree/master
thanks for the pointers, glad to hear it's all still there
i haven't seen this type of capability used in many companies tbh, and it seems like a lot of opportunity to improve stability, debugging speed, and even code exploration/learning (did i break something?)
From the same toolbox: expression watch. Set a watch on the invariant being violated (say "bufpos < buflen") and get a breakpoint the moment it changes.
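In gdb, for instance, that's a one-liner (assuming bufpos and buflen are in scope at the point where you set it):
    (gdb) watch bufpos < buflen    # break whenever the value of this expression changes
    (gdb) continue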
Oh my god, same. This literally catches bugs with a smoking gun in their hand in a way that's completely impossible with printf. I'd upvote this 100 times if I could.
> completely impossible with printf
Not printf exactly, but I've found bugs with a combination of mprotect, userfaultfd and backtrace_symbols when I couldn't use HW breakpoints.
Basically, mark a set of pages as non-writable so that any writes trigger a pagefault, then register yourself as a pagefault handler for those and see who is doing the write, apply the write and move on. You can do this with LD_PRELOAD without even recompiling the debugee.
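A minimal sketch of the simpler SIGSEGV flavour of this trick (the userfaultfd variant needs more plumbing). Assumptions: Linux, a single thread, and a handler that isn't strictly async-signal-safe, which is usually acceptable for a throwaway debugging hack:
    #include <execinfo.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *watched_page;
    static long page_size;

    // On any write to the protected page: log the faulting address plus a
    // backtrace of whoever did it, then re-enable writes so the faulting
    // instruction is retried and the program carries on.
    static void on_segv(int, siginfo_t *info, void *) {
        void *frames[32];
        int n = backtrace(frames, 32);
        fprintf(stderr, "write to %p from:\n", info->si_addr);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        mprotect(watched_page, page_size, PROT_READ | PROT_WRITE);
    }

    int main() {
        page_size = sysconf(_SC_PAGESIZE);
        watched_page = mmap(nullptr, page_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, nullptr);

        mprotect(watched_page, page_size, PROT_READ);  // arm the "watchpoint"
        static_cast<char *>(watched_page)[0] = 'x';    // caught and logged by the handler
        printf("write went through: %c\n", static_cast<char *>(watched_page)[0]);
    }
As the parent comment notes, the same handler can be injected into an existing binary with LD_PRELOAD, so no recompilation of the debuggee is needed.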
Especially when combined with reverse execution in rr or UndoDB!
Is there somewhere where this approach is described in more detail?
Very roughly, hardware watchpoints are memory addresses you ask the processor to issue an "event" for when they're read from, written to, or executed. This event is processed by the kernel, and passed through to the debugger, which breaks execution of the program on the instruction that issued the read/write/exec.
A concrete use case for this is catching memory corruption. If your program corrupts a known piece of memory, just set a hardware watchpoint on that memory address and BOOM, the debugger breaks execution on exactly the line that's responsible for the corruption. It's a fucking godsend sometimes.
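In gdb that looks like the following (the address is made up; use the one from your corruption report):
    (gdb) watch *(int *) 0x55555555a040    # hardware watchpoint on that word
    (gdb) continue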
Search for "watchpoint debugging". Usually in most garbage-collected environments, it just observes & breaks on symbols though, not raw addresses.
VSCode (or an editor with DAP support) supports unconditional breakpoints, watchpoints, and logpoints (observe & log values to the debug console).
While a debugger is of high value, having access to a REPL also covers the major use cases.
In particular, REPL tools will work over a remote session, on pre-production servers etc. _If_ the code base is organized in a somewhat modular way, it can be more pleasant than a debugger at times.
Makes me wonder if the state of debugging improved in PHP land. It was mostly unusable for batch process debugging, or when the server memory wasn't infinite, which is kinda the case most of the time for us mere mortals.
I am the author of the posted flamebait. I agree.
I use IPython / JShell REPLs often when the code is not finished and I have to call a random function without an entrypoint.
In fact it's possible to jump to the graphical debugger from the Python REPL when running locally. PyCharm has this feature natively. In VSCode you can use a simple workaround like this: https://mahesh-hegde.github.io/posts/vscode-ipython-debuggin...
It's not a silver bullet, but Visual Studio is leaps and bounds ahead of gdb et. al. for debugging C/C++ code. "Attach to process" and being able to just click a window is so easy when debugging a large Windows app.
lol, agree to disagree here. While the interface to gdb is annoying, there are many gui frontend alternatives.
VS, on the other hand, gets worse with every release. It is intolerably slow and buggy at this point. It used to be a fantastic piece of software, and is now a fantastic pile of shit.
Any recommendations on gdb frontends? Have tried with emacs, but I just really enjoy the point and click stuff, emacs keybinds don't work for me there.
IME console-based debuggers work great for single-threaded code without a lot of console output. They don't work that well otherwise. GUI-based debuggers can probably fix both of those issues. I just haven't really tried them as much.
pdb is great for python, though.
I frequently use the Go debugger to debug concurrent goroutines. I haven’t found it any different than single-threaded debugging.
I simply use conditional breakpoints to break when whatever goroutine happens to be working on the struct I care about.
Is there more to the issue?
Thinking back, the issue I had with multi-threaded code was two-fold:
- Things like "continue", "step" are no longer a faithful reproduction of what the program does in real time, so it's more difficult to understand the program's behavior. Some timing-related bugs simplify disappear under a debugger.
- There's usually some background thread that's logging things to console, which reduces to problem 2 in my comment.
I haven't used Go that much. I imagine since goroutines are such a cornerstone of the language, the go debugger must have some nifty features to support multi-(green)-threaded debugging?
debuggers are hard to use outside of userland.
For really hairy bugs in programs that can't be stopped (kernel/drivers/realtime, etc) logging works.
And when it doesn't, like when you can't do I/O or switching of any kind, log non-blocking to a buffer that is dumped elsewhere.
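A minimal sketch of that pattern (names and sizes are made up; a single writer is assumed). The hot path is one store into a preallocated array, and all formatting happens later:
    #include <cstdint>
    #include <cstdio>

    constexpr uint32_t kRingEntries = 1024;  // power of two keeps the wrap cheap

    struct LogEntry {
        uint64_t timestamp;
        uint32_t event_id;
        uint32_t value;
    };

    static LogEntry ring[kRingEntries];
    static uint32_t ring_head;

    // Hot path: no I/O, no locks, no formatting -- just a store.
    inline void trace(uint64_t ts, uint32_t event, uint32_t value) {
        ring[ring_head++ % kRingEntries] = {ts, event, value};
    }

    // Called later, once timing no longer matters (shutdown, panic path, etc.).
    void flush_ring(FILE *out) {
        uint32_t count = ring_head < kRingEntries ? ring_head : kRingEntries;
        uint32_t start = ring_head < kRingEntries ? 0 : ring_head % kRingEntries;
        for (uint32_t i = 0; i < count; i++) {
            const LogEntry &e = ring[(start + i) % kRingEntries];
            fprintf(out, "%llu event=%u value=%u\n",
                    (unsigned long long)e.timestamp, e.event_id, e.value);
        }
    }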
Also, related: it is harder than it should be to debug the Linux kernel. Just getting a symbolized stack trace is ridiculously hard.
Something I haven't seen discussed here that is another type of debugging that can be very useful is historical / offline debugging.
Kind of a hybrid of logging and standard debugging. "everything" is logged and you can go spelunk.
For example:
https://rr-project.org/
Print debugging is historical / offline debugging, just ad-hoc instead of systemic.
The ”debug” package on npm is something in between, as it requires inserting debug statements but they are hidden from output unless an envvar like DEBUG=scope.subscope.*,otherscope is used.
I've loved working with rr! Unfortunately the most recent project I've been contributing to breaks it (honestly it might just be Ubuntu, as it works on my arch install, but doesn't work when deployed where I need to test it).
Most languages let you print the stack, so you can easily see the stack using print debugging.
Anecdotally, dynamic expressions are impossibly slow in the cases I’ve tried them.
As the author mentions, there are also a number of cases where debuggers don’t work. Personally, I’m going to reach for the tool that always works vs. sometimes works.
> I’m going to reach for the tool that always works vs. sometimes works.
This is only logical if you're limited to one tool. Would you never buy a power tool because sometimes the power goes out and a hand tool is your only choice?
but can you go back in the stack and inspect the variables and related functions there in print debugging?
This is something that does not require a debugger per se; it could be implemented by a "smart" log. Beside the log entry there might be a button to see the trace + state at those points. You could even allow log() to have an option for this.
But you have to
3. can take quite a while on some projects, and 5. can take quite a while too for long-running programs. And then you see the result of what you printed, figure out you need something else as well, and repeat. Instead you can just trigger a breakpoint and inspect the entire program's state.
...yes? You just print in the relevant stack frame.
There is an inherent tradeoff between interaction and reproducibility. I think the whole conversation of debugger vs print debugging is dumb. Just do whatever makes you the most productive. Often times it is immediately obvious which makes more sense.
> Some would’ve also heard about time travel debuggers (TTD) which let you step back in time. But most languages do not have a mature TTD implementation. So I am not writing about that.
Shame as that's likely the only option with significant universal UX advantage vs. sprinkling prints...
It isn't either/or. Good programmers know how to use both and know how to choose the appropriate tool for the job.
printing is never the appropriate tool. You can make your debugger print something when that line of code is reached anyway and automatically continue if you want. So what's the point of printf? It's just less information and fewer features.
Let me enumerate. Printf survives debugger restarts, shows up in git diff, usually messes less with the timing, can be exchanged with coworkers or deployed to users and be toggled with logging rules, has the full power of the programming language, the output is easier to run "diff" on to compare runs, works in CI containers, has no problems with mixed language environments...
As far as I'm concerned, breakpoints and backtraces, especially of crashes, are the superpower of debuggers. Where they are not immediately applicable, I usually don't bother.
This is refreshing. I get triggered by people writing "I don't use a debugger because I'm too smart to need one".
Some other things I'd add:
Some debuggers allow you to add actions. For example logging at the breakpoint is great if I can't modify the source, plus there's nothing to revert afterward. This just scratches the surface. Some debuggers allow you to see entire GPU workloads, view textures etc.
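In gdb, for example, dprintf gives you exactly that: a print statement that lives in the debugger rather than the source (the file, line, and variable names here are placeholders):
    (gdb) dprintf parser.c:214,"token=%s depth=%d\n",tok->text,depth
    (gdb) continue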
Debuggers are extremely useful for exploring and helping edit code. I can't be the only person that sprinkles breakpoints during development which helps me visualise code flow and quickly jump between source locations.
They're not just for debugging.
Maybe someone can give me an idea how I can debug this particular Rust app, which is extremely annoying. It's a fork of Rustdesk.
It won't run if I compile with debug info. I think it's due to a 3rd party proprietary library. So, to run the app I have to use release profile, with debug info stripped.
So, when I fire up gdb, I can't see any function information or anything, and it has so many system calls it's really difficult to follow through blindly.
So, what is the best way to handle this?
I'd investigate why it won't run with debug info in the first place. That feels like the core problem here, because it prevents you from using some debug tools.
Of course that may require digging down pretty low, which is difficult in itself.
Edit: also there's split-debuginfo, which puts debug info in a separate file. It could help if the reason you can't run it is the debug info itself. Which feels unlikely, but :shrug:.
You can add debug info to release builds. In Cargo.toml:
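Something along these lines (a sketch; exact option spellings depend on your Cargo version):
    [profile.release]
    debug = true      # keep full debug info in the release binary
    strip = "none"    # and make sure nothing strips it back out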
https://doc.rust-lang.org/cargo/reference/profiles.html#debu...
claude code cli
Could you expand on what you meant? I'm curious.
Not related to OP, but debugging is often about finding where an invariant is broken, so it feels like using an LLM to navigate a debugging loop may be useful, as it's a repetitive but not complicated task. However, at the moment I struggle to imagine how to do that.
Two of the benefits listed (call stack and catch exceptions at the source) are available in logging as well. A good logging framework lets you add the method name, source file and line number to the logging call; after a few debugging sessions you will reconstruct the call stack quite easily. And C# at least lets you print the exception call stack from where it was thrown.
I agree that adhoc dynamic expression evaluation at run time is very useful and can only be done in a debugger.
Don’t tell Primeagen. Although he’s right about debugging sprawling systems in Prod. I’d argue the stateful architecture of these apps is the root cause.
things I can do with print statements but not a debugger: trace the flow of several values across a program, seeing their values at several different times and execution points in a single screen.
I have counter-points to several of these... But this one is my favorite (This didn't go very far, but I loved the idea of it...):
I once wrote a program that opened up all of my code, and at every single code curly brace, it added a macro call, and a guid.
I had to avoid doing that inside other macros, or inside Struct or Class definitions, enums, etc. But it wasn't hard, and it was a pretty sizeable codebase.
The DEBUGVIKINGCODER macro, or whatever I called it, was a no-op in release. But in Debug or testing builds, it would do something like the following (using the right macros to append __LINE__ to the variable, so there's no collisions).
The constructor for DebugVikingCoder used a thread-local variable to write to a file (named after the thread id). It would write, essentially, an "entered this GUID" record, and the destructor, when that scope was exited, would write the matching "exited" record to the same file.
So when I'd run the program, I'd get a directory full of files, one per thread. Then I wrote another program that would read those all up, and would also read the code, and learn the File Name, Line Number of every GUID...
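(The snippets in this comment didn't survive formatting; below is a rough, hypothetical reconstruction of what such a macro pair might look like. Every name in it is made up.)
    #include <cstdio>

    // Hypothetical reconstruction: each instrumented scope constructs a guard
    // that logs an "enter" line to a per-thread file and logs "exit" when the
    // scope unwinds. Compiles to nothing in release builds.
    #ifdef NDEBUG
    #define TRACE_SCOPE(guid)
    #else
    struct ScopeTrace {
        const char *guid;
        explicit ScopeTrace(const char *g) : guid(g) { log("enter"); }
        ~ScopeTrace() { log("exit"); }
        void log(const char *what) {
            // One file per thread so concurrent writes never interleave; the
            // address of the thread_local stands in for a thread id here.
            static thread_local FILE *f = [] {
                char name[64];
                std::snprintf(name, sizeof name, "trace-%p.log", (void *)&f);
                return std::fopen(name, "w");
            }();
            if (f) std::fprintf(f, "%s %s\n", what, guid);
        }
    };
    #define TRACE_CONCAT2(a, b) a##b
    #define TRACE_CONCAT(a, b) TRACE_CONCAT2(a, b)
    #define TRACE_SCOPE(guid) ScopeTrace TRACE_CONCAT(trace_, __LINE__){guid}
    #endif

    void example() {
        TRACE_SCOPE("1a79c3d2-example-guid-for-this-scope");
        // ... function body ...
    }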
And, in Visual Studio, this tool program would print to the Output window, the File Name and Line Number, of every call and return.
And, in Visual Studio, you can step forward AND BACK in this Output window, and if you format it correctly, it'll open the file at that point, too.
So I could step forwards and backwards, through the code, to see who called where, etc. I could search in this Output window to jump to the function call I was looking for, and then walk backwards...
Then I added some code that would compare one run to another, and argued we could use that to figure out which of our automated tests formed a "basis set" to execute all of our code...
And to recommend which automated tests we should run, based on past analysis.
In addition to being able to time calls to functions, of course.
So then I added printing out some variables... And printing out lines in the middle of functions, when I wanted to time a section...
And if people respected the GUIDs, making a new one when they forked code, and leaving it alone if they moved code, we could have tracked how unit tests and other automation changed over time.
That got me really wishing that every new call scope really did have a GUID, in all the code we write... And I wished that it was essentially hidden from the developers, because who wants to see that? But, wow, it'd be nice if it was there.
I know there are debuggers that can go backwards and forwards in time... But I feel like being able to compare runs, over weeks and months, as the code is changing, is an under-appreciated objective.
Looks like you invented "tracing", but since you added a hook at every curly bracket, it's much more detailed than average tracing.
And slower of course, they are not free.
Looks like you invented telemetry
Didn't expect this to blow up, and now I realize it's a bit of a flame-bait topic, haha.
"you can’t use them when your application is running on remote environments"
This isn't always the case. Maybe it's really hard in a lot of cases, but it's not impossible.
I read it as dealing with applications you only get shell access to and can't forward ports.
Honestly, I feel like the print vs. debugger debate isn't about the tool, it's about the mindset. Print statements feel like you're just trying to patch a leak, while the debugger is about understanding the plumbing. I’m starting to think relying only on print is a symptom of not truly wanting to understand the system you're working in.
https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/
A bit of counterpoint here
Call stacks and reading code give very different views of the codebase. The debugger tells you what's happening, reading tells you what can happen in many situations at once. You can generalize or focus, respectively, but their strengths and weaknesses remain.
Readable code, though, is written with the reading view in mind.
Interesting POV. I see it exactly the opposite: using a debugger most of the time feels like trying to see the current state of things without understanding what set of inputs led to it. Print debugging feels more like trying to understand the actual program logic that got us to this point, based on a few choice clues.
I’m not saying you’re wrong or I’m right, just that we have diametric opposite opinions on this.
I think the obvious benefit of a debugger is the ability to introspect when you have the misfortune of investigating the behavior of a binary rather than source code. In the vast, vast majority other instances, it is more desirable (to me) to encode evidence of investigation in the source itself. This has all the other benefits of source code—you can persist it, share it, let ai play with it, fork it, commit it to source control, use git bisect, etc.
There are a few other instances where the interaction offers notable benefits—bugs in the compiler, debugging assembly, access to registers, a half-completed runtime or standard library that occludes access to state so that you might print it. If you have the misfortune of working with C or C++, you have the benefit of breaking on memory access—but I tend to file this in the "half-completed runtime" category. There are also a few "heisenbugs" where using print itself may actually prevent the bug from occurring; but I've only run into this I think twice. This is also possible with the debugger, but I've only run into that once. The only way out of that mess is careful reasoning, and I recommend printing the code out and using a pen.
I also strongly suspect that preference for print debugging vs interactive debuggers comes down to internal conception of the runtime and aesthetic preference. I abhor debuggers—especially those in IDEs. I think they tend to reimplement the runtime of a language a second time, except with more bugs and a less intuitive interface. But I have the wherewithal to realize that this is ultimately a preference.
And that's why I never learned Elixir, despite it being an interesting language with an awesome web framework, Phoenix.
The fact that there is no debugger is super unfortunate.
What’s a good debugger for bash?
Don't show the discussion to John Carmack. He's baffled why people are so allergic to debuggers: https://youtu.be/tzr7hRXcwkw?si=beXGdoePRkbgfTtL
I'm pretty sure at some point in that interview he realized it's because the debugger experience for developers using Linux sucks compared to Windows, where he does most of his work.
A lot of programmers work in a Linux environment.
It seems like windows, ide and languages are all pretty nicely integrated together?
> It seems like windows, ide and languages are all pretty nicely integrated together?
Not only, and not really. After all, for all its warts Visual Studio is still a decent debugger for C/C++. IntelliJ has pretty good debuggers across all of their IDEs for almost all languages (including things automatically downloading and/or decompiling sources when you step into external libraries).
Even browsers ship with built-in debuggers (and Chrome's is really good). I still see a lot of people (including my colleagues) spend inordinate amounts of time console.log'ing when just stepping through the program would suffice.
I think it's the question of culture: people are so used to using subpar tools, they can't even imagine what a good one may look like. And these tools constantly evolve. Here's RAD Debugger by Ryan Fleury: https://threadreaderapp.com/thread/1920345634026238106.html
I don't really get the hate that debuggers sometimes get from old hands. "Who needs screwdrivers if we always used knives?" - You can still use your knife, but a screwdriver is a useful tool.
It seems to me that this is one of the many phenomena where people want to judge and belittle their peers over something completely trivial.
Personally, I get the appeal of printing out debugging information, especially if some bug is rare and happens in unpredictable times (such as when you are sleeping). But the amount of info you get this way is necessarily lower than what can be gleaned from a debugger.
I am surprised all the time in this industry how many software engineers still debug with printf. It's entirely baffling how senior / staff folks in FAANG can get there without this essential skill.
I think it would be interesting to view this from a different angle. Perhaps "Lots of people who know of debuggers still use printf debugging, maybe they're not all wrong and there are advantages that aren't so clear."
Good print statements can become future logging entries for when software ships and debugging statements need to be turned on without source code access.
Yeah and by good print statement you mean use a structured logging lib?
I'm so used to bouncing between environments my code's running in (and which project I'm working on) that I tend to just assume I don't have debugger access, or at least don't have it configured for that environment, even when I do. Like I'm just in the habit of not reaching for it because so often it's not actually there. It rarely matters much anyway (though when it does, yeah, it really does).
“All these senior/staff FAANG folks are using a different tool than the one I regard as essential.”
There are a couple of ways to resolve this conundrum, and you seem to be locked on the less likely one.
What if… that weren’t an essential skill?
Imagine I posted the bell curve meme with "print debugging" on both ends.
No way, sorry. The bug you're trying to squash isn't complicated enough if print statements are as valuable as a debugger. And I get what you're after - this is coming from someone who regularly uses `grep` to answer questions faster than my clients' dopey ETL/DB setups.
Quite seriously, there will be whole categories of bugs you won't catch with a debugger (same way printf or CLI execution etc. have their limitations).
The debugger will never be completely transparent: it also eats resources in parallel to your application, and peeking into the session also introduces timing issues, not to mention the debugger itself having its own bugs.
I'm saying it would be dumb to dismiss all other tools for the love of debuggers, it's just one tool in the toolbox.
Complicated enough for what?
I'm surprised that you can get that far without seeing value in print debugging.
Print debugging is like checking a patient's vital signs, eye color, blood pressure, skin inflammation and so on. Using a debugger is like putting the patient through an MRI machine: it can provide very advanced diagnostic information, but it's expensive, time consuming, and requires specialized hardware and education. Like medical doctors, it's easier and more logical to use the basics until absolutely necessary.
Meh. None of these sway me. I'm a die hard printf() debugger and always will be. But I do use debuggers regularly, for circumstances where printf() isn't quite up to the task. And there really are only two such categories (neither of which appear in the linked article!):
1. Code where the granularity of state change is smaller than a function call. Sometimes you actually have to step through things one instruction at a time, and I'm lucky enough to have such problems to solve. You can't debug your assembly with printf(), basically[1a].
2. State changes that can't be easily isolated. Sometimes you want to log when something changes but can't for the life of you figure out when it's changing. Debuggers have watchpoints.
But... that's really it. If I'm not hitting one of those I'm not reaching for the debugger. Logging is just faster, because you type it in right at the code you're already reading.
[1a] Though there's a caveat: sometimes you need to write assembly and don't even have anything like a printk. Bootstrap code for a new device is a blast. You just try stuff like writing one byte to a UART address or setting one GPIO pin as the first instructions and hope it works, then use that one bit of output to pull the rest up.
Assuming you meant C's printf, why would you subject yourself to the pain of recompilation every time you need to look at a different part of code? Isn't the debugger easier than adding printf and then recompiling?
Do you use snippets or something to help speed this up? Manually typing `printf("longvarname=%s secondvarname=%d\n", longvarname, secondvarname);` adds up over a debugging session, compared to a graphical debugger setup with well-chosen breakpoints, watches etc.
It really doesn't? I mean, sure, typing is slower than clicking (though only marginally; as complexity grows, there's a lot of clicking needed to extract the needed state, and with printf I only need to extract it once and it keeps popping out as I rerun the test).
But I spend far more time reading and thinking than I do typing. Input mechanics just aren't the limiting factor here.
The first thing I always do is define log. It's bonkers to type console.log() for JS; a simple window.log=console.log does it.
Secondly, in your example, no need to label the names. This is almost always understood by context. So, pretty manageable. e.g. in JS: log(`${longvarname}, ${secondvarname}`)
LLMs have mostly made this trivial, plus you have the added benefit of being able to iteratively dump out more each run.
This is a solid answer.