> I almost always leave a comment on each PR I review, even just observations: “This class is getting big, we might want to consider adding a presenter,” or praise: “Thanks for cleaning this up!”
Things like that I'd much rather leave as comments in the code, rather than dangling as off-hand remarks in some unrelated PR. Probably no one will see those PR comments unless they go spelunking for some reason, but having them right in the code makes them far more visible to everyone passing through, and can even serve as a reminder to your future self.
In general I feel like PR reviews are basically reviews happening too late, and if there is a lot of stuff to review and agree upon in the PR, you'll be better off trying to reduce that up front by discussing and noting design decisions before the work even starts. If there are many unknowns, first do one step to figure out how it can be done, then agree on that, again before starting the actual work.
That leads to PRs basically just being spot-checks, not deep analysis nor even a space for nitpicks. If you as a reviewer spot things after the implementation rather than in the discussion beforehand, that ends up being on both of you, instead of just on the implementer, who was trying to move along and finish the thing.
> In general I feel like PR reviews are basically reviews happening too late, and if there is a lot of stuff to review and agree upon in the PR, you'll be better off trying to reduce that up front by discussing and noting design decisions before the work even starts.
I agree in general and have tried to push for a more iterative approach in our team. However, my fear is that this would multiply the effort, because it's very easy to discuss many things that don't matter, or that later turn out not to matter.
It seems it's easier for people to only discuss the important stuff when presented as an actual implementation.
We are talking tendencies here, of course; general design decisions must always be made beforehand.
> It seems it's easier for people to only discuss the important stuff when presented as an actual implementation.
LLMs help a lot here: create two prototypes, one for each design, and compare them side by side :) You could even evaluate how ergonomic some potential future change would be, to see how each design holds up in practice.
I used to use pen and paper to do essentially the same thing, minus having real code, just talking through concepts. But that approach suffers from the fact that some people need actual code in front of them to visualize things for themselves.
I'm not following this. PRs are the first time your reviewers have seen that change, so there is no opportunity downstream to do the things you're suggesting.
You're essentially suggesting pre-PRs, but it is circular, since those same pre-PRs would be open to the same criticism.
PRs are about isolated code changes, not feature or system design. They answer how, not why.
Usually, by the time a PR has been submitted, it's too late to dig into aspects of the change that stem from a poor understanding of the task at hand without throwing out the PR and creating rework.
So it's helpful to shift left and discuss how you intend to approach the solution ahead of time, especially with people who are new to the codebase or unfamiliar with the language and who, thanks to AI, show little interest in learning.
Obviously not for every situation, but time can be saved by talking something through before YOLOing a bad PR.
Yes, it should be cheap to throw out any individual PR and rewrite it from scratch. Your first draft of a problem is almost never the one you want to submit anyway. The actual writing of the code should never be the most complicated step in any individual PR; the time spent thinking about the problem and the solution space should be. Sometimes you can do a lot of that work before the ticket, if you're very familiar with the codebase and the problem space, but for most novel problems you need to get your hands on the problem itself to reach your most productive understanding of it.
I'm not saying it's unimportant to discuss how you intend to approach the solution ahead of time, but I am saying that a lot about any non-trivial problem can only be discovered by attempting to solve it. Put another way: the best code I write is always my second draft of any given ticket.
More micromanaging of your team's tickets and plans is not going to save you from team members who "show little interest in learning". The fact that your team is "YOLOing a bad PR" is the fundamental culture issue, and that's not one you can solve by adding more process.
> You're essentially suggesting pre-PRs, but it us circular, since those same pre-PRs would have the same criticism.
Walking this road to the end you get pair programming.
You get to design committees where everything has to be approved in advance.
> If you as a reviewer spots things after the implementation rather in the discussion beforehand, that ends up being on both of you, instead of just the implementer who tried to move along to finish the thing
This is accurate, but it's still an important check in the communication loop. It's not all that uncommon for two engineers to discuss a problem, and leave the discussion with completely different mental models of the solution.
> Additionally, some repos can be configured to automatically merge PRs when all requirements are met, one of which might be your approval.
If anyone at GitHub is reading this, I’d love a fourth checkbox in the “leave a review” modal that is “Approve but disable auto merge” (alongside Comment/Approve/Request changes)! Even just surfacing “this PR has auto merge enabled” near the Approve button would be great.
You might try adding this branch protection rule to require conversation resolution: https://docs.github.com/en/repositories/configuring-branches...
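For reference, that rule can also be set through the GitHub REST branch-protection endpoint; here's a rough sketch using the gh CLI, with `OWNER/REPO` and `main` as placeholders. Note that this endpoint replaces the whole protection config, so the other fields must be supplied explicitly (`null` leaves them disabled); check your existing settings first.

```shell
# Enable "require conversation resolution" on the main branch.
# Caution: PUT replaces the entire branch protection config.
gh api -X PUT repos/OWNER/REPO/branches/main/protection \
  --input - <<'EOF'
{
  "required_status_checks": null,
  "enforce_admins": null,
  "required_pull_request_reviews": null,
  "restrictions": null,
  "required_conversation_resolution": true
}
EOF
```

With that in place, a PR with any unresolved review thread can't be merged, which keeps "approve with comments" comments from being silently skipped.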
I'm a big believer in this workflow and generally agree with all the guidelines about when to do and not do this.
Code review has value, but I don't think we are always honest about the costs. At most places I've worked, there has been an informal understanding that code should be reviewed within 24 hours or so, but there is a world of difference between getting a review within 2 hours and 23 hours.
If you have to go back and forth a second time, it's so much more likely that the approval comes much later due to straddling end-of-day in someone's timezone.
Tangentially, if you are designing policies for code review at your org, try to think both about minimum requirements (e.g. PRs should get a first look within 24 hours) and typical requirements (e.g. most PRs should get reviewed within 2-3 hours). What typically happens is what determines overall velocity for your org, so it's much more important than whatever strict SLA policy you have. These are also fixable problems if you have targeted conversations with people. E.g. "I noticed X, Y, Z times, you had unreviewed PRs at the end of the day. Can you set aside some time to review code before EOD? Or try installing PR reminder?"
I like doing this as well.
The "auto merge on approval" flag that PR authors can flip on GitHub breaks this flow, though, as it will merge as soon as you hit approve.
I also follow this approach. I just flip the flag on the PR I’m reviewing to off before submitting my approval.
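If you use the gh CLI, flipping that flag off can be sketched like this (the PR number is a placeholder):

```shell
# Turn off auto-merge on PR #123 before submitting an approval,
# so the approval doesn't immediately trigger a merge.
gh pr merge 123 --disable-auto
```

After the author has read the comments, they can re-enable it with `gh pr merge 123 --auto` (plus a merge strategy flag such as `--squash`).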
We also have most of our repos set to block merging when there are unresolved comments. I think it's a flag on the branch protection rules.
I approve of it.
I have now commented on it.
Azure-dev ops explicitly has an "Approve with suggestions", I suspect it was designed for when there are multiple acceptances required, but I sometimes use it anyway.
Whether those comments get read once approved, I don't know.
This covers more than half of my code reviews (when the change is functionally correct). It works pretty well when you can trust that the person making the changes won't accidentally (or otherwise) slip in behavioral changes. About half just get merged as-is, and half of those get a separate cleanup later.
Most people are fine, but some routinely abuse it to make much larger changes that would normally warrant a full review, so double-check afterward! Your name is still on it; it is still your responsibility.
I started doing this a few years ago, it works a treat with non-junior engineers. With the right testing in place, review becomes more about communicating the ideas clearly rather than gatekeeping.
I do this too. I generally skim PRs just to make sure the person isn't doing anything too crazy. I trust my team to write good code. Pulling a branch and testing the code is usually a waste of time unless it's a critical bug or feature.