In last week's All Hands, during the discussion of possible process
improvements, Mike Shaver asked whether there was any interest in some
sort of automated review system. Google's Mondrian is probably the
best-known review system; Review Board seems to be the best-known open
source one. I've never used Review Board, but as an ex-Googler, I have
used Mondrian.
I think Mozilla would really benefit from a good review
system. My review system experience is limited to Mondrian, which is,
of course, proprietary to Google. I don't know how Review Board and
the other open source systems measure up, and, as I'm about to get to,
I think Mondrian itself has some notable deficiencies. My overall
point is that if we decide to adopt a review system, we should choose
carefully. I'll try to offer some guidance about how to do that.
Quick Overview of Mondrian
Mondrian has a dashboard view for outstanding changelists, but to me the essence of Mondrian is its UI for reviewing a single changelist, and that's what I'm going to talk about here. At the highest level you have a list of files modified in the changelist. You can click on each file and get a diff of that file against the baseline in a side-by-side view. You can easily see which lines were added, modified, or deleted. OK, everybody is familiar with side-by-side diffs these days.

Here's the cool part though: a reviewer can click on any line and leave an inline note or comment. Afterwards, the reviewee can go through the diff and respond to the comments. The way this often works in practice is the reviewer leaves an instruction in the comment and the reviewee responds by making the requested change and replying to the comment with "Done." In fact, there's a "Done" button as well as a "Reply" button on the comment just to make this last part easy. Once the reviewee has updated the changelist to the reviewer's satisfaction, the reviewer will approve the change and the reviewee can submit it to the source control system.
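To make that workflow concrete, here is a minimal sketch of the kind of data model that could sit behind it. This is purely illustrative: Mondrian's actual implementation is proprietary and I'm not describing it, and all of the names (CommentThread, done(), and so on) are made up for the example.

```python
# A minimal, hypothetical sketch of the data model behind Mondrian-style
# inline review comments -- not Mondrian's actual implementation, just an
# illustration of the mechanics described above.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Message:
    author: str
    text: str  # e.g. "You should check for an error here." or "Done."


@dataclass
class CommentThread:
    file_path: str  # file in the changelist the comment is anchored to
    line: int       # line of the side-by-side diff being commented on
    messages: List[Message] = field(default_factory=list)
    resolved: bool = False

    def reply(self, author: str, text: str) -> None:
        """The "Reply" button: add another message to the thread."""
        self.messages.append(Message(author, text))

    def done(self, author: str) -> None:
        """The "Done" button: a canned reply that also resolves the thread."""
        self.reply(author, "Done.")
        self.resolved = True


@dataclass
class Changelist:
    author: str
    files: List[str]
    threads: List[CommentThread] = field(default_factory=list)
    approved: bool = False

    def ready_to_submit(self) -> bool:
        # The reviewer approves once every thread is resolved to their
        # satisfaction; only then does the reviewee submit the change.
        return self.approved and all(t.resolved for t in self.threads)
```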
What's not to like?
Mondrian's review UI is great for simple line-by-line reviews, things like:
- "You don't need to check for null here."
- "You should check for an error here."
- "Fix indentation"
What would I want in a "good" review system?
It may seem like I'm arguing that a review system should actively discourage line-by-line review, and that's not actually the case. I think that review style is often useful, and pretty much any review system, good or bad, will support it. There are really two fundamental things that I do want out of a review system.
- The system should go out of its way to support a bi-directional flow of information between the reviewer and the reviewee. In the extreme, it should provide a means of carrying on an actual conversation. This could be supported within the review system itself, but even a simple means of escalating a conversation to email (or even IRC) might be a big help.
- The system should support higher level discussions about the code under review. Actually I think it should go so far as to encourage this kind of information flow. You can sort of do this with Mondrian, but you are usually better off just going to email.
Some general guidance
I've put a fair amount of thought into how I'd like an ideal review system to work. Mostly I've been thinking in terms of concrete features, but that's of limited utility if what you want to do is choose between existing review systems rather than writing a review system yourself. So I'm trying to figure out some general guidance.

I think what I'm trying to say in this post more than anything is that affordances matter. Mondrian, for example, seems to afford the line-by-line scrutinizing, nitpicking approach to code reviews. It also seems to afford a model where the reviewer simply gives instructions to the reviewee and the reviewee just automatically carries them out. Mondrian offers some minimal affordance for discussing (as opposed to simply commenting on) a particular line of code, but it could do a lot more. Notably, it does not seem to offer any real affordance for discussion at the design level, which has always seemed to me like a serious omission.

A simple concrete example
Here's one simple way that Mondrian could be improved, which I hope will illustrate my point about affordances. As described above, Mondrian comments have two buttons, "Reply" and "Done". It could offer other choices as well, so you might have: "Reply", "Done", "Done w/comment", "Defend", etc. These latter two functions could easily be done with "Reply", but if you give them their own buttons, you explicitly tell the reviewee that there are other options that can be considered here. In particular, in this case, they give the reviewee permission to say, "I had a reason for doing this the way I did; would you please consider my reasoning?"

My larger point here is that a review system's UI strongly shapes the code review process, and it can shape it in good ways or bad ways. As a result, we want to think not just about what we want out of a code review system, but also about what we want for the code review process itself.
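Continuing the hypothetical sketch from earlier, supporting those extra buttons could be as small a change as widening the set of canned responses a reviewee can attach to a comment thread. Again, this is just an illustration of the idea, not how Mondrian actually works, and the names are invented for the example.

```python
# Extends the illustrative CommentThread sketch from above with the extra
# response affordances suggested here. Purely hypothetical; names are made up.
from enum import Enum


class Response(Enum):
    REPLY = "Reply"                        # free-form reply
    DONE = "Done"                          # made the change, nothing to add
    DONE_WITH_COMMENT = "Done w/comment"   # made the change, with an explanation
    DEFEND = "Defend"                      # kept the code as-is; here's the reasoning


def respond(thread, author: str, kind: Response, text: str = "") -> None:
    """Map a button press to its effect on a comment thread.

    Only DONE and DONE_WITH_COMMENT resolve the thread; DEFEND deliberately
    leaves it open, inviting the reviewer to continue the conversation.
    """
    if kind is Response.DONE:
        thread.done(author)
    elif kind is Response.DONE_WITH_COMMENT:
        thread.reply(author, f"Done. {text}")
        thread.resolved = True
    elif kind is Response.DEFEND:
        thread.reply(author, f"Keeping as-is: {text}")
    else:
        thread.reply(author, text)
```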
Super-short Summary
- A good review system needs to support two-way information flow, and it should probably go so far as to actively encourage it.
- A good review system should support review at a higher level than simply line-by-line, and it should probably go so far as to actively encourage it.
- Affordances matter.
Notes
- This post describes Mondrian as of about nine months ago, when I last used it. It may have received a major upgrade since then; I don't know. Mondrian may also have had useful features that I didn't know about -- not all of its features had good discoverability.
- Probably the most common code review case at Google is a reviewer and reviewee who are peers, have known each other for months if not longer, and who sit near each other, often in the same room. Mondrian's limitations are a lot less important in this scenario, since a lot of out-of-band communication can happen the old-fashioned way, by talking face-to-face.
- In some circumstances you can have reviewers and reviewees who work in different locations and who have never met in person. This is not uncommon at Google, and it is in fact very common at Mozilla. In this scenario, the way the review system works becomes much more important.
- The obvious way to get around Mondrian's limitations is to fall back to email. I ultimately started emailing the concerned parties to ask for an informal review before requesting a formal review through Mondrian. Mondrian can still be used in this case -- it can see pretty much any outstanding changelist. But by making the initial request informally in email, you could get a high-level review of a changelist without getting mired in low-level details. And since this happened over email rather than in Mondrian comments, you could hold an actual conversation about it. It turned out my team's superstar intern had independently started doing the same thing. I jokingly referred to this as "doing a review before the review".
- This post might lead you to believe that I'm a Mondrian-hater. Actually, I think Mondrian is a very good tool; I just feel like they quit working on it before they were done.