In the last post I went over some of the reasons why code reviews are important in building a stable and secure project. I wanted to follow that by covering the different review methods we’ve used in the past and the ways in which each one worked or, in some cases, didn’t.
Paper Based Reviews
One of the simplest, and first, methods we used: take the code you want to review, print it out in whatever format you want (we found small A5 booklets worked best), pass the booklets around and then start reviewing. Usually people would be given a couple of days to review the code before everyone involved would sit around a table and discuss what they had found. The advantage is that it gets people talking. You are sat around a table together and it’s a great way to get people participating. Unfortunately it also pushes people into thinking they must contribute rather than simply being happy with what is in front of them, and it’s a drain on resources and time. It’s surprising how long it takes to get code from an IDE into a suitable (and readable) format, especially if all you have is a word processor. And wasting all that paper is something I simply don’t want to continue doing.
We also tried posting code onto an internal forum when the review didn’t justify a full sit-down session. This worked very well in making the code easily visible, even to people outside the review group, though that might not be suitable in every case. It also reduced the set-up time and cut down on waste. But discussing specific parts of the code simply didn’t work. Referencing line 316 of PlayerCamera.cpp in a post makes it very hard for people to link a comment to the work itself, and while forums are good for limited threaded discussions, trying to quote and re-quote quickly becomes hard to track and read, and it’s difficult to work out what actually needs to change.
Automated Code Reviews
By automated I mean using a tool that has been created specifically for code reviews, one which can display code correctly, generate or show diffs and allow people to reference files and lines of code easily. I won’t go over the actual tools we have used (I’ll do that next time) but suffice it to say this is what we use and the method we have stuck with.
Of course, when using automated tools you have more scope as to when to do your reviews (do you review individual diffs, whole files or something in-between?) and I’ll spend the rest of this article going over the different methods we’ve used so far.
Blocking Code Reviews: Blocking code reviews are used when you want every piece of code reviewed before it is even committed to the repository. When a developer has made any changes to the code base they fire off a review, and they cannot check in their work until everyone in the review group has flagged it as shippable. Any requested changes need to be made before the review can be signed off and the code checked into the repository.
This didn’t work. Developers need to get into a work-flow, and having to constantly stop and wait for permission to continue, or to review someone else’s code, stops them in their tracks and is a serious drain on their productivity. And let’s face it: game development might be a multi-million dollar industry, but lives are not on the line if someone checks in something that might not be perfect.
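For what it’s worth, the mechanics behind a blocking policy are simple enough to sketch. Below is a rough, hypothetical server-side pre-commit hook in Python; the review_status lookup is a stand-in for whatever API your review tool actually exposes, not a real one:

```python
import sys

def review_status(changelist_id):
    # Hypothetical: query your review tool for the state of this pending
    # change. Expected to return "approved", "pending" or "rejected".
    raise NotImplementedError("wire this up to your review tool")

def pre_commit(changelist_id):
    status = review_status(changelist_id)
    if status != "approved":
        # A non-zero exit code tells the VCS server to reject the check-in.
        sys.stderr.write("Check-in blocked: review %s is %s, not approved.\n"
                         % (changelist_id, status))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(pre_commit(sys.argv[1]))
```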
Non-Blocking Code Reviews: The next obvious step. Every check-in still needs to be reviewed, so the review is generated and sent off to a selected group, but the developer is then free to commit the code, with any changes or improvements being made in another commit. This is beneficial on two counts. The original developer is able to continue working, either on the same area of code or another, without it conflicting with any waiting commits, and the reviewer is able to continue their own work, leaving any reviews they need to complete for when it’s convenient, maybe performing a group of them in one go.
While this worked well at making sure everything was reviewed, we found that you needed to monitor the size of the review groups and check that people were not being over-loaded. Leaving your reviews until the end of the day, only to find 10+ of them waiting for you, can be quite disheartening.
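With non-blocking reviews the tooling can do the leg-work for you. As a sketch (and it is only a sketch; create_review and pick_reviewers are hypothetical stand-ins for whatever review tool and assignment policy you have), a post-commit hook might look something like this:

```python
import subprocess

def commit_diff(revision):
    # Pull the diff for a single commit (git shown here; any VCS that can
    # diff revisions works the same way).
    return subprocess.run(["git", "show", "--format=", revision],
                          capture_output=True, text=True, check=True).stdout

def pick_reviewers(revision):
    # Hypothetical policy: a fixed group here, but it could just as easily
    # be derived from which files the commit touches.
    return ["reviewer-a", "reviewer-b"]

def create_review(revision, diff, reviewers):
    # Hypothetical: post the diff to your review tool. Printing stands in
    # for the real API call.
    print("Review requested for %s from %s" % (revision, ", ".join(reviewers)))

def post_commit(revision):
    # The commit has already gone through; the review happens afterwards.
    create_review(revision, commit_diff(revision), pick_reviewers(revision))
```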
Feature Based Reviews: This is generally where we sit when it comes to diff based code reviews, and it puts the responsibility back into the hands of the developer. If the developer is working on a new feature then they are free to check in their changes, with a view to putting the whole feature up for review when it is finished. This greatly reduces the number of reviews and, for new code, allows the developer to put the whole change into context. It’s very similar to paper based reviews but in a more suitable environment. But there is one caveat: it only works if there has been a decent design stage for the feature being written. Having people bring up basic design flaws when the thing is practically finished can be a killer blow, but that says more about the work-flow process than about the review method itself.
This can obviously be extended to the large scale changes that someone might be working on. They can check in their small changes, with a review done on the final product, maybe generating the diff from a collection of commits rather than just one.
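Most version control systems make this straightforward: you ask for one diff spanning the whole range of revisions rather than a diff per commit. A minimal example using git (the branch names are just illustrative):

```python
import subprocess

def combined_diff(base, head):
    # One diff covering every commit between base and head, however many
    # check-ins that took.
    return subprocess.run(["git", "diff", "%s..%s" % (base, head)],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(combined_diff("main", "feature/player-camera"))
```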
Bug fixes are another matter. Once the main implementation is done and people are into a bug fixing stage, these should be reviewed as and when they are committed (usually in the same manner as the non-blocking reviews). This is especially important when it’s coming up to a milestone, and it’s how continually developed projects (middleware or technology, for example) will eventually spend most of their time.
Obviously there are other methods we could try. Review buddies, review masters (one person assigned responsibility for checking all commits – usually the lead or programming manager), e-mail based discussions, ‘important changes only’ reviews, and plenty of other variations on the same theme. We haven’t tried these because the automated reviews have been working so well for us.
Review buddies would be the one I’d like to try (alongside, rather than replacing, the automated review process), but I would usually have this done between two developers working on similar parts of the code – though I would hope those developers would be talking to each other already!
One thing I’ve not covered is who does these check-in reviews, which can be just as important as what is being reviewed.
Initially we used review groups, which contained two or three developers mixed quite randomly. Every review would be assigned to a random group and fired off. While this spread the work and the knowledge of the code base, it wasn’t producing the best results. Having one person involved in two out of three reviews for a particular feature, only for someone else to replace them on the last one, would produce duplicated, or sometimes conflicting, results. And the new person isn’t as up-to-date with the feature as the original set of reviewers, which can limit the scope of the review and lead to it taking longer.
While this does spread the knowledge throughout the team, it isn’t necessary for everyone to know everything about the code base.
So now reviews are generally more focused, being sent directly to the specific people who will have the best input on a particular feature. This does need monitoring to make sure people are not being over-loaded (or over-looked), but that’s a pretty simple management issue. People can also be asked whether they want to be involved in a certain review. Do they have a particular interest in that area, or does it use a new piece of technology that they would like experience with?
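If you wanted to automate that routing, a simple ownership map gets you most of the way there. This is only a sketch with made-up names and a made-up threshold, but it shows the idea of sending reviews to the right people while flagging anyone who is already over-loaded:

```python
# Illustrative ownership map: path prefixes to the people best placed
# to review changes under them.
OWNERS = {
    "src/camera/": ["alice"],
    "src/render/": ["bob", "carol"],
}
MAX_OPEN_REVIEWS = 5  # made-up threshold for "over-loaded"

def reviewers_for(paths, open_review_counts):
    chosen = set()
    for path in paths:
        for prefix, owners in OWNERS.items():
            if path.startswith(prefix):
                chosen.update(owners)
    # Flag anyone already carrying too many open reviews rather than
    # silently piling more on.
    overloaded = {r for r in chosen
                  if open_review_counts.get(r, 0) >= MAX_OPEN_REVIEWS}
    return sorted(chosen - overloaded), sorted(overloaded)

# Example: reviewers_for(["src/camera/PlayerCamera.cpp"], {"alice": 2})
# returns (["alice"], []).
```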
So that’s a brief overview of the code review systems we’ve used over the past couple of years. We’ve settled on automated code reviews as they produce the best results in the shortest time and give you much more scope over how and what is reviewed. Choosing the correct piece of software to run this system is an important decision, and I’ll go through the tools we used, and most importantly the ones we stuck with, in the next post.