Modern Programming: Object Oriented Programming and Best Practices

Bug and work tracking

For most of their history, computers have excelled at doing things one at a time. Even a single client or customer can parallelize much better than that and will think of (and make) multiple requests while you're still working on one thing.

It's really useful to write all of these requests down, and keep track of where you and your colleagues are on each of them so that you don't all try to solve the same problem, and can let the client know which of them you've fixed. Bug trackers (sometimes more generally called issue trackers or work trackers) are designed to solve that problem.

What Goes In and When?

I've worked on projects where the bug tracker gets populated with all of the project's feature requests at the beginning (this discussion overlaps slightly with the treatment of software project management patterns, in Chapter 13, Teamwork). This introduces a couple of problems. One is that the Big List needs a lot of grooming and editing to stay relevant as features are added and removed, split between multiple developers, or found to be dependent on other work. The second is psychological: for a long time, members of the project team will be looking at a soul-destroying list of things that still haven't been done, like Sisyphus standing with his rock looking up from the base of the hill. The project will seem like a death march from the beginning.

My preference is to attack the work tracker with an iterative approach. When it's decided what will go into the next build, add those tasks to the work tracker. As they're done, mark them as closed. The only things that stay in the tracker from one iteration to the next are those things that don't get completed in the build when they were scheduled to. Now, the big list of items in the tracker is always the big list of what we've already completed, not the big list of things still remaining. This is something akin to the Kanban system, where a team will have a fixed "capacity" of pending work. As they pull work from the pending bucket to start working on it, they can request that the bucket get topped up—but never past its capacity.
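The fixed-capacity Kanban bucket described above can be sketched in a few lines. This is a minimal illustration, not a real tracker: the class name, the capacity of 3, and the task strings are all invented for the example.

```python
from collections import deque


class PendingBucket:
    """A fixed-capacity bucket of pending work, Kanban-style:
    the team pulls tasks to start them, and refills the bucket
    from the backlog, but never past its capacity."""

    def __init__(self, capacity, items=()):
        if len(items) > capacity:
            raise ValueError("initial items exceed capacity")
        self.capacity = capacity
        self._pending = deque(items)

    def pull(self):
        """Take the next pending task to start working on it."""
        return self._pending.popleft()

    def top_up(self, backlog):
        """Refill from the front of the backlog, stopping at capacity."""
        while len(self._pending) < self.capacity and backlog:
            self._pending.append(backlog.pop(0))

    def __len__(self):
        return len(self._pending)
```

Pulling a task frees a slot; topping up fills it again, but a large backlog can never overfill the bucket.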

My approach to reporting bugs is different. Unless it's something trivial in the code I'm working on now, so that I can fix the problem in under a couple of minutes and move on, I'll always report it straight away. This means I won't forget about the problem; the fix is implicitly planned for the next iteration, following the Joel Test rule of fixing bugs before adding new code; and we can see how many bugs are being discovered in each build of the product. (Now that I reflect on the Joel Test, I realize that this chapter covers a lot of its points. Perhaps you should just measure your team's performance against the Joel Test's 12 points and fix any to which you answer "no": http://www.joelonsoftware.com/articles/fog0000000043.html.)

How Precisely to Track?

So, you managed to fix that bug in 2 hours. But, was it actually 2 hours, or was it 125 minutes? Did you spend those 2 hours solely fixing the bug, or did you answer that email about the engineers-versus-sales whist drive during that time?

Being able to compare estimated time against actual time can be useful. I'm not sure that "velocity" – the ratio between the estimated time and the actual time spent on tasks – is particularly helpful, because in my experience estimates are not consistently wrong by a constant factor. What is helpful is knowing which kinds of work you're bad at estimating. Do you fail to appreciate the risks involved in adding new features, or do you tend to assume all bug fixes are trivially simple?

So, precise measurements are not particularly helpful – which is just as well, because the accuracy probably doesn't exist to back up that precision. I usually just look at my watch when I start work and when I end work, and round to the nearest quarter or half hour. That means my time records include all those interruptions and little tasks I did while fixing the bug – which is fine, because they slowed me down and that needs recording.
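The look-at-your-watch rounding above amounts to quantizing elapsed wall-clock time. A small sketch, with the quarter-hour granularity as the default (the function name and signature are mine, not from any tracking tool):

```python
from datetime import datetime


def rounded_hours(start, end, granularity_minutes=15):
    """Elapsed wall-clock time between start and end, rounded to the
    nearest granularity (quarter hour by default). Interruptions fall
    inside the interval and are deliberately included."""
    minutes = (end - start).total_seconds() / 60
    quanta = round(minutes / granularity_minutes)
    return quanta * granularity_minutes / 60
```

A 2-hour-and-7-minute stretch rounds down to 2.0 hours; pass `granularity_minutes=30` to round to half hours instead.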

Estimates aren't even that accurate. The game I play with my team goes like this: every developer on the team (and no one else) independently writes down an estimate of how long the tasks we're planning will take. They're allowed to pick one of these: 1 hour, 2 hours, 4 hours, 8 hours, or don't know. If we think a task will take longer than 8 hours, we break it down and estimate smaller chunks of the task.

For each task, everyone presents their estimates. If they're roughly the same, then we just pick the highest number and go with that. If there's a spread of opinion – maybe one developer thinks something will take an hour when someone else thinks it'll take a day – we'll discuss that. Probably, one (or more) of the team is relying on tacit knowledge that needs to be brought into the open. It's usually possible to resolve such differences quickly and move on to the next thing.
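The consensus rule in the estimation game can be made concrete. One assumption here: the text only says estimates that are "roughly the same" get the highest picked, so the threshold below – adjacent values on the 1-2-4-8 scale – is my reading, not a rule from the book.

```python
ALLOWED_HOURS = {1, 2, 4, 8}  # the only estimates developers may pick
                              # None stands in for "don't know"


def triage_estimates(estimates):
    """Return ("agreed", hours) when estimates are roughly the same
    (within one step on the 1-2-4-8 scale), taking the highest;
    otherwise return ("discuss", None) so the spread gets talked through."""
    if any(e is None for e in estimates):
        return ("discuss", None)  # someone doesn't know: surface that knowledge
    if any(e not in ALLOWED_HOURS for e in estimates):
        raise ValueError("estimates must be 1, 2, 4, or 8 hours")
    if max(estimates) <= 2 * min(estimates):  # adjacent on the scale
        return ("agreed", max(estimates))
    return ("discuss", None)  # e.g. one hour versus a day: tacit knowledge lurks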