
Unit Testing an Object with Dependent Behaviour

It would be a good idea to start with a bit of history. In previous posts I've talked about our custom STL implementation, which contains a basic vector but also a fixed::vector, which takes a fixed size, allocates all its memory within the container itself (usually on the stack) and cannot increase or decrease in size. This comes with a lot of benefits and gives game teams a more consistent memory model when using containers.

But it comes with a lot of duplication. Those two containers were classes in their own right, each with over 100 tests (written with UnitTest++), which is a lot to maintain and (more importantly) keep consistent. A change to one usually meant a change to the other, with additional tests and checks needing to be rolled out. I wasn't happy about this, so I moved all the vector code into a base class; the fixed and normal vectors are now types based on that, with a different allocator being the only difference (I didn't want to go for inheritance when a templated structure could do all the work).
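To make that concrete, here is a minimal sketch of the shape this takes. This is not the actual FTL code – every name here (vector_base, heap_allocator, fixed_allocator, initial_buffer, grow and so on) is invented for illustration – but it shows one templated container holding all the vector logic, with the allocator policy as the only point of variation.

```cpp
#include <cstddef>
#include <cstdlib>

// Heap policy: grows the raw buffer, usually by more than was asked for.
// (Sketch only: realloc is fine for the trivial types used here, and
// destruction/copying are elided for brevity.)
template <typename T>
struct heap_allocator
{
    T*          initial_buffer()   { return 0; }
    std::size_t initial_capacity() { return 0; }

    bool grow(T*& data, std::size_t& capacity, std::size_t requested)
    {
        std::size_t newCapacity = capacity ? capacity * 2 : 8;
        if (newCapacity < requested)
            newCapacity = requested;

        T* block = static_cast<T*>(std::realloc(data, newCapacity * sizeof(T)));
        if (!block)
            return false;

        data = block;
        capacity = newCapacity;
        return true;
    }
};

// Fixed policy: all the storage lives inside the container (usually on the
// stack), so there is nothing to grow. (Alignment handling elided.)
template <typename T, std::size_t N>
struct fixed_allocator
{
    T*          initial_buffer()   { return reinterpret_cast<T*>(m_storage); }
    std::size_t initial_capacity() { return N; }

    bool grow(T*&, std::size_t&, std::size_t) { return false; }

    unsigned char m_storage[N * sizeof(T)];
};

// Every line of vector logic lives here, written once; the allocator
// policy is the only thing that varies between the public types.
template <typename T, typename Allocator>
class vector_base
{
public:
    vector_base() : m_data(0), m_size(0), m_capacity(0)
    {
        m_data     = m_allocator.initial_buffer();
        m_capacity = m_allocator.initial_capacity();
    }

    bool push_back(const T& value)
    {
        if (m_size == m_capacity &&
            !m_allocator.grow(m_data, m_capacity, m_size + 1))
            return false; // a fixed allocator simply refuses to grow

        m_data[m_size++] = value;
        return true;
    }

    std::size_t size() const { return m_size; }
    // iteration, access, removal, equality, etc. also written once here

private:
    Allocator   m_allocator;
    T*          m_data;
    std::size_t m_size;
    std::size_t m_capacity;
};

// The two public types are just the one template with different policies.
typedef vector_base<int, heap_allocator<int> >      int_vector;
typedef vector_base<int, fixed_allocator<int, 32> > fixed_int_vector;
```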

That was the easy bit. But now, how do you unit test an object (in this case the vector) whose behaviour greatly depends on an internal object (in this case the different allocator models)? The allocators have been unit tested, so we know their behaviour is correct, but the behaviour of the vector will depend on the allocator it is using (in some cases it will allocate more memory – so its capacity varies – in others it won't allocate memory at all – so its ability to grow is different). And this behaviour may be obvious, or it may be much more subtle.

And it's this which causes the problem. The behaviour should be more or less the same, but in some cases it will be different. push_back/insert may fail, reserve may reserve more memory than has been requested, and construction behaviour will greatly depend on the allocator being used.
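Using the hypothetical types from the sketch above, the divergence shows up at the call site something like this:

```cpp
int_vector v;       // heap-backed
fixed_int_vector f; // 32 in-place slots

for (int i = 0; i < 100; ++i)
{
    bool grew = v.push_back(i);  // always true here: the buffer doubles,
                                 // so capacity runs ahead of the request
    bool fit  = f.push_back(i);  // true for the first 32 items, false after
    (void)grew; (void)fit;
}
```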

So to start – the simple and easy option. Create all the unit tests for one type and then duplicate them for each different allocator we come up against. But we went down this road to reduce the amount of duplication, so there would still be a large number of tests that are the same and need to be maintained. In fact you could say it would be harder to maintain, because the behaviour in some tests would be subtly different rather than clearly the same.

So I started to look for a more intelligent approach to this.

The first thing was to go back to basics – what is unit testing for? Such basics are often lost in the grand scheme of things, but in this case the point of the tests was to make sure the behaviour was correct or, more importantly, what was expected. There are obviously other aspects to unit tests (which are outside the scope of this post) but this was my primary concern.

Thought about from this level, the conflicting behaviour becomes quite clear – or at least it should. We can split the various methods into fixed and dependent calls: ones which will be the same regardless of the allocator, and those which depend on what is being used. If we cannot split the behaviour then we have a problem with either our interface or our implementation. Luckily the allocation code is quite contained, but there were little tendrils of code that assumed things about the allocation of memory that they shouldn't. Not a problem, as the tests will easily pick these out – I hope.

So I may as well start with the easy stuff to test, the stuff that I now know is common to the vector regardless of the allocator. Iteration, removal (since the vector doesn't de-allocate on removal), access, equality checks (vectors are equal based on their contents) – in effect anything that reads, accesses or removes content, and nothing else. And since I have tests for these already, this is quickly done.

This is testing against the vector's behaviour, as I know this behaviour is constant and will never change.
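For example, a UnitTest++ test along these lines (against the hypothetical types sketched earlier, and assuming the container exposes the usual begin()/erase()/operator[] members, which the sketch elides) holds for any allocator, so it only needs to be written and maintained once:

```cpp
#include "UnitTest++.h"  // header name/path varies by UT++ version

// Removal and access behave the same regardless of the allocator, so one
// test body covers every vector type.
TEST(RemoveShiftsRemainingContentDown)
{
    fixed_int_vector v;
    v.push_back(10);
    v.push_back(20);
    v.push_back(30);

    v.erase(v.begin());        // removal never touches the allocator

    CHECK_EQUAL(2u, v.size());
    CHECK_EQUAL(20, v[0]);
    CHECK_EQUAL(30, v[1]);
}
```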

The hard part is now a little bit easier. Looking at what I have left, I can easily see what I need to test. Since the changes in behaviour are based on nothing other than the allocator, we can extract that and create a few mocks specifying particular allocator behaviour: one which allocates the memory as you request it, one which allocates a different amount of memory, and one which returns no memory at all.
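Something along these lines might do the job – three hypothetical mocks sharing the same policy interface as the earlier sketch, each pinning down one behaviour:

```cpp
#include <cstddef>
#include <cstdlib>

// Shared helper for the two mocks that really allocate.
template <typename T>
bool reallocate(T*& data, std::size_t& capacity, std::size_t newCapacity)
{
    T* block = static_cast<T*>(std::realloc(data, newCapacity * sizeof(T)));
    if (!block)
        return false;
    data = block;
    capacity = newCapacity;
    return true;
}

// 1. Allocates exactly what was requested.
template <typename T>
struct exact_mock_allocator
{
    T*          initial_buffer()   { return 0; }
    std::size_t initial_capacity() { return 0; }
    bool grow(T*& data, std::size_t& capacity, std::size_t requested)
    {
        return reallocate(data, capacity, requested);
    }
};

// 2. Allocates a different (larger) amount than was requested.
template <typename T>
struct generous_mock_allocator
{
    T*          initial_buffer()   { return 0; }
    std::size_t initial_capacity() { return 0; }
    bool grow(T*& data, std::size_t& capacity, std::size_t requested)
    {
        return reallocate(data, capacity, requested * 2);
    }
};

// 3. Never returns any memory at all, so every growth path must fail
//    gracefully.
template <typename T>
struct null_mock_allocator
{
    T*          initial_buffer()   { return 0; }
    std::size_t initial_capacity() { return 0; }
    bool grow(T*&, std::size_t&, std::size_t) { return false; }
};
```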

But now I’m not testing the behaviour of the vector, I’m testing the implementation of the vector – knowing how it works internally and how that affects its external state.

At first I tried to be a bit clever with this. Each test was extracted into a small templated helper function, with the checks being done in those. That way I only had one implementation of a test and simply passed in the vector, source data and expected values. I had to make a few tweaks to the UT++ source to enable checks to be done outside the TEST(…) call, but this was pretty simple.
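Roughly what those helpers looked like – the names are invented, and the CHECK macros living in a free function is exactly what the UT++ tweak above enables:

```cpp
// One templated body per behaviour; the container, input size and
// expectations are passed in, so the test logic is only written once.
template <typename Vector>
void CheckPushBackBehaviour(Vector& v, int count,
                            std::size_t expectedSize, bool expectSuccess)
{
    bool allSucceeded = true;
    for (int i = 0; i < count; ++i)
        allSucceeded = v.push_back(i) && allSucceeded;

    CHECK_EQUAL(expectSuccess, allSucceeded);
    CHECK_EQUAL(expectedSize, v.size());
}

TEST(PushBackWithExactAllocator)
{
    vector_base<int, exact_mock_allocator<int> > v;
    CheckPushBackBehaviour(v, 10, 10u, true);
}

TEST(PushBackWithNullAllocator)
{
    vector_base<int, null_mock_allocator<int> > v;
    CheckPushBackBehaviour(v, 10, 0u, false);  // nothing can be stored
}
```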

At first this worked well. As long as the set of tests was the same across all objects using our various mocks it was fine: the same number of expected results, the same or similar input values and the same general behaviour. But it started to get tricky when the tests needed to deviate. For example, when testing a vector that uses a fixed-size allocator, I need to test that the vector works fine on the boundaries – boundaries which the other vectors I'm testing don't have. So we either end up with a large number of superficial tests or, much worse, I forget to extract the specific test cases for the specific implementation.

But since there is nothing wrong with a bit of duplication, especially when each 'duplicated' test is testing a specific thing, I don't need to be so clever. Having very discrete tests based on the implementation of the vector means I am free to let the test suite deviate as I see fit, when I know that the behaviour of the vector will differ based on the type of allocator it is using. It would be beautiful to be able to use the same tests for each type, but frankly that's just not possible when you have to understand the internal behaviour of the object to test it correctly.
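So where the behaviour genuinely differs, the tests are simply written out separately – for example (again with the hypothetical types from the earlier sketches):

```cpp
// A boundary only the fixed vector has, so only it gets this test.
TEST(FixedVectorPushBackFailsAtCapacity)
{
    fixed_int_vector v;                  // 32 in-place slots
    for (int i = 0; i < 32; ++i)
        CHECK(v.push_back(i));           // fill right up to the boundary

    CHECK(!v.push_back(99));             // one past the end must fail
    CHECK_EQUAL(32u, v.size());          // and must leave the vector intact
}

// The heap vector has no such boundary; it just keeps growing.
TEST(HeapVectorPushBackGrowsPastAnyInitialCapacity)
{
    int_vector v;
    for (int i = 0; i < 1000; ++i)
        CHECK(v.push_back(i));
    CHECK_EQUAL(1000u, v.size());
}
```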

So the final solution – duplicate some tests, but only those that you know have different behaviour based on an implementation or an expected outcome. There might be a bit more code, but it produces the best results, the best output (it's much easier to see what has failed and why) and tests that are the easiest to understand.

If the object depended on multiple internal objects (a policy-based smart pointer is an excellent example) then I don't believe this solution would work. The more variations you have, the blurrier the lines between fixed and dependent behaviour become. In those cases I would probably expect to be writing many more tests based on the final object, simply for peace of mind.

You can only go so far with this; unit tests are designed to aid you and give you a security blanket. In an ideal world (one where TDD rules all) you don't need to know what the internal implementation is doing, only the final outcome. But if you are spending more time figuring out your tests (let alone writing them) than the code they exercise, you need to take a step back and reassess what you are doing.

Title image by fisserman.
