Friday, April 28, 2006

The parallels between TDD-derived code and Artificial Neural Networks

I'd like to draw an analogy between artificial neural networks and the code that often gets produced by poor TDD, i.e. TDD done with little respect paid to refactoring.

Artificial Neural Networks

An artificial neural network (ANN) is a graph-like structure that maps a set of inputs to a set of outputs. An ANN can represent very complex mappings because it has multiple layers of nodes and weighted connections between those nodes.
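
To make the structure concrete, here is a minimal sketch in Python; the layer sizes and the sigmoid activation are just illustrative assumptions, not anything specific to face recognition:

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # each row of weights holds the weighting of every input into one node
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# a randomized network with 3 inputs, 4 hidden nodes and 1 output node
hidden_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output_weights = [[random.uniform(-1, 1) for _ in range(4)]]

def network(inputs):
    return layer(layer(inputs, hidden_weights), output_weights)

print(network([0.2, 0.7, 0.1]))  # a single number between 0 and 1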

One example of an ANN is one which can recognize a human face. To obtain such a network one stimulates a randomized network with samples of faces and of non-faces. For each sample the network produces an output. One takes that output, calculates the difference between it and the desired output, and back-propagates that difference through the network. Back-propagation of the 'error' causes the network to learn, to produce smaller and smaller errors.
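
A rough sketch of that training loop follows. To keep it short it adjusts only a single layer of weights (the delta rule); full back-propagation pushes the error through every layer in turn. The sample data here is entirely made up:

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical samples: (input features, desired output), 1.0 = face, 0.0 = non-face
samples = [([0.9, 0.8, 0.1], 1.0),
           ([0.1, 0.2, 0.9], 0.0)]

weights = [random.uniform(-1, 1) for _ in range(3)]

for epoch in range(1000):
    for inputs, desired in samples:
        actual = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        error = desired - actual  # the difference from the desired output
        # push the error back into the weights, shrinking future errors
        for j, x in enumerate(inputs):
            weights[j] += 0.5 * error * actual * (1.0 - actual) * x

print(weights)  # the learned 'knowledge' is just these numbers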

After training one has a very useful tool that can classify things sometimes more accurately than a human can. The Faustian bargain one has agreed to is this: one doesn't know how the network classifies its input. The 'knowledge' is just a bunch of numbers; it's a black box.

Code produced by TDD

With TDD the tests play the same role: they supply the input to the code and the expected output. The developer takes the difference between actual and expected output and back-propagates changes through the code to reduce that difference to zero.

At the point when all the tests pass it can be very tempting to move on to the next feature. But if one succumbs to this temptation, and does so regularly, then the resulting code will closely resemble the neural network: it will work, but you won't know how it works; it too will be a black box.
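
To make the analogy concrete, here is an invented example (the grade function and its tests are hypothetical): each failing test was made to pass with the smallest change that worked, and nothing was ever refactored. The tests are green, yet the code carries no explanation of why it looks the way it does:

import unittest

def grade(score):
    # each branch was added just to silence one failing test
    if score == 75:
        return "pass"
    if score == 40:
        return "fail"
    if score >= 50:
        return "pass"
    return "fail"

class GradeTests(unittest.TestCase):
    # each test supplies an input and the expected output
    def test_high_score_passes(self):
        self.assertEqual("pass", grade(75))

    def test_low_score_fails(self):
        self.assertEqual("fail", grade(40))

    def test_boundary_score_passes(self):
        self.assertEqual("pass", grade(50))

if __name__ == "__main__":
    unittest.main()

Refactoring would collapse the redundant branches into the single rule the tests were really describing.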

Danger of over-specificity

Once one has a trained network there is a further danger: the network may be over-specific. It can recognize all of the faces in the sample set but fails on faces outside the set, even when those unseen faces are so similar that misclassifying them seems unwarranted. To address this problem one usually divides the samples into two sets: one set is used for training, while both are used to verify, so that poor results on the held-out set expose the over-specificity.
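
As a sketch, the split itself is trivial; the sample data and the 70/30 ratio here are arbitrary assumptions:

import random

# made-up labelled samples: (image name, is it a face?)
samples = [("image_%d" % n, n % 2 == 0) for n in range(100)]
random.shuffle(samples)

cut = int(len(samples) * 0.7)
training_set = samples[:cut]  # used to train the network
held_out_set = samples[cut:]  # never shown during training

# after training on training_set, verify on both sets: good accuracy on the
# training set but poor accuracy on the held-out set reveals over-specificity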

But with TDD there is no equivalent practice of using one set of tests to drive the code and a separate, held-back set of tests to verify it. So not only will the code be a black box, it may well be an over-specific black box.

Conclusion

I find the conclusion to all of this quite obvious: refactor your code if you want to retain the ability to understand it! Even if you have great tests (*) and great coverage you may still have a murky black box that nobody wants to work on.

(*) It may be that black-box code cannot be tested with good tests, and so its existence will show up to the developers as an increasing difficulty in writing good tests.
