It’s hard to hire good developers. We face the same struggles everyone does in sorting the good from the bad. One of my favorite tropes is the resume as failed inductive proof. The eBay recruiter saw a stint at Microsoft, the Microsoft recruiter saw a stint at Yahoo, and the Yahoo recruiter saw an internship at Google. Each one assumed the previous marquee name gave the candidate the gravitas they wanted. But an inductive proof only works when the base case can be proven. The Google internship was a bust, but who calls back that far? He’s looking for a new job now because eBay didn’t like him either.
An important part of our hiring process is a programming test. Not an online multiple-choice quiz testing whether you know exactly which order the ASP.NET WebForms events fire in. You can look that up. But a Stack Overflow search won’t tell you the real answer: don’t use WebForms anymore. (It’s for this reason that Microsoft certifications on a resume can be a negative sign, but I digress.)
Instead it’s a programming exercise candidates complete over the course of a few hours, implementing a new algorithm. We’ve changed it a few times as solutions to old puzzles kept showing up on the Internet. In its most recent incarnation, we decided to make our lives a little easier by writing not just a general problem description, but also providing some code for our candidates to complete. We give them an interface and some unit tests on that interface, and ask them to write an implementation for us.
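To make that concrete, here’s a hypothetical sketch of the shape such a handout takes. The problem and names are invented for illustration (our actual quiz is different); the point is that we supply the interface and the tests, and the candidate supplies the implementation.

```haskell
-- Hypothetical handout: we provide the class and the tests below;
-- the candidate writes an instance.
class Stack s where
  empty :: s
  push  :: Int -> s -> s
  pop   :: s -> Maybe (Int, s)

-- A stand-in for a candidate's submission:
newtype ListStack = ListStack [Int] deriving (Eq, Show)

instance Stack ListStack where
  empty                    = ListStack []
  push x (ListStack xs)    = ListStack (x : xs)
  pop (ListStack [])       = Nothing
  pop (ListStack (x : xs)) = Just (x, ListStack xs)

-- The unit tests we might hand out alongside the interface:
main :: IO ()
main = do
  let s = push 2 (push 1 empty) :: ListStack
  print (pop s == Just (2, push 1 empty))      -- expect True
  print (pop (empty :: ListStack) == Nothing)  -- expect True
```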
We’ve noticed a curious phenomenon of late: submissions which pass our tests but aren’t really right. They have fundamental flaws. We don’t expect perfection, since we don’t ask people to spend extravagant time on the test. But the phenomenon nonetheless exposes the fundamental problem with test-driven development.
"TESTS ARE CODE, TOO"
Is your code really finished when all your tests pass? No. In fact, the tests we provided turned out to be somewhat incomplete, and the same thing can happen to any programmer. How do you know the tests themselves are complete? Blindly following tests, whether you or anyone else wrote them, leads to convoluted and fragile code. Tests that grow blindly (“oh, I forgot that case”) are just as convoluted and fragile.
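A toy illustration of the trap, with invented code standing in for our actual puzzle:

```haskell
-- A deliberately wrong implementation: it ignores the century rules
-- (years divisible by 100 are not leap years unless divisible by 400).
isLeapYear :: Int -> Bool
isLeapYear y = y `mod` 4 == 0

-- A plausible hand-written suite that never exercises a century year:
tests :: [Bool]
tests =
  [ isLeapYear 2016
  , not (isLeapYear 2019)
  , isLeapYear 2020
  ]

main :: IO ()
main = print (and tests)
-- Prints True: every test passes, yet isLeapYear 1900 is wrong.
-- "All green" only proves the code agrees with the tests you thought of.
```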
There are different approaches to this conundrum. One is careful thought and discipline. Never hurts. Another interesting one is available on ever more platforms: QuickCheck. Instead of writing a variety of cases to check something important (“an acquire followed by a release followed by another release should throw an exception”), you instead write a statement of the rule (“the number of observers should never be below zero”) and let QuickCheck generate and try lots of possibilities. It forces you to think about your invariants deeply.
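Here’s a minimal sketch in Haskell, QuickCheck’s original home; the acquire/release counter is a hypothetical stand-in for whatever your real system is:

```haskell
import Test.QuickCheck

-- Hypothetical system under test: a counter of active observers.
data Op = Acquire | Release deriving Show

instance Arbitrary Op where
  arbitrary = elements [Acquire, Release]

-- A deliberately naive implementation: it decrements blindly
-- instead of guarding (or throwing) on an unmatched release.
step :: Int -> Op -> Int
step n Acquire = n + 1
step n Release = n - 1

-- The invariant, stated once rather than as hand-picked cases:
-- no sequence of operations may drive the count below zero.
prop_neverNegative :: [Op] -> Bool
prop_neverNegative ops = all (>= 0) (scanl step 0 ops)

main :: IO ()
main = quickCheck prop_neverNegative
```

Run it and QuickCheck generates random operation sequences, finds a failing one, and shrinks it toward a minimal counterexample, typically something like [Release].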
This isn’t an article about QuickCheck — though we should probably write one of those — but really an admonition to not just blindly follow your tests.
If it’s worth being clever with your code, try being clever with your tests.