It's a bug! It's a feature! It's… a limitation of the fundamental design of your test framework?

As some of you probably know, I'm a big fan of NCrunch. When I'm coding in C#, NCrunch gets a CPU core and a whole screen to itself (yes, I don't really write code on a system that looks like this) and sits there quietly running all my tests, all the time, and telling me the second I break anything.

I'm also a big fan of testing things that are as close to production behaviour as you can. Unit tests are great for informing the design of your components, but without integration testing you can't be sure they're actually going to work when you stick them together.

So on my current project, there's a suite of unit tests using FakeItEasy and assertions, and then a suite of integration tests that connect to the live API, follow the various hypermedia links, throw assorted JSON objects at the PUT and POST endpoints to see how they respond, and then call DELETE to clean up when they're done. And, just to keep us honest, we've got a post-deploy step in our Octopus Deploy script that will actually run the integration test suite as part of the deployment process, and roll the whole thing back if any of the tests fail. Another small step on the road to truly continuous deployment.
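
To give a flavour of what those integration tests look like, here's a minimal sketch of one, assuming HttpClient, Json.NET and NUnit. The base URL, the payload and the Location-header cleanup are illustrative assumptions rather than details from the real project:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using NUnit.Framework;

public class PersonApiIntegrationTests {
  [Test]
  public async Task Post_then_delete_round_trip() {
    using (var client = new HttpClient { BaseAddress = new Uri("https://api.example.com/") }) {
      var payload = new StringContent(
        JsonConvert.SerializeObject(new { forenames = "Bruce", surname = "Wayne" }),
        Encoding.UTF8, "application/json");

      // POST a JSON object at the endpoint and check how it responds
      var response = await client.PostAsync("people", payload);
      Assert.That(response.IsSuccessStatusCode, Is.True);

      // then DELETE whatever we created (assuming the API returns a Location header),
      // so the suite cleans up after itself
      var cleanup = await client.DeleteAsync(response.Headers.Location);
      Assert.That(cleanup.IsSuccessStatusCode, Is.True);
    }
  }
}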

Anyway. Last week, I push a release to our dev environment, and a whole load of tests fail. Which is weird, because it worked on MY machine. And it worked on my machine when I pointed my local codebase at the database in the dev environment. And – here's the fun part – it worked on my machine when I pointed the entire test suite at the dev environment. So I start eliminating variables. One of the first things I pick up on is that my local test runner is NCrunch, whereas the post-deploy step is using nunit-console. So I run the local integration tests using nunit-console and – bang. Failures. Which is good, because I know what's causing the weirdness, but weird, because tests are supposed to either pass or fail regardless of what test runner you're using.

So I dig a little deeper, and I end up with what looks to me like a bug in NCrunch. See, we're using the TestCaseSource attribute to generate test cases for the API tests, and – because all we need is a bunch of different JSON objects – we're just spinning up new anonymous objects and passing them in as test cases.

Here are two anonymous objects:

// the null needs an explicit cast, or the anonymous type's property has no type
var testCase1 = new { forenames = (string)null, surname = "Batman" };
var testCase2 = new { forenames = String.Empty, surname = "Batman" };
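
Wired into NUnit, it looks roughly like this. This is a minimal sketch rather than the project's actual code (the fixture and method names are mine), but it shows how those anonymous objects end up feeding a test via TestCaseSource:

using System.Collections;
using Newtonsoft.Json;
using NUnit.Framework;

[TestFixture]
public class PersonEndpointTests {
  // TestCaseSource points at a method that yields one object per test case
  public static IEnumerable TestData() {
    yield return new { forenames = (string)null, surname = "Batman" };
    yield return new { forenames = String.Empty, surname = "Batman" };
  }

  [TestCaseSource(nameof(TestData))]
  public void Endpoint_accepts_the_generated_payload(object testCase) {
    var json = JsonConvert.SerializeObject(testCase);
    // ...PUT or POST the JSON at the API under test and assert on the response...
  }
}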

What I noticed is that if you generate these two test cases, NCrunch will only see them as a single test – which I assumed was because their ToString() representations are equal: an anonymous type's generated ToString() renders a null property and an empty string identically, so both objects format to exactly the same string. So I opened a post about it on the NCrunch forums, even going so far as to suggest using GetHashCode() when enumerating test names, and got this really interesting response from Remco Mulder, the NCrunch lead developer:
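
You can see the collision with a quick console check; the comments show the output I'd expect from the two objects above:

using System;

class Program {
  static void Main() {
    var testCase1 = new { forenames = (string)null, surname = "Batman" };
    var testCase2 = new { forenames = String.Empty, surname = "Batman" };

    Console.WriteLine(testCase1); // { forenames = , surname = Batman }
    Console.WriteLine(testCase2); // { forenames = , surname = Batman }
    Console.WriteLine(testCase1.ToString() == testCase2.ToString()); // True
  }
}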

Tests must be uniquely identifiable between execution and discovery runs. This isn't important for a tool like the nunit console runner where a test can be discovered and executed within the same process call (and thus identified by its memory address), but for a tool like NCrunch, there's no way to run the test or collect data from it without this. As you've identified, generated tests with a null parameter and an empty string will return the same result under .ToString(), so NCrunch can't tell them apart.

The only way to solve this is to change the design of your code. Try using the NUnit .SetName() method to give each of your generated tests a distinctive name.

Unfortunately .GetHashCode() is not a reliable solution to this problem as this method is not designed to generate the same identifier across different processes. This method returns different results under x86 vs x64, and under .NET Core it will actually return an entirely different result for each process. Because your code is responsible for generating the tests, the problem can only be solved within your own code.
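
(To see what that last point means in practice: on .NET Core and later, string hash codes are randomised per process, so even something as simple as this sketch prints a different number on every run.)

using System;

class Program {
  static void Main() {
    // On .NET Core / .NET 5+ this prints a different value each time the process runs,
    // so a hash can't match a test discovered in one process with the same test
    // executed in another.
    Console.WriteLine("{ forenames = , surname = Batman }".GetHashCode());
  }
}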

I thought this was a really interesting insight into how a tool like NCrunch has to deal with situations that an in-process test runner like nunit-console will probably never encounter. It also turns out I’d dismissed that very warning a few weeks earlier – when it cropped up in response to an unrelated issue which produced the same symptoms – and sure enough, after clicking the “Show all hidden warnings” button on the NCrunch toolbar, the warning popped back up – along with a very detailed explanation of what was causing it:

[Screenshot: the NCrunch warning explaining why the generated test cases can't be uniquely identified]

Plus, I had no idea that NUnit has a TestCaseData class with a SetName() method on it, which gives a much nicer way of presenting these test cases in both NCrunch and NUnit. I've ended up with something akin to:

using System;
using System.Collections;
using Newtonsoft.Json;
using NUnit.Framework;

public static IEnumerable TestData() {
  foreach (var data in new[] { null, String.Empty }) {
    var testCase = new { forenames = data, surname = "Batman" };
    var json = JsonConvert.SerializeObject(testCase);
    // the serialized JSON doubles as a unique, human-readable test name
    yield return new TestCaseData(testCase).SetName(json);
  }
}
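
With the JSON as the name, the two cases from earlier show up as {"forenames":null,"surname":"Batman"} and {"forenames":"","surname":"Batman"}: distinct, stable between discovery and execution runs, and a lot more readable in a test runner than the anonymous type's default ToString() output.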

Oh, and if you're interested, the deployment failures were down to a weird validation rule that treats null as a missing value, which is fine, but treats String.Empty as a present-but-empty string, which then falls foul of a string length constraint. Which is wrong, and now the API doesn't do it any more. This is just another reason why integration testing is a good idea.

So there you have it – a bug that wasn’t a bug, a crash-course in how NUnit and NCrunch actually work behind the scenes, and a TIL for naming your NUnit tests explicitly. Happy Friday.