james mckay dot net

Blah blah scribble scribble waffle waffle
16 Oct

Moving a problem from one part of your codebase to another does not eliminate it

This is, of course, a statement of the obvious, but I’ve come across quite a few “best practices” in recent years that violate it.

People come up with some design pattern or other, telling you that it solves some problem or other. At first sight, it appears that it does eliminate the problem from one part of your codebase, but on closer inspection it turns out that it merely shifts it to another, and sometimes even introduces other problems in the process.

I first noticed this in a Web Forms application, where our resident Best Practices Guy berated me for using inline data binding expressions in the .aspx files. These were actually simple data binding expressions, with no business logic, a bit like this:

<asp:Repeater ID="rptData" runat="server">
  <ItemTemplate>
    <p>
      <asp:Label ID="lblParagraph" Text='<%# Eval("Text") %>' runat="server" />
    </p>
  </ItemTemplate>
</asp:Repeater>

Just like you’ve seen in every Web Forms tutorial since 2001, but he said I should have been looking up the label in the Repeater’s ItemDataBound event and assigning it there instead:

void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    var label = e.Item.FindControl("lblParagraph") as Label;
    if (label != null)
    {
        label.Text = ((LineItem)e.Item.DataItem).Text;
    }
}

He claimed that it would prevent problems if I’d mistyped the property name in the .aspx file, because the C# compiler would catch it.

The reason this is a fallacy is that it just moves the problem into your code-behind file. You’re just as likely to mistype the name of the control — lblParagraph — in the string, and you’ll end up with exactly the same problem. Only it’ll be easier to miss in testing, because the null check means that it will fail silently. On top of that, you’re using more than twice as many lines of code, spread across two files rather than one, to do the same thing.

I noticed a similar problem when I was evaluating OOCSS — a design pattern that’s supposed to make your CSS more maintainable and lightweight without using a pre-processor, by having you declare separate CSS classes for different functional aspects such as “button” or “highlighted” or “media”. Twitter Bootstrap uses it fairly heavily. Unfortunately, while it reduces duplication in your stylesheets, it introduces a lot of duplication and weight into your HTML, because you now have to set additional class declarations on a huge number of elements.

Then of course there’s our old friend, the Repository Facade, whose proponents tell you that it reduces tight coupling between your business layer and your ORM. Of course a generic Repository Facade does this at the expense of making it impossible to optimise your queries for performance, but with a specialised one — where you’re moving your queries into your Repository Facade itself — you’re just moving the tight coupling from one part of your codebase to another. It doesn’t reduce the amount of work that you would have to do to switch your data source in the slightest, and in the process it prevents you from unit testing your business logic independently of the database.
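
To make that concrete, here’s a rough sketch of the kind of specialised Repository Facade I mean, assuming an Entity Framework context with a Posts set (the type names here are illustrative):

public class PostRepository
{
    private readonly BlogContext _context; // an EF DbContext subclass

    public PostRepository(BlogContext context)
    {
        _context = context;
    }

    public List<Post> GetRecentPosts(int count)
    {
        // Every line here is still Entity Framework-specific LINQ;
        // the coupling hasn't gone away, it has just moved house.
        return _context.Posts
            .OrderByDescending(post => post.PostDate)
            .Take(count)
            .ToList();
    }
}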

09 Oct

The Repository Facade

Most developers use the term “Repository” to refer to a wrapper or abstraction layer around your O/R mapper, supposedly to let you switch out one persistence mechanism for another. However, if you look at its definition in its historical context, you’ll see that this isn’t what it refers to at all.

The Repository pattern is a part of your O/R mapper itself.

The Repository pattern was first described as follows in Martin Fowler’s Patterns of Enterprise Application Architecture:

Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.

Patterns of Enterprise Application Architecture was written in 2003, at a time when O/R mapping technology was in its infancy. Most ORMs were commercial products, very simple by today’s standards — more akin to the likes of Dapper or PetaPoco than to modern heavyweights like NHibernate or Entity Framework. Hand-rolled data access layers were very much the order of the day. Furthermore, many of the patterns described in P of EAA — Table Data Gateway, Row Data Gateway, Data Mapper, Unit of Work, Identity Map, Lazy Load, and so on — catalogue what are now different components of modern-day ORMs.

So when the Repository pattern talks about mediating between the domain and “data mapping layers,” it isn’t referring to your ORM as a whole, as most developers seem to assume, but to just one component of your ORM — specifically, the component that copies data from the results of the generated SQL query into your entities. That mediating layer is something modern ORMs already provide.

For example, Entity Framework’s DbSet<T> is a Repository. So too is NHibernate’s ISession, with methods such as QueryOver<T>().
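
In other words, your ORM already gives you the collection-like interface that the pattern describes. Something like this, with BlogContext and Post as illustrative names, is already “using the Repository pattern”:

// DbSet<Post> already provides the collection-like interface that the
// Repository pattern describes; no wrapper class is needed to use it as one.
using (var context = new BlogContext())
{
    var recentPosts = context.Posts              // DbSet<Post> as Repository
        .Where(post => post.Published)
        .OrderByDescending(post => post.PostDate)
        .Take(10)
        .ToList();
}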

So what is the wrapper class that people write around their ORMs, the one that they tend to refer to as a Repository? A more accurate term for it is a Repository Facade.

It’s important to draw the distinction, especially with the debate around whether this pattern has any value or not. Referring to your ORM itself as a Repository makes it easy for people to make the conceptual leap that allows them to just plug Entity Framework straight into their business service classes without the additional layer of abstraction, but on the other hand it can cause a bit of confusion if you then start saying that “the Repository pattern is harmful.” That’s why I’m now being careful to use the term “Repository” to refer to Entity Framework, NHibernate or the RavenDB client itself, and the term “Repository Facade” to refer to the practice of adding an extra abstraction layer around it.

02 Oct

Why feature switches?

The key feature of Dolstagis.Web is that it builds the concept of feature switches and Branch by Abstraction right into its core. To anyone who recalls the position that I adopted three years ago in The Great Feature Branch Debate, this will no doubt come as a bit of a surprise. Am I jumping the shark here?

Not really. I may have been objecting at the time to the black-and-white “feature branches bad, feature toggles good” line being adopted by Martin Fowler and his ThoughtWorks colleagues, but my own opinion is somewhat more nuanced than the polar opposite of “feature toggles bad, feature branches good.” There are actually a lot of clever things that you can do with feature toggles:

  • You can set them to flip at a specific date and time, for example if you want to launch something at a conference (see the sketch after this list).
  • You can have them vary by IP address, for example to enable country-specific features.
  • You can use them for A/B testing.
  • You can use them to turn off features and degrade your site’s performance gracefully under load.
  • You may occasionally have a feature that needs to be turned off in order to perform a specific upgrade.
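
To illustrate the first couple of these, here’s a minimal, hypothetical sketch of a date-based switch. It isn’t any particular framework’s API, and the HttpContextBase parameter is what would let similar switches go by IP address or user agent:

// A minimal, hypothetical feature switch abstraction (not any particular
// framework's API), showing how a go-live date switch might look.
public interface IFeatureSwitch
{
    bool IsEnabled(HttpContextBase context);
}

public class GoLiveDateSwitch : IFeatureSwitch
{
    private readonly DateTime _goLiveUtc;

    public GoLiveDateSwitch(DateTime goLiveUtc)
    {
        _goLiveUtc = goLiveUtc;
    }

    public bool IsEnabled(HttpContextBase context)
    {
        // Flips on automatically at the given UTC date and time,
        // e.g. for a conference launch.
        return DateTime.UtcNow >= _goLiveUtc;
    }
}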

The problem with feature switches is that when they are used as an alternative to branching and merging, you are regularly deploying code into your production environment that you know for a fact to be buggy, immature and incorrect. If your feature switches aren’t properly architected, this can end up being exposed to the general public — with disastrous results. Breaking your tasks down into small user stories can help, but it isn’t always enough.

The commonest mistake here is that your feature switch doesn’t switch everything. On an ASP.NET web application, for example, it may switch out your controllers, or your routes, but you will still have the same views, the same JavaScript, the same CSS and the same images and other static assets available for both the “on” and “off” states. It’s difficult to turn static assets on or off when by default they’re all served up by IIS from your web application’s filespace.

Dolstagis.Web gets round this problem quite simply by requiring you to expose your static files and your views explicitly in your feature definition classes. If it hasn’t been added, it won’t be shown: it’s as simple as that. In fact, you aren’t even limited to using your application filespace: you could just as easily include your static files and views as resources within your assembly. You can even have the feature that you are toggling in a separate assembly altogether, and modify your build process so that your production version doesn’t even have a copy of the immature and buggy code.
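
Conceptually, a feature definition looks something like this sketch (the member names here are illustrative, not the actual Dolstagis.Web API):

// Illustrative sketch only: not the real Dolstagis.Web API.
public class BlogFeature : Feature
{
    public BlogFeature()
    {
        // Nothing is served unless the feature explicitly exposes it, so
        // switching the feature off hides views and static assets as well
        // as code.
        AddStaticFiles("~/assets/blog");
        AddViews("~/views/blog");
    }
}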

Of course, this doesn’t guarantee that you’ll get your feature switches right, and there are still ways in which you can get them wrong, but hopefully this approach will make the problems easier to avoid.

25 Sep

Dolstagis.Web 0.3 is out

The latest version of my hobby project has now been released to NuGet for anyone who wants to tinker.

When I first started it off a year ago, it was mainly an experiment to see just how long it would take me to write a minimum viable alternative MVC framework. I was inspired partly by tinkering with some of the alternatives to ASP.NET MVC such as NancyFX and FubuMVC, and partly because I’d run out of ideas for something to keep me entertained on my daily commute into London. In the end it took me about a month or so.

Of course, there’s a vast difference between “minimum viable” and “actually usable,” and all I was interested in at the time was a basic proof of concept, so once I had the bare minimum up and running, I decided to just park it. But then back in August I was inspired to take it a bit further, so since then I’ve been doing a bit more work on it, and now I think I’ve got it to a point at which I can realistically start dogfooding it on another hobby project.

Over the coming weeks I’ll be blogging about some of the features I’ve been building into it, but for now here’s a summary of what I’ve been working on over the summer:

  • Everything is now OWIN-based rather than going through a custom abstraction layer based around HTTP handlers and modules.
  • The feature switching API has now been implemented. You can make your features switchable based on a configuration setting or a go-live date, or you can even write your own custom feature switches based on various aspects of the request, such as user agent or IP address.
  • Session state, cookie handling and a rudimentary user authentication mechanism are now available.
  • The routing engine has been rewritten. Each Feature now gets its own route table, which can be switched out for a custom implementation if the out-of-the-box option isn’t suitable for your needs.
  • I’ve taken a first stab at implementing some basic model binding.
  • Version 0.1’s Modules have been renamed to Features for consistency with the concept of feature switches.

Here are some ideas that I’m thinking of implementing in due course:

  • A Razor view engine adapter
  • Anti-XSRF protection
  • A Ruby on Rails-style “message flash”
  • An asset pipeline for bundling and minification
  • Some form of content negotiation
  • An A/B testing plugin built around the feature switch API
  • The ability to declare dependencies between features

It’s still not quite production ready yet, so it’s best to stick to breakable toy projects for now. Suggestions are, of course, always welcome.

18 Sep

How not to do logging: catch-log-throw

Way back in the mists of time, I worked on a project whose log files started spiralling out of control.

This wasn’t actually surprising, because the codebase in question was riddled with method after method that looked something like this:

public Widget GetWidget(int id)
{
    log.Debug("Getting widget with id " + id);
    try {
        var result = repository.GetWidget(id);
        if (result != null) {
            log.Debug("Successfully got widget with id: " + id);
        }
        else {
            log.Debug("No widget found with id: " + id);
        }
        return result;
    }
    catch (Exception ex) {
        log.Warn("Error fetching widget with id " + id, ex);
        throw;
    }
}

I’d objected to this about a year previously, but had encountered some stiff resistance from our team’s Best Practices Guy, who had been responsible for it in the first place.

The problem here is that the same exception was being logged multiple times, complete with deep stack traces, cluttering up the log files, making them very difficult to read and in the process making them grow out of control.

This is what catch-log-throw does.

But it doesn’t just cause problems with your infrastructure. It makes your code hard to read and hard to review, and it makes mistakes easy to miss. Our Best Practices Guy denied this when I said so, claiming that it was perfectly clear what it was doing, but you’ll see what I mean when I strip out the logging statements:

public Widget GetWidget(int id)
{
    return repository.GetWidget(id);
}

Two things become obvious here:

First, if you really do need this level of detail in your logs (and you usually don’t), a cleaner way to get it is to use an aspect-oriented framework such as Castle DynamicProxy or PostSharp. There’s really no need to clutter up your codebase with noise like this.
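
For instance, here’s a rough sketch of the same kind of tracing as a Castle DynamicProxy interceptor, assuming log4net and an IWidgetService interface to proxy:

// One interceptor can replace every hand-written trace block in the codebase.
public class LoggingInterceptor : IInterceptor
{
    private static readonly ILog log
        = LogManager.GetLogger(typeof(LoggingInterceptor));

    public void Intercept(IInvocation invocation)
    {
        if (log.IsDebugEnabled) {
            log.Debug("Calling " + invocation.Method.Name);
        }
        // Let exceptions bubble up, to be logged once where they are handled.
        invocation.Proceed();
    }
}

You’d then wrap your service in a proxy along the lines of new ProxyGenerator().CreateInterfaceProxyWithTarget<IWidgetService>(service, new LoggingInterceptor()).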

Second, as a general rule, you should only log exceptions in catch { } blocks where they are not being re-thrown. If you’re catching an exception to recover from it and continue, log it as a warning; if you’re reporting an error to the end user, log it as an error. In general, a catch { } block should either log the exception or re-throw it. Unless you have a very good reason, it shouldn’t do both.
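
In other words, log the exception at the point where it is actually dealt with. Here’s a minimal sketch, with an illustrative exception type and helper methods, assuming a log4net-style logger:

try {
    var widget = service.GetWidget(id);
    DisplayWidget(widget);
}
catch (WidgetNotFoundException ex) {
    // Recovering and carrying on: log it as a warning.
    log.Warn("Widget " + id + " not found; showing a placeholder", ex);
    DisplayPlaceholder();
}
catch (Exception ex) {
    // Reporting failure to the user: log it as an error, exactly once.
    log.Error("Error fetching widget " + id, ex);
    ShowErrorPage();
}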

11 Sep

How not to do logging: unnecessary abstractions

This is a very common pattern that I see over and over again in project after project:

public class MyService
{
    private ILogger _logger;
    /* snip */

    public MyService(ILogger logger, /* snip */)
    {
        _logger = logger;
        /* snip */
    }
}

There are a few problems here.

1. Don’t use dependency injection to create your loggers.

The problem with using dependency injection to create your loggers is that it denies you access to one of the most useful features of logging frameworks such as log4net and NLog: hierarchical loggers. The recommended way to instantiate loggers is to have just one for each class, with each logger named after the class in which it is used. For example, with log4net, you would do this:

namespace MyNamespace
{
    public class MyService
    {
        private static readonly ILog log
            = LogManager.GetLogger("MyNamespace.MyService");
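        // equivalently: LogManager.GetLogger(typeof(MyService)),
        // which derives the same name from the type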

        /* snip */
    }
}

With NLog it is even simpler:

namespace MyNamespace
{
    public class MyService
    {
        private static readonly Logger log
            = LogManager.GetCurrentClassLogger();

        /* snip */
    }
}

Why is this so important? Simple. It allows you to fine-tune your logging output on a namespace-by-namespace or a class-by-class basis. For example, you could send debugging information from NHibernate’s internals to a separate file, or log debug information only for your e-mail handling classes.

On the other hand, when you’re using your IOC container to create a logger, you can only specify a single named logger right across the board for your entire application. Sure, some IOC containers give you a way to determine the type of object into which you are injecting your logger, but others don’t, and I’ve never seen this done anyway even with those that do. You end up completely losing access to the hierarchy.

Another problem with using an IOC container here is that it limits your use of logging to classes that were created by the container in the first place. Sure, if your container exposes a service locator as a singleton (as for example StructureMap does with ObjectFactory.Instance) you could use that, but it’s ugly, not all IOC containers do that, and those that do aren’t always used that way anyway.

Finally, injected loggers do have an impact on performance. If you are injecting your loggers, your IOC container has to do more work each time you instantiate a new service, in order to locate the right logger and pass it in as a parameter. On the other hand, by creating loggers as static readonly members, you are only creating a single logger once per AppDomain for each class. This performance difference is admittedly small, but with classes that are instantiated frequently, it can easily add up.

2. Don’t abstract your loggers in application code.

There’s a case for writing an abstraction layer around your loggers when you’re creating a NuGet package for third party distribution. Some .net developers use log4net because it’s the one everyone’s heard of; others swear by NLog because it’s the best; and then you have the Microsoft-only crowd who won’t touch anything other than the Logging Application Block. As a third party library developer, you have to support all three. (Well, maybe not so much the third, because the kind of people who use the Logging Application Block are often the kind of people who won’t touch your library with a barge pole because you’re not Microsoft.)

But as an application developer, you don’t have to support anyone other than yourself, so an abstraction layer is superfluous here.

Besides being superfluous, the main problem with abstracting your loggers is that most people get it wrong. Your typical logging facade looks like this:

public interface ILogger
{
    void LogFatal(string message);
    void LogError(string message);
    void LogWarning(string message);
    void LogInfo(string message);
    void LogDebug(string message);
}

That’s all. You’re denying access to a whole lot of important features of your logger. For example, consider this code:

foreach (PropertyInfo prop in type.GetProperties())
{
    _logger.LogDebug(String.Format("Property {0} has type {1}",
        prop.Name, prop.PropertyType.Name));

    DoSomething(prop);
}

Even if your logger’s logging level is set to something higher than Debug, you are still calling String.Format and evaluating various reflection properties in a loop. In some cases, this can have a significant performance impact. What you should be doing instead is making use of log4net’s IsDebugEnabled property:

foreach (PropertyInfo prop in type.GetProperties())
{
    if (log.IsDebugEnabled) {
        log.DebugFormat("Property {0} has type {1}",
            prop.Name, prop.PropertyType.Name);
    }

    DoSomething(prop);
}

Exceptions are another example. You need to be able to pass exceptions to your logger, especially at the warning, error and fatal levels.
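
log4net’s ILog, for example, has overloads that take the exception alongside the message, which the facade above hides completely (ImportData here is an illustrative helper):

try {
    ImportData(fileName); // illustrative helper
}
catch (Exception ex) {
    // The second argument captures the full exception and stack trace;
    // a string-only facade forces you to flatten this yourself.
    log.Error("Import failed for " + fileName, ex);
}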

3. Don’t mock your loggers in tests.

Of course, all this raises the question of testability. What about mocking your loggers, you may ask?

The answer is simple: you don’t need to.

Your logging statements shouldn’t affect the outcome of your tests. If they do, then you must be doing something pretty esoteric with your logging and getting it wrong, in which case, your tests should fail.

In any case, your tests are the one place above all others where you should be able to inspect your logging output. Mocking your loggers loses you access to this vital information.

04 Sep

A maturity model for best practices

As developers, we’ve all encountered them. People who promote or even attempt to enforce a specific (usually inefficient, misunderstood, and inappropriately applied) way of working, and who dismiss any suggestion to the contrary out of hand with “you’re not sticking to best practices.”

Here’s my response to such people. It’s a quick and dirty scale to assess your maturity in how you deal with the concept of best practices:

Level 0: You don’t have any concept of best practices. You just fudge along and more or less busk it.

Level 1: You are aware of some common practices, which you follow simply because you are told that they are “best practices.” You attempt to master them because you believe that doing so will be advantageous to your career. You think that if you don’t stick to them, you’ll run into problems down the road, but you can’t say what those problems are.

Level 2: You are able to explain the benefits of your “best practices.” You promote them and seek to enforce them on your team, and dismiss any suggestion of alternatives as cowboy territory. You are aware of what problems they claim to solve, but not of how effective they are at actually solving them in practice.

Level 3: You are able to explain the disadvantages of your “best practices,” to propose alternatives, and to assess which ones are actually relevant to your specific situation.

Level 4: You look for evidence that your “best practices” actually deliver the benefits that they claim to deliver. You are able to identify those that do not, or whose supposed benefits are out of date, no longer relevant, fallacious, or confused with other concepts.

Level 5: You are able to come up with criteria by which to evaluate best practices for fitness for purpose.

Where do you think these Best Practices Guys fit in?

28 Aug

Sorting out the confusion that is OWIN, Katana and IAppBuilder

I’ve been doing some more work on my MVC hobby project lately, and one thing I’ve been working on has been replacing the rather poorly thought out abstraction layer around the host process with OWIN.

If you’ve never come across OWIN before, it’s the new standard way of decoupling .net-based applications and frameworks from the servers that run them, a bit like WSGI for Python or Rack for Ruby. It means that you can host your web application not only in IIS but also in a console application, or a Windows application, or even in Apache under Mono on a Linux server. The first version of the standard was finalised about two years ago.

The OWIN specification is elegantly simple. You just have to provide a delegate of type Func<IDictionary<string, object>, Task> — or in other words, something that looks like this:

public Task SomeAppFunc(IDictionary<string, object> environment);

where the environment dictionary provides a standard set of keys containing things such as the request and response headers, body, and so on. This delegate is called the AppFunc. The keys and values are all BCL types, so you don’t have to take a dependency on anything else. In fact, the OWIN specification explicitly says this:

OWIN is defined in terms of a delegate structure. There is no assembly called OWIN.dll or similar. Implementing either the host or application side the OWIN spec does not introduce a dependency to a project.

So putting all this together, a “Hello World” OWIN application would look something like this:

public async Task HelloWorldAppFunc
    (IDictionary<string, object> environment)
{
    var responseHeaders = (IDictionary<string, string[]>)
        environment["owin.ResponseHeaders"];
    var responseBody = (Stream)environment["owin.ResponseBody"];

    responseHeaders["Content-Type"] = new string[]
        { "text/plain" };

    var writer = new StreamWriter(responseBody);
    await writer.WriteLineAsync("Hello world");
    await writer.FlushAsync();
}

That’s it. But now we need to find somewhere to host it — and here, we run up against a problem.

The problem is that most of the OWIN “hello world” tutorials that you see on the web simply don’t look like that. Take for instance the one you see on www.asp.net:

public void Configuration(IAppBuilder app)
{
    // New code:
    app.Run(context =>
    {
        context.Response.ContentType = "text/plain";
        return context.Response.WriteAsync("Hello, world.");
    });
}

Just a minute … what’s this IAppBuilder? And where in the OWIN specification are there classes with a Response property, or a ContentType property and a WriteAsync method?

What you are looking at is not OWIN itself, but a set of libraries created by Microsoft called Katana. These libraries provide, among other things, some strongly typed wrappers around the AppFunc defined in the OWIN specification, so in one sense they’re useful in reducing boilerplate code.

The problem here is that Katana is built on an obsolete pre-release draft of the OWIN specification. The IAppBuilder interface was originally described in initial drafts of the OWIN specification, but it has since been removed. IAppBuilder is defined in owin.dll, which ships in a NuGet package called Owin, but the community voted to sunset this package back in May, and it’s now considered deprecated; new OWIN-related libraries should not use it. That’s why it’s so difficult to find any documentation on IAppBuilder itself: a Google search for IAppBuilder.Use merely leads to a couple of extension methods provided by Katana.

So…given our nice shiny AppFunc, how do we host it?

In theory, we should be able to just pass it to the host process. Some OWIN hosts, such as Nowin, let you do just that, by passing it into the ServerBuilder.SetOwinApp method. With Katana, it’s a little bit more complicated.

The IAppBuilder interface declares a method called Use, whose method signature looks like this:

IAppBuilder Use(object middleware, params object[] args)

Intuitively, you’d expect to be able to just pass your AppFunc into the Use method. Unfortunately, if you try this, it throws an exception. What you actually have to do is to pass in a middleware delegate. OWIN middleware (and this isn’t documented in the spec) is a delegate which takes one AppFunc and returns another AppFunc:

// (Shorthand: in real code, using aliases have to spell out the fully
// qualified type names, e.g. System.Func<...>, and can't refer to each other.)
using AppFunc = Func<IDictionary<string, object>, Task>;
using MiddlewareFunc = Func<AppFunc, AppFunc>;

Confused? So was I at first.

The AppFunc that was passed in to the MiddlewareFunc is simply the next step in the chain. So your AppFunc should do what it has to do, then either call or ignore the AppFunc which was passed in. For example, this middleware would just log the start and end of each invocation to the console:

app.Use(new Func<AppFunc, AppFunc>(next => async env => {
    Console.WriteLine("Starting request");
    await next(env);
    Console.WriteLine("Ending request");
}));

If you are writing a self-contained application rather than middleware, your AppFunc will be the last step in the pipeline, so you will want to ignore the “next” AppFunc. You would therefore do this:

app.Use(new Func<AppFunc, AppFunc>(ignored => HelloWorldAppFunc));
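
Putting it all together, here’s roughly what self-hosting looks like with Katana’s HttpListener-based host (the Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener packages), assuming the HelloWorldAppFunc from earlier is a static method in scope. This is a sketch rather than production code:

using AppFunc = Func<IDictionary<string, object>, Task>;

class Program
{
    static void Main()
    {
        // WebApp.Start hands us Katana's IAppBuilder to register the app with.
        using (WebApp.Start("http://localhost:8080", app =>
            app.Use(new Func<AppFunc, AppFunc>(ignored => HelloWorldAppFunc))))
        {
            Console.WriteLine("Listening on http://localhost:8080");
            Console.ReadLine();
        }
    }
}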

There are other ways of registering OWIN apps or middleware with a Katana host, by passing a middleware type or instance with a specific signature, or by using one of Katana’s strongly-typed wrappers, but none of these are defined in the OWIN specification, so I won’t dwell on them here.

Fortunately, this is set to be clarified in ASP.NET vNext: there’s been a lot of feedback from the community that IAppBuilder shouldn’t be the only way of creating an OWIN pipeline, and that the Katana wrapper classes, OwinMiddleware, OwinRequest and OwinResponse, have been causing some confusion, so the means to host a raw OWIN application or middleware will become more transparent. In the meantime, I hope that this clears up some of the confusion.

21 Aug

Query Objects: a better approach than your BLL/repository

If you’ve been following what I’ve been saying here on my blog and on the ASP.NET forums over the past month or so, you’ll no doubt realise that I’m not a fan of the traditional layered architecture, with your presentation layer only allowed to talk to your business layer, your business layer only allowed to talk to your repository, your repository only allowed to talk to your ORM, and all of them in separate assemblies for no reason whatsoever other than That Is How You Are Supposed To Do It. It adds a lot of friction and ceremony, it restricts you in ways that are harmful, its only benefits are unnecessary and dubious, and every implementation of it that I’ve come across has been horrible.

Here’s a far better approach:

public class BlogController : Controller
{
    private IBlogContext _context;

    public BlogController(IBlogContext context)
    {
        _context = context;
    }

    public ActionResult ShowPosts(PostsQuery query)
    {
        query.PrefetchComments = false;
        var posts = query.GetPosts(_context);
        return View(posts);
    }
}

[Bind(Exclude="PrefetchComments")]
public class PostsQuery
{
    private const int DefaultPageSize = 10;

    public int? PageNumber { get; set; }
    public int? PageSize { get; set; }
    public bool Descending { get; set; }
    public bool PrefetchComments { get; set; }

    public IQueryable<Post> GetPosts(IBlogContext context)
    {
        // Declared as IQueryable<Post> so that Include() and Skip()/Take()
        // can reassign it below.
        IQueryable<Post> posts = Descending
            ? context.Posts.OrderByDescending
                (post => post.PostDate)
            : context.Posts.OrderBy(post => post.PostDate);
        if (PrefetchComments) {
            posts = posts.Include("Comments");
        }
        if (PageNumber.HasValue && PageNumber > 1) {
            posts = posts.Skip
                ((PageNumber.Value - 1) * (PageSize ?? DefaultPageSize));
        }
        posts = posts.Take(PageSize ?? DefaultPageSize);
        return posts;
    }
}

A few points to note here.

First, you are injecting your Entity Framework DbContext subclass (the implementation of IBlogContext) directly into your controllers. Get over it: it’s not as harmful as you think it is. Your IOC container can (and should) manage its lifecycle.

Secondly, your query object follows the Open/Closed Principle: you can easily add new sorting and filtering options without having to modify either your controllers’ method signatures or the query object’s existing properties and methods. With a query method on your Repository, on the other hand, adding new options would be a breaking change.

Thirdly, it is very easy to avoid SELECT N+1 problems on the one hand, while not fetching screeds of data that you don’t need on the other, as the PrefetchComments property illustrates.

Fourthly, this approach is no less testable than your traditional BLL/BOL/DAL approach. By mocking your IBlogContext and IDbSet<T> interfaces, you can test your query object in isolation from your database. You would need to hit the database for more advanced Entity Framework features of course, but the same would be true with query methods on your repository.

Fifthly, note that your query object is automatically created and populated with the correct settings by ASP.NET MVC’s model binder.
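
For example, a request like the one below arrives with PageNumber, PageSize and Descending already populated, while the Bind attribute stops PrefetchComments from being set from the query string:

GET /blog/showposts?pageNumber=2&pageSize=25&descending=true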

All in all, a very simple, elegant and DRY approach.

(Hat tip: Jimmy Bogard for the original inspiration. This version simply adds the twist of having your query objects created and initialised by ASP.NET MVC’s model binder.)

14 Aug

If your tests aren’t hitting the database, you might as well not write tests at all

Out of all the so-called “best practices” that are nothing of the sort, this one comes right up at the top of my list. It’s the idea that hitting the database in your tests is somehow harmful.

I’m quite frankly amazed that this one gets as much traction as it does, because it’s actively dangerous. Some parts of your codebase require even more attention from your tests than others — in particular, parts which are:

  1. easy to get wrong
  2. tricky to get right
  3. not obvious when you’re getting it wrong
  4. difficult to verify manually
  5. high-impact if you do screw up.

Your data access layer, your database itself, and the interactions between them and the rest of your application fall squarely into all the above categories. There are a lot of moving parts in any persistence mechanism — foreign key constraints, which end of a many-to-many relationship you declare as the inverse, mappings, migrations, and so on — and it’s very easy to make a mistake with any of them. If you’ve ever had to wrestle with the myriad of obscure, surprising and gnarly error messages that you get from both NHibernate and Entity Framework, you’ll know exactly what I mean.

If you never test against a real database, but rely exclusively on mocking out your data access layer, you are leaving vast swathes of your most error-prone and business-critical functionality with no test coverage at all. You might as well not be testing anything.

Yes, tests that hit the database are slow. Yes, it’s off-putting to write slow tests. But tests that don’t hit the database don’t test things that need to be tested. Sometimes, there are no short cuts.

(Incidentally, this is also why you shouldn’t waste time writing unit tests for your getters and setters or for anaemic business services: these are low-risk, low-impact aspects of your codebase that usually break other tests anyway if you do get them wrong. Testing your getters and setters isn’t unit testing, it’s unit testing theatre.)

“But that rule just applies to unit tests. Integration, functional and regression tests are different.”

I agree there, and I’m not contradicting that. But if you’re saying “don’t hit your database in your unit tests” and then trying to qualify it in this way, you’re just causing confusion.

Regardless of what you are trying to say, people will hear “don’t hit the database in your tests, period.” People scan what you write and pick out sound bites. They see the headline, and skip over the paragraph about integration tests and so on as if it were merely a footnote.

By all means tell people to test their business logic independently of the database if you like, but phrase it in a way that’s less likely to be misunderstood. If you’re leaving them with the impression that they shouldn’t be testing their database, their data access layer, and the interaction between them and the rest of their application, then even if that isn’t your intention, you’re doing them a serious disservice.