Why you should use a general purpose scripting language for your build scripts

For quite some time now, given the choice, I’ve opted to write my build scripts in plain Python using nothing but the standard libraries. I personally believe (with good reason) that build scripts are best served by a general purpose scripting language such as this, and that domain-specific languages or frameworks for build scripting have little if anything to offer. Most people use DSLs such as NAnt, MSBuild, Rake, Grunt or Gradle for their build scripts simply because they believe That Is How You Are Supposed To Do It, but in most cases it isn’t necessary, and in many cases it is even counterproductive.

In this post, I’d like to explain the reasons why I recommend using general purpose scripting languages and avoiding specialised build frameworks, and to address some commonly held misconceptions about build scripts in general. If I’m saying a lot about MSBuild, that’s because it’s the tool that I have the most experience with; however, many of my points apply to other tools as well, including those that aren’t XML-based.

1. Build scripts are code, not configuration.

Most build frameworks view build scripts as configuration first and code second. This is wrong. Badly wrong.

You see this in the way they adopt a declarative, rather than an imperative, approach, defining your script in terms of “targets” or “tasks” with dependencies between them. This approach can make sense in some situations — in particular, where you have a large number of similar tasks in a complex dependency graph, and you need to allow the build engine to determine the order in which they are run. This is the case, for example, with Visual Studio solutions that consist of a large number of projects.

But top-level build scripts don’t work that way: at the topmost level, a build script is inherently imperative in nature, so a declarative approach doesn’t make a whole lot of sense. Your typical build script consists of a sequence of diverse tasks which are run in an order that you define, or sometimes in a loop over the values in a collection. For example, it may look something like this:

  • Fetch the latest version of your code from source control
  • Delete any leftover files from the previous build
  • Write a file to disk containing version information
  • Fetch your project’s dependencies (NuGet packages, for example)
  • Compile your project
  • Bundle and minify your assets
  • Run your unit tests
  • Run your integration tests
  • Prepare installation packages
  • Deploy your build to the appropriate servers
  • Prepare reports (e.g. code coverage)

Writing your build script imperatively, with each of these steps as a function call in the top level of your code, allows you to see, at a glance, what your script is doing. On the other hand, writing them declaratively, with each task specifying its own dependencies, often requires you to jump around all over your build script just to get a handle on things.
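To make this concrete, here is a minimal sketch of what the top level of such a script might look like in plain Python, using nothing but the standard library. The task functions, solution name and paths are all hypothetical; the point is that each task is a thin wrapper around a standard library call or a command line invocation, and the build itself is just a sequence of function calls:

import shutil
import subprocess

def clean(build_dir):
    # Delete any leftover files from the previous build.
    shutil.rmtree(build_dir, ignore_errors=True)

def compile_project(solution):
    # Run the compiler as an external process. check_call raises an
    # exception on a non-zero exit code, failing the build immediately.
    subprocess.check_call(["msbuild", solution, "/p:Configuration=Release"])

def run_unit_tests(test_assembly):
    subprocess.check_call(["nunit-console", test_assembly])

if __name__ == "__main__":
    # The whole build, in the order it runs, readable at a glance.
    clean("build")
    compile_project("MyProject.sln")
    run_unit_tests("MyProject.Tests/bin/Release/MyProject.Tests.dll")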

One important thing that build scripts need is control flow structures — conditions, loops, iteration over arrays, parametrised subroutines, variables, and so on. You simply can’t represent these properly with a declarative language. Sure, you can define tasks to handle some of these requirements, such as regex-based find and replace, but that will never be as clear as a purely imperative approach.
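For instance, in Python, iterating over a collection of test projects, or doing a regex-based find and replace to stamp version information into a file, is completely unremarkable code. A sketch, with hypothetical project names, paths and version number:

import re
import subprocess

version = "1.2.3.0"

# Iterate over a collection: one ordinary loop, rather than a
# copy-and-pasted target per project.
for project in ["Core.Tests", "Web.Tests", "Integration.Tests"]:
    subprocess.check_call(["nunit-console",
                           "%s/bin/Release/%s.dll" % (project, project)])

# Regex-based find and replace: stamp the version number into an
# AssemblyInfo.cs file before compiling.
with open("Properties/AssemblyInfo.cs") as f:
    source = f.read()
source = re.sub(r'AssemblyVersion\("[^"]*"\)',
                'AssemblyVersion("%s")' % version, source)
with open("Properties/AssemblyInfo.cs", "w") as f:
    f.write(source)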

I’ve never come across a definitive explanation of why build frameworks should all be based around the declarative, configuration-like approach of tasks with dependencies, other than a vague, hand-waving and unsubstantiated claim that “people prefer it that way.” Personally I think it’s more likely that people saw that this was how make, the granddaddy of build tools, was designed, assumed that it was a Best Practice, and copied it without thinking.

2. Build scripts need to be maintained.

Build scripts don’t tend to change very often — perhaps once every three to six months or so. Consequently it’s tempting to view them as something that you write once and can forget about completely. However, they do change, so readability and maintainability are critical. A well written build script can make all the difference between a change taking half an hour and it taking half a sprint; between it working as intended and being riddled with bugs.

This means, of course, that XML-based build languages, such as MSBuild or NAnt, are a very bad idea. This has nothing to do with a lack of “cool” — it’s a lack of readability and maintainability, pure and simple. XML simply isn’t capable of expressing the kind of control flow structures that you need in a succinct, readable manner. MSBuild is particularly bad here. Its lack of support for looping, iteration or parametrised subroutines makes it difficult, if not impossible, to write anything but the simplest of build scripts without resorting to painful amounts of copy-and-paste code. Since DRY is a vital discipline in keeping your code maintainable, anything that forces you to violate it as much as MSBuild does should be avoided with extreme prejudice.

To mitigate the problem, Ant, NAnt and MSBuild allow you to embed snippets of code in other languages, such as PowerShell. Besides the fact that the syntax to do so is so verbose and cumbersome that it’s scarcely worth it, this raises the obvious question: why not just use PowerShell end to end instead?

3. Build scripts need to be run from the command line.

It’s all too common to find build scripts that are very tightly integrated with the Continuous Integration server. This usually happens when you have vast swathes of configuration settings in TeamCity, TFS, Jenkins or what have you. This causes two problems: first, you have a lot of important and potentially breaking detail that isn’t checked into source control; second, it becomes very difficult if not impossible to run your build on your local machine, end to end, from the command line.

If you can’t run your build from the command line, debugging it will be painful. Every iteration of your edit-compile-test loop will require a separate check-in and a sit-on-your-hands wait for several minutes until it either completes or breaks. This is a very inefficient and wasteful way of doing things. It can also cause problems when you have to track down a regression with git bisect, because you’ll have a whole string of broken revisions to contend with.
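The fix is to make the build script itself the command line interface, with the CI server passing in only the handful of values that genuinely vary between environments, and everything else living in source control. A minimal sketch in Python, with hypothetical parameters, file names and tool invocations:

import argparse
import subprocess

def build_and_test(configuration):
    # The same task functions a full build script would define.
    subprocess.check_call(["msbuild", "MyProject.sln",
                           "/p:Configuration=" + configuration])
    subprocess.check_call(["nunit-console",
                           "MyProject.Tests/bin/%s/MyProject.Tests.dll"
                           % configuration])

if __name__ == "__main__":
    # The CI server runs exactly the same command as a developer would:
    #     python build.py --configuration Release
    # so the build can be reproduced and debugged locally, end to end.
    parser = argparse.ArgumentParser(description="End to end build script")
    parser.add_argument("--configuration", default="Debug")
    args = parser.parse_args()
    build_and_test(args.configuration)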

4. Build scripts have few, if any, other domain-specific requirements.

Apart from this, there are only two other requirements that your build scripts have. Your build language needs to be interpreted rather than compiled (otherwise you’ll have a chicken-and-egg problem), and it needs to be able to run other programs: your compiler; NuGet to fetch your dependencies; your test runner; and so on. But that’s pretty much it. Just about any general purpose scripting language — Python, Ruby, PowerShell, bash, DOS batch files, heck even PHP if you’re that way inclined — will fit the bill.

What about the specific (N)Ant/MSBuild tasks that you need to call? Most of these can be implemented quite simply as calls to either the language’s standard library or a command line interface.
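As a sketch of how the most common tasks might translate into Python (the MSBuild/NAnt task names here are real; the paths are hypothetical):

import os
import shutil
import subprocess

# <MakeDir> / <mkdir>: create a directory tree.
os.makedirs("build/packages", exist_ok=True)

# <Copy> / <copy>: copy a file, preserving its timestamps.
shutil.copy2("App.config", "build/App.config")

# <Delete>, <RemoveDir> / <delete>: remove files or whole directory trees.
shutil.rmtree("obj", ignore_errors=True)

# <Exec> / <exec>: anything else is one command line call away.
subprocess.check_call(["nuget", "restore", "MyProject.sln"])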

Some .NET developers don’t like this approach because they say that using, say, Python or PowerShell would mean having to learn a new language. Personally I find this a very strange argument, because if you’re using MSBuild, you’re doing that already anyway. Not only that, but the learning curve that you’re taking is actually steeper: the conceptual differences between, say, C# and Python are very superficial when compared to the conceptual differences between C# and MSBuild. Besides, learning a scripting language is a skill that can be transferred to other problem domains if necessary, whereas MSBuild is a very specialised and niche language that only ever gets used for build scripts for .NET projects.

Just because you are presented with something that describes itself as a build tool doesn’t mean to say you have to use it. Aim to choose tools and languages that allow you to write code that is easy to read, understand and maintain. You’ll be much more productive and much less stressed — and the people who have to maintain your code after you will thank you for it.

For further reading

Moving a problem from one part of your codebase to another does not eliminate it

This is, of course, a statement of the obvious, but I’ve come across quite a few “best practices” in recent years that violate it.

People come up with some design pattern or other, telling you that it solves some problem or other. At first sight, it appears that it does eliminate the problem from one part of your codebase, but on closer inspection it turns out that it merely shifts it to another, and sometimes even introduces other problems in the process.

I first noticed this in a Web Forms application, where our resident Best Practices Guy berated me for using inline data binding expressions in the .aspx files. These were actually simple data binding expressions, with no business logic, a bit like this:

<asp:Repeater id="rptData" runat="server">
  <ItemTemplate>
    <p>
      <asp:Label id="lblParagraph" Text='<%# Eval("Text") %>' runat="server" />
    </p>
  </ItemTemplate>
</asp:Repeater>

Just like you’ve seen in every Web Forms tutorial since 2001, but he said I should have been looking up the label in the ItemDataBound event and assigning it there instead:

void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    var label = e.Item.FindControl("lblParagraph") as Label;
    if (label != null)
    {
        label.Text = ((LineItem)e.Item.DataItem).Text;
    }
}

He claimed that it would prevent problems if I’d mistyped the property name in the .aspx file, because the C# compiler would catch it.

The reason this is a fallacy is that it just moves the problem into your code-behind file. You’re just as likely to mistype the name of the control — lblParagraph — in the string, and you’ll end up with exactly the same problem. Only now it’ll be easier to miss in testing, because the null check means that it will fail silently. On top of that, you’re using more than twice as many lines of code, spread over two files rather than one, to do the same thing.

I noticed a similar problem when I was evaluating OOCSS — a design pattern that’s supposed to reduce duplication in your CSS, by having you declare separate CSS classes for different functional aspects such as “button” or “highlighted” or “media”. Twitter Bootstrap uses it fairly heavily. Its selling point is that it’s supposed to make your CSS more maintainable and lightweight without using a pre-processor by reducing duplication in your stylesheets. Unfortunately, in the process, it introduces a lot of duplication and weight into your HTML because you now have to set additional class declarations on a huge number of elements.

Then of course there’s our old friend, the Repository Facade, whose proponents tell you that it reduces tight coupling between your business layer and your ORM. Of course a generic Repository Facade does this at the expense of making it impossible to optimise your queries for performance, but with a specialised one — where you’re moving your queries into your Repository Facade itself — you’re just moving the tight coupling from one part of your codebase to another. It doesn’t reduce the amount of work that you would have to do to switch your data source in the slightest, and in the process it prevents you from unit testing your business logic independently of the database.

The Repository Facade

Most developers use the term “Repository” to refer to a wrapper or abstraction layer around your O/R mapper, supposedly to let you switch out one persistence mechanism for another. However, if you look at its definition in its historical context, you’ll see that this isn’t what it refers to at all.

The Repository pattern is a part of your O/R mapper itself.

The Repository pattern was first described as follows in Martin Fowler’s Patterns of Enterprise Application Architecture:

Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.

Patterns of Enterprise Application Architecture was published in 2002, at a time when O/R mapping technology was in its infancy. Most ORMs were commercial products, very simple by today’s standards — more akin to the likes of Dapper or PetaPoco than to modern heavyweights like NHibernate or Entity Framework. Hand-rolled data access layers were very much the order of the day. Furthermore, many of the patterns described in P of EAA — Table Data Gateway, Row Data Gateway, Data Mapper, Unit of Work, Identity Map, Lazy Load, and so on — catalogue what are now different components of modern-day ORMs.

So when the Repository pattern talks about mediating between the domain and “data mapping layers,” it isn’t referring to your ORM as a whole, as most developers seem to assume, but to just one component of your ORM — specifically, the component that copies data from the results of the generated SQL query into your entities. This mediating, collection-like layer is itself one of the components of any modern ORM.

For example, Entity Framework’s DbSet<T> is a Repository. So too is NHibernate’s ISession, with methods such as QueryOver<T>().

So what is the wrapper class that people write around their ORMs then, the one that they tend to refer to as a Repository? A more accurate term for this is, in actual fact, a Repository Facade.

It’s important to draw the distinction, especially with the debate around whether this pattern has any value or not. Referring to your ORM itself as a Repository makes it easy for people to make the conceptual leap that allows them to just plug Entity Framework straight into their business service classes without the additional layer of abstraction, but on the other hand it can cause a bit of confusion if you then start saying that “the Repository pattern is harmful.” That’s why I’m now being careful to use the term “Repository” to refer to Entity Framework, NHibernate or the RavenDB client itself, and the term “Repository Facade” to refer to the practice of adding an extra abstraction layer around it.

How not to do logging: catch-log-throw

Way back in the mists of time, I worked on a project whose log files started spiralling out of control.

This wasn’t actually surprising, because the codebase in question was riddled with method after method that looked something like this:

public Widget GetWidget(int id)
{
    log.Debug("Getting widget with id " + id);
    try
    {
        var result = repository.GetWidget(id);
        if (result != null)
        {
            log.Debug("Successfully got widget with id: " + id);
        }
        else
        {
            log.Debug("No widget found with id: " + id);
        }
        return result;
    }
    catch (Exception ex)
    {
        log.Warn("Error fetching widget with id " + id, ex);
        throw;
    }
}

I’d objected to this about a year previously, but had encountered some stiff resistance from our team’s Best Practices Guy, who had been responsible for it in the first place.

The problem here is that the same exception was being logged multiple times, complete with deep stack traces, cluttering up the log files, making them very difficult to read and in the process making them grow out of control.

This is what catch-log-throw does.

But it doesn’t just cause problems with your infrastructure. It makes your code hard to read and hard to review, and it makes mistakes easy to miss. Our Best Practices Guy denied this when I said so, claiming that it was perfectly clear what it was doing, but you’ll see what I mean when I strip out the logging statements:

public Widget GetWidget(int id)
{
    return repository.GetWidget(id);
}

Two things become obvious here:

If you really do need this level of detail in your logs (and you usually don’t), a cleaner way to do it is to use an aspect-oriented framework such as Castle DynamicProxy or PostSharp. There’s really no need to clutter up your codebase with noise like this.

As a general rule, you should only log exceptions in catch { } blocks where they are not being re-thrown. If you’re catching it to recover from it and continue, log it as a warning; if you’re reporting an error to the end user, log it as an error. In general, a catch { } block should either log the exception or re-throw it. Unless you have a very good reason to do so, it shouldn’t do both.

How not to do logging: unnecessary abstractions

This is a very common pattern that I see over and over again in project after project:

public class MyService
{
    private readonly ILogger _logger;
    /* snip */

    public MyService(ILogger logger, /* snip */)
    {
        _logger = logger;
        /* snip */
    }
}

There are a few problems here.

1. Don’t use dependency injection to create your loggers.

The problem with using dependency injection to create your loggers is that it denies you access to one of the most useful features of these logging frameworks: hierarchical loggers. The recommended way to instantiate loggers is to have just one for each class, with each logger being named after the class in which it is used. For example, with log4net, you would do this:

namespace MyNamespace
{
    public class MyService
    {
        private static readonly ILog log
            = LogManager.GetLogger("MyNamespace.MyService");

        /* snip */
    }
}

With NLog it is even simpler:

namespace MyNamespace
{
    public class MyService
    {
        private static readonly Logger log
            = LogManager.GetCurrentClassLogger();

        /* snip */
    }
}

Why is this so important? Simple. It allows you to fine-tune your logging output on a namespace-by-namespace or a class-by-class basis. For example, you could send debugging information from NHibernate’s internals to a separate file, or log debug information only for your e-mail handling classes.

On the other hand, when you’re using your IOC container to create a logger, you can only specify a single named logger right across the board for your entire application. Sure, some IOC containers give you a way to determine the type of object into which you are injecting your logger, but others don’t, and I’ve never seen this done anyway even with those that do. You end up completely losing access to the hierarchy.

Another problem with using an IOC container here is that it limits your use of logging to classes that were created by the container in the first place. Sure, if your container exposes a service locator as a singleton (as for example StructureMap does with ObjectFactory.Instance) you could use that, but it’s ugly, not all IOC containers do that, and those that do aren’t always used that way anyway.

Finally, injected loggers do have an impact on performance. If you are injecting your loggers, your IOC container has to do more work each time you instantiate a new service, in order to locate the right logger and pass it in as a parameter. On the other hand, by creating loggers as static readonly members, you are only creating a single logger once per AppDomain for each class. This performance difference is admittedly small, but with classes that are instantiated frequently, it can easily add up.

2. Don’t abstract your loggers in application code.

There’s a case for writing an abstraction layer around your loggers when you’re creating a NuGet package for third party distribution. Some .NET developers use log4net because it’s the one everyone’s heard of; others swear by NLog because it’s the best; and then you have the Microsoft-only crowd who won’t touch anything other than the Logging Application Block. As a third party library developer, you have to support all three. (Well, maybe not so much the third, because the kind of people who use the Logging Application Block are often the kind of people who won’t touch your library with a barge pole because you’re not Microsoft.)

But as an application developer, you don’t have to support anyone other than yourself, so an abstraction layer is superfluous here.

Not only is an abstraction layer superfluous in application code; most people also get it wrong. Your typical logging facade looks like this:

public interface ILogger
{
    void LogFatal(string message);
    void LogError(string message);
    void LogWarning(string message);
    void LogInfo(string message);
    void LogDebug(string message);
}

That’s all. You’re denying access to a whole lot of important features of your logger. For example, consider this code:

foreach (PropertyInfo prop in type.GetProperties())
{
    _logger.LogDebug(String.Format("Property {0} has type {1}",
        prop.Name, prop.PropertyType.Name));

    DoSomething(prop);
}

Even if your logger’s logging level is set to something higher than Debug, you are still calling String.Format and various reflection properties in a loop. In some cases, this can have a significant performance impact. What you should be doing instead is making use of log4net’s IsDebugEnabled property:

foreach (PropertyInfo prop in type.GetProperties())
{
    if (log.IsDebugEnabled) {
        log.DebugFormat("Property {0} has type {1}",
            prop.Name, prop.PropertyType.Name);
    }

    DoSomething(prop);
}

Exceptions are another example. You need to be able to pass exceptions to your logger, especially at the warning, error and fatal levels; in log4net, for instance, this means the overloads that take an exception, such as log.Error(message, ex).

3. Don’t mock your loggers in tests.

Of course, all this raises the question of testability. What about mocking your loggers, you may ask?

The answer is simple: you don’t need to.

Your logging statements shouldn’t affect the outcome of your tests. If they do, then you must be doing something pretty esoteric with your logging and getting it wrong, in which case, your tests should fail.

In any case, your tests are one place more than any other where you should be able to inspect your logging output. Mocking them loses you access to this vital information.