james mckay dot net
because there are few things that are less logical than business logic

Why you should use a general purpose scripting language for your build scripts

For quite some time now, given the choice, I’ve opted to write my build scripts in plain Python using nothing but the standard libraries. I personally believe (with good reason) that build scripts are best served by a general purpose scripting language such as this, and that domain-specific languages or frameworks for build scripting have little if anything to offer. Most people use DSLs such as NAnt, MSBuild, Rake, Grunt or Gradle for their build scripts simply because they believe That Is How You Are Supposed To Do It, but in most cases it isn’t necessary, and in many cases it is even counterproductive.

In this post, I’d like to explain the reasons why I recommend using general purpose scripting languages and avoiding specialised build frameworks, and to address some commonly held misconceptions about build scripts in general. If I’m saying a lot about MSBuild, that’s because it’s the tool that I have the most experience with; however, many of my points apply to other tools as well, including those that aren’t XML-based.

1. Build scripts are code, not configuration.

Most build frameworks view build scripts as configuration first and code second. This is wrong. Badly wrong.

You see this in the way they adopt a declarative, rather than an imperative, approach, defining your script in terms of “targets” or “tasks” with dependencies between them. This approach can make sense in some situations — in particular, where you have a large number of similar tasks in a complex dependency graph, and you need to allow the build engine to determine the order in which they are run. This is the case, for example, with Visual Studio solutions that consist of a large number of projects.

But top level build scripts don’t work that way. At the topmost level, build scripts are inherently imperative in nature, so a declarative approach doesn’t make a whole lot of sense. Your typical build script consists of a sequence of diverse tasks which are run in an order that you define, or sometimes in a loop based on values in a collection. For example, it may look something like this:

  • Fetch the latest version of your code from source control
  • Delete any leftover files from the previous build
  • Write a file to disk containing version information
  • Fetch your project’s dependencies (NuGet packages, for example)
  • Compile your project
  • Bundle and minify your assets
  • Run your unit tests
  • Run your integration tests
  • Prepare installation packages
  • Deploy your build to the appropriate servers
  • Prepare reports (e.g. code coverage)

Writing your build script imperatively, with each of these steps as a function call in the top level of your code, allows you to see, at a glance, what your script is doing. On the other hand, writing them declaratively, with each task specifying its own dependencies, often requires you to jump around all over your build script just to get a handle on things.
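
To make that concrete, here is a rough sketch of what such a top-level script might look like in Python. The solution name, version number and tool invocations are purely illustrative assumptions, but the shape is the point: each step is an ordinary function, and the build order is simply the order of the calls at the bottom of the file.

#!/usr/bin/env python
"""Top-level build script: each step is a plain function, called in order."""

import shutil
import subprocess

def clean():
    # Delete any leftover files from the previous build.
    shutil.rmtree("build", ignore_errors=True)

def write_version_info(version):
    # Write a file to disk containing version information.
    with open("version.txt", "w") as f:
        f.write(version)

def restore_packages():
    # Fetch the project's dependencies.
    subprocess.check_call(["nuget", "restore", "MySolution.sln"])

def compile_solution():
    subprocess.check_call(["msbuild", "MySolution.sln", "/p:Configuration=Release"])

def run_unit_tests():
    subprocess.check_call(["nunit3-console", "build/MyProject.Tests.dll"])

if __name__ == "__main__":
    clean()
    write_version_info("1.2.3")
    restore_packages()
    compile_solution()
    run_unit_tests()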

One important thing that build scripts need is control flow structures — conditions, loops, iteration over arrays, parametrised subroutines, variables, and so on. You simply can’t represent these properly with a declarative language. Sure, you can define tasks to handle some of these requirements, such as regex-based find and replace, but that will never be as clear as a purely imperative approach.
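
For example, a parametrised, regex-based find and replace across a set of files, the kind of job that turns into copy-and-paste in a task-based DSL, is a few lines of ordinary code. This is only a sketch, and the file pattern and version number are made up, but the loop, the regular expression and the parameters are all plain language features:

import glob
import re

def stamp_version(pattern, version):
    # Rewrite the AssemblyVersion attribute in every matching file.
    for path in glob.glob(pattern, recursive=True):
        with open(path) as f:
            source = f.read()
        source = re.sub(r'AssemblyVersion\("[^"]*"\)',
                        'AssemblyVersion("%s")' % version, source)
        with open(path, "w") as f:
            f.write(source)

stamp_version("src/**/AssemblyInfo.cs", "1.2.3.0")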

I’ve never come across a definitive explanation of why build frameworks should all be based around the declarative, configuration-like approach of tasks with dependencies, other than a vague, hand-waving and unsubstantiated claim that “people prefer it that way.” Personally I think it’s more likely that people saw that this was how make, the granddaddy of build tools, was designed, assumed it was a Best Practice, and copied it without thinking.

2. Build scripts need to be maintained.

Build scripts don’t tend to change very often — perhaps once every three to six months or so. Consequently it’s tempting to view them as something that you write once and can forget about completely. However, they do change, so readability and maintainability are critical. A well written build script can make all the difference between a change taking half an hour and it taking half a sprint; between it working as intended and being riddled with bugs.

This means, of course, that XML-based build languages, such as MSBuild or NAnt, are a very bad idea. This is nothing to do with a lack of “cool” — it’s a lack of readability and maintainability, pure and simple. XML simply isn’t capable of expressing the kind of control flow structures that you need in a succinct, readable manner. MSBuild is particularly bad here. Its lack of support for looping, iteration or parametrised subroutines makes it difficult if not impossible to write anything more complex than the simplest of build scripts without resorting to painful amounts of copy and paste code. Since DRY is a vital discipline in keeping your code maintainable, anything that forces you to violate it as much as MSBuild does should be avoided with extreme prejudice.

To mitigate the problem, Ant, NAnt and MSBuild allow you to embed snippets of code in other languages, such as PowerShell. Besides the fact that the syntax to do so is so verbose and cumbersome that it’s scarcely worth it, this raises an obvious question: why not just use PowerShell end to end instead?

3. Build scripts need to be run from the command line.

It’s all too common to find build scripts that are very tightly integrated with the Continuous Integration server. This usually happens when you have vast swathes of configuration settings in TeamCity, TFS, Jenkins or what have you. This causes two problems: first, you have a lot of important and potentially breaking detail that isn’t checked into source control; second, it becomes very difficult if not impossible to run your build on your local machine, end to end, from the command line.

If you can’t run your build from the command line, debugging it will be painful. Every iteration of your edit-compile-test loop will require a separate check-in and a sit-on-your-hands wait for several minutes until it either completes or breaks. This is a very inefficient and wasteful way of doing things. It can also cause problems when you have to track down a regression with git bisect, because you’ll have a whole string of broken revisions to contend with.
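
The remedy is to make the script itself the single entry point, and to pass anything environment-specific in as command line options, so that the CI server runs exactly the same command that you run locally. Here is a minimal sketch of that idea; the option names and the placeholder steps are hypothetical, not a prescription:

import argparse

def build(configuration):
    print("building in %s mode" % configuration)   # placeholder step

def run_integration_tests():
    print("running integration tests")             # placeholder step

def deploy(target):
    print("deploying to %s" % target)              # placeholder step

def parse_args():
    # Everything the CI server would otherwise keep in its own settings
    # becomes an explicit, version-controlled command line option.
    parser = argparse.ArgumentParser(description="Build and deploy the project")
    parser.add_argument("--configuration", default="Debug", choices=["Debug", "Release"])
    parser.add_argument("--skip-integration-tests", action="store_true")
    parser.add_argument("--deploy-target", help="server to deploy to; omit to skip deployment")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    build(args.configuration)
    if not args.skip_integration_tests:
        run_integration_tests()
    if args.deploy_target:
        deploy(args.deploy_target)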

4. Build scripts have few other domain-specific requirements, if any.

Apart from this, there are only two other requirements that your build scripts have. Your build language needs to be interpreted rather than compiled (otherwise you’ll have a chicken-and-egg problem), and it needs to be able to run other programs: your compiler; NuGet to fetch your dependencies; your test runner; and so on. But that’s pretty much it. Just about any general purpose scripting language — Python, Ruby, PowerShell, bash, DOS batch files, heck even PHP if you’re that way inclined — will fit the bill.

What about the specific (N)Ant/MSBuild tasks that you need to call? Most of these can be implemented quite simply as calls to either the language’s standard library or a command line interface.
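
A few illustrative mappings, using made-up file and solution names: file manipulation tasks such as Delete, MakeDir and Copy become standard library calls, tasks that wrap an external tool become plain command lines, and even packaging the output needs nothing special:

import os
import shutil
import subprocess
import zipfile

# Delete, MakeDir and Copy become standard library calls.
shutil.rmtree("artifacts", ignore_errors=True)
os.makedirs("artifacts")
shutil.copy("readme.txt", "artifacts")

# Tasks that wrap an external tool become plain command lines.
subprocess.check_call(["msbuild", "MySolution.sln", "/p:Configuration=Release"])

# Packaging the output doesn't need a custom task either.
with zipfile.ZipFile("artifacts/package.zip", "w") as archive:
    archive.write("readme.txt")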

Some .NET developers don’t like this approach because they say that using, say, Python or PowerShell would mean having to learn a new language. Personally I find this a very strange argument, because if you’re using MSBuild, you’re doing that already anyway. Not only that, but the learning curve you’re climbing is actually steeper: the conceptual gap between, say, C# and Python is far smaller than the conceptual gap between C# and MSBuild. Besides, learning a scripting language is a skill that can be transferred to other problem domains if necessary, whereas MSBuild is a specialised, niche language that only ever gets used for build scripts for .NET projects.

Just because you are presented with something that describes itself as a build tool doesn’t mean to say you have to use it. Aim to choose tools and languages that allow you to write code that is easy to read, understand and maintain. You’ll be much more productive and much less stressed — and the people who have to maintain your code after you will thank you for it.

In response to criticisms of CSS pre-processors

It turns out that not everybody likes CSS pre-processors. For some people, it’s a philosophical point (a bit like purist photographers who still insist on shooting print film on all-manual Leica cameras), but other people are scared of introducing an extra layer of abstraction.

Some people argue that the features of CSS pre-processors such as variables aren’t necessary if you write your CSS correctly, and indeed, there are design patterns that aim to reduce repetition and magic constants in vanilla CSS. One such example is Object Oriented CSS (OOCSS).

OOCSS sets down two main principles:

  • Separate structure and skin
  • Separate container and content

I won’t discuss these in any detail here (you can read about them elsewhere), but I’ll just give an example. Whereas with a CSS pre-processor, you might write code such as this:

@small: 12px;

.sidebar {
  background: #ccc;
  font-size: @small;
}

.permalink {
  border-top: 1px solid #ccc;
  color: #333;
  font-size: @small;
}

<div class="sidebar"></div>
<div class="permalink"></div>

in OOCSS, you would use a separate class called small instead of the variable:

.small {
  font-size: 12px;
}

.sidebar {
  background: #ccc;
}

.permalink {
  border-top: 1px solid #ccc;
  color: #333;
}

<div class="sidebar small"></div>
<div class="permalink small"></div>

There you go. Vanilla CSS. DRY vanilla CSS. There’s no need for a pre-processor after all, is there?

Is there?

There is one big problem here. You haven’t eliminated repetition altogether. You’ve just moved it from your stylesheet into your HTML. By avoiding named constants and moving your font size declaration into a separate class, you now have to add a reference to the small class everywhere in your code where you are using the sidebar class or the permalink class. For some classes in a large site, this can potentially be in dozens if not hundreds of places. Congratulations, you’ve just robbed Peter to pay Paul — and found out that he’s asking for ten times as much.

Another, more serious problem occurs if you need to change your existing class names in order to retro-fit OOCSS into an existing site. For example, you may need to replace this:

<button class="button">Ordinary Button</button>
<button class="submit-button">Submit button</button>
<button class="small-button">Small button</button>
<button class="small-submit-button">Small submit button</button>

with this:

<button class="button">Ordinary Button</button>
<button class="button submit-button">Submit button</button>
<button class="button small">Small button</button>
<button class="button submit-button small">Small submit button</button>

You see the problem here? If you are referencing any of the old class names, such as small-button, in JavaScript anywhere (think: jQuery selectors), that code will break. On a complex web application, this can be a high-risk refactoring, requiring changes in potentially dozens of places.

“When a single change to a program results in a cascade of changes to dependent modules, that program exhibits the undesirable attributes that we have come to associate with ‘bad’ design. The program becomes fragile, rigid, unpredictable and unreusable.” — Robert C Martin

Now don’t get me wrong here. I don’t think OOCSS is necessarily a bad thing. It is well worth considering as a framework for new projects. But it can be pretty tricky to retro-fit it to an existing website.

Advanced features.

Some people say that features such as mathematical expressions are not necessary, because you can easily use comments instead to document your thought processes. For example, rather than using this:

.contents {
  width: @site-width - (@sidebar-width + @gutter-width);
}

you can do something like this:

.contents {
  width: 600px; /* site width less combined widths of sidebar & gutter */
}

This is a very strange argument indeed — the practice it promotes has been universally condemned as an antipattern in every other programming discipline from the 1960s right through to the present day. Even more strange is the fact that one of its proponents is none other than Bert Bos, the former chairman of the W3C CSS Working Group, who considers symbolic constants in CSS to be harmful. This is a bit like the head of the General Medical Council telling us that using disinfectant in hospitals is harmful.

Again, the problem here is when you need to make a change. Let’s say that you wish to change your site width from 800 to 900 pixels, for example. With the comment-based approach, you would have to recalculate potentially dozens of values throughout your stylesheets, and on top of that, you couldn’t use search and replace, because every derived value is different: you would have to update each one by hand, drastically increasing the risk of making a mistake. By contrast, an expression-based approach allows you to try out different widths safely by changing only one or two values at most.

Another problem comes when you are trying to add new features to your site. With a comment-based approach, you will need to hunt through your entire stylesheet to find the values you need to make the calculation. With an expression-based approach, on the other hand, you can just use IntelliSense, and don’t even need to know what the exact values are.

What about leaky abstractions?

The problem with the concept of leaky abstractions is that it can be used to argue against anything, since all abstractions are leaky to some degree or another. The important question is to what extent the value added by the abstraction outweighs the potential problems introduced by the leaks.

It seems that the biggest fear of CSS pre-processors is what effect they will have on the size of the generated stylesheets and on performance, or whether they’ll make debugging harder because what shows up in the browser isn’t what you edit. Oddly enough, people who express these fears are usually more than happy to use all sorts of technologies to pre-process their HTML, such as PHP, or ASP.NET, or even XSLT. In fact, CSS pre-processors often do a better job of things, since they pretty-print the resulting output whereas PHP and ASP.NET don’t. If you’ve ever tried to wade your way through generated HTML with nonsensical indentation and lines thousands of characters long, you’ll know exactly what I mean. One thing I would say about this, however, is that it’s better to run your pre-processor on the server rather than on the client, since that way you are able to view the generated CSS fairly easily.

In practice, I’ve found that the improvements elsewhere far more than make up for the friction introduced by the transformation step. Furthermore, in combination with a good organisational strategy (organise your class nesting to mirror the structure of your HTML documents), they can actually reduce your dependence on Firebug for sorting out CSS issues, since it’s easier to identify and eliminate conflicts between poorly specified class declarations in your source itself.

Personally, I think concerns about the size of your generated CSS are overblown. Yes, your generated stylesheets can grow quite a bit if you’re not careful, but the best way to tackle that is to use HTTP compression, and since most of the size increase you get from CSS pre-processors is in the form of low-entropy, repetitive data, it compresses very well. That doesn’t give you carte blanche to ignore file sizes altogether of course (HTTP compression isn’t available in all cases: buggy browsers and/or misconfigured proxy servers can stop it from happening about 5-10% of the time) but as long as you are aware of what causes the most bloat (mixins), and take a little bit of care, you’ll be fine. Maintainability versus performance is a trade-off that you have to make at every level of your code, not just this one, so it’s best to fine tune things here (and CSS pre-processors make fine-tuning of this nature fairly easy) only in response to known, measurable performance issues.
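
If you want to convince yourself of how well that kind of low-entropy output compresses, a couple of lines will do it. This is a deliberately crude illustration using made-up, near-identical rules rather than a real stylesheet, but it makes the point:

import gzip

# Five hundred near-identical rules, similar to what a heavily used mixin produces.
rule = ".button-%d { border-radius: 4px; padding: 4px 8px; color: #333; }\n"
css = "".join(rule % i for i in range(500))

compressed = gzip.compress(css.encode("utf-8"))
print("raw: %d bytes, gzipped: %d bytes" % (len(css), len(compressed)))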

“Premature optimization is the root of all evil” — Donald Knuth

In conclusion.

CSS as a language has some pretty severe limitations which make it very difficult to avoid bad programming practices such as magic numbers and DRY violations, and which make your stylesheets very fragile in the face of changing requirements. While there are design patterns that can alleviate the problem, they are not in and of themselves a complete solution, and even if you do use techniques such as OOCSS, a pre-processor will still be necessary if you want your stylesheets to be easy to maintain and easy to refactor, especially if you are working with legacy code.

Of course, there are potential gotchas with CSS pre-processors, but the same can be said of any technology, and in this case none of them are deal-breakers by any stretch of the imagination: they are far from insurmountable, and the benefits far more than make up for them. Having researched the alternatives, my position on the matter is unchanged: if you’re not using a pre-processor to keep your CSS under control, you’re doing it wrong.