james mckay dot net

Blah blah scribble scribble waffle waffle
28
Aug

Sorting out the confusion that is OWIN, Katana and IAppBuilder

I’ve been doing some more work on my MVC hobby project lately, and one thing I’ve been working on has been replacing the rather poorly thought out abstraction layer around the host process with OWIN.

If you’ve never come across OWIN before, it’s the new standard way of decoupling .net-based applications and frameworks from the servers that run them, a bit like WSGI for Python or Rack for Ruby. It means that you can host your web application not only in IIS but also in a console application, or a Windows application, or even in Apache under Mono on a Linux server. The first version of the standard was finalised about two years ago.

The OWIN specification is elegantly simple. You just have to provide a delegate of type Func<IDictionary<string, object>, Task> — or in other words, something that looks like this:

public Task SomeAppFunc(IDictionary<string, object> environment);

where the environment dictionary provides a standard set of keys containing things such as the request and response headers, body, and so on. This delegate is called the AppFunc. The values are all BCL types, so you don’t have to take dependencies on anything else. In fact, the OWIN specification explicitly says this:

OWIN is defined in terms of a delegate structure. There is no assembly called OWIN.dll or similar. Implementing either the host or application side the OWIN spec does not introduce a dependency to a project.

So putting all this together, a “Hello World” OWIN application would look something like this:

public Task HelloWorldAppFunc
    (IDictionary<string, object> environment)
{
    var responseHeaders = (IDictionary<string, string[]>)
        environment["owin.ResponseHeaders"];
    var responseBody = (Stream)
        environment["owin.ResponseBody"];

    responseHeaders["Content-Type"] = new[]
        { "text/plain" };

    // The spec requires the AppFunc to return a Task, so flush the
    // response and hand back a completed one.
    var writer = new StreamWriter(responseBody);
    writer.WriteLine("Hello world");
    writer.Flush();
    return Task.FromResult<object>(null);
}

That’s it. But now we need to find somewhere to host it — and here, we run up against a problem.

The problem is that most of the OWIN “hello world” tutorials that you see on the web simply don’t look like that. Take for instance the one you see on www.asp.net:

public void Configuration(IAppBuilder app)
{
    // New code:
    app.Run(context =>
    {
        context.Response.ContentType = "text/plain";
        return context.Response.WriteAsync("Hello, world.");
    });
}

Just a minute … what’s this IAppBuilder? And where in the OWIN specification are there classes with a Response property, or a ContentType property and a WriteAsync method?

What you are looking at is not OWIN itself, but a set of libraries created by Microsoft called Katana. These libraries provide, among other things, some strongly typed wrappers around the AppFunc defined in the OWIN specification, so in one sense they’re useful in reducing boilerplate code.

The problem here is that Katana is built on an obsolete pre-release draft of the OWIN specification. The IAppBuilder interface was originally described in initial drafts of the OWIN specification, but it has since been removed. IAppBuilder is defined in owin.dll (the Owin package on NuGet), but the community voted to sunset this back in May, and it’s now considered deprecated; new OWIN-related libraries should not use it. That’s why it’s so difficult to find any documentation on IAppBuilder itself: a Google search for IAppBuilder.Use merely leads to a couple of extension methods provided by Katana.

So…given our nice shiny AppFunc, how do we host it?

In theory, we should be able to just pass it to the host process. Some OWIN hosts, such as Nowin, let you do just that, by passing it into the ServerBuilder.SetOwinApp method. With Katana, it’s a little bit more complicated.
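For comparison, here’s roughly what that looks like with Nowin. I’m quoting its ServerBuilder API from memory, so treat the SetPort and Build calls as a sketch; SetOwinApp is the important bit:

using System;
using Nowin;

class Program
{
    static void Main()
    {
        // HelloWorldAppFunc is the AppFunc defined earlier.
        var server = ServerBuilder.New()
            .SetPort(8888)
            .SetOwinApp(HelloWorldAppFunc)
            .Build();

        using (server)
        {
            server.Start();
            Console.WriteLine("Listening on port 8888. Press Enter to quit.");
            Console.ReadLine();
        }
    }
}

Katana, on the other hand, needs a bit more ceremony.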

The IAppBuilder interface declares a method called Use, whose signature looks like this:

IAppBuilder Use(object middleware, params object[] args)
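For context, the rest of the interface (as it ships in owin.dll) is just as small. I’m reproducing it from memory here, so check the package itself if you need the authoritative version:

public interface IAppBuilder
{
    IDictionary<string, object> Properties { get; }

    object Build(Type returnType);
    IAppBuilder New();
    IAppBuilder Use(object middleware, params object[] args);
}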

Intuitively, you’d expect to be able to just pass your AppFunc into the Use method. Unfortunately, if you try this, it throws an exception. What you actually have to do is to pass a middleware delegate. OWIN middleware (and this isn’t documented in the spec) is a delegate which takes one AppFunc and returns another AppFunc:

using AppFunc = Func<IDictionary<string, object>, Task>;

// Shorthand for readability: alias directives can't actually refer to other
// aliases, so spell MiddlewareFunc out in full if you need it to compile.
using MiddlewareFunc = Func<AppFunc, AppFunc>;

Confused? So was I at first.

The AppFunc that was passed in to the MiddlewareFunc is simply the next step in the chain. So your AppFunc should do what it has to do, then either call or ignore the AppFunc which was passed in. For example, this middleware would just log the start and end of each invocation to the console:

app.Use(new Func<AppFunc, AppFunc>(next => async env => {
    Console.WriteLine("Starting request");
    await next(env);
    Console.WriteLine("Ending request");
}));

If you are writing a self-contained application rather than middleware, your AppFunc will be the last step in the pipeline, so you will want to ignore the “next” AppFunc. You would therefore do this:

app.Use(new Func<AppFunc, AppFunc>(ignored => HelloWorldAppFunc));
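Wired into a Katana self-host (via the Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener packages), the whole thing ends up looking something like this. The port number and class names are just illustrative:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Owin;

// Fully qualified, because alias directives can't see other using directives:
using AppFunc = System.Func<
    System.Collections.Generic.IDictionary<string, object>,
    System.Threading.Tasks.Task>;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Our AppFunc is the end of the pipeline, so ignore the "next" AppFunc.
        app.Use(new Func<AppFunc, AppFunc>(ignored => HelloWorldAppFunc));
    }

    private static Task HelloWorldAppFunc(IDictionary<string, object> environment)
    {
        var responseHeaders = (IDictionary<string, string[]>)
            environment["owin.ResponseHeaders"];
        var responseBody = (Stream)environment["owin.ResponseBody"];

        responseHeaders["Content-Type"] = new[] { "text/plain" };

        var writer = new StreamWriter(responseBody);
        writer.WriteLine("Hello world");
        writer.Flush();
        return Task.FromResult<object>(null);
    }
}

class Program
{
    static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8080/"))
        {
            Console.WriteLine("Listening on http://localhost:8080/. Press Enter to quit.");
            Console.ReadLine();
        }
    }
}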

There are other ways of registering OWIN apps or middleware with a Katana host, by passing a middleware type or instance with a specific signature, or by using one of Katana’s strongly-typed wrappers, but none of these are defined in the OWIN specification, so I won’t dwell on them here.

Fortunately, this is set to be clarified in ASP.NET vNext: there’s been a lot of feedback from the community that IAppBuilder shouldn’t be the only way of creating an OWIN pipeline, and that the Katana wrapper classes, OwinMiddleware, OwinRequest and OwinResponse, have been causing some confusion, so the means to host a raw OWIN application or middleware will become more transparent. In the meantime, I hope that this clears up some of the confusion.

21
Aug

Query Objects: a better approach than your BLL/repository

If you’ve been following what I’ve been saying here on my blog and on the ASP.NET forums over the past month or so, you’ll no doubt realise that I’m not a fan of the traditional layered architecture, with your presentation layer only allowed to talk to your business layer, your business layer only allowed to talk to your repository, only your repository allowed to talk to your ORM, and all of them in separate assemblies for no reason whatsoever other than That Is How You Are Supposed To Do It. It adds a lot of friction and ceremony, it restricts you in ways that are harmful, its only benefits are unnecessary and dubious, and every implementation of it that I’ve come across has been horrible.

Here’s a far better approach:

public class BlogController : Controller
{
    private IBlogContext _context;

    public BlogController(IBlogContext context)
    {
        _context = context;
    }

    public ActionResult ShowPosts(PostsQuery query)
    {
        query.PrefetchComments = false;
        var posts = query.GetPosts(_context);
        return View(posts);
    }
}

[Bind(Exclude="PrefetchComments")]
public class PostsQuery
{
    private const int DefaultPageSize = 10;

    public int? PageNumber { get; set; }
    public int? PageSize { get; set; }
    public bool Descending { get; set; }
    public bool PrefetchComments { get; set; }

    public IQueryable<Post> GetPosts(IBlogContext context)
    {
        // Declared as IQueryable<Post> so that the results of Include,
        // Skip and Take can be assigned back to the same variable.
        IQueryable<Post> posts = Descending
            ? context.Posts.OrderByDescending
                (post => post.PostDate)
            : context.Posts.OrderBy(post => post.PostDate);
        if (PrefetchComments) {
            posts = posts.Include("Comments");
        }
        if (PageNumber.HasValue && PageNumber > 1) {
            posts = posts.Skip
                ((PageNumber.Value - 1) * (PageSize ?? DefaultPageSize));
        }
        posts = posts.Take(PageSize ?? DefaultPageSize);
        return posts;
    }
}

A few points to note here.

First, you are injecting your Entity Framework DbContext subclass (the implementation of IBlogContext) directly into your controllers. Get over it: it’s not as harmful as you think it is. Your IOC container can (and should) manage its lifecycle.
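By way of illustration, here’s what that per-request lifecycle might look like with Ninject. The choice of container, and the BlogContext class name, are mine rather than part of the code above:

using Ninject;
using Ninject.Web.Common;

public static class IocConfig
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        // One DbContext per HTTP request, disposed when the request ends.
        kernel.Bind<IBlogContext>()
              .To<BlogContext>()
              .InRequestScope();

        return kernel;
    }
}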

Secondly, your query object follows the Open/Closed Principle: you can easily add new sorting and filtering options without having to modify either your controllers’ method signatures or the query object’s existing properties and methods. With a query method on your repository, on the other hand, adding new options would be a breaking change.

Thirdly, it is very easy to avoid SELECT N+1 problems on the one hand, while not fetching screeds of data that you don’t need on the other, as the PrefetchComments property illustrates.
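For instance, a hypothetical companion action that renders posts together with their comments just flips the flag, and the comments come back eagerly loaded in the same round trip:

public ActionResult ShowPostsWithComments(PostsQuery query)
{
    // Eager-load comments up front rather than triggering one query per post.
    query.PrefetchComments = true;
    var posts = query.GetPosts(_context);
    return View(posts);
}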

Fourthly, this approach is no less testable than your traditional BLL/BOL/DAL approach. By mocking your IBlogContext and IDbSet<T> interfaces, you can test your query object in isolation from your database. You would need to hit the database for more advanced Entity Framework features of course, but the same would be true with query methods on your repository.
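A test against the query object might look something like this. FakeBlogContext is a hypothetical hand-rolled fake whose Posts property is backed by an in-memory list, and I’m assuming NUnit; neither comes from the code above:

using System;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class PostsQueryTests
{
    [Test]
    public void Descending_query_returns_newest_post_first()
    {
        // FakeBlogContext: a hand-rolled, in-memory IBlogContext (hypothetical).
        var context = new FakeBlogContext();
        context.Posts.Add(new Post { PostDate = new DateTime(2014, 1, 1) });
        context.Posts.Add(new Post { PostDate = new DateTime(2014, 6, 1) });

        var query = new PostsQuery { Descending = true };
        var posts = query.GetPosts(context).ToList();

        Assert.AreEqual(new DateTime(2014, 6, 1), posts.First().PostDate);
    }
}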

Fifthly, note that your query object is automatically created and populated with the correct settings by ASP.NET MVC’s model binder: a request to /blog/showposts?PageNumber=2&PageSize=20&Descending=true arrives with those properties already set, while the [Bind(Exclude="PrefetchComments")] attribute stops the client from switching prefetching on from the query string.

All in all, a very simple, elegant and DRY approach.

14
Aug

If your tests aren’t hitting the database, you might as well not write tests at all

Out of all the so-called “best practices” that are nothing of the sort, this one comes right up at the top of my list. It’s the idea that hitting the database in your tests is somehow harmful.

I’m quite frankly amazed that this one gets as much traction as it does, because it’s actively dangerous. Some parts of your codebase require even more attention from your tests than others — in particular, parts which are:

  1. easy to get wrong
  2. tricky to get right
  3. not obvious when you’re getting it wrong
  4. difficult to verify manually
  5. high-impact if you do screw up.

Your data access layer, your database itself, and the interactions between them and the rest of your application fall squarely into all the above categories. There are a lot of moving parts in any persistence mechanism — foreign key constraints, which end of a many-to-many relationship you declare as the inverse, mappings, migrations, and so on, and it’s very easy to make a mistake on any of them. If you’ve ever had to wrestle with the myriad of obscure, surprising and gnarly error messages that you get with both NHibernate and Entity Framework, you’ll know exactly what I mean.

If you never test against a real database, but rely exclusively on mocking out your data access layer, you are leaving vast swathes of your most error-prone and business-critical functionality with no test coverage at all. You might as well not be testing anything.

Yes, tests that hit the database are slow. Yes, it’s off-putting to write slow tests. But tests that don’t hit the database don’t test things that need to be tested. Sometimes, there are no short cuts.
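To be concrete, this is the kind of test I have in mind: a straight round trip through a real, throwaway database. The BlogContext, Post and Comment classes here are stand-ins, and I’m assuming NUnit and an EF code-first setup with a TestDatabase connection string:

using System;
using System.Data.Entity;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class PostPersistenceTests
{
    [Test]
    public void Can_save_and_reload_a_post_with_its_comments()
    {
        using (var context = new BlogContext("name=TestDatabase"))
        {
            var post = new Post { Title = "Hello", PostDate = DateTime.UtcNow };
            post.Comments.Add(new Comment { Text = "First!" });
            context.Posts.Add(post);
            context.SaveChanges();
        }

        // Reload in a fresh context so we exercise the real mappings,
        // foreign keys and inverse ends, not EF's identity map.
        using (var context = new BlogContext("name=TestDatabase"))
        {
            var reloaded = context.Posts.Include("Comments").Single();
            Assert.AreEqual(1, reloaded.Comments.Count);
        }
    }
}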

(Incidentally, this is also why you shouldn’t waste time writing unit tests for your getters and setters or for anaemic business services: these are low-risk, low-impact aspects of your codebase that usually break other tests anyway if you do get them wrong. Testing your getters and setters isn’t unit testing, it’s unit testing theatre.)

“But that rule just applies to unit tests. Integration, functional and regression tests are different.”

I agree there, and I’m not contradicting that. But if you’re saying “don’t hit your database in your unit tests” and then trying to qualify it in this way, you’re just causing confusion.

Regardless of what you are trying to say, people will hear “don’t hit the database in your tests, period.” People scan what you write and pick out sound bites. They see the headline, and skip over the paragraph about integration tests and so on as if it were merely a footnote.

By all means tell people to test their business logic independently of the database if you like, but phrase it in a way that’s less likely to be misunderstood. If you’re leaving them with the impression that they shouldn’t be testing their database, their data access layer, and the interaction between them and the rest of their application, then even if that isn’t your intention, you’re doing them a serious disservice.

24
Jul

Interchangeable data access layers == stealing from your client

This is one of those so-called “best practices” that crops up a lot in the .net world:

You need to keep the different layers of your application loosely coupled with a clean separation of concerns so that you can swap out your data access layer for a different technology if you need to.

It all sounds right, doesn’t it? Separation of concerns, loose coupling…very clean, SOLID, and Uncle Bob-compliant, right?

Just. A. Minute.

The separation of concerns you are proposing is high-maintenance, high-friction, usually unnecessary, obstructive to important performance optimisations and other requirements, and, as this post by Oren Eini aka Ayende Rahien points out, usually doesn’t work anyway.

In what universe is it a best practice to allocate development time and resources, for which your client is paying, towards implementing a high-maintenance, high-friction, broken, unnecessary, non-functional requirement that they are not asking for, at the expense of business value that they are?

In the universe where I live, that is called “stealing from your client.”

Nobody is saying here that separation of concerns is bad per se. What is bad, however, is inappropriate separation of concerns — an attempt to decouple parts of your system that don’t lend themselves to being decoupled. Kent Beck has a pretty good guideline as to when separation of concerns is appropriate and when it isn’t: you should be dealing with two parts of your system which you can reason about independently.

You cannot reason about your business layer, your presentation layer, and your data access layer independently. User stories that require related changes right across all your layers are very, very common.

Every project that I’ve ever seen that has attempted this kind of abstraction has been riddled with severe SELECT n+1 problems that could not be resolved without breaking encapsulation.

(Nitpickers’ corner: I’m not talking about test mocks here. That’s different. It’s relatively easy to make your test mocks behave like Entity Framework. It’s orders of magnitude harder to make NHibernate or RavenDB behave like Entity Framework.)

If you can present a valid business case for making your persistence mechanism interchangeable, then it’s a different matter, of course. But in that case, you need to implement both (or all) of the different options right from the start, and bear in mind that the necessary separation of concerns almost certainly won’t cleanly follow the boundary between your business layer and your DAL. You should also warn your client of the extra costs involved, otherwise you won’t be delivering good value for money.

16
Jul

The Anaemic Business Layer

The three-layer architecture, with your presentation layer, your business layer and your data access layer, is a staple of traditional .net applications, being heavily promoted on sites such as MSDN, CodeProject and the ASP.NET forums. Its advantage is that it is a fairly canonical way of doing things, so (in theory at least) when you get a new developer on the team, they should have no trouble in finding where everything is.

Its disadvantage is that it tends to breed certain antipatterns that crop up over and over and over again. One such antipattern is what I call the Anaemic Business Layer.

The Anaemic Business Layer is a close cousin of the Anaemic Domain Model, and often appears hand in hand with it. It is characterised by business “logic” classes that don’t actually have any logic in them at all, but only shunt data between the domain model returned from your ORM and a set of identical model classes with identical method signatures in a different namespace. Sometimes it may wrap all the calls to your repository in catch-log-throw blocks, which is another antipattern in itself, but that’s a rant for another time.
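To make that concrete, here’s a contrived sketch of the kind of thing I mean. All the names here are invented for the example:

// A "business logic" class with no logic in it whatsoever.
public class ProductService : IProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public ProductBO GetProduct(int id)
    {
        try
        {
            // Shunt the ORM entity into an identically shaped "business object".
            var product = _repository.GetProduct(id);
            return new ProductBO
            {
                Id = product.Id,
                Name = product.Name,
                Price = product.Price
            };
        }
        catch (Exception ex)
        {
            // The catch-log-throw antipattern, thrown in for good measure.
            // (Logger stands for whatever logging facade the project uses.)
            Logger.Error(ex);
            throw;
        }
    }
}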

The problem with the Anaemic Business Layer is that it makes your code much more time-consuming and difficult to maintain, since you have to drill down through more classes just to figure out what is going on, and you have to edit more files to make a single change. This in turn increases risk, because it’s all too easy to overlook one of the places where you have to make a change. It also makes things restrictive, because you lose access to certain advanced features of your ORM such as lazy loading, query shaping, transaction management, cross-cutting concerns and concurrency control, which can only properly be handled in the business layer.

The Anaemic Business Layer is usually symptomatic of an over-strict and inflexible insistence on a “proper” layered architecture, where your UI is only allowed to talk to your business layer and your business layer is only allowed to talk to your data access layer. You could make an argument for the need for encapsulation — so that you can easily change the implementation of the methods in the business layer if need be — but that’s only really important if you’re producing an API for public consumption by the rest of the world. Your app is not the rest of the world, and besides, those specific changes tend not to happen (especially for basic CRUD operations), so I’d be inclined to call YAGNI on that one.

The other reason why you might have an Anaemic Business Layer is that you’ve got too much going on in your controllers or your data access layer. You shouldn’t have any business logic in either, as that hinders testability, especially if you’re of the school of thought that says your unit tests shouldn’t hit the database. But if that’s not the case, then it’s time to stop being so pedantic. An Anaemic Business Layer serves no purpose other than to get in the way and slow you down. So ditch your unhelpful faux-“best practices,” bypass it altogether, and go straight from your UI to your repository.

10
Jul

On dark matter developers and the role of GitHub in hiring

The term “dark matter developer” was coined by Scott Hanselman a couple of years ago, when he wrote this:

My coworker Damian Edwards and I hypothesize that there is another kind of developer than the ones we meet all the time. We call them Dark Matter Developers. They don’t read a lot of blogs, they never write blogs, they don’t go to user groups, they don’t tweet or facebook, and you don’t often see them at large conferences. Where are these dark matter developers online?

The problem with Scott’s post is that he doesn’t give a very clear definition of dark matter developers. Certainly, I get the impression that a lot of people confuse dark matter developers with 501 developers, or low-end developers, or technological conservatives. Take a look at this Hacker News thread for instance—some people seemed to be categorising themselves as dark matter developers even though they were actively contributing to open source projects.

Here’s my suggestion for a more precise definition of a dark matter developer:

A dark matter developer is someone who has not made any evidence publicly available that they are able to do what their CV and LinkedIn profile claim that they can do.

This is pretty much the point that Troy Hunt makes in his blog post, “The ghost who codes: how anonymity is killing your programming career.” To be sure, he does throw in a curve ball where he confuses passion and competence, and I did take him to task on that one at the time, but that aside, the whole point he was making was actually a valid one. What evidence have you made publicly available that you really do have the skills listed on your CV or your LinkedIn profile?

Sadly, for well over 90% of developers out there in the market for a job, the answer is: none whatsoever.

I’ve heard plenty of reasons why this might be the case, but I’ve yet to hear a good one. For example, some people claim that requiring a GitHub account discriminates against busy people and people with families. That seems like just making excuses to me. You don’t have to spend five hours a day at it, or even five hours a week — and a recruiter expecting something of that order probably would be discriminating unfairly. But if you can’t manage to rustle up five hours in a year to put together one or two small but well-written projects to showcase when you’re looking for a job, do you really have your priorities right?


Cueball: There’s a reason for everything.
Megan: Yeah, but it’s not always a good reason.
—”Time”, xkcd

This certainly wouldn’t be accepted in certain other lines of work. You wouldn’t hire a photographer or an architect who didn’t have a portfolio, for example, nor would you take an academic seriously without the all-important track record of peer-reviewed publications in scholarly journals. With that in mind, it seems a bit odd to me that we shouldn’t have something similar in the very industry that practically invented the online portfolio. Or that having something similar, we fail so spectacularly at actually making use of it.

The gold standard here is, of course, an active GitHub account. Some people have objected to the concept of GitHub as your CV in recent months for various reasons, but none of them have come up with any better suggestions, and the fact remains that even just one or two public GitHub or Bitbucket pull requests or similar shows that your code has been reviewed and endorsed by other developers. But even if you haven’t had any pull requests, any code up there is better than nothing. Even simple snippets will do—GitHub lets you post gists on https://gist.github.com/ just by copying and pasting from your IDE to your browser. In the absence of code itself, informed discussions about programming still carry some weight. A blog, or some answers on Stack Overflow, are both ways of supporting your credentials.

At the end of the day, being a dark matter developer probably won’t stop you getting a job, and similarly an online presence won’t guarantee you one. It simply isn’t feasible at this stage to systematically reject everyone who isn’t on GitHub, as some people advocate — especially in the .net ecosystem, which is still frustratingly conservative and traditional. But having some code publicly available for hiring managers to review could certainly make the difference between your CV being noticed and it being lost in a pile along with thirty other CVs that all look exactly the same as each other. Besides, things are changing, and I sometimes wonder — five years or ten years from now, might dark matter developers end up finding themselves unemployable?

10
Dec

Mercurial is doing better than you think

Here’s something interesting that I came across the other day. Apparently, Facebook has recently switched from Git to Mercurial for source control for its internal projects, and on top of that, they have hired several key members of the core Mercurial crew, including Matt Mackall, the Mercurial project lead.

Their main reason for this decision is performance: some of Facebook’s repositories are so large that they are bringing Git to its knees. They looked at the Git source code and the Mercurial source code and felt that the latter would be easier to fine tune to give them the performance that they needed.

This is an interesting development. With Git now on the verge of overtaking Subversion as the most widely used SCM in corporate settings, it’s tempting to write off Mercurial as something of a lost cause. Back in January, when I posted a suggestion on the Visual Studio UserVoice forums that Microsoft should support Mercurial as well as Git in Team Foundation Server, I thought it would be doing well to get three hundred votes and then plateau. But as it stands, it’s now passed 2,500 votes and still going strong, making it the tenth most popular open request on the forums and the second most popular request for TFS in particular, with nearly twice as many votes as the original DVCS request had. Git may have cornered the market for open source collaboration, but its unnecessarily steep learning curve and often pathological behaviour make it surprisingly unpopular with the majority of developers for whom public collaboration on open source projects is not a priority.

It’ll be interesting to see the outcome of this, but one thing is for certain: it’s not all over yet.

04
Nov

So I’ve built my own ALT.MVC engine

(ALT.MVC means any MVC-style framework for .NET that was not written by Microsoft.)

For the past few weeks, my commute-time hobby project has headed in a completely new direction, and I’ve been working on a completely new MVC framework for .NET. I was inspired to do this mainly by some of the other offerings in the ALT.MVC space, such as NancyFX, FubuMVC, Simple.Web and OpenRasta, and I was curious to see just how much effort is involved in getting a minimum viable framework up and running. About two weeks, it turns out, plus about the same again to add some spit and polish, though the end result is pretty spartan. It doesn’t (yet) have any kind of model binding, or validation, or CSRF protection, or authentication and authorisation, or even session management. But these will come in due course.

Different MVC frameworks have different emphases. ASP.NET MVC is generally designed to appeal to as broad a constituency as possible. As a result it’s a bit of a jack-of-all-trades: it does everything under the sun, most of it passably, some of it awkwardly, and none of it brilliantly. FubuMVC is designed for people who are steeped in Clean Code and design patterns, and who have a degree in Martin Fowler. NancyFX is built around the “super-duper-happy path”: its aim is to keep things as low-ceremony and low-friction as possible. WebForms was designed to ease the transition to web development for traditional VB and Delphi developers, who are used to event-driven drag-and-drop RAD programming. The result is a complete untestable mess, simply because the web doesn’t work that way.

The key emphasis behind my own MVC effort is modularity. Most MVC frameworks encourage a fairly monolithic design to your application. Sure, they may be extensible using IOC containers and so on, but you’ll have to jump through several hoops in order to implement feature toggles, or A/B tests, or plugin frameworks, and even then your solution will be partial at best. My MVC framework, on the other hand, is built entirely around the concept of pluggable, switchable modules. This allows for some unique features. For example, you can have your static files—your CSS, JavaScript and images—switchable along with their corresponding server-side components. Turn off the feature that serves up /my/secret/script.js and it will give you a 404 when you try and access it in your web browser.

Anyway, enough of the sales pitch, let’s take a look at how it all works. A module looks like this:

public class HomeModule : Dolstagis.Web.Module
{
    public HomeModule()
    {
        AddStaticFiles("~/content");
        AddViews("~/views");
        AddHandler<Index>();
    }
}

This tells the framework to serve up everything in your content folder as static files, to look for your views in your views folder, and to register a handler called Index for your home page. Handlers might look something like this:

[Dolstagis.Web.Route("/")]
public class Index : Dolstagis.Web.Handler
{
    public object Get()
    {
        var model = new { Message = "Hello world" };
        return View("~/views/hello.mustache", model);
    }
}

This class handles all GET requests for the home page with the view at views/hello.mustache using an anonymous class for our model. I’m using Nustache as the default view engine at present; since these are the same as Mustache views, you can easily use them on both client and server. Note that you can only have one route per handler: this is by design as it is much closer to the Single Responsibility Principle, and reduces the number of dependencies that you have to inject into your controllers that end up not even being used on most of your requests.
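So, for example, adding an About page means adding a second handler rather than a second action method on an existing one. The route and view names here are made up for the sake of the example:

[Dolstagis.Web.Route("/about")]
public class About : Dolstagis.Web.Handler
{
    public object Get()
    {
        var model = new { Message = "About this site" };
        return View("~/views/about.mustache", model);
    }
}

You then register it in HomeModule with a second AddHandler<About>() call, alongside the existing AddHandler<Index>().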

As I say, it’s still very much early days yet, and it’s not production ready by a long shot, but there are several things I’ve got planned for it. If you fancy getting involved with something new in the ALT.MVC space, then I’m always open to suggestions and pull requests. You can find the source code in my Dolstagis.Web repository on GitHub, or drop me a line in the comments below or to @jammycakes on Twitter.

07
Oct

Your best practices are (probably) nothing of the sort

Now I have nothing against best practices per se.

But if you are going to tell me that something is a “best practice,” please first make sure that it really is a best practice. The software development world is plagued by so-called “best practices” that are nothing of the sort, that just introduce friction, ceremony and even risk without offering any benefits whatsoever in return. Some of them were once perfectly valid but have been superseded by developments in technology; some of them were based on widely held assumptions that have since been proven to be incorrect; some of them are based on total misunderstandings of something that someone famous once said; and some of them are just spurious.

I’ll give one example here, which came up in a discussion on Twitter the other day. It’s quite common for people to put their interfaces in one assembly, their business logic in a second, their repositories in a third, their models in a fourth, their front end in a fifth, and so on. This is all done in the name of “having a layered architecture.” The problem with this is that it makes dependency management harder (in fact in the pre-NuGet days it was an absolute nightmare) and forces you to jump around all over the place in your solution when making changes to related classes. It just adds friction, without even solving the problem it claims to solve: separate assemblies are neither necessary nor sufficient for a layered architecture. Oh, and it also violates the Common Closure Principle, which states that classes that change together must be packaged together.

Unfortunately, these so-called “best practices” proliferate because most developers lack the courage to question them, for fear of being viewed as incompetent or inexperienced by those with the authority to hire, fire or promote them. The people who promote garbage “best practices” tend to have Many Years Of Experience At Very Impressive Sounding Companies, and if you’re not that experienced (or confident) yourself, that can be quite intimidating. You don’t agree that we should put our interfaces, enums, business classes, repositories and presentation layers in separate assemblies? You obviously don’t understand a layered architecture!

Don’t let that intimidate you though. When somebody tells you that “you’re not following best practices,” it’s an indication that in their case, Many Years Of Experience At Very Impressive Sounding Companies actually means one year of experience repeated many times building run of the mill CRUD applications on outdated technologies at places that store users’ passwords in plain text. They are almost certainly not active on GitHub, or Twitter, or Stack Overflow, they are very unlikely to have hobby projects, and they probably never discuss programming with experts from outside their own team, let alone from other technology stacks.

In other words, The Emperor Has No Clothes.

But when something really is a best practice, it’ll be quite different. For starters, they will cite the practice concerned by name. They won’t tell you that “you’re not following best practices” but that “you’re violating the Single Responsibility Principle” or “you’re making test driven development harder” or “You Ain’t Gonna Need It” or something else specific. Another hallmark of a genuine best practice is that it will have tangible, enumerable benefits that are actually relevant to your situation. Here are some questions you can and should ask about it:

  1. Does it make it easier to get things right?
  2. Does it make it harder to get things wrong?
  3. Does it make it easier to back out when things go wrong?
  4. Does it make it easier to diagnose problems?
  5. Does it make it easier to get things done faster and with less effort without compromising points 1-4?
  6. Does it deliver the benefits that it claims to deliver? What evidence do you have that it does?
  7. Does it solve problems that you are actually likely to face, or is it one big YAGNI-fest?
  8. Are the problems that it solves still relevant, taking into account the current state of technology, market forces, security threats, and legislative or regulatory requirements?
  9. What alternative approaches have you considered, and how did they compare? It’s nonsensical to talk about “best practices” when you have nothing to compare them against, because the word “best” is meaningless in a sample size of one.
  10. Do its benefits actually outweigh its costs? In practice? In your situation?
  11. Have you understood it correctly, and are you sure you’re not confusing it with something else?

Any best practice that is worth following will stand up to scrutiny. And scrutinised it should be. Because blindly doing something just because somebody cries “best practices” is just cargo cult. And cargo cult programming is never a best practice.

09
Sep

My choice of Git GUI tools

Despite Git’s reputation for being ridiculously command-line centric, Git users are actually pretty spoilt for choice when it comes to graphical front ends. Here’s what I use.

Atlassian SourceTree has been around for a while on the Mac, but it has only recently seen a 1.0 release for Windows. It is free, though it does require user registration.

SourceTree screenshot

In terms of features, SourceTree does just about everything I want it to, and visually it’s the one I find easiest on the eye. One particularly nice feature of SourceTree is that it automatically fetches from all your remotes every ten minutes, so you’re quickly kept abreast of what your colleagues are working on. My main gripe with it is that “Push” to a Subversion repository doesn’t work, even though it does bring up a dialog box saying that it will, so I have to drop to a command prompt to type git svn dcommit. I’d also like to see an interface to git bisect, though no doubt that will come in due course.

For integration with Windows Explorer and Visual Studio, I use Git Extensions:

Git Extensions screenshot

I don’t use it for that much else, mainly because I find it visually a bit harsh, but it’s pretty fully featured. One nice feature is the ability to recover lost commits — useful if you do an unintended git reset --hard.

If you find that Git Extensions doesn’t show up in Visual Studio 2012, you may need to explicitly tell Visual Studio where to find it. This thread on the Git Extensions Google group tells you what you need to know.

My merge tool of choice is Perforce Merge:

P4Merge screenshot

It’s a really nice tool that’s easy on the eye, easy to use, and gives you a very clear view of what’s changed. If you’re still using TortoiseMerge or kdiff3, you’re missing out.