james mckay dot net

Blah blah scribble scribble waffle waffle
10 Dec

Mercurial is doing better than you think

Here’s something interesting that I came across the other day. Apparently, Facebook has recently switched from Git to Mercurial for source control for its internal projects, and on top of that, they have hired several key members of the core Mercurial crew, including Matt Mackall, the Mercurial project lead.

Their main reason for this decision is performance: some of Facebook’s repositories are so large that they are bringing Git to its knees. They looked at the Git source code and the Mercurial source code and felt that the latter would be easier to fine tune to give them the performance that they needed.

This is an interesting development. With Git now on the verge of overtaking Subversion as the most widely used SCM in corporate settings, it’s tempting to write off Mercurial as something of a lost cause. Back in January, when I posted a suggestion on the Visual Studio UserVoice forums that Microsoft should support Mercurial as well as Git in Team Foundation Server, I thought it would be doing well to get three hundred votes and then plateau. But as it stands, it has now passed 2,500 votes and is still going strong, making it the tenth most popular open request on the forums and the second most popular request for TFS in particular, with nearly twice as many votes as the original DVCS request had. Git may have cornered the market for open source collaboration, but its unnecessarily steep learning curve and often pathological behaviour make it surprisingly unpopular with the majority of developers for whom public collaboration on open source projects is not a priority.

It’ll be interesting to see the outcome of this, but one thing is for certain: it’s not all over yet.

04 Nov

So I’ve built my own ALT.MVC engine

(ALT.MVC means any MVC-style framework for .NET that was not written by Microsoft.)

For the past few weeks, my commute-time hobby project has headed in a completely new direction, and I’ve been working on a completely new MVC framework for .NET. I was inspired to do this mainly by some of the other offerings in the ALT.MVC space, such as NancyFX, FubuMVC, Simple.Web and OpenRasta, and I was curious to see just how much effort is involved in getting a minimum viable framework up and running. About two weeks, it turns out, plus about the same again to add some spit and polish, though the end result is pretty spartan. It doesn’t (yet) have any kind of model binding, or validation, or CSRF protection, or authentication and authorisation, or even session management. But these will come in due course.

Different MVC frameworks have different emphases. ASP.NET MVC is generally designed to appeal to as broad a constituency as possible. As a result it’s a bit of a jack-of-all-trades: it does everything under the sun, most of it passably, some of it awkwardly, and none of it brilliantly. FubuMVC is designed for people who are steeped in Clean Code and design patterns, and who have a degree in Martin Fowler. NancyFX is built around the “super-duper-happy path”: its aim is to keep things as low-ceremony and low-friction as possible. WebForms was designed to ease the transition to web development for traditional VB and Delphi developers, who are used to event-driven drag-and-drop RAD programming. The result is a complete untestable mess, simply because the web doesn’t work that way.

The key emphasis behind my own MVC effort is modularity. Most MVC frameworks encourage a fairly monolithic design for your application. Sure, they may be extensible using IOC containers and so on, but you’ll have to jump through several hoops in order to implement feature toggles, or A/B tests, or plugin frameworks, and even then your solution will be partial at best. My MVC framework, on the other hand, is built entirely around the concept of pluggable, switchable modules. This allows for some unique features. For example, you can have your static files—your CSS, JavaScript and images—switchable along with their corresponding server-side components. Turn off the feature that serves up /my/secret/script.js and it will give you a 404 when you try to access it in your web browser.

Anyway, enough of the sales pitch, let’s take a look at how it all works. A module looks like this:

public class HomeModule : Dolstagis.Web.Module
{
    public HomeModule()
    {
        AddStaticFiles("~/content");
        AddViews("~/views");
        AddHandler<Index>();
    }
}

This tells the framework that all files within your content folder should be served up as static files and that your views all live in your views folder, and it registers a handler called Index to handle your home page. Handlers might look something like this:

[Dolstagis.Web.Route("/")]
public class Index : Dolstagis.Web.Handler
{
    public object Get()
    {
        var model = new { Message = "Hello world" };
        return View("~/views/hello.mustache", model);
    }
}

This class handles all GET requests for the home page, rendering the view at views/hello.mustache with an anonymous class as our model. I’m using Nustache as the default view engine at present; since Nustache templates are just Mustache templates, you can easily share them between client and server. Note that you can only have one route per handler: this is by design, as it sits much closer to the Single Responsibility Principle and cuts down the number of injected dependencies that end up going unused on most of your requests.
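To tie this back to the modularity point above: a feature can package its handler, its views and its static assets in a single module, so switching the module off removes the whole lot at once. Here’s a rough sketch along the lines of the code above; the SecretFeatureModule and SecretIndex names are made up purely for illustration, and the switching mechanism itself isn’t shown:

// A hypothetical, self-contained feature module. Disable it and both the
// handler and the static files it serves (including /my/secret/script.js)
// disappear, giving a 404 as described above.
public class SecretFeatureModule : Dolstagis.Web.Module
{
    public SecretFeatureModule()
    {
        AddStaticFiles("~/my/secret");   // the feature's client-side assets
        AddViews("~/views/secret");      // its views
        AddHandler<SecretIndex>();       // its server-side handler
    }
}

// One route, one handler, as per the design decision above.
[Dolstagis.Web.Route("/my/secret")]
public class SecretIndex : Dolstagis.Web.Handler
{
    public object Get()
    {
        return View("~/views/secret/index.mustache", new { Message = "Shh" });
    }
}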

As I say, it’s still very much early days yet, and it’s not production ready by a long shot, but there are several things I’ve got planned for it. If you fancy getting involved with something new in the ALT.MVC space, then I’m always open to suggestions and pull requests. You can find the source code in my Dolstagis.Web repository on GitHub, or drop me a line in the comments below or to @jammycakes on Twitter.

07 Oct

Your best practices are (probably) nothing of the sort

Now I have nothing against best practices per se.

But if you are going to tell me that something is a “best practice,” please first make sure that it really is a best practice. The software development world is plagued by so-called “best practices” that are nothing of the sort, that just introduce friction, ceremony and even risk without offering any benefits whatsoever in return. Some of them were once perfectly valid but have been superseded by developments in technology; some of them were based on widely held assumptions that have since been proven to be incorrect; some of them are based on total misunderstandings of something that someone famous once said; and some of them are just spurious.

I’ll give one example here, which came up in a discussion on Twitter the other day. It’s quite common for people to put their interfaces in one assembly, their business logic in a second, their repositories in a third, their models in a fourth, their front end in a fifth, and so on. This is all done in the name of “having a layered architecture.” The problem with this is that it makes dependency management harder (in fact in the pre-NuGet days it was an absolute nightmare) and forces you to jump around all over the place in your solution when making changes to related classes. It just adds friction, without even solving the problem it claims to solve: separate assemblies are neither necessary nor sufficient for a layered architecture. Oh, and it also violates the Common Closure Principle, which states that classes that change together must be packaged together.

Unfortunately, these so-called “best practices” proliferate because most developers lack the courage to question them, for fear of being viewed as incompetent or inexperienced. The people who promote garbage “best practices” tend to have Many Years Of Experience At Very Impressive Sounding Companies, and if you’re not that experienced (or confident) yourself, that can be quite intimidating. You don’t agree that we should put our interfaces, enums, business classes, repositories and presentation layers in separate assemblies? You obviously don’t understand a layered architecture!

Don’t let that intimidate you though. When somebody tells you that “you’re not following best practices,” it’s an indication that in their case, Many Years Of Experience At Very Impressive Sounding Companies actually means one year of experience repeated many times building run of the mill CRUD applications on outdated technologies at places that store users’ passwords in plain text. They are almost certainly not active on GitHub, or Twitter, or Stack Overflow, they are very unlikely to have hobby projects, and they probably never discuss programming with experts from outside their own team, let alone from other technology stacks.

In other words, The Emperor Has No Clothes.

But when something really is a best practice, the conversation will be quite different. For starters, its advocates will cite the practice concerned by name. They won’t tell you that “you’re not following best practices” but that “you’re violating the Single Responsibility Principle” or “you’re making test driven development harder” or “You Ain’t Gonna Need It” or something else specific. Another hallmark of a genuine best practice is that it will have tangible, enumerable benefits that are actually relevant to your situation. Here are some questions you can and should ask about it:

  1. Does it make it easier to get things right?
  2. Does it make it harder to get things wrong?
  3. Does it make it easier to back out when things go wrong?
  4. Does it make it easier to diagnose problems?
  5. Does it make it easier to get things done faster and with less effort without compromising points 1-4?
  6. Does it deliver the benefits that it claims to deliver? What evidence do you have that it does?
  7. Does it solve problems that you are actually likely to face, or is it one big YAGNI-fest?
  8. Are the problems that it solves still relevant, taking into account the current state of technology, market forces, security threats, and legislative or regulatory requirements?
  9. What alternative approaches have you considered, and how did they compare?
  10. Do its benefits actually outweigh its costs? In practice? In your situation?
  11. Have you understood it correctly, and are you sure you’re not confusing it with something else?

Any best practice that is worth following will stand up to scrutiny. And scrutinised it should be. Because blindly doing something just because somebody cries “best practices” is just cargo cult. And cargo cult programming is never a best practice.

09 Sep

My choice of Git GUI tools

Despite Git’s reputation for being ridiculously command-line centric, Git users are actually pretty spoilt for choice when it comes to graphical front ends. Here’s what I use.

Atlassian SourceTree has been around for a while on the Mac, but it has only recently seen a 1.0 release for Windows. It is free, though it does require user registration.

SourceTree screenshot

In terms of features, SourceTree does just about everything I want it to, and visually it’s the one I find easiest on the eye. One particularly nice feature of SourceTree is that it automatically fetches from all your remotes every ten minutes, so you’re quickly kept abreast of what your colleagues are working on. My main gripe with it is that “Push” to a Subversion repository doesn’t work, even though it does bring up a dialog box saying that it will, so I have to drop to a command prompt to type git svn dcommit. I’d also like to see an interface to git bisect, though no doubt that will come in due course.
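For the record, the command-line fallback is just the standard git-svn round trip, something like this:

# Bring in the latest changes from the Subversion repository and rebase onto them
git svn rebase

# Then push the local commits up to Subversion
git svn dcommit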

For integration with Windows Explorer and Visual Studio, I use Git Extensions:

Git Extensions screenshot

I don’t use it for much else, mainly because I find it visually a bit harsh, but it’s pretty fully featured. One thing it does do well is recovering lost commits, which is useful if you do an unintended git reset --hard.
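If you ever need to do the same thing from the command line, git reflog will show you where the lost commit went. Roughly (the branch name here is just an example):

# List recent positions of HEAD, including commits orphaned by the reset
git reflog

# Make the lost commit reachable again by pointing a new branch at it
git branch recovered-work <sha-from-reflog>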

If you find that Git Extensions doesn’t show up in Visual Studio 2012, you may need to explicitly tell Visual Studio where to find it. This thread on the Git Extensions Google group tells you what you need to know.

My merge tool of choice is Perforce Merge:

p4merge

It’s a really nice tool that’s easy on the eye, easy to use, and gives you a very clear view of what’s changed. If you’re still using TortoiseMerge or kdiff3, you’re missing out.

06 Aug

Namespaces in JavaScript

Namespaces are a common technique used in many programming languages to avoid naming collisions between different parts of your code. Unfortunately JavaScript doesn’t have built in namespace support, but it can be implemented fairly simply by creating a nested hierarchy of objects in the global namespace. A bit like this:

var My = My || {};
My.Cool = My.Cool || {};
My.Cool.Namespace = My.Cool.Namespace || {};

My.Cool.Namespace.showMessage = function(msg) {
    $('#messageBox').html(msg).show();
};

The problem with this is that it’s pretty verbose. It would be far better to have a function that declares the namespace for you and cuts out the boilerplate.

There are several different approaches to this knocking around. Usually, they look something like this:

var ns = namespace("My.Cool.Namespace");
ns.showMessage = function(msg) {
    $('#messageBox').html(msg).show();
};

My approach is a little bit different. The namespace function takes a second argument: a function which is called to initialise your namespace. You populate the namespace by assigning properties to this in the function body:

namespace("My.Namespace", function() {
    "use strict";

    this.showMessage = function(msg) {
        $('#message').html(msg).show();
    };
});

My.Namespace.showMessage('Hello world');

The advantage of this approach is that it implements the JavaScript module pattern, enclosing the entire contents of your file in a single function. Another thing I’ve implemented is the ability to create shortcuts to other namespaces. You can do this by passing them in an additional array to the namespace() function, which then forwards them on to the initialiser:

namespace("My.Namespace", [jQuery, Backbone], function($, bb) {
    "use strict";

    // bb is a shortcut for the Backbone namespace of
    // backbone.js; this example creates a Backbone model
    // called My.Namespace.Note

    this.Note = bb.Model.extend({
        initialize: function() { ... },

        author: function() { ... },

        coordinates: function() { ... },

        allowedToEdit: function(account) {
            return true;
        }
    });
});
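For the curious, the namespace function itself doesn’t need to be complicated. Here’s a rough sketch of how such a function could be implemented; this is just an illustration of the idea (it assumes a browser environment where the global object is window), not the actual code from the repository:

function namespace(path, imports, init) {
    // Allow the imports array to be omitted: namespace("My.Namespace", fn)
    if (typeof imports === "function") {
        init = imports;
        imports = [];
    }

    // Walk the dotted path, creating each level if it doesn't already exist
    var parts = path.split(".");
    var current = window;
    for (var i = 0; i < parts.length; i++) {
        current[parts[i]] = current[parts[i]] || {};
        current = current[parts[i]];
    }

    // Call the initialiser with the namespace object as "this", passing any
    // shortcut imports through as arguments
    if (typeof init === "function") {
        init.apply(current, imports);
    }
    return current;
}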

I’ve put my namespace function on GitHub, along with some more detailed instructions on how to use it.

13 Jun

The writing is on the wall for Subversion as Git takes over

This time last year, the Eclipse Community Survey noted that Git’s market share had risen from 12.8% to 27.6%, while Subversion had dropped from a seemingly unassailable 51.3% to 46.0%. This year’s survey results, published yesterday, note that this trend has continued: Git/GitHub has risen to 36.3% while Subversion has dropped to 37.8%. Subversion may still be in the top slot for now, but its lead is tiny and it is rapidly losing ground.

Other data sources, such as itjobswatch.co.uk, paint a similar picture. Look at how demand for Git skills has grown in recent years:

Git demand according to itjobswatch.co.uk

Job trackers such as this tend to give Subversion a bigger lead, because they focus on the rather more conservative corporate market and purposely ignore the world of hobbyists and open source developers. But even so, the trend is clear. Thirteen percent of UK programming jobs now ask for Git experience. Seventeen percent ask for Subversion, but the gap is narrowing rapidly and it is almost certain now that Git will overtake Subversion in corporate settings by the end of this year.

We are now fast approaching the point at which not using Git will increasingly hurt developers and companies alike. As a developer, a lack of Git experience is now starting to call into question your willingness and ability to keep your skills up to date. As a company, if you don’t use Git, you will find yourself competing for good developers against companies who do. Once you’ve got used to Git, Subversion is a painful experience, and fewer and fewer competent developers will be prepared to put up with it given the choice.

Then there are third party products and services. Already we are seeing an increasing number of these coming onto the market that only support Git, with GitHub and Heroku being two prominent examples. Those that do support other alternatives are increasingly treating them as an afterthought, with only limited features. Even if you’re a Microsoft-only shop, Git is getting harder to avoid. Entity Framework and ASP.NET MVC, along with several other Microsoft-run projects, are now hosted using Git. Team Foundation Server is introducing Git as a first-class source control option, complete with the tight end-to-end integration experience which TFS users value so much. Windows Azure makes Git one of its main avenues for deployment.

Not only has Subversion fallen behind, its development is painfully slow. Subversion 1.7, originally scheduled for the spring of 2010, was only released in October 2011, a year and a half late. Subversion 1.8 is also a year late and has had its scope cut back by half. Subversion 1.9, tentatively slated for this time next year, could well see even more significant delays, especially if the shift in demand forces its key players to divert resources to Git-based products and services. Subversion 1.10, the first to promise some genuinely useful new features (shelving and checkpointing), is scheduled, speculatively at best, for mid-2015. It is quite possible that it may never be released.

Subversion has no future. It is old, obsolete, decrepit technology and you need to be planning for its end of life. Git, on the other hand, is rapidly becoming the lingua franca of source control throughout the entire software industry. Love it or hate it, but if you don’t take it seriously, it won’t be long before the industry doesn’t take you seriously.

30 May

Dolstagis: my pet project

I thought it would be a good idea to say a bit about my pet project that I’ve been working on over the past few months on my daily commute.

Round about September or October, I ended up reading Patrick McKenzie’s blog, kalzumeus.com, where he was talking about how he managed to start up a business from his hobby programming project, Bingo Card Creator, on no more than five hours a week. Seeing as I spend twice that sitting on trains doing nothing more than playing Angry Birds, staring out of the window, reading xkcd, or sleeping, I thought that something similar would be a much more profitable use of my time.

Only one problem: I didn’t have any ideas. Or rather, I had too many of them. At any one time I will have half a dozen ideas for a web application floating around in my head, but none of them has yet risen above the others. Nevertheless, I figured that there’s a lot of groundwork to be getting on with before you get your big idea, so I typed hg init, cranked up Visual Studio, and got going. I’d also heard the story of how Flickr got started: how it was originally intended to be an online game but ended up as a photo sharing service almost by accident, and I figured that perhaps one particular idea would emerge out of the melting pot as I worked on it.

So far I still haven’t had my big idea, but I have ended up with the makings of a web application framework for ASP.NET MVC. Its intentions are similar to those of Django, the Python web application framework, in that it’s a “batteries included” framework which will eventually offer all sorts of building blocks for your own ASP.NET applications: dependency injection, unit and integration testing, NHibernate session management, asset bundling and minification, user authentication and authorisation, an admin section, comments, and eventually even integrating unit testing for JavaScript into your build process.

It’s also turned out to be a bit of a playground for me to experiment with new things, and to try out some of my ideas and hypotheses about patterns and practices to see if they’re any good. So far, I’ve tried (and rejected, for now at least) CoffeeScript, the Web API, and not using the Repository pattern.

Since I suck at coming up with cool names for things, I’ve called it “Dolstagis” for now, after an in-joke that was current among some of my colleagues and myself at work a few years back. I’ll no doubt write a bit more about it over the coming weeks and months, and the lessons that I’ve learned along the way, but in the meantime, if you want to see what it looks like, I’ve posted the code on GitHub.

28 Jan

Reboot

Over the past eight years or so, I’ve posted over two hundred entries here on my blog. Most of these are now out of date, many of them reflect approaches to software development that I no longer endorse, and some of them are outright embarrassing. In addition, some of them are pretty personal and I’m not that comfortable with having personal stuff posted on a public Internet site. I’ve decided, therefore, that it’s time for a reboot.

I’ve taken my blog offline on a couple of occasions in the past, but it’s never stayed offline for long, since some of my old posts are actually worth keeping and/or a good reference. So now I’m trying a different approach. I’ve unpublished all my blog posts to date, with a view to going back through them at some point in the next few months, if I get the time, and re-treading (or rewriting) some of the content that I think is worth revisiting. I may also write one or two new posts too, so watch this space.

However, I’m giving this a very low priority for now. I have a pet project that I’m trying to get off the ground, which is occupying most of my commuting time (when I’m not asleep on the train), and I have other things on my plate as well at other times. I’m also very conscious of one of the big risks of blogging: namely, being seen as something of a prima donna. In particular, if you are spending more time blogging about programming than working on your hobby programming projects, your priorities are totally wrong. For that reason, my more contentious and argumentative posts in particular are gone for good. There’s far too much of this kind of thing going on in the software development world and I don’t want to be a part of it.