james mckay dot net
because there are few things that are less logical than business logic

First impressions of JetBrains Rider

Up until recently, if you wanted to develop in .NET, your options for which IDE to use were pretty limited. Your choice was basically Visual Studio or … er, Visual Studio. Sure, there are one or two open source alternatives such as SharpDevelop, or you could use OmniSharp with a text editor, but these are pretty basic by comparison, and they tend not to see much use by anyone other than hobbyists.

Now there’s nothing wrong with Visual Studio per se. It’s a great IDE, with a ton of cool features; it does the job, and it does it well. But having just one high quality IDE to choose from contributed massively to the monocultural nature of .NET, with many teams insisting on being spoon-fed by Microsoft. Not surprisingly, many leading .NET developers have been clamouring for a decent, professional quality alternative for years.

And what better company to deliver on that demand than JetBrains? As authors of not only the phenomenally popular Resharper but also IDEs for other platforms including IntelliJ IDEA, PyCharm, RubyMine and WebStorm, they were already most of the way there as it was. The absence of a fully-fledged .NET IDE to complete their line-up was puzzling, to say the least.

Well, about a year ago, they finally delivered. And in the past couple of weeks or so I’ve been trying out their offering: Rider.

The first impression I get of Rider is that it seems a lot more stable and less resource intensive than the combination of Visual Studio and Resharper. Although it has a different look and feel from Visual Studio, it brings the full power of almost all of Resharper’s toolchain into a standalone editor that works, and works well. It comes in versions for Windows, Linux and OS X, giving you true cross-platform development. If you’ve ever wanted to do .NET development on Linux, now you have a way to do so.

Rider has some particularly nice touches. One thing I like is its built-in file comparison tool. As well as comparing two files against each other, or a locally checked out file against a version in source control, and letting you edit the differences, it gives you handy buttons that copy chunks from one side to the other with a single mouse click. And it gets even better than that — thanks to its tight integration with the rest of the IDE, you get full code completion, and even access to refactoring tools such as renaming methods or organising usings, from within the diff window. A feature like this really comes into its own when dealing with copy-and-paste code.

Rider’s diff/merge window, complete with code completion tools

Having said that, it does have its quirks and gotchas that Visual Studio users need to be aware of. Being based on the same core as other JetBrains IDEs, it follows their workflows and mental models rather than Visual Studio’s. So, for example, clicking “Run” on the toolbar doesn’t attach the debugger; you have to click the “Debug” button next to it to do that. And unlike Visual Studio, it doesn’t warn you when you edit your source code while the debugger is attached, nor does it lock the files down into read-only mode. This can lead to some initially puzzling situations when you try stepping through some code only to find that it has lost track of all the local variables. But the differences aren’t extensive, and if you’ve used other JetBrains IDEs before, or even if you’ve just used something else as well as Visual Studio, it doesn’t take long to get up to speed with it. To make the transition easier, Rider allows you to use Visual Studio key bindings instead of the Resharper-based or IntelliJ-like options.

Although Rider will handle most Visual Studio solutions just fine, there are a few corner cases that it struggles with. It didn’t work well with one of our products at work that includes a number of WCF services, and a colleague who also tried it out six months ago said he ran into problems with some older WebForms-based code. Its Docker support is also less mature than Visual Studio’s. But it’s improving all the time, and no doubt these problems will be resolved sooner or later.

Is it worth switching to Rider? Certainly some people will benefit from it more than others. I think the people most likely to get value out of Rider are polyglot programmers who have a subscription to the entire suite of JetBrains desktop tools, and who will benefit greatly from having a common set of IDEs across multiple languages. Small businesses with more than five developers (and which thus exceed the licensing limits for Visual Studio Community) will also benefit, because Rider is considerably cheaper than a subscription to Visual Studio Professional. And Linux users now have an option for a high-end, professional quality IDE that targets the .NET ecosystem. But .NET traditionalists probably won’t touch it with a barge pole, and some legacy projects may experience a certain amount of friction.

But it’s well worth considering nonetheless. And whether you adopt it or not, Rider brings some much needed diversity to the landscape of high-end .NET IDEs. In so doing, it goes a long way towards breaking down the suffocating monoculture in many parts of the .NET ecosystem that insists on being spoon-fed by Microsoft. And that can only be a good thing.

It’s not just an opinion, it’s scar tissue

Software developers such as myself often have strong opinions about how code should be written. While some people may be tempted to dismiss these as “just an opinion,” the truth of the matter is that more often than not, these strong opinions are forged in the fires of Things Going Wrong And Having To Clear Up Afterwards.

Take exception handling for example. Bad exception handling practices are one of my big bugbears in code. Whether it’s Pokémon exception handling, or advocating return codes instead of exceptions, or just incoherent or unclear guidelines about how to use them, bad error handling really, really gets up my nose.
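For anyone unfamiliar with the term, Pokémon exception handling is the practice of catching every exception indiscriminately — gotta catch ’em all — and silently swallowing the lot. A minimal sketch of the anti-pattern, with hypothetical method names, looks something like this:

try
{
    var bills = LoadBillsFromDatabase();
    DisplayBills(bills);
}
catch (Exception)
{
    // Swallows everything: missing DLLs, null references, database
    // failures. The page renders empty and nobody is any the wiser.
}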

The project that you have to thank for that is called Bills Knowledge Base.

Bills Knowledge Base, or BKB as it was affectionately known, was an internal web application in Parliament used to keep track of the progress of legislation. When I was brought onto the project in early 2009, it had all of a sudden stopped displaying any data. And I was asked to fix it. NOW.

It quickly became clear why. Someone had just deployed a new version and had missed out an important DLL. The reason it wasn’t showing any data, rather than crashing out with a stack trace or an error page, was that it was riddled with Pokémon exception handling. All over the place. Put there by a code generation tool whose templates had been thrown away.

Having deployed the missing DLL, I then turned my attention to the database.

It probably won’t surprise you when I tell you that it was a complete mess. Foreign key constraints were missing, leaving orphaned rows everywhere. Dates were stored in text fields in a whole array of mutually incompatible formats. Fields that were supposed to be required were blank. Enumeration fields contained unrecognisable mystery values. It was a miracle that the system actually ran at all, given the state it was in.

I did the only thing that one can do in such a situation. I rolled up my sleeves and set to work cleaning up the data.

It took me a month. One whole month.

I eventually managed to rip out the Pokémon exception handling, harden the system, and make it behave properly. That took even longer.
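I won’t reproduce the real code here, but the general direction of the fix is easy to sketch: catch only the failures you anticipate, log them, and let everything else propagate somewhere it will be noticed. A rough sketch of the idea, with hypothetical names throughout:

try
{
    var bills = LoadBillsFromDatabase();
    DisplayBills(bills);
}
catch (SqlException ex)
{
    // Handle only the failures we actually anticipate, log them,
    // and let anything unexpected bubble up to the global error
    // page where it can be seen and fixed.
    log.Error("Failed to load bills", ex);
    ShowErrorMessage("The bills list is temporarily unavailable.");
}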

It’s now more than five years since I last worked on BKB. When I handed it over, it worked properly, it was robust, and the data had long since been licked into shape. I don’t know what development has been done on it since then, but it was still faithfully doing its job when I left the place earlier this year. So if you ever feel inclined to question what I have to say about exceptions, just head over to https://services.parliament.uk/bills/. Getting that little corner of the web to the place where it is today left me with some scar tissue. And it’s that scar tissue that makes me twitch whenever I see bad error handling code.

An update on Lambda Tools

A little under a year ago, I started work on a new open source project to manage deployment of serverless code to AWS Lambda. This grew out of a task I’d started at work, where we had a number of Lambda functions managing various features of our infrastructure. At the time, they were being managed rather chaotically through Terraform, and I wanted to get a Continuous Delivery pipeline set up for them.

As I have since moved on to a new job, I thought I should probably say a word or two about it.

Use Serverless instead.

I was introduced to the Serverless framework by a colleague a few months before I left my last job, and I was immediately impressed. It does everything I’d envisaged for Lambda Tools, plus a whole lot more, and it is actively developed by a full-time team with contributions from the open source community. Besides AWS, it also supports Azure, Google Cloud Platform, and a number of other providers. The fact that Serverless is a thing saved me masses and masses of work on a project that I was struggling to fit in round everything else.

I’m particularly impressed by the way that Serverless works. Rather than manipulating AWS resources one by one, as Terraform does, it works by generating CloudFormation templates. This is massively more robust than configuring each resource independently of the others. Since CloudFormation is built into AWS itself, and everything it does is transactional, you’re a lot less likely to end up with things getting out of sync with each other when making changes.

When I left the Parliamentary Digital Service, the WebOps team was still using Lambda Tools for most of their existing code, though I had started the transition to Serverless with a Continuous Delivery pipeline for one particular project. However I don’t know what their plans are for it in the long run.

As for myself, I don’t have any plans to develop Lambda Tools any further. We aren’t using a serverless platform at my present job and I don’t anticipate us doing so in the near future either. Even if we did, the fact that there is a mature and robust alternative means that I would be using Serverless rather than trying to carry on reinventing the wheel.

A note on performance reviews

In many organisations, you are required to complete some form of annual performance review process in which you agree some objectives with your line manager to be completed over the following twelve months. In the Parliamentary Digital Service, this was called the Individual Performance Review, or IPR.

For a while now, I’ve wanted to release an open source tool or library and build an online community around it. I’ve thrown a few things against the wall over the years, but nothing has ever stuck. The Powers That Be thought that doing something like that would be good for recruitment, so I put it down as one of my IPR objectives for the 2017-2018 reporting year. Lambda Tools was the result.

On the face of it, it sounds like a good idea. You have a personal objective that is closely aligned with the objectives of your employer. Why not combine the two and pick up some brownie points for doing something that you’re passionate about anyway?

Unfortunately, it didn’t work out that way.

It was always viewed as a low priority by the rest of the team, who gave me little or no encouragement to keep working on it, and who weren’t well placed to pitch in and help anyway because they were Ops engineers rather than developers. In theory, we were supposed to have “10% time” to work on projects such as this, but while many other teams made full use of their 10% time, on my team it simply didn’t happen. As a result, I ended up doing most of my work on it on the train and in the evenings, just to have something to put down on my IPR form. It ended up feeling like a lead weight round my shoulders, and to then discover that something already existed that did everything I wanted it to do and more left me feeling thoroughly discouraged. I’m sure you would feel discouraged too if you’d discovered you’d spent a whole lot of your own time reinventing the wheel just so that you could tick a box on a form.

If there’s one lesson I’ve learned, it is this: if you have to set performance objectives at work, stick to what you can deliver in your 90% time. Annual performance review processes are nothing more nor less than bureaucratic enterprisey box-checking exercises that simply do not deliver the benefits that they claim to offer. Their feedback loops are far too slow. They suck the life out of everything they touch, and if you let them get their grubby paws on your 10% time or your pet projects, they will suck the life out of that too. Keep the beast locked up in its cage. Don’t let it rob you of your passion.

Some thoughts on DevOps

It’s now six weeks since I started at my new job, and I’m really enjoying it. Returning to .NET has felt like a homecoming in many ways. Even though I’ve been quite critical of some of the things that go on in the Microsoft ecosystem at times, it’s what has paid the bills for most of the past sixteen years, it’s a platform that I enjoy working with, and I’d built up quite a lot of experience and expertise in it in that time.

My two year hiatus from .NET was spent mostly in the world of DevOps and cloud computing with AWS. While I gained some valuable experience, I never really settled down in it. In particular, I was unhappy about being pigeonholed as a “WebOps engineer” on what our delivery manager insisted was “an Ops team.” I’m a developer, not an Ops guy, and besides, that kind of thinking completely flies in the face of what DevOps is supposed to be all about.

If you’re calling your team an “Ops team,” you’re not doing DevOps.

There’s a very good reason why DevOps is called DevOps and not OpsDev or Ops. It is Development first and Ops second. Or, if you want to put it a different way, it is about the Development of a software product to automate your Ops. Jez Humble, who wrote the book on Continuous Delivery, tells us that there’s no such thing as a DevOps team for a good reason. In the DevOps world, Ops is a software product, not a team.

This being the case, while you may need experienced Ops specialists to give you direction on what needs to be built, you also need experienced developers to build it. They need to have a thorough grounding in concepts such as design patterns, the SOLID principles, dependency injection, separation of concerns, test-driven development, algorithmic complexity, refactoring, and the like. You need to recruit, promote, plan, prioritise, and provide training accordingly. Otherwise you’ll either limit what you’re able to achieve, or else you’ll end up with unmaintainable code that needs to be rewritten. And when you’re dealing with infrastructure as code, a rewrite is far, far harder than when you’re dealing with business logic.

In any case, DevOps needs to be the responsibility of your development team as a whole. The whole point of DevOps is to break down the silos between Development and Ops, and to have a separate DevOps team (or worse, a separate Ops team) just creates another silo that you could be doing without.

Your Repository is not a Data Access Layer

The Repository pattern has come in for a lot of criticism from high-end .NET developers over the past few years. This is understandable, because the Repository layer is usually one of the worst-implemented parts of a codebase.

Now I’ve been critical of badly implemented Repositories myself, but to be fair, I don’t think we should ditch the pattern altogether. On the contrary, I think that we could make much more effective use of the Repository pattern if we just abandoned one popular misconception about it.

Your Repository is (mostly) not a DAL.

If you’re wondering what I mean, here is an example of a typical Repository method. It comes from BlogEngine.net, an open source ASP.NET blogging platform, and it is typical of the kinds of Repository methods that you and I have been working with on a daily basis for years:

public CommentsVM Get()
{
    if (!Security.IsAuthorizedTo(Rights.ViewPublicComments))
        throw new UnauthorizedAccessException();

    var vm = new CommentsVM();
    var comments = new List<Comment>();
    var items = new List<CommentItem>();

    var all = Security.IsAuthorizedTo(Rights.EditOtherUsersPosts);
    foreach (var p in Post.Posts)
    {
        if (all || p.Author.ToLower() == Security.CurrentUser.Identity.Name.ToLower())
        {
            comments.AddRange(p.Comments);
        }
    }  
    foreach (var c in comments)
    {
        items.Add(Json.GetComment(c, comments));               
    }
    vm.Items = items;

    vm.Detail = new CommentDetail();
    vm.SelectedItem = new CommentItem();

    return vm;
}

Now this isn’t bad code. It’s actually quite clean code. It’s clear, well-formatted, and easy to understand, even if returning a ViewModel from your Repository does make me twitch a bit. But where is the data access logic?

There is not a single line in this code that tells me which underlying persistence mechanism is being used. Are we talking to Entity Framework? To NHibernate? To RavenDB? To a web service? To Amazon DynamoDB? Or to a program for comparing human and chimp genomes? In just about every .NET project that I’ve encountered, the Repository classes are all populated with methods just like this one. They may contain some LINQ queries, but these won’t give me any indication either. Yet in every single case, they’ve been in projects called My.Project.DAL or something along those lines.

We’re sometimes told that the role of the Repository layer is to abstract away your data access logic from your business logic. But in methods such as this, the data access logic appears pretty thoroughly abstracted to me already.

No, this is business logic, pure and simple.

Why we’ve been thinking of the Repository as a DAL

The reasons why Repositories are viewed as a data access layer are purely historical. The classic three-layer architecture dates back to the late 1990s, when everybody thought that stored procedures were the One True Best Practice™, and that moving your BLL and DAL onto separate hardware was the right approach to scalability problems that almost nobody ever had to face in the Real World. Back in the early days of .NET 1.0, your typical Repository contained method after method that looked something like this:

public User GetUser(int userID)
{
    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("usp_GetUser", cn)) {
        cmd.Parameters.Add(new SqlParameter("@UserID", userID));
        cmd.CommandType = CommandType.StoredProcedure;
        // The connection has to be opened explicitly before the
        // command can be executed.
        cn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            if (reader.Read()) {
                return new User(
                    (int)reader["UserID"],
                    (string)reader["UserName"],
                    (string)reader["DisplayName"],
                    (string)reader["Email"]
                );
            }
            else {
                return null;
            }
        }
    }
}

It was pretty much in-your-face that this was data access code. It was also very, very tedious and repetitive to maintain. It was this tedium that gave rise to modern O/R mappers, and in fact, in the early days, offerings such as LLBLGen Pro and NHibernate were sometimes actually referred to as “generic DALs.” Then, eventually, Microsoft got in on the act with Entity Framework.

In a nutshell, your data access layer is now Entity Framework itself.
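To make the contrast concrete, here is roughly what the same lookup looks like today. This is only a sketch (it assumes an injected Entity Framework DbContext with a Users DbSet, names I’ve chosen for illustration), but it shows where all the ADO.NET plumbing went:

public User GetUser(int userID)
{
    // The connection, command and reader management all now live
    // inside Entity Framework; context is the injected DbContext.
    return context.Users.SingleOrDefault(u => u.UserID == userID);
}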

Your Repository is first and foremost business logic

The problem with viewing modern-day Repositories as a DAL is that it demands that you draw a clear distinction between data access logic and business logic, while obfuscating that very distinction.

I have yet to see a clear, coherent definition of where the distinction lies. The nearest I can find is a vague and woolly notion that LINQ code counts as data access, on the grounds of an equally vague and woolly notion that IQueryable&lt;T&gt; constitutes tight coupling. Now Mark Seemann makes some valid points in his blog post — LINQ is indeed a leaky abstraction — but what that means in practice is that where you run up against the leaks in the abstraction, you are dealing with inseparable concerns: code that simply can’t be categorised cleanly as either business logic or data access logic, and that has to be tested using integration tests rather than unit tests. Another example of inseparable concerns is where you have to bypass Entity Framework altogether and go directly to the database, for example for performance reasons.

In fact, LINQ may be a leaky abstraction, but it’s a much better abstraction than any alternative you’re going to come up with. Once again, LINQ code gives you no indication whatsoever of what underlying data access mechanism you are actually using, and in many cases you can — and should — test anything you do with IQueryable<T> without hitting the database. In any case, query construction implements business rules and is therefore well and truly a business concern.
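That last claim is easy to demonstrate. Any method that takes and returns IQueryable<T> can be exercised against a plain in-memory collection via AsQueryable(), with no database in sight. A minimal sketch, assuming a hypothetical User class with an IsActive flag (Assert here is from your test framework of choice):

public static IQueryable<User> ActiveUsers(IQueryable<User> users)
{
    // Pure query construction: a business rule, not data access.
    return users.Where(u => u.IsActive);
}

// In a unit test, feed it an in-memory list instead of a DbSet:
var users = new List<User>
{
    new User { UserName = "alice", IsActive = true },
    new User { UserName = "bob", IsActive = false }
}.AsQueryable();

Assert.AreEqual(1, ActiveUsers(users).Count());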

So what is the Repository pattern, as implemented in most projects, best for? Simple: a query layer. While query objects are a better choice for more complex queries, and extension methods on IQueryable<T> should be considered seriously for cross-cutting concerns such as paging and sorting, for simpler queries with only a few arguments each, a Repository is not a bad choice.
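To illustrate the extension-method approach to cross-cutting concerns, here is a sketch of a paging helper. Page is a name I’ve invented for the purpose; it composes with any IQueryable<T>, whatever repository or query object produced it:

public static class QueryableExtensions
{
    // A cross-cutting paging concern, applicable to any query
    // regardless of the underlying persistence mechanism. Note
    // that the query must be ordered before it can be paged.
    public static IQueryable<T> Page<T>(this IQueryable<T> source,
        int pageNumber, int pageSize)
    {
        return source
            .Skip((pageNumber - 1) * pageSize)
            .Take(pageSize);
    }
}

// Usage:
var page = repository.GetUsers().OrderBy(u => u.UserName).Page(2, 20);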