james mckay dot net
because there are few things that are less logical than business logic

Sorry, but I won’t watch your video

From time to time, when I’m discussing or debating something online, people send me links to videos — usually on YouTube — that they expect me to watch in support of whatever point they’re arguing.

Nowadays, I usually decline. I’m always open to a well-reasoned argument, even if I disagree with it. But it needs to be presented in a format where I can engage with it properly, fact-check it easily, and make sure I have understood it correctly. The video format doesn’t do that, and in fact more often than not it gets in the way.

  • Videos are inefficient. I can read far more quickly than I can watch a video. When I am reading, I can also skip over content that is already familiar to me, or that isn’t relevant to the topic at hand.
  • Videos are not searchable. With written material, especially online, I can quickly copy and paste words or phrases into Google to fact-check it, or into a forum post to reply to you or ask about it elsewhere. I can’t easily do this with videos.
  • Videos spoon-feed you. When reading, I can step back and ask questions. If there’s something I haven’t understood, I can re-read it several times to make sure that I get it. By contrast, with videos, the videographer sets the pace, and you have to fight against that if you want to do any critical thinking. Sure, you can pause and rewind, but doing so is far less efficient and precise than it is with written text.
  • Videos are soporific. I’ve lost count of the number of times that I’ve momentarily fallen asleep watching a video and had to rewind it because I’ve missed an important point. Or gotten distracted onto something else and lost track of what was being said. By contrast, when I’m reading, my mind is totally focused on the text.
  • Videos are often far too long. Sorry, but if your video is an hour long, then I can tell from that fact alone that either it is a Gish Gallop, or it takes far too long to get to the point, or it is trying to tackle a subject that is too complicated to address properly in video format anyway.

Videos have their place, and the points that they make may well be valid and correct. But they are best suited for entertainment or inspiration. They are less effective for education or information, and are simply not appropriate for online debate and discussion. If someone asks you to watch a video, ask them to provide you with a text-based alternative — a web page, a PDF or a PowerPoint presentation — instead. If they really don’t have any alternative other than a video, ask them to summarise it and provide timestamps. Your time is valuable. Don’t let other people dictate how you spend it.

Featured image credit: Vidmir Raic from Pixabay

The vagaries of humans and other living beings

The title of this post is a quote from my school report when I was thirteen years old. My headmaster wrote about me, “His mind is better attuned to exact subjects such as Maths and Physics than to those concerning the vagaries of humans and other living beings.”

It was a fair point. I was a pretty geeky kid when I was at school. I excelled in subjects such as maths and physics, I did reasonably well at most other academic subjects — and I was utterly hopeless on the rugby pitch. But his comment highlighted something that’s worth bearing in mind whenever discussing subjects such as science and technology. There are two kinds of subjects that we get taught in school or at university, and that we deal with in the workplace. On the one hand, there are exact subjects, such as maths, physics, chemistry, geology, electronics, computing, and the like, while on the other hand, there are those that deal with the vagaries of humans and other living beings. And the two require completely different mindsets.

It’s a difference that I’ve felt keenly since I reactivated my Facebook account back in June after a two-and-a-half-year break. A couple of months in, I wrote a post that simply said this:

Passion is not a substitute for competence.

This statement would be totally uncontroversial if I posted it on one of our Slack channels at work. When you’re working with exact subjects such as science or technology, you simply can’t afford to let passion become a substitute for competence. I’ve seen projects have to be rewritten from scratch and tech companies fail altogether because they made that mistake, especially about ten years ago when the whole “passionate programmer” hype was at its height.

But many of my friends on Facebook are pastors. Their entire vocations are built around dealing with the vagaries of humans and other living beings. To people such as them, competence may still be necessary, but the relative importance that they can (and should) place on passion of one form or another is much, much greater. To them, saying that “passion is not a substitute for competence” has completely different connotations.

Needless to say, my short, seven-word post turned out to be pretty controversial. And that controversy took me completely by surprise.

The essential difference

Exact subjects deal in hard evidence, empirical data, and systems tightly constrained by reason and logic. They leave little or no room for opinion or subjective interpretation, apart from situations where there is insufficient data to differentiate between two or more alternatives. The arts and humanities, on the other hand, are much more open to interpretation, speculation, and subjective opinion. Exact subjects require precise definitions and literal thinking, often expressed through symbols and code. The arts and humanities are expressed in figures of speech, analogy, poetry, and terms that are often ambiguous and very loosely defined.

Both are equally important. But they are not interchangeable.

The mistake that all too many people make is to treat exact subjects in the way that they would treat the vagaries of humans and other living beings, or vice versa. For non-technical people, this is all that they know how to do. Learning to think in the exact, rigorous manner required by the sciences does not come easily to many people. It requires training, practice, discipline, experience, patience, and hard work. Subjects that concern the vagaries of humans and other living beings, on the other hand, only require intuition, empathy and common sense, and tend to be the “default” way of thinking for most people.

This is why pseudoscience gets so much traction. Subjects such as astrology, cryptozoology, alternative medicine, water divining or graphology have a scientific-looking veneer, but rather than adopting an exact, rigorous approach, they appeal to the vagaries of analogy, hand-waving approximation, empathy and “common sense,” which yield results that are much easier for most people to relate to. Unfortunately, since they are dealing with exact, deterministic systems, this approach is inappropriate, and the results it produces are misleading or simply wrong.

It’s also common for non-technical people to view science as if it were a matter of subjective opinion. This is especially the case when the exact sciences produce results that they find awkward for political or economic reasons. I’ve lost count of the number of climate change sceptics who I’ve seen saying “Surely if something is science, it should allow for multiple opinions,” for example. Sorry, but it doesn’t work that way. If it did, then we could have referendums on the laws of physics. You can make all the noise you like about The Will Of The People™, but good luck trying to abolish Maxwell’s Equations or the Second Law of Thermodynamics just because 51.9% of the population voted to do so. And then who can forget this:

“The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.” — Malcolm Turnbull, Prime Minister of Australia.

Context switching

But if some people make the mistake of viewing exact subjects as if they were subjective, human ones, there is an equal and opposite danger for those of us whose careers and expertise fall on the “exact” side of the table: to view the vagaries of humans and other living beings as if they were deterministic systems tightly constrained by reason and logic.

When you’re giving instructions to a computer, it takes what you say at face value and does what you ask it to do. If it doesn’t “get it” the first time (your code doesn’t compile, your tests fail, or whatever) you just tweak your code, rephrase it, and repeat until you get the results you want. You can’t do that with people. They filter what you say through a layer of assumptions and preconceptions about you and through their own expertise. When I said that passion is not a substitute for competence, my pastor friends didn’t have software engineering or recruitment in mind, but activities such as street evangelism or politics.

Nor can you keep rewording and refining your attempts to communicate your intentions or understanding to other people. If they’re genuinely interested, it might help, but much of the time they’ll either miss the point of what you’re saying, or else conclude that you’re just boring or even argumentative and obnoxious, and switch off.

Herein lies another problem. For if it’s hard to learn to think in exact, rigorous terms, it’s even harder to switch context between the two. And the hardest skill of the lot is to be able to bridge the gap between them.

Yet this is the very challenge that we face in software development teams. There is no subject more geared towards exact, rigorous, pedantic thinking than computer programming. If you get things wrong, Visual Studio lets you know it in no uncertain terms — in some cases dozens of times an hour. You are subjected to a feedback loop that makes working in a physics or chemistry lab look positively lethargic by comparison. You have to worry about spelling, capitalisation, and even tabs versus spaces. Yet at the same time, you are frequently being fed requirements from non-technical stakeholders that are vague, ambiguous, incoherent, self-contradictory, or even patent nonsense. As Martin Fowler said in his book, Patterns of Enterprise Application Architecture (and as I’ve quoted in the strapline of my blog), “there are few things that are less logical than business logic.”

Be aware of what you’re dealing with.

If there’s one thing I’ve learned over the summer, it’s the need to have some empathy for how “the other side” thinks. I don’t think it’s right to expect non-geeks to develop exact, rigorous approaches to anything; all we can ask is that they be aware that there are times when such approaches are needed, and that they don’t denigrate or disparage those of us who work with them. But those of us of a more technical mindset need to be able to relate to both worlds. This being the case, the burden should be on us to bridge the gap as best we can.

Featured image: March for Science, Melbourne, April 22, 2017. Photograph by John Englart.

A must-watch talk for every .NET developer by Udi Dahan

No matter what your preferred software architecture is — whether it’s n-tier, CQRS, Clean Architecture or whatever — if you’re a .NET developer, you need to watch this video. No exceptions, no excuses. It’s by Udi Dahan, one of the “founding fathers” of CQRS. He makes exactly the same points as I’ve been making about software architecture over the past few years:

A brief history of pointless mappings

Throughout my career, I’ve worked on many projects, in .NET as well as with other platforms and frameworks. One particular practice that I’ve encountered time and time and time again in .NET, which I rarely see elsewhere, is that of having a separate identical set of models for each layer of your project, mapped one to another by rote with AutoMapper.

It’s a practice that I detest with a passion. It adds clutter and repetition to your codebase without delivering any benefit whatsoever, and gets in the way of important things such as performance optimisation. In fact, if you suggested it to a Python developer or a Ruby developer, they would probably look at you as if you were crazy. But many .NET developers consider it almost sacred, justifying it on the grounds that “you might want to swap out Entity Framework for something else some day.”
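
To make the practice concrete, here is a minimal sketch of what it tends to look like (the Customer and CustomerDto classes are hypothetical, and the mapping uses AutoMapper’s MapperConfiguration API):

```csharp
using AutoMapper;

// Two structurally identical models, one per layer (hypothetical names).
public class Customer    { public int Id { get; set; } public string Name { get; set; } }
public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

public static class MappingDemo
{
    public static void Main()
    {
        // AutoMapper shuttles the data from one layer to the other by rote.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>());
        var mapper = config.CreateMapper();

        var dto = mapper.Map<CustomerDto>(new Customer { Id = 1, Name = "Alice" });
        System.Console.WriteLine(dto.Name); // "Alice" -- copied field by field for no real gain
    }
}
```

Multiply that by every entity and every layer in the solution, and the clutter adds up very quickly.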

But why should this be? How did speculative generality end up being viewed in the .NET ecosystem as a Best Practice™? In actual fact, there are historical reasons for it: concerns that, in the dim and distant past, were very real.

Back in the early days of .NET, round about 2001/2002, the best practice that Microsoft recommended was to use stored procedures for everything. It didn’t take long for everyone to start complaining on the ASP.NET forums about how cumbersome this was. Half of the .NET community had come from Borland Delphi, with its RAD tools letting you drag and drop data sources and data grids onto a form, while the other half had come from Java, which already had O/R mappers such as Hibernate. To go from either of these approaches to hand-cranking stored procedures, with all the tedious repetition that it involved, was like going back into the stone age.

Naturally, a whole lot of two-guys-in-a-garage ISVs were more than willing to step into the gap with a slew of ORMs. By 2004, we had Entity Broker, Pragmatier, WilsonORMapper, Objectz.net, Sisyphus, NPersist and a host of others that have long since been forgotten. They were coming and going like nobody’s business, and you couldn’t rely on the one you chose still being around six months later. With this being the case, abstracting out your ORM “just in case” you needed to swap it out for something else seemed like an eminently sensible — if not vitally necessary — suggestion.

Within a couple of years, things started to settle down, and two market leaders — the open-source NHibernate and the commercial LLBLGen Pro — emerged. These both quickly gained a solid backing, and they are both still going strong today.

But there was nothing from Microsoft. In the early days they promised us an offering called ObjectSpaces, but it turned out to be vapourware and was eventually abandoned.

This was a problem for some people. Right from the beginning, the majority of .NET developers have worked in companies and teams that wouldn’t touch anything that didn’t come from Microsoft with a barge pole if they didn’t have to. But working with DataSets and stored procedures was so painful that they held their noses and used NHibernate anyway — but wrapped it in an abstraction layer in the hope that they could swap it out for Entity Framework the moment that the latter became stable enough for them to do so.

Entity Framework finally appeared in 2008, but the first version was so bad that many in the .NET community started up a vote of no confidence in it. It was 2011 — ten years after .NET 1.0 was first released to beta — before Entity Framework was good enough to see serious use in production, and a further two years before it reached a similar level of functionality to NHibernate.

Nowadays, of course, Entity Framework is well established and mature, and although there are differences between EF6 and EF Core, the only thing these days that you’re likely to want to swap it for is hand-crafted SQL for performance reasons — and that usually means cutting right across your neat separation between your DAL and business layers altogether. Even testing is scarcely a reason any more now that EF Core has an in-memory provider for the purpose.
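
For what it’s worth, a test against the in-memory provider looks something like this. This is just a sketch: AppDbContext and Order are hypothetical stand-ins for your own model, and the provider comes from the Microsoft.EntityFrameworkCore.InMemory package.

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical context and entity, purely for illustration.
public class Order { public int Id { get; set; } }

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public static class InMemoryDemo
{
    public static void Main()
    {
        // Swap the real database provider for the in-memory one; no abstraction layer required.
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase("OrdersTest")
            .Options;

        using (var context = new AppDbContext(options))
        {
            context.Orders.Add(new Order { Id = 1 });
            context.SaveChanges();
        }
    }
}
```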

But old habits die hard, and by the time we got here the practice of abstracting your O/R mapper on the grounds that “you might want to swap out your data access layer for something else” had become deeply entrenched as a Best Practice. Many of its advocates are too young to remember its historical context, so they aren’t aware that it is aimed at a use case whose likelihood has nosedived. Nor are they aware that although we once had a good idea of what we’d have to swap our DAL out for, nowadays all we can talk about are unknown mystery alternatives. But this is why we constantly need to be reviewing our best practices to see whether they still apply. Because if we don’t, they just fossilise into cargo cult programming. And that benefits nobody.

Which .NET IOC containers pass Microsoft’s tests?

Updates:

  • 4 May 2019: Grace now passes all Microsoft’s tests as of version 7.0.0. Updated other containers to the latest versions.

Since my last post on the state of IOC containers in .NET Core, I’ve ended up going down a bit of a rabbit hole with this particular topic. It occurred to me that since Microsoft has come up with a standard set of abstractions, it is probably best, when choosing a container, to pick one that conforms to these abstractions. After all, That Is How Microsoft Wants You To Do It.

But if you want to do that, what are your options? Which containers conform to Microsoft’s specifications? I decided to spend an evening researching this to see if I could find out.

Rather helpfully, there’s a fairly comprehensive list of IOC containers and similar beasties maintained by Daniel Palme, a .NET consultant from Germany, who regularly tests the various options for performance. He currently has thirty-five of them on his list. With this in mind, it was just an evening’s work to go down the list and see where they all stand.

I looked for two things from each container. First of all, it needs to either implement the Microsoft abstractions directly, or else provide an adapter package on NuGet that does. Secondly, it needs to pass the specification tests in the Microsoft.Extensions.DependencyInjection.Specification package.
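
For the record, running the specification tests against a container mostly amounts to subclassing Microsoft’s test suite and telling it how to build a service provider. Roughly like this (a sketch, assuming the CreateServiceProvider hook that the suite exposes; the baseline Microsoft container is shown, and each adapter substitutes its own bridge):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Specification;

// Subclassing DependencyInjectionSpecificationTests pulls in all of the xunit
// test cases; the subclass only has to say how to turn an IServiceCollection
// into an IServiceProvider using the container under test.
public class DefaultContainerSpecificationTests : DependencyInjectionSpecificationTests
{
    protected override IServiceProvider CreateServiceProvider(IServiceCollection services)
    {
        // Baseline: Microsoft's own container. A third-party adapter would
        // populate its container from the collection here and return the
        // provider that its adapter package exposes instead.
        return services.BuildServiceProvider();
    }
}
```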

The contenders

At the end of the day, I was able to find adapters on NuGet for twelve of the containers on Daniel’s list. Eight of them passed all seventy-three test cases; four failed between one and four of them. They were as follows:

| Container | Abstraction package | Tests |
| --- | --- | --- |
| AutoFac 4.9.2 | AutoFac.Extensions.DependencyInjection 4.4.0 | All passed |
| Castle Windsor 5.0.0 | Castle.Windsor.MsDependencyInjection 3.3.1 | All passed |
| DryIoc 4.0.4 | DryIoc.Microsoft.DependencyInjection 3.0.3 | All passed |
| Grace 7.0.0 | Grace.DependencyInjection.Extensions 7.0.0 | All passed |
| Lamar 3.0.2 | Lamar 3.0.2 | 2 failed |
| LightInject 5.4.0 | LightInject.Microsoft.DependencyInjection 2.2.0 | 4 failed |
| Maestro 3.5.0 | Maestro.Microsoft.Dependencyinjection 2.1.2 | 4 failed |
| Microsoft.Extensions.DependencyInjection 2.2.0 | Microsoft.Extensions.DependencyInjection 2.2.0 | All passed |
| Rezolver 1.4.0 | Rezolver.Microsoft.Extensions.DependencyInjection 2.2.0 | All passed |
| Stashbox 2.7.3 | Stashbox.Extensions.Dependencyinjection 2.6.8 | All passed |
| StructureMap 4.7.1 | StructureMap.Microsoft.DependencyInjection 2.0.0 | 2 failed |
| Unity 5.10.3 | Unity.Microsoft.DependencyInjection 5.10.2 | All passed |

Which tests failed?

It’s instructive to see which tests failed. Most of the failing tests failed for more than one container.

  • ResolvesMixedOpenClosedGenericsAsEnumerable. This requires that when you register an open generic type (for example, with svc.AddSingleton(typeof(IRepository<>), typeof(Repository<>))) and a closed generic type (for example, IRepository<User>), a request for IEnumerable<IRepository<User>> should return both, and not just one. Lamar and StructureMap both fail this test (see the sketch after this list).
  • DisposesInReverseOrderOfCreation. Does what it says on the tin: last in, first out. Lamar, Maestro and StructureMap fail this test.
  • LastServiceReplacesPreviousServices tests that when you register the same service multiple times and request a single instance (as opposed to a collection), the last registration takes precedence over the previous registrations. LightInject fails this test.
  • ResolvesDifferentInstancesForServiceWhenResolvingEnumerable checks that when you register the same service multiple times, you get back as many different instances of it as you registered. LightInject fails three of the test cases here; Maestro fails two.
  • DisposingScopeDisposesService checks that when a container is disposed, all the services that it is tracking are also disposed. Maestro fails this test — most likely for transient lifecycles, because different containers have different ideas here about what a transient lifecycle is supposed to mean with respect to this criterion.
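
To make the first of those concrete, here is the kind of registration that the mixed open/closed generics test exercises (a sketch with hypothetical IRepository types; a compliant container returns both implementations):

```csharp
using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

public interface IRepository<T> { }
public class Repository<T> : IRepository<T> { }             // open generic implementation
public class User { }
public class CachingUserRepository : IRepository<User> { }  // closed generic implementation

public static class MixedGenericsDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddSingleton(typeof(IRepository<>), typeof(Repository<>)); // open generic
        services.AddSingleton<IRepository<User>, CachingUserRepository>();  // closed generic

        var provider = services.BuildServiceProvider();

        // The specification expects both registrations to come back here;
        // the containers that fail the test return only one of them.
        var count = provider.GetServices<IRepository<User>>().Count();
        Console.WriteLine(count); // 2 with a compliant container
    }
}
```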

These failing tests aren’t all that surprising. They generally concern more complex and esoteric aspects of IOC container functionality, where different containers have historically had different ideas about what the correct behaviour should be. They are also likely to be especially difficult for existing containers to implement in a backwards-compatible manner.

Nevertheless, these are still tests that are specified by Microsoft’s standards, and furthermore, they may cause memory leaks or incorrect behaviour if ASP.NET MVC or third party libraries incorrectly assume that your container passes them. This being the case, if you choose one of these containers, make sure you are aware of these failing tests, and consider carefully whether they are ones that are likely to cause problems for you.

The most surprising result here was Lamar. Lamar is the successor to StructureMap, which is now riding off into the sunset. It was also written by Jeremy Miller, who has said that two of his design goals were to be fully compliant with Microsoft’s specification from the word go, while at the same time having a clean reboot to get rid of a whole lot of legacy baggage that StructureMap had accumulated over the years and that he was sick of supporting. It is also the only container in the list that supports the DI abstractions in the core assembly; the others all rely on additional assemblies with varying amounts of extra complexity. However, the two failing tests in Lamar were exactly the same as the failing tests in StructureMap, so clearly there has been enough code re-use going on to make things difficult. Furthermore, the tests in question represent fairly obscure and low-impact use cases that are unlikely to be a factor in most codebases.

The no-shows

Most of the IOC containers on Daniel’s list for which I couldn’t find adapters are either fairly obscure ones (e.g. Cauldron, FFastInjector, HaveBox, Munq), dead (e.g. MEF), or not actually general purpose IOC containers at all (e.g. Caliburn Micro). There were, however, one or two glaring omissions.

Probably the most prominent one was Ninject. Ninject was the first IOC container I ever used, when I was first learning about dependency injection about ten years ago, and it is one of the most popular containers in the .NET community. Yet try as I might, I simply have not been able to find a Ninject adapter for the .NET Core abstractions anywhere. If anyone knows of one, please leave a note in the comments below and I’ll update this post accordingly.

Having said that, it isn’t all that surprising, because Ninject does have some rather odd design decisions that might prove to be a stumbling block to implementing Microsoft’s specifications. For example, it eschews nested scopes in favour of tracking lifecycles by watching for objects to be garbage collected. Yes, seriously.

Another popular container that doesn’t have an adapter is Simple Injector. This is hardly surprising, though, because Simple Injector has many design principles that are simply not compatible with Microsoft’s abstraction layer. The Simple Injector authors recommend that their users should leave Microsoft’s built-in IOC container to handle framework code, and use Simple Injector as a separate container for their own application code. If Simple Injector is your personal choice here, this is probably a good approach to consider.

Finally, there doesn’t seem to be an adapter for TinyIOC, which is not on Daniel’s list. However, since TinyIOC is primarily intended to be embedded in NuGet packages rather than being used as a standalone container, this is not really surprising either.

Some final observations

I would personally recommend — and certainly, this is likely to be my practice going forward — choosing one of the containers that implements the Microsoft abstractions, and using those abstractions to configure your container as far as it is sensible to do so. Besides making it relatively easy to swap out your container for another if need be (not that you should plan to do so), the Microsoft abstractions introduce a standard vocabulary and a standard set of assumptions to use when talking about dependency injection in .NET projects.
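
In practice, that means doing your routine registrations against IServiceCollection and only dropping down to the container’s native API for the extras. Something like this, using Autofac’s adapter as an example (IClock and SystemClock are hypothetical services):

```csharp
using Autofac;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

public interface IClock { }
public class SystemClock : IClock { }   // hypothetical service

public static class CompositionRoot
{
    public static System.IServiceProvider Build()
    {
        // Ordinary registrations go through the standard abstractions...
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>();

        // ...and the chosen container consumes them via its adapter package.
        var builder = new ContainerBuilder();
        builder.Populate(services);
        // Container-specific extras (modules, decorators, etc.) would go here.

        return new AutofacServiceProvider(builder.Build());
    }
}
```

Everything registered through the abstractions stays portable; only the last few lines are specific to the container you happen to have chosen.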

However, I would strongly recommend against restricting yourself to the Microsoft abstractions alone. Most IOC containers offer significant added value, such as convention-based registration, lazy injection (Func<T> or Lazy<T>), interception, custom lifecycles, or more advanced forms of generic resolution. By all means make full use of these whenever it makes sense to do so.
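
To give one example of the kind of added value I mean: with a container that supports lazy injection, a consumer can take a Lazy<T> dependency and defer creating an expensive service until it is actually needed (IExpensiveRenderer here is hypothetical):

```csharp
using System;

public interface IExpensiveRenderer { string Render(); }

public class ReportService
{
    private readonly Lazy<IExpensiveRenderer> _renderer;

    // The container supplies the Lazy<T>; the renderer is only constructed
    // the first time Render() is actually called.
    public ReportService(Lazy<IExpensiveRenderer> renderer) => _renderer = renderer;

    public string Render() => _renderer.Value.Render();
}
```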

For anyone who wants to tinker with the tests (or alert me to containers that I may have missed), the code is on GitHub.