james mckay dot net
because there are few things that are less logical than business logic

How not to stop Brexit

For better or for worse, the Conservatives under Boris Johnson have won the General Election with a majority of either 78 or 80, depending on which way the result in St Ives turns out. This means that, for better or for worse, Brexit is definitely going ahead, and there will not be a second referendum.

I personally voted Remain in 2016. Leaving the EU didn’t make much sense to me from either an economic or a logistical perspective, and I was particularly unimpressed with the arguments I was seeing from the “Leave” side, many of which seemed anti-intellectual, tin-foil hat conspiratorial, or simply not true. And I’ve never been impressed with the incessant references to the referendum result as “The Will Of The People.” The 48.1% of us who voted Remain are people too.

But Brexiteers have one legitimate concern that I have to agree with. The EU has a problem with taking “no” for an answer.

I’ve seen this playing out time and time again for over a quarter of a century. We saw it, for example, with the Maastricht Treaty and with the Lisbon Treaty (which was just a rebranding of the EU Constitution). Whenever an EU member state has a referendum that gives a result that Brussels doesn’t like, Brussels simply makes them vote again until they come up with the “right” result.

This isn’t democracy: it’s democracy theatre. It’s a complete sham, and if truth be told it makes the idea of a so-called “People’s Vote” seem really, really creepy, because it would just be more of the same. It’s a toxic, anti-democratic practice that needs to be broken.

Nevertheless, the 2016 referendum could potentially have been undone if only Remainers had gone about it the right way. If the UK were to leave the EU with some kind of interim arrangement in place, and then have a “rejoin” referendum some months later, that would respect the mandate from 2016, avoid the mathematical problems with having three options on the ballot paper (deal/no deal/remain) rather than two, and generally have a much more credible claim towards being truly democratic. It would be clean, fair and above board.

Unfortunately, no political party proposed this option. Instead, far too many politicians did everything that they could to try to undermine and frustrate the referendum result before it could be carried out. In fighting tooth and nail for approaches that were not democratically credible, Remainers failed to come up with one that was. And in so doing, they made the whole process far, far, far more chaotic, stressful and acrimonious than it could otherwise have been.

Featured image credit: Tim Reckmann

Sorry, but I won’t watch your video

From time to time, when I’m discussing or debating something online, people send me links to videos — usually on YouTube — that they expect me to watch in support of whatever point they’re arguing.

Nowadays, I usually decline. I’m always open to a well-reasoned argument, even if I disagree with it. But it needs to be presented in a format where I can engage with it properly, fact-check it easily, and make sure I have understood it correctly. The video format doesn’t do that, and in fact more often than not it gets in the way.

  • Videos are inefficient. I can read far more quickly than I can watch a video. When I am reading, I can also skip over content that is already familiar to me, or that isn’t relevant to the topic at hand.
  • Videos are not searchable. With written material, especially online, I can quickly copy and paste words or phrases into Google to fact-check them, or into a forum post to reply to you or ask about them elsewhere. I can’t easily do this with videos.
  • Videos spoon-feed you. When reading, I can step back and ask questions. If there’s something I haven’t understood, I can re-read it several times to make sure that I get it. By contrast, with videos, the videographer sets the pace, and you have to fight against that if you want to do any critical thinking. Sure, you can pause and rewind, but doing so is much more inefficient and imprecise than with written text.
  • Videos are soporific. I’ve lost count of the number of times that I’ve momentarily fallen asleep watching a video and had to rewind it because I’ve missed an important point. Or gotten distracted onto something else and lost track of what was being said. By contrast, when I’m reading, my mind is totally focused on the text.
  • Videos are often far too long. Sorry, but if your video is an hour long, then I can tell from that fact alone that either it is a Gish Gallop, or it takes far too long to get to the point, or it is trying to tackle a subject that is too complicated to address properly in video format anyway.

Videos have their place, and the points that they make may well be valid and correct. But they are best suited for entertainment or inspiration. They are less effective for education or information, and are simply not appropriate for online debate and discussion. If someone asks you to watch a video, ask them to provide you with a text-based alternative — a web page, a PDF or a PowerPoint presentation — instead. If they really don’t have any alternative other than a video, ask them to summarise it and provide timestamps. Your time is valuable. Don’t let other people dictate how you spend it.

Featured image credit: Vidmir Raic from Pixabay

The vagaries of humans and other living beings

The title of this post is a quote from my school report when I was thirteen years old. My headmaster wrote about me, “His mind is better attuned to exact subjects such as Maths and Physics than to those concerning the vagaries of humans and other living beings.”

It was a fair point. I was a pretty geeky kid when I was at school. I excelled in subjects such as maths and physics, I did reasonably well at most other academic subjects — and I was utterly hopeless on the rugby pitch. But his comment highlighted something that’s worth bearing in mind whenever discussing subjects such as science and technology. There are two kinds of subjects that we get taught in school or at university, and that we deal with in the workplace. On the one hand, there are exact subjects, such as maths, physics, chemistry, geology, electronics, computing, and the like, while on the other hand, there are those that deal with the vagaries of humans and other living beings. And the two require completely different mindsets.

It’s a difference that I’ve felt keenly since I reactivated my Facebook account back in June after a two and a half year break. A couple of months in, I wrote a post that simply said this:

Passion is not a substitute for competence.

This statement would be totally uncontroversial if I posted it on one of our Slack channels at work. When you’re working with exact subjects such as science or technology, you simply can’t afford to let passion become a substitute for competence. I’ve seen projects have to be rewritten from scratch and tech companies fail altogether because they made that mistake, especially about ten years ago when the whole “passionate programmer” hype was at its height.

But many of my friends on Facebook are pastors. Their entire vocations are built around dealing with the vagaries of humans and other living beings. For people like them, competence may still be necessary, but the relative importance that they can (and should) place on passion of one form or another is much, much greater. To them, saying that “passion is not a substitute for competence” has completely different connotations.

Needless to say, my short, seven-word post turned out to be pretty controversial. And that controversy took me completely by surprise.

The essential difference

Exact subjects deal in hard evidence, empirical data, and systems tightly constrained by reason and logic. They leave little or no room for opinion or subjective interpretation, apart from situations where there is insufficient data to differentiate between two or more alternatives. The arts and humanities, on the other hand, are much more open to interpretation, speculation, and subjective opinion. Exact subjects require precise definitions and literal thinking, often expressed through symbols and code. The arts and humanities are expressed in figures of speech, analogy, poetry, and terms that are often ambiguous and very loosely defined.

Both are equally important. But they are not interchangeable.

The mistake that all too many people make is to treat exact subjects in the way that they would treat the vagaries of humans and other living beings, or vice versa. For non-technical people, this is all that they know how to do. Learning to think in the exact, rigorous manner required by the sciences does not come easily to many people. It requires training, practice, discipline, experience, patience, and hard work. Subjects that concern the vagaries of humans and other living beings, on the other hand, only require intuition, empathy and common sense, and tend to be the “default” way of thinking for most people.

This is why pseudoscience gets so much traction. Subjects such as astrology, cryptozoology, alternative medicine, water divining or graphology have a scientific-looking veneer, but rather than adopting an exact, rigorous approach, they appeal to the vagaries of analogy, hand-waving approximation, empathy and “common sense,” which yield results that are much easier for most people to relate to. Unfortunately, since they are dealing with exact, deterministic systems, this approach is inappropriate, and therefore misleading or even simply wrong.

It’s also common for non-technical people to view science as if it were a matter of subjective opinion. This is especially the case when the exact sciences produce results that they find awkward for political or economic reasons. I’ve lost count of the number of climate change sceptics who I’ve seen saying “Surely if something is science, it should allow for multiple opinions,” for example. Sorry, but it doesn’t work that way. If it did, then we could have referendums on the laws of physics. You can make all the noise you like about The Will Of The People™, but good luck trying to abolish Maxwell’s Equations or the Second Law of Thermodynamics just because 51.9% of the population voted to do so. And then who can forget this:

“The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.” — Malcolm Turnbull, Prime Minister of Australia.

Context switching

But if some people make the mistake of viewing exact subjects as if they were subjective, human ones, there is an equal and opposite danger for those of us whose careers and expertise fall on the “exact” side of the table: to view the vagaries of humans and other living beings as if they were deterministic systems tightly constrained by reason and logic.

When you’re giving instructions to a computer, it takes what you say at face value and does what you ask it to do. If it doesn’t “get it” the first time (your code doesn’t compile, your tests fail, or whatever) you just tweak your code, rephrase it, and repeat until you get the results you want. You can’t do that with people. They filter what you say through a layer of assumptions and preconceptions about you and through their own expertise. When I said that passion is not a substitute for competence, my pastor friends didn’t have software engineering or recruitment in mind, but activities such as street evangelism or politics.

Nor can you keep rewording and refining your attempts to communicate your intentions or understanding to other people. If they’re genuinely interested, it might help, but much of the time they’ll either miss the point of what you’re saying, or else conclude that you’re just boring or even argumentative and obnoxious, and switch off.

Herein lies another problem. For if it’s hard to learn to think in exact, rigorous terms, it’s even harder to switch context between the two. And the hardest skill of the lot is to be able to bridge the gap between them.

Yet this is the very challenge that we face in software development teams. There is no subject more geared towards exact, rigorous, pedantic thinking than computer programming. If you get things wrong, Visual Studio lets you know it in no uncertain terms — in some cases dozens of times an hour. You are subjected to a feedback loop that makes working in a physics or chemistry lab look positively lethargic by comparison. You have to worry about spelling, capitalisation, and even tabs versus spaces. Yet at the same time, you are frequently being fed requirements from non-technical stakeholders that are vague, ambiguous, incoherent, self-contradictory, or even patent nonsense. As Martin Fowler said in his book, Patterns of Enterprise Application Architecture (and as I’ve quoted in the strapline of my blog), “there are few things that are less logical than business logic.”

Be aware of what you’re dealing with.

If there’s one thing I’ve learned over the summer, it’s the need to have some empathy for how “the other side” thinks. I don’t think it’s right to expect non-geeks to develop exact, rigorous approaches to anything; all we can ask is that they be aware that there are times when such approaches are needed, and that they don’t denigrate or disparage those of us who work with them. But those of us of a more technical mindset need to be able to relate to both worlds. This being the case, the burden should be on us to bridge the gap as best we can.

Featured image: March for Science, Melbourne, April 22, 2017. Photograph by John Englart.

A must-watch talk for every .NET developer by Udi Dahan

No matter what your preferred software architecture is — whether it’s n-tier, CQRS, Clean Architecture or whatever — if you’re a .NET developer, you need to watch this video. No exceptions, no excuses. It’s by Udi Dahan, one of the “founding fathers” of CQRS, and he makes exactly the same points as I’ve been making about software architecture over the past few years.

A brief history of pointless mappings

Throughout my career, I’ve worked on many projects, in .NET as well as with other platforms and frameworks. One particular practice that I’ve encountered time and time and time again in .NET, which I rarely see elsewhere, is that of having a separate but identical set of models for each layer of your project, mapped from one to another by rote with AutoMapper.

It’s a practice that I detest with a passion. It adds clutter and repetition to your codebase without delivering any benefit whatsoever, and gets in the way of important things such as performance optimisation. In fact, if you suggested it to a Python developer or a Ruby developer, they would probably look at you as if you were crazy. But many .NET developers consider it almost sacred, justifying it on the grounds that “you might want to swap out Entity Framework for something else some day.”
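To make the pattern concrete, here’s a minimal sketch of what I’m talking about. The type names (CustomerEntity, CustomerDto, CustomerViewModel) are hypothetical stand-ins, but the shape will be familiar to anyone who has worked on a codebase built this way:

```csharp
// A minimal sketch of the pattern in question. The type names here
// (CustomerEntity, CustomerDto, CustomerViewModel) are hypothetical.
using AutoMapper;

// Data access layer model
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Business layer model: identical, apart from the name
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Presentation layer model: identical yet again
public class CustomerViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingDemo
{
    public static void Main()
    {
        // The AutoMapper configuration that shuttles identical data
        // between the three layers by rote
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<CustomerEntity, CustomerDto>();
            cfg.CreateMap<CustomerDto, CustomerViewModel>();
        });
        var mapper = config.CreateMapper();

        var entity = new CustomerEntity { Id = 1, Name = "Alice" };
        var dto = mapper.Map<CustomerDto>(entity);
        var viewModel = mapper.Map<CustomerViewModel>(dto);
    }
}
```

Three declarations of exactly the same shape, plus the configuration needed to shuttle data between them, and not one line of it earns its keep.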

But why should this be? How did speculative generality end up being viewed in the .NET ecosystem as a Best Practice™? In actual fact, there are historical reasons for it: concerns that, in the dim and distant past, were very real.

Back in the early days of .NET, round about 2001/2002, the best practice that Microsoft recommended was to use stored procedures for everything. It didn’t take long for everyone to start complaining on the ASP.NET forums about how cumbersome this was. Half of the .NET community had come from Borland Delphi, with its RAD tools letting you drag and drop data sources and data grids onto a form, while the other half had come from Java, which already had O/R mappers such as Hibernate. To go from either of these approaches to hand-cranking stored procedures, with all the tedious repetition that it involved, was like going back into the stone age.

Naturally, a whole lot of two-guys-in-a-garage ISVs were more than willing to step into the gap with a slew of ORMs. By 2004, we had Entity Broker, Pragmatier, WilsonORMapper, Objectz.net, Sisyphus, NPersist and a host of others that have long since been forgotten. They were coming and going like nobody’s business, and you couldn’t rely on the one you chose still being around six months later. With this being the case, abstracting out your ORM “just in case” you needed to swap it out for something else seemed like an eminently sensible — if not vitally necessary — suggestion.

Within a couple of years, things started to settle down, and two market leaders — the open-source NHibernate and the commercial LLBLGen Pro — emerged. These both quickly gained a solid backing, and they are both still going strong today.

But there was nothing from Microsoft. In the early days they promised us an offering called ObjectSpaces, but it never shipped, and it was eventually abandoned as vapourware.

This was a problem for some people. Right from the beginning, the majority of .NET developers have worked in companies and teams that, given the choice, wouldn’t touch anything that didn’t come from Microsoft with a barge pole. But working with DataSets and stored procedures was so painful that they held their noses and used NHibernate anyway, wrapping it in an abstraction layer in the hope that they could swap it out for Entity Framework the moment the latter became stable enough for them to do so.
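For what it’s worth, here’s a rough sketch of what those abstraction layers typically looked like: a generic repository interface with NHibernate hidden behind it. The names (IRepository, NHibernateRepository) are hypothetical, but the pattern was ubiquitous:

```csharp
// A sketch of the kind of abstraction layer described above. The
// interface and class names are hypothetical; the intent was that the
// NHibernate-backed implementation could one day be swapped for an
// Entity Framework one without touching the rest of the codebase.
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public interface IRepository<T> where T : class
{
    T GetById(object id);
    IQueryable<T> Query();
    void Save(T entity);
}

public class NHibernateRepository<T> : IRepository<T> where T : class
{
    private readonly ISession _session;

    public NHibernateRepository(ISession session)
    {
        _session = session;
    }

    public T GetById(object id) => _session.Get<T>(id);

    // LINQ support comes from the NHibernate.Linq extension methods
    public IQueryable<T> Query() => _session.Query<T>();

    public void Save(T entity) => _session.SaveOrUpdate(entity);
}
```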

Entity Framework finally appeared in 2008, but the first version was so bad that many in the .NET community started up a vote of no confidence in it. It was 2011 — ten years after .NET 1.0 was first released to beta — before Entity Framework was good enough to see serious use in production, and a further two years before it reached a similar level of functionality to NHibernate.

Nowadays, of course, Entity Framework is well established and mature, and although there are differences between EF6 and EF Core, the only thing these days that you’re likely to want to swap it for is hand-crafted SQL for performance reasons — and that usually means cutting right across your neat separation between your DAL and business layers altogether. Even testing is scarcely a reason any more now that EF Core has an in-memory provider for the purpose.
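As a rough illustration of the testing point, assuming a hypothetical AppDbContext with a Customers set, a test against the in-memory provider looks something like this:

```csharp
// A minimal sketch of testing with the EF Core in-memory provider
// (the Microsoft.EntityFrameworkCore.InMemory package). AppDbContext
// and Customer are hypothetical types standing in for your own, with
// AppDbContext taking DbContextOptions<AppDbContext> in its constructor.
using System.Linq;
using Microsoft.EntityFrameworkCore;

var options = new DbContextOptionsBuilder<AppDbContext>()
    .UseInMemoryDatabase(databaseName: "CustomerTests")
    .Options;

using (var context = new AppDbContext(options))
{
    // Arrange: seed the in-memory store
    context.Customers.Add(new Customer { Name = "Alice" });
    context.SaveChanges();
}

using (var context = new AppDbContext(options))
{
    // Act/assert: the data is visible to a fresh context that uses
    // the same in-memory database name
    var count = context.Customers.Count();
    System.Diagnostics.Debug.Assert(count == 1);
}
```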

But old habits die hard, and by the time we got here the practice of abstracting your O/R mapper on the grounds that “you might want to swap out your data access layer for something else” had become deeply entrenched as a Best Practice. Many of its advocates are too young to remember its historical context, so they aren’t aware that it is aimed at a use case whose likelihood has nosedived. Nor are they aware that although we once had a good idea of what we’d have to swap our DAL out for, nowadays all we can talk about are unknown mystery alternatives. But this is why we constantly need to be reviewing our best practices to see whether they still apply. Because if we don’t, they just fossilise into cargo cult programming. And that benefits nobody.