james mckay dot net

Because there are few things that are less logical than business logic
08 Jun

The Continuous Retrospective

The sprint retrospective is one of the most important ceremonies in Scrum. At the end of every sprint, the team meets to discuss what went well, what went badly, and how you can improve your processes and working practices.

Effective as it may be, the end-of-sprint retrospective isn’t perfect. It’s very common to encounter problems during the sprint, only to get to the retrospective and find that you’ve forgotten what they were. On the other hand, you can end up wasting a lot of time complaining about things that you aren’t able to do anything about.

To resolve these problems, we have recently implemented a “continuous retrospective” within our team. Instead of waiting until the end of the sprint, we can highlight problems as and when they arise, and action them accordingly.

[Image: the team’s continuous retrospective board]

Besides the fact that issues are less likely to be forgotten, there are several other advantages to this approach. It is less formal, and a better use of people’s time. It improves communication. It can also improve visibility and transparency, allowing people to see that you are aware of, and addressing, issues that you are encountering.

A different approach

This is a different approach from your traditional, end-of-sprint retrospective. For starters, it is less meeting-oriented. You simply set up a kanban-style board in your team’s working area, where you can add Post-It notes for things that are going well, things that need action, and things that need further discussion, perhaps to reach a team consensus.

Write things down as soon as you think of them — don’t wait until the end of the sprint, or even until the daily stand-up meeting. You should aim to discuss them with the rest of the team as soon as possible, and reach a consensus on what action to take. Actions may be, for example:

  • Creating backlog tasks for the items you bring up
  • Updating documentation
  • Communicating problems or findings with other teams or the wider software development community as appropriate (e.g. through email, Twitter, blogs, or forums)
  • Flagging issues to raise with senior management

Don’t just limit items to things that are within your team’s remit. Anything that needs attention or can be improved is fair game — the appropriate action may be to raise it with other teams or senior management, for example.

Do you still need an end of sprint retrospective?

Maybe, or maybe not. It depends on your team.

Some teams may find that a continuous retrospective renders the end of sprint retrospective mostly superfluous. Others may find that the end of sprint retrospective still provides value — for example, by allowing you to check your progress, or to prioritise issues that have proven to be more complex to address.

But regardless of whether we retain the end of sprint retrospective or not, our goal is to make the continuous retrospective, conducted while the sprint is in progress, the main driver of our team’s improvement. Agility is all about keeping your feedback loops as tight as possible, and like continuous integration and continuous delivery, continuous retrospectives are another way to achieve that end.

Thanks to Steven Wade, Vinitha Devadas and Dan Barrett for their contributions to this post.

24 Mar

The three essential files required by every Git repository

There are three files that you should add to every new Git repository, right from the outset. Unfortunately I frequently see projects that don’t have all three of these files, for whatever reason. These are they.

.gitignore

This is the one you’re most likely to have. However, you may be doing it wrong.

You’ll no doubt be aware of what .gitignore does—it specifies a list of file patterns to ignore. However, did you know that GitHub maintains a repository of technology-specific .gitignore files? Or that there is a website called gitignore.io that lets you combine two or more of them into one? For example, if you are using Visual Studio together with Node.js, Python and Visual Studio Code, you can request a single .gitignore file that contains all four sets of patterns.
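
For instance, gitignore.io exposes its generator as a plain URL, so a combined file is a single request away (the endpoint pattern here is an assumption—check the site for its current form):

https://www.gitignore.io/api/visualstudio,node,python,visualstudiocode

Save the output as .gitignore in your repository root and tweak from there.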

This should be your starting point when you’re setting up your .gitignore file. Some platforms, such as Visual Studio, have some pretty complex ignore requirements, and it’s all too easy to end up ignoring too much, or too little. But since these ready-made .gitignore templates are peer reviewed and have proven themselves in numerous projects, they save you a lot of guesswork. Additionally, because they cover most if not all of the cases that you need, they are generally a case of “set it and forget it.”

.gitattributes

As you will no doubt be aware, Windows handles line endings differently from Unix or OS X. Windows uses the ASCII control characters CR and LF (0x0D, 0x0A) to indicate an end of line, whereas Unix and OS X use only LF (0x0A).

To allow users of different operating systems to work on the same codebase, Git can be configured, via the core.autocrlf setting, either to normalise line endings on check-in and check-out, or to leave them as-is. However, not everyone configures Git the same way, and if you’re not careful this can cause confusion (and unnecessary merge conflicts), as well as tripping up certain text editors such as Notepad or gedit. To avoid problems here, you can (and should) override the core.autocrlf setting for your project using a .gitattributes file. This should contain just one line:

* text=auto

You can set other options with .gitattributes, but this simple example will be sufficient for 99% of cases. It will ensure that text files are checked out on Windows with CRLF endings and on Unix/OS X with LF endings.
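
If you want to be more explicit, the same file can also pin down particular file types—a sketch, with illustrative patterns rather than anything from a real project:

* text=auto
*.sh text eol=lf
*.png binary
*.pdf binary

The binary attribute tells Git never to touch line endings in (or attempt to diff) those files.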

In some cases, you may need to make sure that code is checked out with Unix (LF) line endings on both platforms. This can be the case if your files get shared between Windows and Unix systems by mechanisms other than Git — for example, using tools such as Vagrant, Terraform or Docker. In this case, use the following line:

* text=auto eol=lf

Note however that if you are using git-svn against a Subversion repository, you want to make sure that line ending normalisation is turned off, otherwise both Git and Subversion will attempt to handle line endings, leading to confusion among Subversion users who aren’t using git-svn. In this case, you should use:

* -text

README.md

The third file does not actually affect Git’s behaviour. However, it is every bit as important. Your README.md file — a Markdown document located in your root directory — is the home page for your developer documentation.

I say this because it is the first thing that anyone working on your project will see. GitHub, Bitbucket, GitLab and most other modern Git hosts render it when you visit your source repository’s home page in your browser. As such, even if your actual developer documentation home page is elsewhere, you should at the very minimum have a readme file containing a link pointing to it. This is a well-established standard, and by sticking to it you will make your documentation much easier to find.
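
Even a stub along these lines does the job (the project name and URLs are hypothetical, purely for illustration):

# WidgetFactory

Full developer documentation lives at https://wiki.example.com/widgetfactory — start there.

To build locally, see docs/getting-started.md.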

29 Feb

Fuzzy dates aren’t as good an idea as you think

I received an e-mail from a colleague the other day about some code that I’d recently pushed to GitHub. Since I’d pushed some more changes round about the time he sent the e-mail, I needed to know which revision he was referring to.

There’s just one problem:

[Screenshot: GitHub showing only fuzzy dates such as “2 days ago” against each commit]

Of course, I could have got the proper times from SourceTree or by typing git log in the console, but it’s still annoying, especially since the GitHub page was more easily to hand. And GitHub does show you the exact time as a tooltip—something I missed at the time—but that’s little consolation if you have to hover over half a dozen different datestamps to find the one you’re looking for.
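
For the record, getting unambiguous timestamps out of Git itself is a one-liner (this is just one of several format strings that will do the job):

git log --date=iso --pretty=format:"%h %ad %s"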

We need to have a rethink about fuzzy dates. Yes, I know that it’s friendly and cuddly and warm and fuzzy and cute (and more interesting to code) to say “two days ago” or “eighteen hours ago,” but when I’m trying to refer back to 17:22 precisely, it’s utterly useless and just adds friction without providing any value whatsoever.

24 Feb

Signing Git commits with GPG on Windows

One of the “gotchas” with Git is that it allows you to check in code as anyone. By setting the user.name and user.email configuration values, you can put anyone’s name to your commits—Stephen Hawking, Linus Torvalds, Henry VIII, or even me. If you want an idea of some of the problems this can cause, Mike Gerwitz’s article A Git Horror Story is a cautionary tale.
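
To see quite how low the bar is, these three commands are all it takes (the name and address being, of course, illustrative):

git config user.name "Linus Torvalds"
git config user.email "torvalds@example.com"
git commit -m "Pretend to be Linus"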

To resolve this problem, Git allows you to sign commits using GPG (GNU Privacy Guard, the GNU implementation of PGP), and in fact, Git includes the command-line version of GPG out of the box. You can run it within a Git Bash console.

However, most Windows users would prefer a GUI-based version, and gpg4win (GPG for Windows) is your go-to option here. You can install it either using the downloadable installer or else via Chocolatey.

Using gpg4win with Git needs a little bit of configuration, but first we’ll generate a new certificate. Go to your Start menu and start up Kleopatra, the gpg4win key manager:

[Screenshot: Kleopatra in the Start menu]

Now click on the “File” menu and choose “New certificate…”

[Screenshot: the “File” menu with “New certificate…” highlighted]

Choose the first option here—a personal OpenPGP key pair. Enter your name and e-mail address, and optionally a comment:

[Screenshot: the certificate wizard, with a personal OpenPGP key pair selected]

Review the certificate parameters and click “Create Key”:

[Screenshot: reviewing the certificate parameters]

You will be prompted to enter a passphrase:

[Screenshot: the passphrase prompt]

Finally, your key pair will be successfully created:

[Screenshot: the key pair successfully created]

Click on “Make a Backup of Your Key Pair” to back it up to your hard disk.

[Screenshot: the “Make a Backup of Your Key Pair” step]

Save your private key somewhere safe. Your password manager database is as good a place as any. (You are using a password manager, aren’t you?)

Once you’ve gone through the wizard, you will see your new key pair in the Kleopatra main window. Click on the “My Certificates” tab if you don’t see it at first:

[Screenshot: the new key pair listed in the “My Certificates” tab]

The number in the right hand column, in this case 1B9DC839, is your key ID. You now need to configure Git to use it. Type this in a Git shell, replacing “1B9DC839” with your own key ID:

git config --global user.signingkey 1B9DC839

Prior to version 2.0, you had to instruct Git to sign each commit one at a time by specifying the -S parameter to git commit. However, Git 2.0 introduced a configuration option that instructs it to sign every commit automatically. Type this at the console:

git config --global commit.gpgsign true

Finally, you need to tell Git to use the gpg4win version of gpg.exe. Git comes with its own version of gpg.exe, but it is the MinGW version—a direct port of the Linux version, which saves your keychain in the ~/.gnupg folder in your home directory. The gpg4win port, on the other hand, saves your keychain in ~/AppData/Roaming/GnuPG, so certificates managed by one won’t be seen by the other. You will also need to use the gpg4win version if you want to use a GUI such as SourceTree, since the MinGW version of gpg.exe is entirely command line based and doesn’t play nicely with Git GUIs. By contrast, the gpg4win version brings up a dialog box to prompt for your password:

git config --global gpg.program "c:/Program Files (x86)/GNU/GnuPG/gpg2.exe"

If you are using 32-bit Windows, or if you have installed gpg4win into a custom location, you will need to tweak the location of the program.

To check that it works, commit some code to a repository somewhere. You should be prompted for the passphrase that you entered earlier:

[Screenshot: gpg prompting for the passphrase on commit]

You can then verify that your commit has been signed as follows:

$ git log c05ddaa8 --show-signature -1
commit c05ddaa8e9da289fa5148d370b8ba9e5c419df9a
gpg: Signature made 02/24/16 08:08:39 GMT Standard Time using RSA key ID 1B9DC839^M
gpg: Good signature from "James McKay (Signed Git commits) <code@JAMESMCKAY.NET>" [ultimate]^M
Author: James McKay <code@JAMESMCKAY.NET>
Date:   Wed Feb 24 08:07:47 2016 +0000

You won’t be asked for your passphrase every time. Once you’ve entered it once, gpg spins up a process called gpg-agent.exe, which caches it in memory for a while.
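
If you want control over how long it is cached, gpg-agent reads its settings from a gpg-agent.conf file in your GnuPG home directory—a sketch, assuming the gpg4win location mentioned above:

# ~/AppData/Roaming/GnuPG/gpg-agent.conf
default-cache-ttl 3600
max-cache-ttl 86400

Both values are in seconds; restart gpg-agent for changes to take effect.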

27 Nov

“I’ve never had a problem with it.”

From time to time, when I’m discussing possible tools or techniques for a project with other developers, one of them will defend their choice by saying that they’ve never had a problem with it.

This immediately makes me sceptical. If you say you’ve never had a problem with something technical, it tells me one of two things. Either you’re lying, or you don’t have enough experience with it to be able to recommend it.

Pretty much every tool, every technique, every framework, every language, every best practice, has its pitfalls. It may work well in some situations but not in others. It may only work if you follow a strict set of protocols to the letter. It may have bugs or unintended consequences. It may not scale to meet your needs. It may deliver benefits that you don’t need at the expense of ones that you do. It may not even deliver the benefits it claims to deliver at all. In nearly every case there are multiple ways of getting it badly wrong. More often than not, you don’t discover what the pitfalls are until you’ve been using it for quite some time.

There’s no shame in admitting this. Problems with your approach aren’t necessarily a deal breaker. But if you’re recommending something to me, I want to know that you’ll be able to steer me through the problems and teach me how to avoid them, mitigate them, or recover from them. By saying that you’ve never had a problem with it, you’re telling me that you will not be able to do so. And that is a deal breaker.

09 Jul

Check-in before code review is an antipattern

I’ve never been that satisfied with most explanations that I see on the Internet of why Git is better than Subversion. Usually they wax lyrical about distributed versus centralised workflows or the advantages of branching and merging, but I find that rather misses something, because it takes way too long to get to the point. The question is, what is Git’s biggest advantage, in business terms, over Subversion?

The answer is quite simple. Git supports workflows that Subversion does not—workflows that have significant benefits for your code quality, team collaboration and knowledge sharing.

There are a few such workflows, and the ones that have become popular all have one thing in common. Code gets reviewed before it is merged into the main codebase, rather than waiting till after the fact.

In actual fact, Git’s flexibility about when you conduct code reviews leaves Subversion dead in the water. Pull requests on a web-based Git server allow you to not only review code before it is merged, but to involve the whole team in the code review and even to carry out code reviews on work that is still in progress if you’re that way inclined. In effect, every task, every user story, every feature becomes a complete conversation.

To be fair, you can adopt this workflow with Subversion, using either task branches or patches. But Subversion makes branching and merging so clunky, user-unfriendly and error-prone that it simply isn’t practical; and submitting patches, besides being similarly clunky, blocks further work until your submission has been reviewed. Consequently, in practice, on most Subversion-based projects, commit-before-review is the norm, and review-before-merge is reserved for the most high-impact, high-risk work. And it shows—trunk-based Subversion-hosted projects are almost always of far, far lower quality than pull request-based Git-hosted projects.

Why the difference? Simple. In the commit-before-review workflow, every check-in becomes a fait accompli.

This can be a recipe for disaster.

If bad code gets checked in, you have to explicitly ask for it to be backed out or modified—and you have to follow through to ensure that this is done. Sometimes it can’t be backed out or modified, because other code has been checked in that depends on it. Contentious design decisions can all too easily be steamrollered in without any discussion—and in the event of a disagreement, backing them out can all too easily be filibustered.

There’s also a strong psychological pressure to let standards slip. When you’re checking in code as a fait accompli, it’s all too easy to check in ill-thought-out variable names, poor test coverage and code that doesn’t follow the team’s coding standards, with useless commit summaries to boot. It’s also far too easy for your reviewer (singular—you seldom if ever get more than one person reviewing your code in this model) to pick his or her battles and only focus on the more important things.

On the other hand, when the default action is “reject,” as with pull requests, the onus is on you as the author of the code to prove that your changes are fit for purpose. This gives you all the more incentive to get things right—to stick to the team’s agreed coding conventions, to write tests, to separate concerns correctly, and so on. It also means that you pay more attention to making your code readable and your commit summaries informative. After all, your team-mates (plural) are going to have to make this judgment call based on whether they can understand what you’ve done or not.

Another significant benefit of pull requests is that they dramatically improve knowledge sharing among the team. A new developer may submit a pull request that reinvents methods that already exist, or that violates coding standards that they didn’t know existed. Pull requests are an opportunity for education here—you can easily point them in the right direction. On the other hand, with commit-before-review, because it is so easy to overlook things such as these, opportunities to educate your team-mates get lost.

One other thing bears saying here. Even if you do manage to adopt a pull request-like workflow with Subversion, you still face one major limitation: changesets in Subversion are immutable. With Git, if the commit history of a task branch makes it difficult to review, you can always ask the author to revise it—clarifying commit summaries, squashing superfluous changesets, and perhaps (for experienced Git users) even teasing changesets apart. You can do this quite effectively with the git rebase --interactive command. With Subversion, on the other hand, once it’s in, you’re stuck with it.
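
As a sketch of what that clean-up looks like in practice (the branch name is illustrative):

# Revise the last five commits on your task branch
git rebase --interactive HEAD~5
# ...mark commits as "reword" or "squash" in the editor, then update the remote
git push --force-with-lease origin my-task-branch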

Pull requests are not the only advantage that Git has over Subversion. But they are the most important and the most business-critical. A pull request-based workflow with Git will give you a codebase that is much cleaner and much more robust, with fewer nasty surprises and an informative and useful source history. Trunk-based development in Subversion, on the other hand, can leave you with a very bad taste in your mouth. For this reason, sticking with Subversion raises serious questions about the quality and maintainability of your codebase.

16 Feb

Inseparable concerns

Separation of concerns is often cited as the reasoning behind the traditional three-layer architecture. It is important, otherwise you will end up with a Big Ball of Mud.

However, in order to separate out your concerns, you must first categorise them correctly as either business concerns, presentational concerns or data access concerns. Otherwise you will end up with unnecessary complexity, poor performance, anaemic layers, and/or poor testability.

Unfortunately, most three-layer applications completely fail to categorise their concerns correctly. More often than not this is because it is simply not possible to do so, as some concerns fall into more than one category and can’t be refactored out without introducing adverse effects. I propose the term inseparable concerns for such cases.

The key to separation of concerns is to let it be driven by your tests. Under TDD, the first thing you would do if a particular line of code contained a bug would be to write a failing unit test that would pass given the expected correct behaviour. It is what this test does that tells you whether the code under test is a business concern, a data access concern, or a presentational concern.

It is a presentational concern if the test simulates raw user input, or examines final rendered output. For example, mocking any part of a raw HTTP request (GET or POST arguments, cookies, HTTP headers, and so on), verifying the returned HTTP status code, or examining the output generated by a view. In general, if it’s your controllers or your views that you’re testing, it’s a presentational concern.

It is a business concern if the test verifies the correctness of a business rule. Basically, this means that queries are business concerns, period. If they are not returning the correct results, then they have not implemented some business rule or other correctly. Other examples of business concerns include validation, verifying that the data passed to the database or a web service from a command is correct, or confirming that the correct exception is thrown in response to various failure modes.

It is a data access concern if the test requires the code to hit the database. Note that this is where the so-called “best practice” that your unit tests should never hit the database breaks down: if you are adhering to it strictly, sooner or later you will encounter a bug where it stops you from writing a failing test. Most people, when confronted with such cases, skip this step. Don’t: TDD should take precedence. Set up a test database and write the test already.

It is an inseparable concern if it falls into more than one of the above categories. Pretty much any performance-related optimisation that you do will be an example here. For example, if you have to bypass Entity Framework and drop down to raw SQL, you will have to hit the database to verify that business logic is correct. Therefore, it is both a business concern and a data access concern.
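
To make that concrete, here is a minimal sketch of such a test—in Python with sqlite3 purely for brevity, and with an invented business rule and table; the point is that only a real database can tell you whether the raw SQL implements the rule correctly:

import sqlite3
import unittest

class OverdueInvoiceQueryTests(unittest.TestCase):
    def setUp(self):
        # A real (in-memory) test database, not a mock
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE invoices (id INTEGER, due TEXT, paid INTEGER)")
        self.db.executemany(
            "INSERT INTO invoices VALUES (?, ?, ?)",
            [(1, "2016-01-01", 0), (2, "2099-01-01", 0), (3, "2016-01-01", 1)])

    def test_overdue_invoices_are_unpaid_and_past_due(self):
        # The query under test is raw SQL: a business concern (is the rule
        # right?) that can only be verified by hitting the database.
        rows = self.db.execute(
            "SELECT id FROM invoices WHERE due < date('now') AND paid = 0").fetchall()
        self.assertEqual([(1,)], rows)

if __name__ == "__main__":
    unittest.main()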

Inseparable concerns are much more prevalent than you might expect. IQueryable<T> is the best that we’ve got in terms of making your business and data access layers separable, but, as Mark Seemann points out, it still falls short because NotSupportedException. Another example is calling .Include() on a DbSet to include child entities. Although this is a no-op on Mock<IDbSet<T>>, you can’t verify that you are making the correct calls to .Include() in the first place without hitting the database. Besides which, if you’re mocking DbSet<T> instead of IDbSet<T>, as you’re supposed to be able to do with EF6, calling .Include() throws an exception.

I would just like to stress here that inseparable concerns are not an antipattern—they are a fact of life. All but the simplest of code bases will have them somewhere. The real antipattern is not introducing them, but trying to treat them as if they were something that they’re not.

30 Oct

“That’s just your opinion” means “I’m not listening”

Ever been trying to present a reasoned argument, with evidence, for something, only to be rebuffed with a response like this?

  • “That’s just your opinion.”
  • “Well, everyone’s entitled to their own opinion.”
  • “You just don’t like it because it’s not cool and trendy.”

What they mean is this:

  • “I’m not listening. I’ve got my fingers in my ears. Neener, neener, neener.”


23 Oct

Why you should use a general purpose scripting language for your build scripts

For quite some time now, given the choice, I’ve opted to write my build scripts in plain Python using nothing but the standard libraries. I personally believe (with good reason) that build scripts are best served by a general purpose scripting language such as this, and that domain-specific languages or frameworks for build scripting have little if anything to offer. Most people use DSLs such as NAnt, MSBuild, Rake, Grunt or Gradle for their build scripts simply because they believe That Is How You Are Supposed To Do It, but in most cases it isn’t necessary, and in many cases it is even counterproductive.

In this post, I’d like to explain the reasons why I recommend using general purpose scripting languages and avoiding specialised build frameworks, and to address some commonly held misconceptions about build scripts in general. If I’m saying a lot about MSBuild, that’s because it’s the tool that I have the most experience with; however, many of my points apply to other tools as well, including those that aren’t XML-based.

1. Build scripts are code, not configuration.

Most build frameworks view build scripts as configuration first and code second. This is wrong. Badly wrong.

You see this in the way they adopt a declarative, rather than an imperative, approach, defining your script in terms of “targets” or “tasks” with dependencies between them. This approach can make sense in some situations — in particular, where you have a large number of similar tasks in a complex dependency graph, and you need to allow the build engine to determine the order in which they are run. This is the case, for example, with Visual Studio solutions that consist of a large number of projects.

But top level build scripts don’t work that way. At the topmost level, build scripts are inherently imperative in nature, so a declarative approach doesn’t make a whole lot of sense. Your typical build script consists of a sequence of diverse tasks which are run in an order that you define, or sometimes in a loop based on values in a collection. For example, it may look something like this:

  • Fetch the latest version of your code from source control
  • Delete any leftover files from the previous build
  • Write a file to disk containing version information
  • Fetch your project’s dependencies (NuGet packages, for example)
  • Compile your project
  • Bundle and minify your assets
  • Run your unit tests
  • Run your integration tests
  • Prepare installation packages
  • Deploy your build to the appropriate servers
  • Prepare reports (e.g. code coverage)

Writing your build script imperatively, with each of these steps as a function call in the top level of your code, allows you to see, at a glance, what your script is doing. On the other hand, writing them declaratively, with each task specifying its own dependencies, often requires you to jump around all over your build script just to get a handle on things.
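
To illustrate, the top level of a plain Python build script might look something like this—a sketch, with hypothetical project names and step functions, assuming the tools it calls are on your PATH:

#!/usr/bin/env python
"""Top-level build script: a linear sequence of imperative steps."""
import shutil
import subprocess
import sys

def run(*args):
    # Fail the build immediately if any external tool fails
    subprocess.check_call(list(args))

def clean():
    shutil.rmtree("build", ignore_errors=True)

def fetch_dependencies():
    run("nuget", "restore", "MySolution.sln")

def compile_solution(configuration):
    run("msbuild", "MySolution.sln", "/p:Configuration=" + configuration)

def run_unit_tests():
    run("vstest.console.exe", r"build\MyProject.Tests.dll")

if __name__ == "__main__":
    configuration = sys.argv[1] if len(sys.argv) > 1 else "Release"
    # The whole build, in the order it runs, readable at a glance:
    clean()
    fetch_dependencies()
    compile_solution(configuration)
    run_unit_tests()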

One important thing that build scripts need is control flow structures — conditions, loops, iteration over arrays, parametrised subroutines, variables, and so on. You simply can’t represent these properly with a declarative language. Sure, you can define tasks to handle some of these requirements, such as regex-based find and replace, but that will never be as clear as a purely imperative approach.

I’ve never come across a definitive explanation why build frameworks should all be based around the declarative, configuration-like approach of tasks with dependencies, other than a vague, hand-waving and unsubstantiated claim that “people prefer it that way.” Personally I think it’s more likely that people just saw that this was how make, the granddaddy of build tools, was designed, assumed that it was a Best Practice, and blindly copied it without thinking.

2. Build scripts need to be maintained.

Build scripts don’t tend to change very often — perhaps once every three to six months or so. Consequently it’s tempting to view them as something that you write once and can forget about completely. However, they do change, so readability and maintainability are critical. A well written build script can make all the difference between a change taking half an hour and it taking half a sprint; between it working as intended and being riddled with bugs.

This means, of course, that XML-based build languages, such as MSBuild or NAnt, are a very bad idea. This is nothing to do with a lack of “cool” — it’s a lack of readability and maintainability, pure and simple. XML simply isn’t capable of expressing the kind of control flow structures that you need in a succinct, readable manner. MSBuild is particularly bad here. Its lack of support for looping, iteration or parametrised subroutines makes it difficult if not impossible to write anything more complex than the simplest of build scripts without resorting to painful amounts of copy and paste code. Since DRY is a vital discipline in keeping your code maintainable, anything that forces you to violate it as much as MSBuild does should be avoided with extreme prejudice.

To mitigate the problem, Ant, NAnt and MSBuild allow you to embed snippets of code in other languages, such as PowerShell. Besides the fact that the syntax to do so is so verbose and cumbersome that it’s scarcely worth it, this just raises the question: why not just use PowerShell end to end instead?

3. Build scripts need to be run from the command line.

It’s all too common to find build scripts that are very tightly integrated with the Continuous Integration server. This usually happens when you have vast swathes of configuration settings in TeamCity, TFS, Jenkins or what have you. This causes two problems: first, you have a lot of important and potentially breaking detail that isn’t checked into source control; second, it becomes very difficult if not impossible to run your build on your local machine, end to end, from the command line.

If you can’t run your build from the command line, debugging it will be painful. Every iteration of your edit-compile-test loop will require a separate check-in and a sit-on-your-hands wait for several minutes until it either completes or breaks. This is a very inefficient and wasteful way of doing things. It can also cause problems when you have to track down a regression with git bisect, because you’ll have a whole string of broken revisions to contend with.

4. Build scripts have few other domain specific requirements, if any.

Apart from this, there are only two other requirements that your build scripts have. Your build language needs to be interpreted rather than compiled (otherwise you’ll have a chicken-and-egg problem), and it needs to be able to run other programs: your compiler; NuGet to fetch your dependencies; your test runner; and so on. But that’s pretty much it. Just about any general purpose scripting language — Python, Ruby, PowerShell, bash, DOS batch files, heck even PHP if you’re that way inclined — will fit the bill.

What about the specific (N)Ant/MSBuild tasks that you need to call? Most of these can be implemented quite simply as calls to either the language’s standard library or a command line interface.
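
For example, the classic <Copy> task reduces to a one-liner against Python’s standard library, and anything without a library equivalent is just a command line call (the paths and arguments here are hypothetical):

import shutil
import subprocess

# The equivalent of an MSBuild <Copy> task
shutil.copy("app.config.template", r"build\app.config")

# Everything else is a call to the relevant command line tool
subprocess.check_call(["nuget", "pack", "MyProject.nuspec"])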

Some .NET developers don’t like this approach because they say that using, say, Python or PowerShell would mean having to learn a new language. Personally I find this a very strange argument, because if you’re using MSBuild, you’re doing that already anyway. Not only that, but the learning curve you’re climbing is actually steeper: the conceptual differences between, say, C# and Python are very superficial compared to the conceptual differences between C# and MSBuild. Besides, learning a scripting language is a skill that can be transferred to other problem domains if necessary, whereas MSBuild is a very specialised and niche language that only ever gets used for build scripts for .NET projects.

Just because you are presented with something that describes itself as a build tool doesn’t mean to say you have to use it. Aim to choose tools and languages that allow you to write code that is easy to read, understand and maintain. You’ll be much more productive and much less stressed — and the people who have to maintain your code after you will thank you for it.


16 Oct

Moving a problem from one part of your codebase to another does not eliminate it

This is, of course, a statement of the obvious, but I’ve come across quite a few “best practices” in recent years that violate it.

People come up with some design pattern or other, telling you that it solves some problem or other. At first sight, it appears that it does eliminate the problem from one part of your codebase, but on closer inspection it turns out that it merely shifts it to another, and sometimes even introduces other problems in the process.

I first noticed this in a Web Forms application, where our resident Best Practices Guy berated me for using inline data binding expressions in the .aspx files. These were actually simple data binding expressions, with no business logic, a bit like this:

<asp:Repeater ID="rptData" runat="server">
  <ItemTemplate>
    <p>
      <asp:Label ID="lblParagraph" Text='<%# Eval("Text") %>' runat="server" />
    </p>
  </ItemTemplate>
</asp:Repeater>

This is just like you’ve seen in every Web Forms tutorial since 2001, but he said I should have been looking up the label in the ItemDataBound event and assigning it there instead:

// Wired up to rptData's ItemDataBound event; runs once per repeater item
void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    var label = e.Item.FindControl("lblParagraph") as Label;
    if (label != null)
    {
        label.Text = ((LineItem)e.Item.DataItem).Text;
    }
}

He claimed that it would prevent problems if I’d mistyped the property name in the .aspx file, because the C# compiler would catch it.

The reason this is a fallacy is that it just moves the problem into your code-behind file. You’re just as likely to mistype the name of the control — lblParagraph — in the string and end up with exactly the same problem. Only it’ll be easier to miss it in testing because the null check means that it will fail silently. On top of that, you’re using more than twice as many lines of code spread over two different files rather than just one to do the same thing.

I noticed a similar problem when I was evaluating OOCSS — a design pattern that’s supposed to reduce duplication in your CSS, by having you declare separate CSS classes for different functional aspects such as “button” or “highlighted” or “media”. Twitter Bootstrap uses it fairly heavily. Its selling point is that it’s supposed to make your CSS more maintainable and lightweight without using a pre-processor by reducing duplication in your stylesheets. Unfortunately, in the process, it introduces a lot of duplication and weight into your HTML because you now have to set additional class declarations on a huge number of elements.
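
A sketch of the effect, with Bootstrap-flavoured class names purely for illustration—every element now repeats the same stack of classes:

<a class="btn btn-primary btn-large" href="/save">Save</a>
<a class="btn btn-primary btn-large" href="/publish">Publish</a>
<a class="btn btn-primary btn-large" href="/export">Export</a>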

Then of course there’s our old friend, the Repository Facade, whose proponents tell you that it reduces tight coupling between your business layer and your ORM. Of course a generic Repository Facade does this at the expense of making it impossible to optimise your queries for performance, but with a specialised one — where you’re moving your queries into your Repository Facade itself — you’re just moving the tight coupling from one part of your codebase to another. It doesn’t reduce the amount of work that you would have to do to switch your data source in the slightest, and in the process it prevents you from unit testing your business logic independently of the database.