james mckay dot net
because there are few things that are less logical than business logic

Posts tagged: .net

The state of IOC containers in ASP.NET Core

One of the first things that I had to do at my new job was to research the IOC container landscape for ASP.NET Core. Up to now we’ve been using the built-in container, but it’s turned out to be pretty limited in what it can do, so I’ve spent some time looking into the alternatives.

There is no shortage of IOC containers in the .NET world, some of them with a history stretching as far back as 2004. But with the arrival of .NET Core, Microsoft has now made dependency injection a core competency baked right into the heart of the framework, with an official abstraction layer to allow you to slide in whichever one you prefer.

This is good news for application developers. It is even better news for developers of libraries and NuGet packages, as they can now plug straight into whatever container their consumer uses, and no longer have to do dependency injection by hand or include their own copies of TinyIOC. But for developers of existing containers, it has caused a lot of headaches. And this means that not all IOC containers are created equal.

Conforming Containers in .NET Core

Originally, the .NET framework provided just a simple abstraction layer for IOC containers to implement: the IServiceProvider interface. This consisted of a single method, GetService(Type t). As such, all an IOC container was expected to do was to return an instance of the requested service type, and let the consumer do with it what it liked.
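To see just how minimal that original contract was, here is a sketch of a "container" that satisfies it in full. The Greeter and NaiveProvider types are invented for illustration; only IServiceProvider itself is part of the framework.

```csharp
using System;

// A service we want the container to hand out.
class Greeter
{
    public string Greet() => "hello";
}

// The entirety of what the original abstraction demanded:
// return a service for the requested type, or null if unknown.
// No registration API, no lifecycle management, nothing else.
class NaiveProvider : IServiceProvider
{
    public object GetService(Type serviceType) =>
        serviceType == typeof(Greeter) ? new Greeter() : null;
}

class Program
{
    static void Main()
    {
        IServiceProvider provider = new NaiveProvider();
        var greeter = (Greeter)provider.GetService(typeof(Greeter));
        Console.WriteLine(greeter.Greet()); // prints "hello"
    }
}
```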

But there’s a whole lot more to dependency injection than just returning a service that you’re asked for. IOC containers also have to register the types to be resolved, and then — if required to do so — to manage their lifecycles, calling .Dispose() on any IDisposable instances at the appropriate time. When you add in the possibility of nested scopes and custom lifecycles, it quickly becomes clear that there’s much more to it than just resolving services.
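As an illustration of that lifecycle management, here is a sketch using Microsoft's own container (it assumes the Microsoft.Extensions.DependencyInjection NuGet package is referenced; the DbConnection class is invented for the example). The container, not the consumer, is responsible for calling Dispose when the scope ends.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// A disposable service whose lifetime the container must manage.
class DbConnection : IDisposable
{
    public void Dispose() => Console.WriteLine("connection disposed");
}

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        // Scoped: one instance per scope, disposed when the scope ends.
        services.AddScoped<DbConnection>();

        using (var provider = services.BuildServiceProvider())
        using (var scope = provider.CreateScope())
        {
            var conn = scope.ServiceProvider.GetRequiredService<DbConnection>();
            // Work with conn here. We never call conn.Dispose() ourselves:
            // disposing the scope below does it for us.
        }
        // "connection disposed" has been printed by this point.
    }
}
```

Nested scopes and custom lifetimes all build on this same mechanism, which is why it is so much harder to abstract over than plain service resolution.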

And herein lies the problem. For with the introduction of Microsoft.Extensions.DependencyInjection and its abstractions, Microsoft now expects containers to provide a common interface to handle registration and lifecycle management as well.

This kind of abstraction is called a Conforming Container. The specification that conforming containers have to follow is defined in a set of more than fifty specification tests in the ASP.NET Dependency Injection repository on GitHub. It includes such requirements as:

  • When you register multiple services for a given type and request a single one, the one you get back has to be the last one registered.
  • When you request all of them, they have to be returned in the order that they were registered.
  • When a container is disposed, it has to dispose services in the reverse order to that in which they were created.
  • There are also rules around which constructor to choose, registration of open generics, requesting types that haven’t been registered, resolving types lazily (Func<TService> or Lazy<TService>) and a whole lot more.
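The first two ordering rules above can be demonstrated directly against Microsoft's own container (again assuming the Microsoft.Extensions.DependencyInjection package; IGreeting and its implementations are invented for the example). Any conforming container must produce the same results:

```csharp
using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

interface IGreeting { string Text { get; } }
class English : IGreeting { public string Text => "hello"; }
class French  : IGreeting { public string Text => "bonjour"; }

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<IGreeting, English>();
        services.AddTransient<IGreeting, French>(); // registered last

        using var provider = services.BuildServiceProvider();

        // Rule 1: requesting a single service returns the last registration.
        Console.WriteLine(provider.GetRequiredService<IGreeting>().Text);
        // prints "bonjour"

        // Rule 2: requesting all of them preserves registration order.
        var all = provider.GetServices<IGreeting>().Select(g => g.Text);
        Console.WriteLine(string.Join(", ", all));
        // prints "hello, bonjour"
    }
}
```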

These specification tests are also available as a NuGet package.

There are two points worth noting here. First, conforming containers MUST pass these tests, otherwise they will break ASP.NET Core or third party libraries. Secondly, some of these requirements simply cannot be catered for in an abstraction layer around your IOC container of choice. If a container disposes services in the wrong order, for example, there is nothing you can do about it. Cases such as these require fundamental and often complex changes to how your container works, some of which may be breaking changes.

For what it’s worth, this is a salutary lesson for anyone who believes that they can make their data access layer swappable simply by wrapping it in an IRepository<T> and multiple sets of models. Data access layers are far more complicated than IOC containers, and the differences between containers are small change compared to what you’ll need to cater for if you want to swap out your DAL. As for making entire frameworks swappable, I’m sorry Uncle Bob, but you’re simply living in la-la land there.

All containers are equal, but some are more equal than others

So should we just stick with the default container? While many developers will, that is not Microsoft’s intention. The built-in container was explicitly made as simple as possible and is severely lacking in useful features. It cannot resolve unregistered concrete instances, for example. Nor does it implicitly register Func<T> or Lazy<T> (though the latter can be explicitly registered as an open generic). Nor does it have any form of validation or convention-based registration. It is quite clear that Microsoft wants us to swap it out for an alternative implementation of our choice.
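For what it's worth, the open generic workaround for Lazy<T> looks something like the sketch below. The Lazier<T> wrapper is a name invented for this example, not a framework type, and ExpensiveService is likewise illustrative; the Microsoft.Extensions.DependencyInjection package is assumed.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

class ExpensiveService
{
    public ExpensiveService() => Console.WriteLine("constructed");
    public int Answer => 42;
}

// A wrapper that defers resolution until .Value is first touched.
// "Lazier<T>" is a made-up name for this sketch, not a framework type.
class Lazier<T> : Lazy<T> where T : class
{
    public Lazier(IServiceProvider provider)
        : base(() => provider.GetRequiredService<T>()) { }
}

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<ExpensiveService>();
        // The explicit open generic registration mentioned above:
        services.AddTransient(typeof(Lazy<>), typeof(Lazier<>));

        using var provider = services.BuildServiceProvider();
        var lazy = provider.GetRequiredService<Lazy<ExpensiveService>>();
        // Nothing has been constructed yet; "constructed" prints only now:
        Console.WriteLine(lazy.Value.Answer);
    }
}
```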

However, this is easier said than done. Not all IOC containers have managed to produce an adapter that conforms to Microsoft’s specifications. Those that have managed it have experienced a lot of pain in doing so, and in some cases have warned that behavioral differences will remain unresolved.

For example, the authors of SimpleInjector have said that some of their most innovative features — specifically, those that support strong, early validation of your registrations — are simply not compatible with Microsoft’s abstractions. Travis Illig, one of the authors of Autofac, noted that some of the problems he faced were incredibly complex. Several commenters on the ASP.NET Dependency Injection GitHub repo expressed concerns that the abstraction is fragile with a very high risk that any changes will be breaking ones.

There are also concerns that third party library developers might only test against the default implementation, and that subtle differences between containers, which are not covered by the specification, may end up causing problems. Additionally, there is a concern that by mandating a standard set of functionality that all containers MUST implement, Microsoft might be stifling innovation by making it hard (or even impossible) to implement features that nobody has thought of yet.

But whether we like it or not, that is what Microsoft has decided, and that is what ASP.NET Core expects.

Build a better container?

So what is one to do? While these issues are certainly a massive headache for authors of existing IOC containers, it remains to be seen whether they are an issue for authors of new containers, written from scratch to implement the Microsoft specification from the ground up.

This is the option adopted by Jeremy Miller, the author of StructureMap. He recently released a new IOC container called Lamar, which, while it offers a similar API to StructureMap’s, has been rebuilt under the covers from the ground up, with the explicit goal of conforming to Microsoft’s specification out of the box.

Undoubtedly, there will be other new .NET IOC containers coming on the scene that adopt a similar approach. In fact, I think this is probably a good way forward, because it will allow for a second generation of containers that have learned the lessons of the past fifteen years and are less encumbered with cruft from the past.

Whether or not the concerns expressed by authors of existing containers will also prove to be a problem for authors of new ones remains to be seen. I personally suspect that in these cases the concerns may be somewhat overblown, but it will be interesting to see what comes out in the wash.

What should a .NET renaissance look like?

Aaron Stannard has an interesting blog post in which he talks about all the different ways in which the .NET scene has improved in the past few years. There’s certainly a lot going on in the Microsoft ecosystem to get .NET developers excited, and he mentions six areas in particular where this is evident:

  1. The decoupling of .NET from Windows
  2. The new-found focus on CLR performance
  3. Moving .NET’s tooling to a cross-platform model
  4. The .NET user base is embracing the OSS ecosystem as a whole
  5. The direction of .NET development is pushing users further down into the details of the stack
  6. Microsoft’s platform work being done out in the open

Now these all look pretty exciting, but the litmus test of whether we are seeing a .NET renaissance is whether or not it can attract people who have “left .NET” back into the fold.

I have had little involvement in .NET myself over the past year, since I moved onto a team doing DevOps work on AWS for mostly LAMP-based projects. While I wouldn’t describe myself as having “left .NET” never to return, there is still one very important thing that needs to happen before I would consider it an attractive prospect to pick up that particular baton again.

The .NET community as a whole needs to provide evidence that it is becoming more open to options from beyond the Microsoft ecosystem.

When you move beyond the .NET ecosystem, one of the first things you find is that there is much more cross-flow between the different technology stacks. Developers are much more likely to be familiar with (or at least, willing to try out) other languages outside their usual ambit. Ruby developers won’t think twice about getting their hands dirty with Python, or Go, or Scala, or even C#, if the need arises. Any solution that gets a good enough reputation and meets your business need will be up for consideration — ElasticSearch, DataDog, Terraform, Consul, you name it. Different languages are mixed and matched — and all the more so with the increasing popularity of microservice-based architectures.

By contrast, for many years, most .NET developers have shown very little interest in anything beyond the Microsoft ecosystem. In fact, some of them have even regarded other technology stacks with suspicion if not outright hostility. There’s a widespread attitude in many .NET teams in many companies that unless something is included out of the box in Visual Studio, documented first and foremost on MSDN, promoted by Microsoft MVPs, and certified by Microsoft examinations, you’ve no business whatsoever paying the slightest bit of attention to it. If you’ve ever been told to do something a certain inefficient and cumbersome way for no reason other than That Is How Microsoft Wants You To Do It, or been given a funny look for suggesting you use Python for something, you’ll know exactly what I mean.

Nowhere was this more evident than in the Silverlight community. The reason why Silverlight died and HTML5 took over in its place was that browsers and platforms which were outside of Microsoft’s control — starting with the iPhone and the iPad — started blocking it. Yet Silverlight developers almost unanimously put the blame for Silverlight’s demise at Microsoft’s feet. The fact that there were decisions being made by other browser manufacturers that had to be considered didn’t even seem to enter their minds.

When your team has a healthy level of interaction with other parts of the software development community, you start to see many, many benefits. You learn from other people’s mistakes as well as your own. Your attention is drawn to solutions to problems that you didn’t realise were problems. You get an element of peer review for your best practices. You get a better idea of which tools and technologies are likely to stick around and which aren’t. On the other hand, with a paternalistic, spoon-fed attitude, you end up turning up late to the party and very often completely misunderstanding the processes and tools that are being suggested to you. It’s amazing to visit the ASP.NET architecture forum and see how many .NET developers still cling to horrendous, outdated “best practices” such as n-tier, business layers that don’t contain any business logic, or misguided and ultimately futile attempts to make Entity Framework swappable for unknown mystery alternatives.

There are of course many .NET teams that get these things right, and that do successfully engage with teams from elsewhere. But I’d like to see a whole culture shift right across the entire .NET ecosystem. I’d like to see it become commonplace and widespread for .NET teams to go beyond embracing just those bits and pieces from elsewhere that get Microsoft’s imprimatur, such as Git, or bash on Ubuntu on Windows, or Angular.js. I’d like to see a greater willingness to try tools such as make or grunt instead of MSBuild; Terraform instead of Azure Resource Manager; ElasticSearch/Logstash/Kibana instead of SCOM; and so forth. I’d like to see a much greater willingness to augment C# codebases with utilities and helpers written in Python, Ruby or Go where it makes sense to do so.

I’d like to see them fully embrace twelve factor apps, configuration settings in environment variables rather than the abomination that is web.config, container-based architecture, and immutable servers treated as cattle rather than pets. I’d like to see innovations in software development tooling and techniques getting adopted by the .NET community much faster than they have done up to now. You shouldn’t have to wait for Microsoft to take notice and give their imprimatur before you start using tools such as Git, Docker or Terraform, when everyone else has got there already.

Once we get to that point, we can truly say that we are seeing a .NET renaissance.

Keep the number of projects in your solution to a minimum

There are a lot of common practices among .NET developers that get touted as “best practices” although they are nothing of the sort. A lot of them seem to be leftovers from the days about ten years ago when there was a lot of hype about n-tier, even though the people promoting it didn’t properly understand the problems it was supposedly trying to solve. One such example is too many projects in a single solution.

In general, you should always aim to keep the number of projects in your solution to an absolute minimum. For a simple web application, your solution requires exactly two projects: the application itself and your unit tests. For an application with a web front end and a console application, your solution requires four projects: the shared components, the web front end, the console application, and your unit tests. Products that deploy different applications to different servers may need one or two more for shared components, for instance, but the number should still be kept as small as possible.

Your solution does not — I repeat, does not — require separate projects for your controllers, your model, your business services, your repository, your shared components, your interfaces, and your wrappers round third party web services.

Your solution does not require multiple unit test projects. Some people create a separate unit test project for every main project in their solution. This is completely unnecessary: why not have a single unit test project for all of them? Of course, it may be worth having one project for fast unit tests, and another one for slower tests that need to run against a database, but over and above that, reasons for creating extra test projects are few and far between.

Your solution does not require multiple front end applications of the same type for deployment on the same server. You may need a back-end admin application on one server and a front-end public facing website on another, but you don’t need two back-end admin web applications for the same solution.

There are three reasons why too many assemblies are harmful:

1. Too many assemblies slow down compilation. When you have to compile a single project that references thirty external dependencies, the C# compiler only has to pull in these referenced assemblies once. When you have to compile thirty projects that reference thirty external dependencies each, the C# compiler has to pull in all thirty dependencies every time — a grand total of nine hundred referenced assemblies. This adds a lot of time onto your edit-compile-test loop, which in turn knocks you right out of the zone and makes the whole development process feel like wading through treacle.

2. Too many assemblies make dependency management a pain. If you have to add a third party reference to thirty different projects, it is a massive, painful violation of DRY. If you have to swap out one reference for another, it is painful. If you have to add a third party reference to only a subset of those thirty, it is even more painful because you have to work out which assemblies require it and which don’t. And don’t even get me started on the problems you might face if you end up with two different projects referencing the same assembly from two different places within the bowels of your third party dependencies directory.

3. Too many front-end projects in particular make configuration and release management a pain. If you have two web front end projects for deployment on the same server, you have to configure them both together and deploy them both together. The more configuration you have to manage, the greater your risk of making a mistake. When you add a new configuration option, you have to update several different applications, and you increase the risk that you might miss one. If you have to change Copy Local from False to True for some assembly or other, you have to go through all your front end applications to make sure this is done correctly. Again, it’s a violation of DRY.

The main reason why people advocate a lot of projects in their solution is to attempt to keep the different logical parts of their code separate, so, for instance, they aren’t referencing System.Web from within the data access layer, or the data access layer directly from the UI, and they aren’t introducing circular dependencies. In practice, it simply isn’t worth it. If dependencies between your classes and namespaces really bother you, a far simpler alternative is to buy a licence for NDepend instead. Certainly, you should have a very, very good reason to add a new project to your solution, and you should consolidate existing projects wherever you can.