Last night I went to the Brighton Alt.net meeting for the first time in a while. It was good to spend a couple of hours discussing various .NET-related topics with some friendly, like-minded people. One of the topics was dependency injection: specifically, when is a project big enough to warrant using it, and when might you favour manually wiring up your application? From the DI crowd, myself included, the answer was that you should use dependency injection when your application has one or more classes, and that we never favour manually wiring together dependencies.
I am a firm believer in the SOLID principles. I don't just mean that I have heard of them and have put the acronym on my CV: I know what all the letters stand for, and I can give examples of each principle in code that I have written. When I follow these principles I consider my code to be well crafted. If I encounter impediments to progress within a codebase I can usually spot one or more SOLID-compliant refactorings that will solve the problem.
Dependency injection is the D in SOLID (Edit: as a couple of people have correctly pointed out, the D is actually dependency inversion. If you've adhered to the dependency inversion principle, you can use dependency injection to wire up a class's dependencies rather than letting it do this itself). So in a sense you should use DI whenever you want to write well-crafted code. I think the real issue is that if the thought of using dependency injection seems like an overhead, or somehow daunting, it is probably because you haven't learnt how to use it effectively. The question is really about what size of project warrants learning dependency injection. Rather than thinking in terms of when a project is big enough, I'd recommend starting on a small, simple project. Whenever I learn something new I try to minimise complexity elsewhere so that I can focus my effort on the thing I'm trying to learn.
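To make the distinction concrete, here is a minimal sketch of dependency inversion plus constructor injection. All of the names (IMessageSender, OrderProcessor and so on) are hypothetical, invented for illustration:

```csharp
// The high-level class depends on an abstraction and has its dependency
// injected, rather than constructing a concrete collaborator itself.
public interface IMessageSender
{
    void Send(string message);
}

// One possible implementation; a real application might have an EmailSender,
// an SmsSender, etc. This one simply records what was sent.
public class RecordingSender : IMessageSender
{
    public string LastMessage { get; private set; }
    public void Send(string message) => LastMessage = message;
}

public class OrderProcessor
{
    private readonly IMessageSender sender;

    // Constructor injection: the caller (or a DI container) decides which
    // IMessageSender implementation this class gets.
    public OrderProcessor(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void Process(string orderId)
    {
        // ...order processing logic would go here...
        sender.Send("Processed order " + orderId);
    }
}
```

OrderProcessor never news up a sender itself, so swapping implementations (or substituting a fake in a test) requires no change to the class.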
You'll find it easier if you design your interfaces and classes according to SOLID principles too. Use TDD to help drive out the design; again, if you're unfamiliar with this, a small project is a great place to start. Try applying the S (single responsibility principle), which helps to keep the number of dependencies down, and the L (Liskov substitution principle), which will guide you towards depending on interfaces rather than implementations.
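Depending on interfaces pays off immediately under TDD, because a test can substitute a hand-rolled fake for the real implementation. A small sketch, with invented names (IClock, ReminderService) standing in for whatever your domain needs:

```csharp
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation: delegates to the system clock.
public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Fake for tests: time is fixed, so behaviour is deterministic.
public class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}

public class ReminderService
{
    private readonly IClock clock;

    public ReminderService(IClock clock)
    {
        this.clock = clock;
    }

    public bool IsOverdue(DateTime due) => clock.UtcNow > due;
}
```

Because both clocks honour the same contract, ReminderService behaves identically with either, which is exactly the substitutability the L is asking for.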
Once you have a project that is loosely coupled via interfaces, you'll want to move on to wiring it up using a DI container. Containers offer a ton of advanced features, but for now stick with the basics. Container configuration can get very complicated, and if it seems to be getting out of hand you probably need to simplify your design (the same goes for complicated tests).
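The basics amount to three steps: register your mappings, build the container, resolve your top-level object. The syntax varies between containers; as one example, here is the shape of it using Microsoft.Extensions.DependencyInjection (a NuGet package; IGreeter and Greeter are hypothetical):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Hypothetical abstraction and implementation, for illustration only.
public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) => "Hello, " + name;
}

public static class CompositionRoot
{
    public static IGreeter Resolve()
    {
        // 1. Register: map the abstraction to an implementation.
        var services = new ServiceCollection();
        services.AddTransient<IGreeter, Greeter>();

        // 2. Build the container.
        var provider = services.BuildServiceProvider();

        // 3. Resolve: the container supplies the configured implementation
        //    and builds its dependency graph for you.
        return provider.GetRequiredService<IGreeter>();
    }
}
```

Whichever container you pick, registration and resolution should happen in one place, at the application's entry point, not scattered through your code.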
Most containers can be configured automatically; for example, you can register all classes whose namespace contains a certain word. I've recently been experimenting with having a 'Components' part in each namespace for the interfaces and classes that I want to register with my DI container. This allows me to leverage automatic registration without accidentally registering classes that don't belong in the container.
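The convention can be sketched with plain reflection; most containers offer their own assembly-scanning APIs that do the equivalent. The MyApp.Components namespace and Widget types below are hypothetical examples:

```csharp
using System.Linq;
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

namespace MyApp.Components
{
    // Example component living in a ".Components" namespace.
    public interface IWidget { }
    public class Widget : IWidget { }
}

public static class ComponentRegistration
{
    // Register every concrete class found in a ".Components" namespace
    // against the interfaces it implements. Classes elsewhere in the
    // assembly are ignored, so nothing is registered by accident.
    public static void AddComponents(this IServiceCollection services, Assembly assembly)
    {
        var components = assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract)
            .Where(t => (t.Namespace ?? "").EndsWith(".Components"));

        foreach (var type in components)
        {
            foreach (var contract in type.GetInterfaces())
            {
                services.AddTransient(contract, type);
            }
        }
    }
}
```

Adding a new component then becomes a matter of putting it in the right namespace; no container configuration needs to change.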
Beware version compatibility if you are pulling DI and related packages from NuGet: some packages have version-specific dependencies. If you encounter these problems, learn how to use assembly binding redirects and NuGet's Update-Package -Version command.
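For reference, a binding redirect lives in your app.config or web.config and tells the runtime to load one version of an assembly wherever an older one is requested. The assembly name, public key token and version numbers below are placeholders:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect every older version of this assembly to the one
             version actually deployed with the application. -->
        <assemblyIdentity name="Some.Dependency"
                          publicKeyToken="1234567890abcdef"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```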
Remember, any friction you encounter using your DI container is part of the learning process. Once you’re familiar with the tool you shouldn’t really notice it.
You'll also need to decide which container to use; there are a plethora to choose from. You'll probably find performance data showing which is fastest. In reality even the slowest container takes a relatively short time to configure and resolve dependencies, and it doesn't happen very often, but if performance is your thing let that be your guide.
Aside from dependency resolution, there are a couple of other container features I recommend learning to use.

Interceptors are very useful for cross-cutting concerns like logging and transaction management. For example, if I configure my container with a logging interceptor, then when I resolve a dependency I get a proxy back rather than the implementation class I configured. The proxy intercepts each method call I make on the interface and logs the class name, method name, arguments, return value and elapsed time. This happens for all configured dependencies, which means I don't need bits of logging code in each method I write. The result is cleaner, more readable code.

It's also worth getting to know the lifecycle options supported by your container, because sooner or later you'll encounter problems if you don't. A singleton instance exists once in your application, whereas you'll receive a new instance of a transient dependency each time one is requested. Sometimes you won't want a singleton but you will want to share an instance within a certain context, e.g. a web request (per-request lifecycle). If you need to do something similar outside of a web application you can use a scoped lifecycle: all non-singleton dependencies are scoped to a root object, shared by the dependencies of that object, and disposed of when the root object is disposed.

Finally, if your DI container provides some kind of installer interface for encapsulating configuration, try to use it. This makes it easier for multiple projects (i.e. your application and your integration tests) to share the same DI configuration.
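A logging interceptor of the kind described above can be sketched with Castle DynamicProxy (the Castle.Core NuGet package); containers such as Windsor can apply one automatically to everything they resolve. ICalculator is a hypothetical service used to demonstrate the proxy:

```csharp
using System;
using System.Diagnostics;
using Castle.DynamicProxy;

// Runs around every method call made through the proxy.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        invocation.Proceed(); // call through to the real implementation
        stopwatch.Stop();

        // Class name, method name, arguments, return value, elapsed time.
        Console.WriteLine(
            "{0}.{1}({2}) returned {3} in {4}ms",
            invocation.TargetType.Name,
            invocation.Method.Name,
            string.Join(", ", invocation.Arguments),
            invocation.ReturnValue,
            stopwatch.ElapsedMilliseconds);
    }
}

// Hypothetical service; note there is no logging code in the method itself.
public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) => a + b;
}
```

Wired up by hand (a container would normally do this for you), resolving ICalculator would return `new ProxyGenerator().CreateInterfaceProxyWithTarget<ICalculator>(new Calculator(), new LoggingInterceptor())`, and every call to Add would be logged transparently.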
So, in summary: I think you should always use DI, because it flows directly from the dependency inversion principle, and observing the SOLID principles will increase your code quality. I recommend starting small to get to know your container. Explore interception if your container supports it, as this can help keep your code clean and focussed. Be aware of the object lifecycle options and when to use them. Above all, treat any friction or complexity in using DI as an opportunity to simplify your code.