When should I use Dependency Injection?

Last night I went to the Brighton Alt.net meeting for the first time in a while. It was good to spend a couple of hours discussing various .NET-related topics with some friendly, like-minded people. One of the topics was dependency injection, and specifically when is a project big enough to warrant using it, and when might you favour manual wiring up of your application. From the DI crowd, myself included, the answer was that you should use dependency injection when your application has one or more classes, and that we never favour manually wiring together dependencies.

I am a firm believer in SOLID principles. I don’t just mean that I have heard of them and have put the acronym on my CV. I know what all the letters stand for and I can give examples of each principle in code that I have written. When I follow these principles I consider my code to be well crafted. If I encounter impediments to progress within a codebase I can usually spot one or more SOLID compliant refactors that will solve the problem.

Dependency injection is the D in SOLID (Edit: as a couple of people have correctly pointed out, the D is actually dependency inversion. If you’ve adhered to the dependency inversion principle you can use dependency injection to wire up a class’s dependencies rather than letting it do this itself). So in a sense you should use DI whenever you want to write well crafted code. I think the real issue here is that if the thought of using dependency injection seems like an overhead or somehow daunting, then it is probably because you’ve not learnt how to use it effectively. In a sense the question is really about what size of project warrants learning dependency injection. Rather than thinking in terms of when a project is big enough, I’d recommend starting on a small, simple project. Whenever I learn something new I try to minimise complexity elsewhere so that I can focus my effort on the thing I’m trying to learn.
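To make the distinction concrete, here is a minimal sketch (the class and interface names are my own invention for illustration). The constructor parameter is the dependency inversion part; supplying the concrete implementation from outside, by hand or via a container, is the dependency injection part.

```csharp
using System;

// Dependency inversion: ReportService depends on the IClock abstraction,
// not on DateTime.UtcNow directly.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

public class ReportService
{
    private readonly IClock _clock;

    // Dependency injection: the concrete clock is supplied from outside
    // (by a container, or by hand) rather than new'd up inside the class.
    public ReportService(IClock clock)
    {
        _clock = clock;
    }

    public string Stamp()
    {
        return "Report generated at " + _clock.UtcNow.ToString("O");
    }
}
```

Wiring by hand is just `new ReportService(new SystemClock())`; a container does exactly the same thing for you, for every class in the graph.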

You’ll find it easier if you design your interfaces/classes according to SOLID principles too. Use TDD to help drive out the design. Again, if you’re unfamiliar with this a small project is a great place to start. Try applying the S (single responsibility principle) which helps to keep the number of dependencies down and the L (Liskov substitution principle) which will guide you towards depending on interfaces rather than implementations.

Once you have a project which is loosely coupled via interfaces you’ll want to move on to wiring that up using a DI container. There are a ton of advanced features but for now stick with the basics. Container configuration can get very complicated and if it seems like it is getting out of hand you probably need to simplify your design (the same goes for complicated tests).

Most containers can be configured automatically, for example you can register all classes whose namespace contains a certain word. I’ve recently been experimenting with having a ‘Components’ part in each namespace for interfaces and classes which I want to register with my DI container. This allows me to leverage automatic registration without accidentally registering classes which don’t belong in the container.
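The exact registration API depends on your container, but the convention itself is easy to sketch with plain reflection (all names here are hypothetical): only concrete classes living in a *.Components namespace are picked up, so DTOs and other non-component types stay out of the container.

```csharp
using System;
using System.Linq;
using System.Reflection;

namespace MyApp.Orders.Components
{
    // Lives in a Components namespace, so it will be registered.
    public interface IOrderValidator { }
    public class OrderValidator : IOrderValidator { }
}

namespace MyApp.Orders
{
    // A plain DTO outside any Components namespace - it stays out of the container.
    public class OrderDto { }

    public static class ComponentScanner
    {
        // Returns the concrete classes that a container's bulk-registration API
        // (e.g. Windsor's Classes.FromThisAssembly().Where(...)) would pick up.
        public static Type[] FindComponents(Assembly assembly)
        {
            return assembly.GetTypes()
                .Where(t => t.IsClass && !t.IsAbstract)
                .Where(t => t.Namespace != null && t.Namespace.EndsWith(".Components"))
                .ToArray();
        }
    }
}
```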

Beware version compatibility if you are pulling DI and related packages from NuGet. Some packages have version-specific dependencies. If you encounter these problems, learn how to use assembly binding redirects and the NuGet Update-Package -Version command.

Remember, any friction you encounter using your DI container is part of the learning process. Once you’re familiar with the tool you shouldn’t really notice it.

You’ll also need to decide which container to use. There is a plethora to choose from. You’ll probably find performance data showing which is the fastest. In reality even the slowest container takes a relatively short time to configure and resolve dependencies, and it doesn’t happen very often, but if performance is your thing let that be your guide.

Aside from dependency resolution there are a couple of other container features I recommend learning to use. Interceptors are very useful for cross-cutting concerns like logging and transaction management. For example, if I configure my container with a logging interceptor, when I resolve a dependency I get a proxy back rather than the implementation class I configured. The proxy intercepts each method call I make on the interface and logs the class name, method name, arguments, return values and the elapsed time. This happens for all configured dependencies and means that I don’t need to have bits of logging code in each method I write, which results in cleaner, more readable code.

It’s also worth getting to know the lifecycle options supported by your container, because sooner or later you’ll encounter problems if you don’t. Singleton instances exist once in your application, whereas you’ll receive a new instance of a transient dependency each time it is requested. Sometimes you won’t want a singleton but you will want to share an instance within a certain context, e.g. a web request (per-request lifecycle). If you need to do something similar outside of a web application you can use a scoped lifecycle: all non-singleton dependencies are scoped to a root object and are shared by dependencies of that object, and they are disposed of when the root object is disposed.

Finally, if your DI container provides some kind of installer interface for encapsulating configuration, try to use it. This makes it easier for multiple projects (i.e. your application and your integration tests) to share the same DI configuration.
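Container interceptors are usually built on dynamic proxies. As a hand-rolled sketch of the same mechanics using the BCL’s System.Reflection.DispatchProxy (the calculator types are invented for illustration; a real container would generate the proxy for you): every call on the interface funnels through Invoke, where the logging happens.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) => a + b;
}

public class LoggingProxy<T> : DispatchProxy where T : class
{
    private T _target;

    public static T Wrap(T target)
    {
        // DispatchProxy.Create builds a runtime type that implements T
        // and derives from LoggingProxy<T>.
        var proxy = Create<T, LoggingProxy<T>>();
        ((LoggingProxy<T>)(object)proxy)._target = target;
        return proxy;
    }

    // Every interface call lands here: log class, method, args, result and timing.
    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        var stopwatch = Stopwatch.StartNew();
        var result = targetMethod.Invoke(_target, args);
        stopwatch.Stop();
        Console.WriteLine(
            $"{_target.GetType().Name}.{targetMethod.Name}({string.Join(", ", args)}) " +
            $"returned {result} in {stopwatch.ElapsedMilliseconds}ms");
        return result;
    }
}
```

Consuming code holds an `ICalculator` and never knows a proxy is involved, which is exactly why the logging can stay out of each method body.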

So, in summary I think you should always use DI because it is one of the SOLID principles and observing these will increase your code quality. I recommend starting small to get to know your container. Explore interception if your container supports it as this can help keep your code clean and focussed. Be aware of object lifecycle options and when to use them. Above all treat any friction or complexity in using DI as an opportunity to simplify your code.

Posted in Uncategorized | Tagged , , , , , , | 4 Comments

Why I Like Whitebox Testing

A couple of weeks ago I had a discussion with a colleague about some SpecFlow acceptance tests we were writing. These are for an Azure Service Bus-based integration solution, and the feature we were testing publishes messages to the service bus. We use stub subscribers in our tests that actually listen for messages from the service bus and then record details about the message which they received. The problem was that our stub couldn’t record whether it was playing the part of system A or system B; all it could do was record the fact that it had received a particular message which we intended to send to both systems at different times. Our test was unable to prove that the message was received by the intended system at the intended time.

I pointed out that in our unit tests we used mocks and we didn’t go to the service bus at all.  We mocked out the class which was responsible for sending messages to the service bus and we set the mock up in such a way that we were able to tell which system was responding.  My colleague commented that whilst this did allow us to determine if the class under test was sending the right messages at the right time the test knew too much about the implementation of that class.  He said he didn’t like whitebox testing and actually he would quite like to remove the unit test.

I didn’t like that at all! The idea is that our acceptance tests are written in conjunction with business people (although this rarely happens in practice) and that our unit tests are written to help guide our implementations. When our unit tests and code are done our acceptance tests should pass. Anyway, the crux of the matter seemed to be that using blackbox testing we should be able to examine the effects of our code on our stubs, but using whitebox testing we could determine what the code did with its dependencies. My colleague’s main gripe with whitebox testing was that there was an unacceptable overhead in maintaining the tests should the code under test be refactored. To which I replied that the tests should be updated before the code is refactored, because I am a fan of test-first design.

So let’s dig a bit deeper. I certainly see the appeal of blackbox testing. The argument is that you can change the implementation and so long as the test continues to pass everything is great. The test is not coupled to the design. James Carr, in his excellent TDD anti-patterns article, identifies an anti-pattern called The Inspector:

A unit test that violates encapsulation in an effort to achieve 100% code coverage, but knows so much about what is going on in the object that any attempt to refactor will break the existing test and require any change to be reflected in the unit test.

I have experienced this before and I agree that it is a problem. However, I don’t think that whitebox testing equals anti-pattern and should be avoided at all costs. The thing I disagree with is statements such as ‘I don’t like whitebox testing because I care about what a method does, not how it does it’. Now, I love a good analogy, so think about all the things where you do care about how it is done. With pretty much any piece of tech hardware we are interested in what is going on inside it. How does it do that? I was flicking through a car magazine today describing at length the differences between a Porsche, an Alfa Romeo and a Lotus. It wasn’t enough for the author to tell you that you could drive any one of those cars and it would do the job of going really fast whilst making you look cool. He cared deeply about exactly what was going on under the, uh, hood. And so do I.

If I can write a test which given certain inputs expects a certain output I will write a test to ensure that is what happens. But such methods are few and far between. Much more common in my field of enterprise applications are methods which coordinate or orchestrate a class’s dependencies, possibly not returning anything at all. The best way I know of to test these coordinators is by letting your test know about the internals of the class under test and using mocks to see if it is doing its job correctly. The coordinating responsibility should be thought of as an algorithm and does need to be tested. Using mocks to test this allows your test to be decoupled from things like service buses, databases and web services. Integration with such things can result in slower, more fragile tests, particularly if you are not in control of those resources. Mocks allow you to assert how your code interacts with an interface which encapsulates an external dependency.
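As a sketch of the idea (a hand-rolled fake instead of a mocking library, with names invented for illustration): the coordinator returns nothing, so the only way to verify it is to observe what it did to its dependency — which system it sent which message to, and in what order.

```csharp
using System;
using System.Collections.Generic;

public interface IMessageSender
{
    void Send(string destination, string message);
}

// The coordinator under test: no return value, it just orchestrates its dependency.
public class OrderPublisher
{
    private readonly IMessageSender _sender;

    public OrderPublisher(IMessageSender sender)
    {
        _sender = sender;
    }

    public void Publish(string orderId)
    {
        _sender.Send("SystemA", orderId);
        _sender.Send("SystemB", orderId);
    }
}

// A hand-rolled mock that records each interaction so the test can assert on it.
public class RecordingSender : IMessageSender
{
    public List<string> Calls = new List<string>();

    public void Send(string destination, string message)
        => Calls.Add(destination + ":" + message);
}
```

The test injects a RecordingSender, calls Publish, and asserts on the recorded calls — proving which system received which message and when, which the blackbox service-bus stub could not.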

But more important for me is the value of whitebox testing to help me ensure that my implementations are not rotting, festering or stinking. When I write a test (first, of course) which uses mocks I tend to end up writing something which describes my implementation. If I feel my test is starting to become unruly (see Excessive Setup in James Carr’s article) I would realise that something is wrong. At this point I can refactor my test rather than my implementation. All I need to do is rethink the interfaces of my dependencies and my algorithm that coordinates them. I haven’t actually written any implementation code. If I write my code first I tend to become attached to it even if it is hard to use because of poorly defined interfaces. That can lead to horrible little snippets of code lingering because I’m reluctant to delete something that works. Those horrible little snippets can end up becoming breeding grounds for bugs. I care about what my code does and how it does it too.

So yes, I absolutely care about how my implementation works. I wouldn’t be happy to commit just anything so long as it works. Using test-first design I strive to write simple, concise tests because I know that will ensure I have simple, concise implementations. I don’t care so much about having to change the tests if I want to refactor the implementation, because writing simple and concise tests means I don’t have to refactor my code very often: I got it right the first time. And if I did get it really, really wrong I should probably just throw the test out anyway and start again.


Writing Unit Tests is Quick and Easy

Writing a unit test should be done before you write your implementation. It should be quick and easy. Sometimes it may feel like writing your unit test is taking a long time. If you are writing your test first (like you should be) it is likely the time is actually being spent on designing your software, not writing your test, but sometimes the two things are actually the same. I know, crazy. We call it test-driven design.

If your test was quick and easy to write, the chances are your implementation will be focused, usable and decoupled. It turns out that the only way to write unit tests that are quick and easy to write is to design software which is focused, usable and decoupled. If your test takes a lot of setup (see Excessive Setup in James Carr’s TDD anti-patterns) it is likely that not only is your test difficult to understand, but the code it is testing is too complicated, way too complicated… and pretty much untestable.

Thankfully it’s not that hard to design something which is quick and easy to test if you follow a few simple rules:

  1. Keep your methods short. You don’t need to write long methods, ever, really.
  2. Identify what needs to be done in the method call. Can you separate any of these concerns and give your method a single responsibility?
  3. Always code to interfaces. Test against your implementation but within that class specify all dependencies as interfaces. I can’t believe it’s 2012 and I’m still saying this!

I’m not going to paste a code example here, I’m just going to describe the kind of code I write over and over again. I write a lot of web services. Typically I need to:

  • validate the inputs according to certain validation rules
  • perform a read/write/delete against the database based on the inputs.

A design which is hard to test would have my web service method perform the validation then marshal the relevant values into db parameters then call a stored procedure. If I am returning values I need to copy them out of the data reader and into an object.

This design is hard to test because I need to set up my test with a valid request and I need to have a database to hand where my method can execute the stored procedure. Relying on data in a database is difficult for many reasons, though not insurmountable. But this is getting into the realms of integration testing, which serves a different purpose from unit testing, and this post is about unit testing. I could opt to make the db command object a dependency and mock it out for my test. This results in an overly complicated test, as I need to set up the mock to handle its parameter collection and also to return a mocked data reader so I can get at the returned data, and things start to look a little out of control.

Experience has taught me to do the validation in an IValidator interface. Mine has a method called Validate which takes the object to validate and returns a validation result which tells me if validation was successful and gives me a validation errors message/collection to tell me what went wrong.
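My IValidator looks roughly like this (a sketch; the member names and the example request type are from memory rather than any specific library):

```csharp
using System.Collections.Generic;

public class ValidationResult
{
    public bool IsValid { get; set; }
    // Tells me exactly what went wrong when validation fails.
    public List<string> Errors { get; } = new List<string>();
}

public interface IValidator<T>
{
    ValidationResult Validate(T instance);
}

// An example request and validator: a pure algorithm, testable without any mocks.
public class GetThingRequest
{
    public int Id { get; set; }
}

public class GetThingRequestValidator : IValidator<GetThingRequest>
{
    public ValidationResult Validate(GetThingRequest instance)
    {
        var result = new ValidationResult { IsValid = true };
        if (instance.Id <= 0)
        {
            result.IsValid = false;
            result.Errors.Add("Id must be a positive integer.");
        }
        return result;
    }
}
```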

Next, as we all know we should access our data via a data access object (dao) or repository or whatever people are calling it these days. So my web service now calls the validator interface to validate its input. If the validation succeeds it passes the relevant values to the dao which returns the data as an object.

So now my test is easy:

  • Create the class being tested. Inject a mock validator and a mock dao.
  • Set up the validator mock to respond to the call on its validate method.
  • Set up the dao to respond to a call to its GetThingById method.
  • Assert that the web service method returns the object which my mocked data access object returned.
  • Verify that my mocks were called.

This is a very quick and easy test to write because the web service method now has a purely coordinating role. We don’t need to set up the request in any particular way because the validation is performed by another interface. We just need to make sure we call it. Similarly we don’t need to marshal any values into database parameters or execute the test against the database, we just need to pass the arguments which are wrapped in the request to the dao interface.
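Put together, the test described above might look like this (hand-rolled mocks instead of a mocking framework, and all the type names are hypothetical):

```csharp
using System;

public class Thing { public int Id { get; set; } }

public interface IRequestValidator { bool Validate(int id); }
public interface IThingDao { Thing GetThingById(int id); }

// The web service method has a purely coordinating role: validate, then delegate.
public class ThingService
{
    private readonly IRequestValidator _validator;
    private readonly IThingDao _dao;

    public ThingService(IRequestValidator validator, IThingDao dao)
    {
        _validator = validator;
        _dao = dao;
    }

    public Thing GetThing(int id)
    {
        if (!_validator.Validate(id))
            throw new ArgumentException("Invalid request.");
        return _dao.GetThingById(id);
    }
}

// Hand-rolled mocks: canned responses plus a record of whether they were called.
public class StubValidator : IRequestValidator
{
    public bool WasCalled;
    public bool Validate(int id) { WasCalled = true; return true; }
}

public class StubDao : IThingDao
{
    public bool WasCalled;
    public Thing ThingToReturn = new Thing { Id = 42 };
    public Thing GetThingById(int id) { WasCalled = true; return ThingToReturn; }
}
```

The test injects both stubs, calls GetThing, asserts the service returned the very object the dao produced, and verifies both mocks were called — and nothing more.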

The validator implementation then becomes an algorithm we can test without needing mocks. Given a particular input we expect a particular output. Be sure to test for both pass and failure. Easy, right?

The dao implementation only needs an integration test (at least that’s my opinion). I just want to make sure it can actually connect to the database and execute the procedure. For these types of test I like to start a transaction, insert any test data I might need, execute the dao method under test and then roll back the transaction. My test does not depend on any existing data; it is self-sufficient. The same goes if you are using an ORM (which I prefer). Whichever data access approach you use, try to prepare the test data in the database using something other than the class you are testing, for obvious reasons.
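The rollback pattern can be packaged in a small base class using System.Transactions (a sketch; real tests would open their database connection inside the scope so it enlists in the ambient transaction):

```csharp
using System;
using System.Transactions;

// Each test runs inside an ambient transaction that is never completed,
// so any data the test inserts is rolled back when the test finishes.
public class TransactionalTestBase : IDisposable
{
    private readonly TransactionScope _scope;

    public TransactionalTestBase()
    {
        _scope = new TransactionScope();
    }

    public void Dispose()
    {
        // Deliberately no call to _scope.Complete(): disposing an
        // incomplete scope rolls the transaction back.
        _scope.Dispose();
    }
}
```

Test fixtures derive from this, insert their data, exercise the dao and simply let teardown dispose the scope.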

So, to recap, use test friction as a way to measure complexity in your design. If a test takes a lot of set up try to simplify your design so that you have small easy to test pieces of functionality. Design orchestrating or coordinating classes to tie these pieces together and write quick and easy tests with mocks to prove not just that your classes work, but that they do so simply and cleanly.

Happy testing!


NHibernate QueryOver: Project Child Properties with SelectList

I wanted to select some fields from a Parent, some from a Child, the count of the AnotherChild items in the OtherChildren collection, and stuff the lot into a ParentSummary.

This will do it:

    // Transformers lives in NHibernate.Transform.
    ParentSummary parentSummary = null;
    Child child = null;
    // The collection alias must be of the element type for the join to compile.
    AnotherChild otherChild = null;

    var result = session.QueryOver<Parent>()
      .JoinAlias(x => x.Child, () => child)
      .JoinAlias(x => x.OtherChildren, () => otherChild)
      .SelectList(list => list
          .SelectGroup(x => x.Id).WithAlias(() => parentSummary.Id)
          .SelectGroup(x => x.Name).WithAlias(() => parentSummary.Name)
          .SelectGroup(() => child.Name).WithAlias(() => parentSummary.ChildName)
          .SelectCount(x => x.OtherChildren).WithAlias(() => parentSummary.OtherChildrenCount))
      .TransformUsing(Transformers.AliasToBean<ParentSummary>())
      .List<ParentSummary>();

Conditionally Start/Stop a Windows Service

When developing Windows services in Visual Studio I like to set up a pre-build event which stops and uninstalls the service and a post-build event which installs and starts the service.

Using net start servicename is problematic because your build will fail if you try to stop a stopped service or start a started service.

Instead use a batch file which handles the error and returns 0 to indicate success:

@echo off
net start service1
if ERRORLEVEL 2 goto error
exit /b 0
:error
echo service1 was already started - returning success so the build continues
exit /b 0


Resistance Is Futile

In the months since I last posted I’ve managed to complete the project I was working on and decided it was time to end my long daily commute for a while and take some time off. As it turned out I only got a short break before finding a great-looking contract right on my doorstep. It’s not bad as contracts go: casual dress, close to home, nice people.

At the interview there was a lot of interest in all things agile from a dev manager who is sympathetic to the cause but has not had the opportunity to explore these ideas. And today it became apparent that there are arguments surrounding the proposed adoption of certain open source tools (though it looks like there is budget for the paid ones).

As a .NET developer I’m a fan of TDD and Continuous Integration. I like NHibernate and IoC containers, and have rediscovered a hatred for Visual SourceSafe. I think Resharper should be mandatory for all Visual Studio users. Resharper’s unit testing capabilities are complemented by TestDriven.Net, which brings code coverage via NCover. I’ve recently ditched a long-time friend, CruiseControl.Net, in favour of TeamCity. Likewise I am meaning to switch to Rake and say farewell to NAnt (thanks for the recent update, but I think I’m done with XML now). And Git appeals to me, though I feel comfortable enough with Subversion to be able to mostly avoid its pitfalls. I love tinkering with Rhino Mocks and NBuilder and can knock up NUnit tests almost as quickly as I can type. I must confess that I have only scratched the surface of NHibernate, and the reason for this, dear readers, is the point of this post. Resistance. Developers, managers, companies that are so resistant to change, saturated with inertia and deeply mistrustful of anything not originating from the hallowed pages of MSDN.

Everywhere I look there are developers to whom it just never occurs to do anything differently, whose C# code would be barely distinguishable from their VB code of ten years ago were it not for the angle brackets, semicolons and case sensitivity. These people are yet to be convinced of the need for interfaces, let alone design patterns or refactoring tools. Copy and paste abounds; it’s really so much easier than all that inheritance. Let’s keep things simple, eh?

So it was with great trepidation that I began my new contract, and yet I found myself being asked to demo some of the wondrous magic of which I had spoken. Until today, that is. Because tools such as NUnit, log4net and Rhino Mocks (the list goes on, Alt.Netters) are not on the ‘approved product list’. A list so dull that I wonder if the architectural team that maintains it died circa 2001. In fact, if the architects (a misleading term used in large corporations for people from IT who are quite high up in the organisation) are not putting progressive software tools on their list, then I can only conclude that they do not have a passion for their trade and are most likely so out of touch that the only things that make it onto the list come from vendors who justify the high price tags with free lunches at each procurement milestone.

And there’s the rub. If the guys at the top don’t care, then it follows that the guys in the trenches won’t care either. Java doesn’t have this problem; open source projects are its lifeblood. Hibernate is the de facto standard, whereas in my office the very download of its .NET equivalent is prohibited. Large corporations suffer from expensive and often doomed software projects yet, with a few exceptions, refuse to embrace the answer to their problems identified by community figures, and the software that nice open source developers write to make it easy to implement them.

And all of this indifference, fear and ignorance dominates the .NET landscape whilst some truly innovative tools are freely available to any who bother to look for them. But why do I care? Perhaps I should just leave it. Resistance is futile…
