A Ports and Adapters implementation in C#

I’m currently working on an application built with the ports and adapters pattern in mind, sometimes known as Hexagonal or Onion architecture. The application has both data side adapters and client side adapters.

If you’ve not done so already, please read Alistair Cockburn’s excellent article on the pattern at:

If you want to read it later and keep reading this now then here’s a quote from the article which succinctly conveys the spirit of the pattern:

“Create your application to work without either a UI or a database so you can run automated regression-tests against the application, work when the database becomes unavailable, and link applications together without any user involvement.”

The data side adapters take the form of interfaces defined by the application so that it can use data services without needing to know about the underlying implementation. There are currently a few implementations of these interfaces in the application, including an in-memory implementation, one which connects to a legacy database and one which connects to a Mongo instance. Additionally, I can inject mocks for the relevant interfaces and decouple from any of these implementations. This allows me to write tests which focus on the application logic.

The client side adapters are used to expose application features to different types of consumer. One client side adapter provides a web user interface using ASP.NET MVC and another exposes the same features via ASP.NET Web API. Within the development team it has been discussed that the core application and the Web API are one and the same thing. Whilst this is probably true, it is useful to be aware of the boundary between application logic and how it is exposed over HTTP. Theoretically we could write a WPF user interface that talks directly to the core application assemblies, or even use Xamarin tools to create iOS or Android applications that provide a rich offline experience on mobile devices. More importantly, we can write tests against application features which do not need to know or care about HTTP.

In many instances you'll only have one adapter implementation that your application ever uses, though you'll probably substitute a mock or a stub for it in your tests. For my current project we envisage using multiple adapters in production. The application is multi-tenanted in the sense that somewhere between 80 and 100 web sites from the corporate estate could eventually use its services, and each site would need to work with data from one of three data stores.

Another way to think about this is that we’re actually creating an abstraction layer that supports the features the business needs. Once we’ve abstracted the key features it shouldn’t matter which implementation provides the data storage. This gives us a nice migration path. If the adapter interfaces expose methods for exporting and importing data we should in theory be able to migrate from one data store to another. For me this approach is reminiscent of branch by abstraction (http://martinfowler.com/bliki/BranchByAbstraction.html) and strangler application (http://martinfowler.com/bliki/StranglerApplication.html).

On day zero we'll most likely support adapters for two legacy systems, used by web sites which depend on data held there, and we'll also have at least one pilot web site using an adapter for a new vendor system. The upshot is that we need to select the appropriate data side adapter based on the web site calling into our application services and pages. Enter the adapter selector:

public interface IIdmAdapterSelector
{
    IIdmAdapter GetIdmAdapter();
}

This interface provides a single method to get an Idm (identity management) adapter.

public interface IIdmAdapter
{
    IUserRepository GetUserRepository();
    IEmailBlackList GetEmailBlackList();
    IAddressRepository GetAddressRepository();
    ICountryRepository GetCountryRepository();
    ISubscriptionRepository GetSubscriptionRepository();
}

And this is the adapter interface. It’s very similar in intent to an abstract factory. The interface provides methods for getting a number of interfaces used by our application for identity management. Each concrete adapter will provide adapter specific implementations of the various interfaces.

We manage the mapping of adapters to web sites through a configuration class:

public class Configuration
{
    public void RegisterIdmAdapter<T>(string siteId) where T : IIdmAdapter { /* ... */ }
    public Type AdapterFor(Context context) { /* ... */ }
}

The configuration class provides a method for registering adapters against a site id and another for retrieving an adapter by passing in an application defined context class. Notice that we register the type of the adapter rather than an instance. Also notice that whilst we use the site id to register an adapter type, we pass in a context instance to retrieve it. As we'll see, the context contains the site id, so we can resolve the adapter type based on that, but we can also take into account any additional contextual information needed to configure the adapter specific implementations. For now, though, the context is based on the site id only:

public class Context
{
    public string SiteId { get; set; }
}
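Internally the configuration can be as simple as a dictionary mapping site ids to adapter types. This is a hypothetical sketch rather than the production class:

```csharp
using System;
using System.Collections.Generic;

public class Configuration
{
    // maps a site id to the adapter type registered for it
    private readonly Dictionary<string, Type> adapterTypes =
        new Dictionary<string, Type>();

    public void RegisterIdmAdapter<T>(string siteId) where T : IIdmAdapter
    {
        adapterTypes[siteId] = typeof(T);
    }

    public Type AdapterFor(Context context)
    {
        // resolve purely on site id for now; other contextual
        // information could be taken into account later
        return adapterTypes[context.SiteId];
    }
}
```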

The configuration class can be set up by any mechanism we choose – perhaps from a web.config, a database, whatever is most appropriate. Initially we've decided that our web application is responsible for setting up the configuration. As we're using OWIN, the entry point is the Startup class. We've added a method to get the configuration and for simplicity it is hardcoded:

protected virtual Configuration GetConfiguration()
{
    var configuration = new Configuration();
    return configuration;
}

Next we need to consider how the application will get the site id. We’ve seen how this will be part of a context object but how does it get resolved? Let’s take a look:

public interface IEstablishContext
{
    Context GetContext();
}

This interface provides a method to get a context object. We've got a couple of implementations. The web application uses a WebContextResolver, which implements GetContext by looking for the site id in the HttpContext – perhaps in a custom header or the query string – and setting it on an instance of the application defined context object. The other implementation is an NUnitTestContextResolver. This works in a very similar way, but instead of looking in the HttpContext it looks in the NUnit TestContext for user defined properties (such as our site id) which can be set per test.
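A WebContextResolver along those lines might look something like this (a sketch – it assumes the site id arrives in the query string, but a custom header would work just as well):

```csharp
using System.Web;

public class WebContextResolver : IEstablishContext
{
    public Context GetContext()
    {
        // pull the site id out of the current request;
        // the "siteId" key is an illustrative choice
        var siteId = HttpContext.Current.Request.QueryString["siteId"];
        return new Context { SiteId = siteId };
    }
}
```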
We specify which implementation should be used in our Startup class:

protected virtual Type GetContextResolverType()
{
    return typeof(WebContextResolver);
}

We've opted to keep the adapter selection out of our controller classes, preferring them to work only with the interfaces they require. Instead we'll rely on our container configuration to determine which adapter to use and wire things up accordingly. We've defined a method to configure our container (Autofac, but it could be any other DI container). The adapter selector which we discussed earlier is registered with the container. It has three constructor dependencies:

public IdmAdapterSelector(IEstablishContext contextResolver, Configuration configuration, IEnumerable<IIdmAdapter> adapters)
{
    this.contextResolver = contextResolver;
    this.configuration = configuration;
    this.adapters = adapters;
}

It is very simple to resolve the correct adapter for the context:

public IIdmAdapter GetIdmAdapter()
{
    var context = contextResolver.GetContext();
    var adapterType = configuration.AdapterFor(context);
    var adapter = adapters.FirstOrDefault(x => x.GetType() == adapterType);
    return adapter;
}

The controllers depend on the interfaces provided by the adapter so we need to register those with the container too:

private static void RegisterAdapters(ContainerBuilder builder)
{
    var adapter = new Func<IComponentContext, IIdmAdapter>(
        c => c.Resolve<IIdmAdapterSelector>().GetIdmAdapter());

    builder.Register(c => adapter(c).GetUserRepository());
    builder.Register(c => adapter(c).GetEmailBlackList());
    builder.Register(c => adapter(c).GetAddressRepository());
    builder.Register(c => adapter(c).GetCountryRepository());
    builder.Register(c => adapter(c).GetSubscriptionRepository());
}

So what we’ve done here is register each method of the adapter as the way to resolve the various interfaces it provides. When we set up our container from our Startup class we pass it the configuration and context like this:

var dependencyContainer = appBuilder.UseAutofac(config, BindingOverrides);

The binding overrides property uses the virtual methods we described earlier to populate the binding overrides instance:

protected BindingOverrides BindingOverrides
{
    get
    {
        var bindingOverrides = new BindingOverrides();
        bindingOverrides.ContextResolverType = GetContextResolverType();
        bindingOverrides.Configuration = GetConfiguration();
        return bindingOverrides;
    }
}

This in turn means we can subclass our OWIN Startup to create a TestStartup which swaps out the WebContextResolver for the NUnitTestContextResolver and provides an alternate test configuration class:

public class TestStartup : Startup
{
    protected override Type GetContextResolverType()
    {
        return typeof(NUnitTestContextResolver);
    }

    protected override Configuration GetConfiguration()
    {
        var configuration = new Configuration();
        return configuration;
    }
}

This allows us to write tests that exercise our web app in memory using test settings:
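For example, using Microsoft.Owin.Testing's TestServer the whole pipeline can be exercised in memory. This is a sketch – the route and test class names are illustrative:

```csharp
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using NUnit.Framework;

[TestFixture]
[Property("siteId", "testId")]
public class UserApiInMemoryTests
{
    [Test]
    public async Task Can_call_the_api_in_memory()
    {
        // TestStartup swaps in the NUnitTestContextResolver
        // and the test configuration
        using (var server = TestServer.Create<TestStartup>())
        {
            var response = await server.HttpClient.GetAsync("/api/users");
            Assert.That(response.IsSuccessStatusCode);
        }
    }
}
```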


We can set our context through attributes on our unit test classes:

[Property("siteId", "testId")]
public class UserControllerVerifyTests : UserControllerInMemoryTests

We can set these properties at test level too, allowing us to vary the context for each test.
We need to make use of some marker interfaces to ensure that the container doesn't get conflicting registrations. Taking the user repository as an example, consider the following:

public class UserRepository : IUserRepository


builder.Register(c => adapter(c).GetUserRepository());

The UserRepository class will get registered as a type which can be provided when the container is asked to resolve IUserRepository. But we’ve also registered IUserRepository with the GetUserRepository method of the adapter. The implementation of that method will need to provide an adapter specific instance. In order to ensure that we can resolve the correct instance we can have it implement a marker interface like this:

public class UserRepository : IInMemoryUserRepository


public interface IInMemoryUserRepository : IUserRepository

This way the adapter’s GetUserRepository method is registered as the way to resolve IUserRepository and the adapter’s constructor requests an IInMemoryUserRepository which can be auto registered with the container by scanning the assemblies. This interface does nothing other than inherit from IUserRepository and serves purely as a marker to help us make effective use of the container.

This last part is very important: if we implement interfaces from the application core directly in our adapter assemblies we will likely encounter unpredictable results. Hopefully, as we add new dependencies to our constructors, we'll notice the use of marker interfaces and remember to follow that convention.

In summary, we’ve discussed the ports and adapters approach also known as Hexagonal or Onion architecture. We’ve discussed how it helps achieve a clean separation of concerns and enables easy testing and inter-application communication. We then looked at how to select the correct adapter for a multi-tenanted scenario by making use of an application context and an application configuration. We also saw how we were able to swap out context resolvers and configurations by overriding our OWIN Startup class which enabled us to more easily test our web application. Additionally we explored how to force our container to resolve the correct adapter specific implementations rather than require consuming classes to have an awareness of the mechanism. Finally we made use of marker interfaces to control how the container resolves interfaces with adapter specific implementations.


OAuth2 and OpenID Connect – Part 3

Accessing a Protected Resource

In this third and final article we'll look at how to access a protected resource using an access token. In parts 1 and 2 we looked at how to authenticate and obtain an access token from an OAuth2/OpenID Connect identity server.
We will now look at how to include that token as part of a request for a protected resource and how the protected resource can check the token. The example projects can be found here:
In the test project there is a test called TestGet_valid_token_is_used_to_call_my_really_useful_api. This test obtains an access token using the resource owner password credentials flow. This flow differs from the implicit flow used in the previous articles in that it passes the resource owner's username and password to the identity server. It should be used with caution in real world scenarios as it exposes highly privileged credentials to the client; however, it would not be practical for a unit test to display the identity server login page. The code uses the Thinktecture.IdentityModel.Client NuGet package, which encapsulates the OAuth2 flows and makes it very easy to request the token. To make the request for the protected resource we use a regular HttpClient instance, but the Thinktecture client library attaches an extension method called SetBearerToken. This sets the token in the Authorization header using the bearer scheme, which is the preferred way to transmit the access token in the request (as defined in RFC 6750). We then set the URI for the request and await the async response.
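The shape of that test is roughly as follows. This is a sketch from memory of the Thinktecture.IdentityModel.Client API at the time; the endpoint, client id, secret, scopes and API url are all illustrative:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Thinktecture.IdentityModel.Client;

public class TokenExample
{
    public static async Task<HttpResponseMessage> CallApiAsync()
    {
        // resource owner password credentials flow: the client sends
        // bob's username and password directly to the token endpoint
        var tokenClient = new OAuth2Client(
            new Uri("https://idsrv3.com/connect/token"),
            "roclient",
            "secret");

        var tokenResponse = await tokenClient.RequestResourceOwnerPasswordAsync(
            "bob", "bob", "read write");

        // attach the access token using the bearer scheme (RFC 6750)
        var httpClient = new HttpClient();
        httpClient.SetBearerToken(tokenResponse.AccessToken);

        return await httpClient.GetAsync("http://localhost/myreallyusefulapi/values");
    }
}
```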

On the API side we plug in token validation as part of the OWIN pipeline. The Startup class uses some IAppBuilder extension methods defined in Thinktecture.IdentityServer.v3.AccessTokenValidation and Thinktecture.IdentityModel.Owin.ScopeValidation. Essentially this plugs in the logic to inspect the incoming token via the token validation endpoint – the same endpoint we used in the .NET 2.0 project to decrypt the token so that we could display it on screen (not something we'd want to do in a real application). The claims are then extracted from the decrypted token and passed to a ClaimsIdentity instance, which is passed as a constructor argument to an AuthenticationTicket instance, which is in turn passed to a call to SetTicket on the AuthenticationTokenReceiveContext.

What this all means is that the user can now be authorized to use the API according to their claims. In this example we've specified that users must have read and write access to be considered authorized. We then just decorate our API methods with an AuthorizeAttribute and callers will only be able to access the method if they have the read and write scopes defined in the access token. For more fine-grained control have a look at the ResourceActionAuthorizeAttribute and the ScopeAuthorizeAttribute in the Thinktecture.IdentityModel source.
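Decorating an API controller is then standard Web API fare – the controller name and action here are illustrative:

```csharp
using System.Web.Http;

public class ValuesController : ApiController
{
    // only reachable when the validated access token
    // satisfied the scope checks in the OWIN pipeline
    [Authorize]
    public IHttpActionResult Get()
    {
        return Ok("protected data");
    }
}
```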

Assuming you have both the identity server and MyReallyUsefulApi running, both tests should pass, indicating that access is granted when a valid token is received and denied when that token is not present.

To summarise, in this final article on OAuth2 and OpenID Connect we’ve seen how to include an access token with a request to a protected resource and how to make that resource validate the token and allow access according to the claims within it. I’d recommend becoming familiar with the specs for both OAuth2 and OpenID Connect. Security is a complex domain and ultimately responsibility lies with you for your own applications. I’ve made extensive use of the open source Thinktecture repositories because they are very comprehensive and active projects. This series of articles has documented how I’ve used code from these projects and transplanted it into my own very simple projects. Doing this was very useful for my own understanding of OAuth2 and OpenID Connect. I don’t profess to be an expert in these areas so please don’t consider this to be anything like an out of the box solution. However I do hope that I’ve provided enough information to help others to get started.

Posted in Security

OAuth2 and OpenID Connect – Part 2

Demo Project Walkthrough

In part 1 we looked at how to use OpenID Connect and OAuth2 with Thinktecture’s Identity Server and how to use an access token to access a protected resource.  In this part we’ll have a look at the code and explain how it works and hopefully you’ll then be in a position to integrate the approach with your own applications.

The demo projects are available on github:


The repository contains three solutions. The first project you'll need is the Identity Server itself. The readme on the GitHub page explains how to get the server up and running so there's no need to repeat that information here. This is a copy of the Thinktecture repo taken at the time of writing. The only change we've made is to add an additional client with the id 'net2client'.

The host project contains a config folder which defines some scopes, clients and users.  You can leave these as they are for now, we’ll be logging in as ‘bob’.

As we mentioned in part 1, we're trying to integrate an ASP.NET 2.0 application with OAuth2. There are no libraries for handling tokens, specifically JSON Web Tokens (JWTs), for .NET 2.0 and we're not cleared to use JavaScript yet. However, it's not a problem to obtain tokens from .NET 2.0 as all we need to do is issue our request to the identity server authorization endpoint, and the AspNet20OAuth2 project in the repo does just that.

There is a page called SignIn which has a button marked 'Sign In On Identity Server'. This performs the redirect to the identity server and specifies the page SignInCallback as the redirect url for the identity server to use once authorization is complete. To sign in we must ensure that the identity server is running. In the .NET 2.0 project we specify the client id 'net2client' and, as mentioned above, our copy of the identity server repo has added that client to those already configured in support of the samples repo. The client section also specifies a redirectUris property where redirects can be registered, and so we have registered the callback page.

RedirectUris = new List<Uri>
{
    //site specifies this url as the return url when it
    //writes the redirect url in the query string
    //of the authorization url
    new Uri("http://localhost:58276/SignInCallback.aspx")
}

We also need to ensure that the OAuthHelperApi is running. This is part of the MyReallyUsefulWebApi solution (though it would probably sit better within the AspNet20OAuth2 solution as this is where it is used). The OAuthHelperApi has a controller called ValidateController which is used to validate the id token, because .NET 2.0 lacks libraries that can handle JWTs. So, with both the identity server and the OAuthHelperApi running, the AspNet20OAuth2 application is able to authenticate and obtain authorization.

The callback page displays the tokens once they have been validated.  Validating and decrypting the access_token is done via the identity server’s access token validation endpoint:

string authorizeUrl =

This functionality is built into identity server. The resource server hosting your protected API should validate the access token, but I've done it in the client just so we can see what it contains.

We must validate the id token as this is proof of authentication and allows the client to trust the access token. We've shamelessly plundered the identity server samples for the id token validation and, because it uses the .NET 4.5 JWT libraries, we've provided the OAuthHelperApi. Token validation is then very straightforward and conforms to the 'ID Token Validation' section of the OpenID Connect spec:

private List<Claim> ValidateToken(string token, string nonce)
{
    var parameters = new TokenValidationParameters
    {
        ValidAudience = "net2client",
        ValidIssuer = "https://idsrv3.com",
        IssuerSigningToken = new X509SecurityToken(X509.LocalMachine
            .Find("CN=idsrv3test", false))
    };

    SecurityToken jwt;
    var id = new JwtSecurityTokenHandler()
        .ValidateToken(token, parameters, out jwt);

    if (id.FindFirst("nonce").Value != nonce)
        throw new InvalidOperationException("Invalid nonce");

    return id.Claims.ToList();
}

This article has explained where the demo projects can be found and what they do. We have an identity server, a client web application which redirects the user to the identity server to authenticate and grant authorization, and finally a Web API which exists to validate the id_token because the web application uses ASP.NET 2.0. In the next article we'll look at how to use the access token to access a protected resource.


Logging using Aspect Oriented Programming

In this article I’d like to discuss how to implement consistent logging as a cross cutting concern via Aspect Oriented Programming.  We’ll look at some code that achieves this and we’ll also consider some of the implications of using this approach to logging.

Most people are used to adding logging to their code via inline log statements with whatever text they feel is appropriate.  This is generally fine when you know what you need to log but becomes a problem when you don’t.  It also adds noise to business logic.  If you consider that you may also want to add exception handling, transaction management and auditing you can quickly find yourself in a situation where methods are quite long and a good percentage of the code has nothing to do with the business logic that you should be focussing on.

Aspect Oriented Programming (AOP) is a programming style which extracts these cross cutting concerns.  There are several ways of achieving this such as code weaving with PostSharp or method interception using Castle Dynamic Proxy.  This article will focus on the latter as it is a simple low friction way to get started with AOP.

If you're using a dependency injection framework such as Castle Windsor or Autofac it's very easy to get started with interceptors. You can configure your container to use interception. For example, with Autofac you can add the Autofac.Extras.DynamicProxy2 package which contains the EnableInterfaceInterceptors and InterceptedBy extension methods for IRegistrationBuilder. The first method, as the name suggests, switches on the interception feature. The second allows you to configure a registration with a particular interceptor. You can specify interceptor classes which implement the IInterceptor interface from the Castle.Core package.
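The container configuration looks roughly like this – LoggingInterceptor, IOrderService and OrderService are hypothetical names standing in for your own types:

```csharp
using Autofac;
using Autofac.Extras.DynamicProxy2;

public static class ContainerConfig
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // the interceptor itself must be registered with the container
        builder.RegisterType<LoggingInterceptor>();

        // resolving IOrderService now yields a dynamic proxy
        // that routes calls through LoggingInterceptor
        builder.RegisterType<OrderService>()
            .As<IOrderService>()
            .EnableInterfaceInterceptors()
            .InterceptedBy(typeof(LoggingInterceptor));

        return builder.Build();
    }
}
```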

IInterceptor defines an Intercept method which takes an IInvocation instance.  This describes the method which is being intercepted.  When the container resolves a dependency which is configured to use an interceptor it will give you back a dynamic proxy which implements the interface you requested.  When you call a method against the proxy it will invoke the intercept method on your interceptor passing in the IInvocation instance which describes the method call you tried to make.  This interface defines properties for things like the method being called, the arguments being passed to that method and the return value.  It also defines a Proceed method which you should call in your interceptor code if and when you want to continue on to calling the method on the target implementation.

This is an extremely powerful feature which makes it easy to extract cross cutting concerns and have code in one place which used to be in every method call.

For logging it means that you can log the name of the class and the name of the method. You can JSON-serialise the arguments and the return value, or exception details if an exception is thrown. You can also start a stopwatch before you call Proceed, stop it after the underlying method has executed, and log the elapsed time. But instead of having this code in every method you have it in one place.
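Put together, a minimal logging interceptor along those lines might look like this. It's a sketch which assumes Json.NET for serialisation and writes to the console; production code would use a proper logging library:

```csharp
using System;
using System.Diagnostics;
using Castle.DynamicProxy;
using Newtonsoft.Json;

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            // call through to the target implementation
            invocation.Proceed();
            stopwatch.Stop();
            Console.WriteLine("{0}.{1} args={2} result={3} elapsed={4}ms",
                invocation.TargetType.Name,
                invocation.Method.Name,
                JsonConvert.SerializeObject(invocation.Arguments),
                JsonConvert.SerializeObject(invocation.ReturnValue),
                stopwatch.ElapsedMilliseconds);
        }
        catch (Exception ex)
        {
            stopwatch.Stop();
            // log the failure with the same context, then rethrow
            Console.WriteLine("{0}.{1} threw {2} after {3}ms",
                invocation.TargetType.Name,
                invocation.Method.Name,
                ex.GetType().Name,
                stopwatch.ElapsedMilliseconds);
            throw;
        }
    }
}
```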

I've put an implementation of this on GitHub at https://github.com/maz100/Isla. I've configured it to write log messages as JSON, and there is a test method in JsonInvocationLoggingInterceptorTests called TestReadFromFile which shows how you can read a log file into a collection and query it to find out things like the longest running method call or the number of errors logged in the last hour.

There are a few things to bear in mind if you decide to adopt this approach. Firstly, you need to make sure your interfaces accurately describe what your code is doing. If you just have one void method called DoStuff which takes no arguments, you're not going to get much value from AOP logging. Secondly, if you choose to serialise arguments and return values, be aware that the larger these objects are the longer they will take to serialise. This was most apparent on one project I worked on where the classes being serialised were deep object graphs. Consequently the log files were extremely large and performance, whilst acceptable, was certainly affected. However, we found the logs invaluable during development as we could search the logs detailing large batch processes and very quickly report on the results. You may also want to consider how to exclude certain information from your logs, for example classes that contain a password or credit card details. Finally, you need to make sure the whole team understands the concept of AOP. The container configuration ties together your interceptors and your implementations; unless you understand that, it won't be clear how to navigate the code. However, I've used this approach with many teams and I've always found it to be extremely beneficial.

This article described how to implement Aspect Oriented Programming using Castle Dynamic Proxy. We started by explaining why you might want to extract cross cutting concerns from your business logic. We then looked at the packages and interfaces available, using Autofac as the example container. Next we discussed how the container invokes your interceptors and how to call the target method as part of your interception. Finally we considered some of the pros and cons of using this approach to logging.

Posted in Aspect Oriented Programming

OAuth2 and OpenID Connect – Part 1

Integrating with Thinktecture Identity Server v3

My current project is looking to use OpenID Connect (oidc) and OAuth2 for authentication and authorization. We've decided to start out using the open source .NET implementation from Thinktecture. We've got to a point where we can authenticate, gain authorization from the identity server and call into a protected resource. This article is an overview of what we've done so far.

Firstly you'll need to have the identity server running. I'm not covering how to get the identity server to connect to your own custom data store; there are various interfaces you can implement to do this, but for now we're just working with the in memory users 'bob' and 'alice' that come as part of the identity server git repo. I opened the source in Visual Studio and started the host project, which created an IIS Express site. I put some breakpoints in the code and stepped through to get a feel for what is going on, though if you just want the server running it's quite nice to spin up IIS Express with the site from the command line rather than have the project open in Visual Studio.

The next thing I did was create a website with a sign in button which redirects to the identity server logon page.  At this point you’ll need to have some understanding of oidc and OAuth2.  Here’s the url I redirect to:


Let's break this down and explain what it means. The authorize endpoint is the part of the url without the querystring. This endpoint performs the authorization and the querystring tells it what we want it to do. Now for the querystring parameters.

  1.  The client_id is the id of the application requesting authorization.  Clients must be registered with the identity server and if a client_id which is not registered is received the request will be rejected.
  2. The scope value is a space separated list of values. openid is part of the oidc spec and indicates that we want to authenticate the user. The profile scope means we want access to the user’s profile. The read and idmgr scopes are resource scopes meaning that they are not part of the OAuth2/oidc spec but are defined by the resource (think api).  Both these scopes are defined in identity server and I’ve used them in my demo projects but they are custom scopes and would typically relate to your api functionality.
  3. The redirect_url is the url you want to be redirected to once authentication and/or authorization has taken place.  The redirect url must be registered with identity server as part of the client and multiple urls can be specified.  This feature means that malicious requests with evil redirect urls will be rejected.
  4. The state parameter is a unique value which is generated and sent as part of the request.  When the redirect occurs it will be passed back and the client should check that the response it receives contains the same state value that it sent to the identity server.  If it doesn’t it shouldn’t trust the response from the identity server.
  5. The response_type here is ‘id_token token’.  The id_token is part of oidc and is a feature which provides authentication in a way which OAuth2 does not.  See section 10 (esp 10.16) in RFC6749.  Essentially a valid id_token is proof of authentication and not just of authorization to a user’s profile. The token part means that we’re requesting an access token rather than an authorization code (in which case the value would be code).  This means that we are initiating the OAuth2 implicit flow typically used by browser based clients.  Server side clients would normally request an authorization code which would be sent to the browser and in turn to the server which would then exchange the code for an access token.  The purpose of the authorization code is to prevent the access token itself from being sent to the browser where it can be easily read and potentially misused.
  6. The nonce (number used once) parameter will be encoded in the id_token.  When the id_token is received back the client should check the nonce value it contains matches what was sent in a similar way to the state parameter.  This enables the client to ensure it does not accept an id_token which was not intended for it.
  7. Finally, the response_mode of form_post instructs the identity server to cause the tokens to be received by the client as part of a form post.
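Putting those parameters together, the redirect url has roughly this shape. The values here are illustrative (the authorize endpoint path, client id and redirect url follow the identity server setup described in this series; state and nonce would be freshly generated random values), and the line breaks are for readability only:

```
https://idsrv3.com/connect/authorize
    ?client_id=net2client
    &scope=openid profile read idmgr
    &redirect_uri=http://localhost:58276/SignInCallback.aspx
    &state=<random value>
    &response_type=id_token token
    &nonce=<random value>
    &response_mode=form_post
```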

Once we'd registered our client and its redirect url with the identity server we could redirect to the authorization url as above and receive an id_token for authentication and an access_token for authorization. Just to make things more interesting, the first project where we'll be using OAuth2 is an existing .NET 2.0 web application which was delivered to work in IE6. Whilst IE6 compatibility would be hard to defend in 2014, it's still a conversation that needs to be had and for the time being it rules out handling the tokens in client side JavaScript. There don't appear to be any JWT libraries compatible with .NET 2.0 so we can't handle the tokens there either. So we've had to get a bit creative.

The request we made to the identity server will result in two tokens being returned: an id token for authentication and an access token for authorization. It turns out that identity server can help us out with the access token, as it has an access token validation endpoint which can accept the access token and return an unencrypted version which can be read with Json.NET in .NET 2.0. However, identity server will not validate the id token for us; we must do that ourselves. So we created a Web API project with a controller which can accept the signed id token, validate it and return the claims as JSON. This API is not public facing and will be deployed on the same server as the client to keep things private. Just to be clear, if you're not working with a .NET 2.0 project you don't need a separate service to validate the id token, so things are actually simpler.

So now we’re in a position where the client knows what claims have been authorized and it can ensure these are respected within the website.  However it will also need to access the resource server which hosts some APIs we’ll be calling.  We have the access token we need to pass to the resource server API, so next we needed to make sure that the API was secured and would only allow access to authorized resources.

For this we copied the SampleAspNetWebApi project from the Thinktecture samples.  This is a basic Web API project using Owin which introduces the Thinktecture.IdentityServer.v3.AccessTokenValidation authorization into the pipeline.  This library takes the claims from the supplied access token and adds them to a ClaimsIdentity which is set in an AuthenticationTicket.  This allows us to enforce authorization for controller actions decorated with the Authorize attribute.
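The pipeline setup looks roughly like the following, loosely based on our memory of the Thinktecture sample; treat the method and option names as assumptions and check them against the sample project itself:

```csharp
using Owin;
using System.Web.Http;
using Thinktecture.IdentityServer.AccessTokenValidation;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Validate incoming access tokens against the identity server.
        // The authority URL is an illustrative value.
        app.UseIdentityServerBearerTokenAuthentication(
            new IdentityServerBearerTokenAuthenticationOptions
            {
                Authority = "https://idsrv.example"
            });

        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        app.UseWebApi(config);
    }
}

// The middleware puts the token's claims on a ClaimsIdentity, so the
// standard Authorize attribute can guard controller actions.
[Authorize]
public class ValuesController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok("only reachable with a valid access token");
    }
}
```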

In summary, this article has introduced the high level workflow involved in gaining access to an API secured with OAuth2 and OpenID Connect.  Our specific scenario also introduced the need for a separate service to validate the id token, though for the majority of applications this won’t be necessary as you’ll be able to take advantage of the .NET 4.5 JWT libraries.

In the next article I’ll introduce the demo projects we created to implement the workflow described here.


Test First Design Benefits

One of the things I like about writing tests first is that it makes me write code which is easy to work with.  Of course this depends on your definition of easy, so I’ll give an example.

A colleague of mine wrote a small framework to help work with Azure service bus.  It takes care of the details of interacting with the service bus with the aim being that developers need only write a handler to handle particular message types.

The client framework sets a message type property on the message when it is sent and then the receiver is able to use this to locate the appropriate message handler.
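In other words, something along these lines.  This is a hand-rolled sketch; the real framework’s types look different, and these names are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

interface IMessageHandler
{
    void Handle(string body);
}

class OrderPlacedHandler : IMessageHandler
{
    public string LastBody;
    public void Handle(string body) { LastBody = body; }
}

class HandlerRegistry
{
    readonly Dictionary<string, Func<IMessageHandler>> factories =
        new Dictionary<string, Func<IMessageHandler>>();

    // The sender sets a message type property on the message; the receiver
    // uses that property as the lookup key for the right handler.
    public void Register(string messageType, Func<IMessageHandler> factory)
    {
        factories[messageType] = factory;
    }

    public IMessageHandler Resolve(string messageType)
    {
        Func<IMessageHandler> factory;
        if (!factories.TryGetValue(messageType, out factory))
            throw new InvalidOperationException("No handler registered for " + messageType);
        return factory();
    }
}
```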

I set about writing my first handler and soon found myself wondering how I could inject dependencies into it.  The source code for the framework wasn’t available so I opened the DLLs with dotPeek to see how the handlers were resolved and instantiated.  I discovered a factory which created handlers by newing them up.  Fair enough, I thought, I’ll just write my own factory that uses Castle Windsor to create instances and everything will be good.

Except I couldn’t because the service bus framework invoked the handler factory statically.  In other words the code which needed the factory was tightly coupled to a particular implementation.  I couldn’t tell it to use a different implementation without changing the source code which unfortunately wasn’t an option.

Doing TDD has taught me that static invocations are often the source of tight coupling.  I can’t mock out a static dependency; I can only mock out an instance.  When I write my test first I know this, and so I would choose to specify the dependency as a settable property.  This means that the framework can start up with the default factory implementation, which I can then override with my version that uses Castle to provide the handler instance.

As a side note, in this particular scenario I have to use property injection rather than constructor injection.  The service bus framework doesn’t use dependency injection and I’m not able to change that.  It doesn’t know about my factory implementation, and since it is the framework and not me that instantiates the component which needs the message handler factory, I have no control over the constructor arguments anyway.
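As a sketch, the settable property approach looks something like this.  All names are illustrative, not the real framework’s:

```csharp
using System;

interface IHandlerFactory
{
    object Create(Type handlerType);
}

class DefaultHandlerFactory : IHandlerFactory
{
    public object Create(Type handlerType)
    {
        return Activator.CreateInstance(handlerType);   // plain 'new', as the framework did
    }
}

class MessageDispatcher
{
    IHandlerFactory factory = new DefaultHandlerFactory();

    // Property injection: the framework creates this component itself so we
    // get no say in its constructor arguments, but we can swap the factory
    // afterwards, e.g. for one backed by a Castle Windsor container.
    public IHandlerFactory HandlerFactory
    {
        get { return factory; }
        set { factory = value; }
    }
}
```

At startup the client code would simply assign its own implementation, for example `dispatcher.HandlerFactory = new WindsorHandlerFactory(container);` (a hypothetical factory wrapping a Windsor container).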

Because I’m in the habit of writing test first using mocks I know that my dependencies are inverted and can be injected.  Often I’ll only use either a default implementation or a mock but inverting my dependencies gives me scope to use different implementations in the future and it doesn’t come at a cost.

We eventually tried a couple of approaches for the handlers in question.  I wrote some fairly stinky initialisation code within the handler that obtained an instance of the class from Castle and then delegated the handler’s single method call to the injected instance.  We didn’t really like this as it introduced complexity just so that we could use Castle.  So we decided to drop Castle.  The handler was simple so we could just new up dependencies and wire them by hand.  Or so we thought.  Over time things became more complex and we ended up with a bunch of setup code in the handler, which became the thing that set up a ‘controller’ dependency which did the actual work.  Had we used a handler factory that used Castle we wouldn’t have had to write any setup code and the handler could have kept its role as the thing which handled the message.

What prompted me to write this is the idea I encounter fairly frequently that you shouldn’t write tests for implementation details.  Typically a BDD style test is there to verify behaviour.  It’s considered a bad thing to have this test verify mock invocations, as this ties the test to a particular implementation.  You should be able to vary the implementation without all the tests failing.  Perfectly reasonable, and I do agree.

However I also think that you should write mock based implementation detail tests ahead of the implementation.  If you can do that I think you will end up with cleaner code.  If you need to refactor the implementation details, do that test first too.  If you want to try out a completely different implementation then do that test first, and if you like it feel free to throw away the old implementation along with its tests.

Keep BDD style tests for verifying behaviours; these should not break if you vary the implementation.  But also write TDD tests using mocks to help guide your implementation.  These are a great (but by no means the only) way to ensure you write clean code, assuming you can write clean tests.  Knowing the distinction between these two types of test is really important, as is including both types of test in your test suite.


No, TDD isn’t dead, it just takes a bit of practice

In the recent ‘is TDD Dead‘ debates, DHH said that he felt TDD could lead to test induced damage, particularly when using mocks.  He said that it can lead to layers of indirection in the codebase.  I’m a long time practitioner of TDD and I frequently use mocks.  I agree with DHH that these layers of indirection are bad for the codebase.  I don’t think that doing TDD and using mocks always leads to this problem.  Actually, trying to do TDD and trying to use mocks can be quite hard to start with as you get a feel for how to use the tools appropriately.  It’s quite likely that mistakes will be made as a result of trying to learn TDD, but I think it is important to make the distinction between damage caused by trying to learn a new technique and damage caused by using a fundamentally flawed technique.

One of the best articles I have read on TDD is the TDD Anti-Patterns catalogue. If you’re not sure if you’re doing TDD right have a read of this article and see if any of it sounds familiar. It’s a good read even if you’re fairly confident with TDD. It outlines a few patterns which describe what not to do. I’ve found this really useful when I’ve written tests or test first code that just don’t seem to be right and I’m not sure why.

Moving on, I’d like to suggest a few things you can try for using mocks with TDD which have been helpful to me.

1. Write short tests. If you can express the intent of the code you are planning to write by writing a clear and concise test first then you most likely have a good understanding of the problem.

2. If the test is starting to grow it could be that it is describing an implementation which will have too many responsibilities. Can you extract surplus responsibilities and have your test assert that they have been called on a mock rather than having the test need to know about them in detail?

3. Limit the number of dependencies for your class under test. As a rule of thumb, if I have a class with more than three dependencies I find that things start to get messy. It tends to reveal poor separation of responsibilities and the first clue is usually a long test as already mentioned.

4. Try not to mix classic TDD and mockist TDD. Testing with mocks lends itself well to testing methods that coordinate their dependencies. Testing without mocks is great for pure functions which take one or more arguments and return a value. If you find yourself writing a test which does both consider creating a method on an interface which will handle the functional part and make it a responsibility of a coordinating method to call that appropriately.
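To illustrate rule 4, here’s a small hypothetical example: the pure calculation lives behind its own interface so it can get classic input/output tests, while the coordinating method can be tested with a mock of that interface.  All names are invented for the example:

```csharp
// Pure function behind a narrow interface: easy to test classically.
interface IPriceCalculator
{
    decimal Total(decimal unitPrice, int quantity);
}

class PriceCalculator : IPriceCalculator
{
    public decimal Total(decimal unitPrice, int quantity)
    {
        return unitPrice * quantity;
    }
}

// Coordinating class: a mockist test verifies it calls the calculator,
// without re-testing the arithmetic.
class OrderProcessor
{
    readonly IPriceCalculator calculator;

    public OrderProcessor(IPriceCalculator calculator)
    {
        this.calculator = calculator;
    }

    public decimal Process(decimal unitPrice, int quantity)
    {
        return calculator.Total(unitPrice, quantity);
    }
}
```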

I’m not suggesting that these are hard and fast rules which you must always observe although I do suggest that you pick a small project and rigorously observe them on that to see what happens. As an aside, a few years ago I read an essay called Object Calisthenics by Jeff Bay published in the ThoughtWorks Anthology. Jeff outlines nine rules of thumb and says use them religiously for a small 1000 line project and ‘you’ll start to see a significantly different approach to designing software’. Well, I did, and I did. So in the spirit of that essay I invite you to try the above and see if you like the results.

I really believe in writing your tests first, so if you don’t already, try that too.  Write a test, then write some code, then refactor.  Try it.

For coordinating methods this usually causes me to think about my implementation as a series of steps.  I would write one or more interfaces which expose the methods I need, corresponding to the steps I’ve identified.  I can then test that my implementation will execute all of those steps using mocked instances of those interfaces.  That series of steps must describe what I think the code should do.  It must be clear and easy to understand.  It is essential to get that series of steps correct.  Personally I find writing a test first helps me to do this.  If I make a mistake, or if my test indicates too much complexity, I can refactor the test and the interfaces, but I won’t write any implementation code until I’m happy that my test clearly shows the direction I’m heading.

If you don’t do this then you may well end up writing layers of indirection.  If you get it right your interfaces will allow you to write code that defers the details of exactly how something is done to an interface which specifies a contract promising to perform a particular step.  When code is written before its test, or has no tests, people often refactor a large method so that it calls a series of steps defined as private methods.  This is pretty similar but not very testable: you need mocked interfaces for the steps in order to write a test that verifies the step was executed.
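Here’s a small illustration of that style, using a hand-rolled spy in place of a mocking library (Moq or similar would do the same job).  The importer and its steps are invented for the example:

```csharp
using System.Collections.Generic;

// The steps interface is written first, driven by the test below.
interface IImportSteps
{
    void Download();
    void Validate();
    void Save();
}

// The coordinating method is just the series of steps the test describes.
class Importer
{
    readonly IImportSteps steps;

    public Importer(IImportSteps steps)
    {
        this.steps = steps;
    }

    public void Run()
    {
        steps.Download();
        steps.Validate();
        steps.Save();
    }
}

// A hand-rolled spy that records which steps were called and in what order.
class SpySteps : IImportSteps
{
    public readonly List<string> Calls = new List<string>();
    public void Download() { Calls.Add("Download"); }
    public void Validate() { Calls.Add("Validate"); }
    public void Save()     { Calls.Add("Save"); }
}
```

The test simply runs the importer against the spy and asserts that each step was executed in order; the detail of how each step works is deferred to the implementations of IImportSteps, each of which gets its own tests.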

Having written plenty of code this way I’ve found that I end up with more interfaces than folk who don’t use this style.  They are often fairly narrow interfaces (a term I came across in the book ‘Growing Object Oriented Software‘ which accurately describes what I see in my own code) with one or two methods.  I’ve found that this gives me a loosely coupled and highly cohesive codebase.  I find it easy to reuse interfaces in other parts of the codebase and often I’ll find ways to assemble the code in different and better ways as my requirements evolve.

Incidentally, as my requirements do evolve I don’t worry too much if I find there are tests which are no longer relevant. Rather than spend time refactoring irrelevant tests to somehow make them pass I’ll just throw them away. If I’ve made significant changes to an implementation I’ve usually driven them test first anyway so I know my code is covered by another test. More often I will have created a new implementation of an interface as my understanding has changed so I might leave my old tests in place as they still apply to the old implementation. If I delete the implementation I’ll have to delete the tests at the same time as they simply won’t compile any more.

To conclude then, it takes time to get good at TDD. You may very well damage your codebase as a result of trying to learn TDD so use some common sense and pay attention to the kind of tests you are writing and the kind of implementations you are writing so that you can correct your mistakes. It takes time to get good at something so stick with it until you get the results you want.
