Tuesday, June 26, 2012

Developing Software with BDD

In a previous post, Building Software Outside-In, I described how to use TDD practices to design and develop somewhat on the fly. The same concepts can be extended to Behavior Driven Development (BDD; see Dan North's article Introducing BDD for an introduction). BDD is an evolution of TDD in that it brings the same concepts to a higher level in the process. BDD focuses on defining the features of an application in a vocabulary understood by both business users and developers. This vocabulary is used to define features and scenarios that are testable. Using these principles, and some nice tooling like SpecFlow, the advantages of TDD can be seen at an even higher level, earlier in the process.

In this article, I'll discuss how principles of BDD can be used in Outside-In development. I'll begin with how features are defined in a Functional Specification and then brought into a tool to use in the development process. I'll discuss how tests can be created based on those same feature definitions and then used as a starting point for building the application.


Functional Specifications


These days, I like to express specifications using something close to the Gherkin syntax. This syntax expresses the features of an application with feature and scenario statements. When I put specs together, I'll often include a UI mock-up and possibly some definitions along with these scenario statements. Thus, my specs will look something like this:



The Gherkin portion is the "Feature:" and "Scenario:" statements.

Features represent specific functionality in the application. They should generally represent a single unit of work relevant to the user. Features should not be as granular as "select something from a dropdown list", nor should they be as coarse as "Order Entry Module." They should generally be a distinct task, such as "Create a new Sales Order" or "Add an Item to a Sales Order."

Scenarios represent permutations of a feature based on the state of the system. You would express normal, abnormal and exception circumstances as in any test. When defining scenarios, you should be comprehensive. Features and scenarios are not only the specifications for the system, but also the tests that you will create for the system and even its documentation. In addition, because features and scenarios are expressed in a way that the user will understand, they can survive the development cycle to be used in the next iteration of the application.

Let's look at a sample feature, then:

In order to ensure that deleting an Account Type does not impact any Customers
As an Account Administrator
I cannot Delete an Account Type if it is assigned to a Customer
Scenario: Should warn user when there are Customers of the Account
          Type when Deleting the Account Type
Scenario: Should warn user when there is an error when Deleting the Account
          Type
Scenario: Should Delete an Account Type when there are no Customers of
          the Account Type when Deleting the Account Type

This feature simply states a business rule that Account Types cannot be deleted if the account type is associated with any customers. It's simple enough to understand that there are account types for customers, so you can't very well go deleting account types without dealing with those customers. The data integrity rules must not allow a null account type for a customer and we would rather avoid a nasty referential integrity error. So, we have a few basic scenarios that might occur and that we need to test for. Any business user would understand this, although we are not explicit about what "warn user" means. That's an implementation detail that does not necessarily involve the feature (although it could, if we wanted to specify it).

Of course, this feature will not exist all by itself. There's not enough here to define an application. We can say, however, that there must be "Account Types" and there must be "Customers" and a "Customer" has a property that references an "Account Type" and that property cannot be null. In the context of the entire application, there will be other features that define forms and other domain objects and how they interact. The feature above merely serves as an example. You would have other features, such as:

Feature: Customers can be Searched by Name
Feature: Customers can be Searched by Number
Feature: Customers can be Created
Feature: Customers can be Deleted
Feature: Customers can be Changed
Feature: Account Types can be Created
Feature: Account Types can be Deleted

These features may elicit the response "duh." Yet, as every developer knows, they still must be built and they must be tested. So, we lay out every little feature along with the various scenarios needed to validate that it will work properly, according to the needs of the business. Such is a functional specification.


Mock Applications

The normal method of going from functional specs to development is to start laying out the designs needed to implement the specs. I used to use a lot of UML to do this, but I think there is now a better way. If I am building an application with a graphical user interface, my functional specs will have mockups of all the UIs that I am going to build. It does not matter how these mocked UIs are created; all that matters is that I've clearly defined what the user is going to get. So, thinking along the lines of building Outside-In, I can start with the UI, right? Well, I could, but I'd run into a significant testing hurdle.

Automated tests are not easily created for UIs. That's one of the reasons that we have all those "MV" patterns: MVP, MVC, and MVVM. They all attempt to remove the business logic from the UI so that we can have more comprehensive testing (among other things.) I suggest, then, that you start not quite with the UI, but with the View (they all have a view definition, which is an interface.) I like the MVP pattern and will focus on that one in particular, but the principles apply to the others as well.

So, just like I demonstrated in Building Software Outside-In, we can create the interfaces for the view and the presenter just based on the specs. From there we can build the rest of the application using TDD methods. However, BDD features often describe interactions between views, and I do want to test not only the granular interaction (one form should call a method that's supposed to open another form) but also a somewhat more robust interaction (when one form opens another form the application state should change in a certain way.) I want more than what mocking tools provide. I want a mock application.

A "Mock Application" is only a "Mock" in that a person cannot really do anything with it because the UI is not really a UI. It exists for testing purposes only. Consider the intent of the "View" in the MVP (or any other) pattern. By its very nature it is supposed to have no functionality. True, it will have functionality relevant to the platform (managing state in ASP.NET, for example,) but that functionality has nothing to do with our application's true features. If you were to implement the view using a plain old class with nothing but properties, it could still function as the view layer of the application. You would need another application to actually perform actions in the application, and that is precisely what tests are for.
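As a sketch of that idea (the interface and class names here are my own assumptions, not from the spec above), a test-only view for an Account Type list could be nothing but auto-properties implementing the view interface:

```csharp
using System.Collections.Generic;

// A sketch, with assumed names: the view interface for an Account Type list
// and a test-only implementation that is nothing but properties.
public interface AccountTypeListViewInterface
{
    List<string> AccountTypes { get; set; }
    string SelectedAccountType { get; set; }
    string WarningMessage { get; set; }
}

// The "mock" view: no code-behind, no platform, no behavior. Tests (or a
// presenter) read and write these properties directly.
public class MockAccountTypeListView : AccountTypeListViewInterface
{
    public List<string> AccountTypes { get; set; }
    public string SelectedAccountType { get; set; }
    public string WarningMessage { get; set; }
}
```

Because the class has no platform dependencies, the BDD tests can stand in for the user, poking at the same properties a real UI would bind to.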

To build the view layer of an application, all you really need to do is implement the view interfaces and any infrastructure required by the platform to allow your views to interact. I like the Application Controller pattern for this, which is described fairly well in The Presenter in MVP Implementations. The Application Controller handles the navigation between forms, which can itself be abstracted behind transition objects so that the mechanics of opening a view (Response.Redirect, for example, or Form.Show) do not have to interfere with the application logic. Using an Application Controller can minimize the platform-specific implementation to just the views and a few transition objects. The only other consideration is maintaining which view is active, although in many circumstances that is not even relevant.
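A minimal sketch of that arrangement might look like this (all names here are illustrative, not from any particular framework):

```csharp
using System;
using System.Collections.Generic;

// A transition object hides the platform mechanics of opening a view
// (Form.Show, Response.Redirect, etc.) behind a single Execute method.
public interface ViewTransitionInterface
{
    void Execute();
}

// A convenience transition for tests and mock applications.
public class DelegateTransition : ViewTransitionInterface
{
    private readonly Action action;
    public DelegateTransition(Action action) { this.action = action; }
    public void Execute() { action(); }
}

// The Application Controller owns navigation; presenters ask it to navigate
// by name and never touch the platform directly.
public class ApplicationController
{
    private readonly Dictionary<string, ViewTransitionInterface> transitions =
        new Dictionary<string, ViewTransitionInterface>();

    public string ActiveView { get; private set; }

    public void Register(string viewName, ViewTransitionInterface transition)
    {
        transitions[viewName] = transition;
    }

    public void NavigateTo(string viewName)
    {
        transitions[viewName].Execute(); // platform mechanics stay in the transition
        ActiveView = viewName;           // track the active view when it matters
    }
}
```

In the mock application, the registered transitions simply swap property-only views in and out, so the BDD tests can follow navigation without any real UI.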


The Process of BDD Tests

The Gherkin language specifies a syntax for Scenarios. This syntax is a set of statements that begin with "Given", "When" and "Then". They are directly related to the typical "Setup", "Execute" and "Validate" steps of a test. The logic goes as follows:

Given that the application is in a certain state
When I perform some operation
Then certain things happen

These statements should be expressed in a way that is easily understood by the user base. They may be tedious at times, but they still make sense to everyone. So, from the functional specs, each scenario is further refined to express this level of detail.  Consider one of our sample scenarios:

Scenario: Should warn user when there are Customers of the Account
          Type when Deleting the Account Type

I might express the steps in the scenario as follows:

Given that I have opened the Account Type List
And that I have selected an Account Type that is associated with a Customer
When I Delete that Account Type
Then I am warned that the Account Type cannot be Deleted

These are, in fact, the steps I need to take in code in order to test the scenario. If I use a tool like SpecFlow, then I can have individual, re-usable methods generated for each step that I can then implement.
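As a sketch of what those step methods might look like with SpecFlow (the step text comes from the scenario above; the class name and method bodies are my own placeholders):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class DeleteAccountTypeSteps
{
    [Given(@"that I have opened the Account Type List")]
    public void GivenIHaveOpenedTheAccountTypeList()
    {
        // navigate the mock application to the Account Type List view
    }

    [Given(@"that I have selected an Account Type that is associated with a Customer")]
    public void GivenIHaveSelectedAnAssociatedAccountType()
    {
        // put an Account Type that has Customers into the view's selection
    }

    [When(@"I Delete that Account Type")]
    public void WhenIDeleteThatAccountType()
    {
        // call the presenter's Delete (and ConfirmDelete) methods
    }

    [Then(@"I am warned that the Account Type cannot be Deleted")]
    public void ThenIAmWarnedThatTheAccountTypeCannotBeDeleted()
    {
        // assert that the warning was surfaced on the view
    }
}
```

Because the "And" step shares the "Given" keyword, both setup steps bind as Givens; each method is re-usable by any scenario that repeats the same step text.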


Using BDD Tests and Mock Applications to Design and Build

How BDD works with SpecFlow is beyond the scope of this article. What I really want to discuss is the process of defining these tests, which came from a functional spec, so that you can build an application from the Outside-In. To be sure, in order for this to work, you must have done the following:

1. Decided on the architecture and platform and built a foundation. This is typically the same old stuff you build for every application. It's a starting point and does not express any business functionality.
2. Built a testing infrastructure. This is important only in some circumstances. For example, if I want to use Entity Framework Code First development, it would really be helpful if I put together the infrastructure to generate my database on the fly to SQL CE or something.
3. Created the functional specs - ostensibly, these would have been reviewed and approved by the business unit the application is for.

You might flesh out all the scenarios before coding. That can be helpful to clarify how the application works. On the other hand, you might just as likely start with a few scenarios and work from there. That would fit nicely into an Agile process. Either way, once you have the groundwork done, you begin in the same way: Designing the View and Domain Objects.

One of BDD's intentions is to establish a vocabulary that is understood by both developers and the business unit. That's also integral to Domain-Driven Design. You might have noticed my odd use of capitalization in the scenario definitions. I did this to highlight potential candidates for domain objects, properties and methods. I do not believe these will always be everything you need, but they will be the most important parts. Once you work through a few scenarios, you'll see what I mean. So, it should be fairly straightforward to define the domain objects, views and presenters you need to implement the scenarios you have defined. To be sure, at this point we are not saying you have completely defined any of this; you've just defined enough to complete a particular scenario of a particular feature - very Agile, I think.

Defining these three parts - the view, presenter and domain - is all you need to implement the step definitions to complete the BDD feature tests - but, you must implement them. Hmm...in order to implement the presenter, you need to define the service and in order to implement the service you need to define the data access layer. Well, not exactly. We can simply mock the service for the presenter and be done with it.

Well, I don't actually mean to be done with it, because I want these tests to run the entire stack. Yet, I am building from Outside-In and I'd like to validate each layer as I go. I can use unit tests to do that, and I should write unit tests per typical TDD methodology, but those tests are granular. My BDD tests are defined as application-level features. I want to include the Application Controller in these tests and let the presenters be properly instantiated. For me, that means they are created through a factory. It is, in fact, the power of factories that will allow me to run these BDD tests before I even implement my service (or design my data access layer, for that matter).

Using Factories to Assist in Outside-In Development

Factories are important in a well designed application. Factories allow us to leverage Dependency Injection to accomplish Outside-In development. When we define each layer of our application (presentation, service and data access) we define factories for those layers. To be sure, these are abstract factories as they are implemented behind an interface. In our mock application, we would implement factories that produce mock-ups of the layers we have not yet created. In this way, the tests can run and validate the work that we have done.

Consider the following model for an abstract factory graph:


The Presenter Factory will require a Service Factory which will require a Data Access Factory. All this is wired together when the application starts. In our mock application, we simply create a mock up of the factory for the layers that we have not yet coded. So, if we are working on the Presenter, which is required for the BDD tests, we can create mocks for the service layer factory and the service layer. In this way, our mocks will provide the expected results and allow our tests to run.
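The factory chain might be sketched like this (interface and class names are my own assumptions; each factory is abstract in the sense that consumers see only an interface):

```csharp
// Each layer's factory requires the factory for the layer beneath it.
public interface DataAccessFactoryInterface { }

public interface ServiceFactoryInterface
{
    DataAccessFactoryInterface DataAccessFactory { get; }
}

public class ServiceFactory : ServiceFactoryInterface
{
    public DataAccessFactoryInterface DataAccessFactory { get; private set; }
    public ServiceFactory(DataAccessFactoryInterface dataAccessFactory)
    {
        DataAccessFactory = dataAccessFactory;
    }
}

public interface PresenterFactoryInterface
{
    ServiceFactoryInterface ServiceFactory { get; }
}

public class PresenterFactory : PresenterFactoryInterface
{
    public ServiceFactoryInterface ServiceFactory { get; private set; }
    public PresenterFactory(ServiceFactoryInterface serviceFactory)
    {
        ServiceFactory = serviceFactory;
    }
}

// A stand-in for the layer we have not built yet; the BDD tests can run
// against this until the real data access factory exists.
public class StubDataAccessFactory : DataAccessFactoryInterface { }

// Wired together at application start.
public static class ApplicationBootstrap
{
    public static PresenterFactoryInterface Wire(DataAccessFactoryInterface dataAccess)
    {
        return new PresenterFactory(new ServiceFactory(dataAccess));
    }
}
```

Swapping StubDataAccessFactory for the real factory later requires no change to the presenter or service code, which is exactly the seam that makes Outside-In possible.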

Once the presentation layer is complete and all tests pass, we move on to the service layer. Again, we would create a mock for the data access layer objects and factory which will allow our service layer to work. Our BDD tests should again pass, once the service layer is complete.

Conclusion

I hope this article will provide sufficient insight to get you started with BDD and Outside-In development. I have really only laid out the principles for doing this. There are a lot of details that need to be fleshed out to get an application to work no matter how you build it. The goal here, however, is to build an application in a manner that most closely reflects the specifications. I believe BDD plays an important role in that, but not without a reliable way to build applications using it. Hopefully the approach outlined here will provide such a way.

Wednesday, June 13, 2012

Building Software Outside-In

I've been working with TDD (Test Driven Development) concepts heavily for the last year or so. I have to say, it has radically changed the way I do software development. The basic concept of writing tests before implementing has proven to be a very good way to ensure the development of solid, maintainable code. This is true not only because you end up with a set of comprehensive tests, but because the process of creating those tests forces you to make better design decisions.

Recently, I've begun to think about the overall process of designing and building software using TDD. As I've focused on abstracting each layer for better testing, the benefits of Outside-In development have become more and more apparent. So here I'd like to lay out how I've begun to go about the work of designing and building using this approach. I do not pretend that this approach is by any means novel, but I do hope it will be explained well enough here to be of benefit to others.

Starting from the UI (the Outside)

Suppose you have a typical business application that has a user interface (UI) and some backend database. Assuming you have done the work to define what the application is supposed to do, you may very well have a number, if not all, of the UIs defined. I typically do this in a functional spec in collaboration with (or at least with sign-off from) the business unit. Chances are, you will use a fairly familiar architecture and technology stack. I currently use .NET, Entity Framework and the MVP pattern. That coupled with best practices for software architecture gives us the layers we need. Starting from the UI, they are:
  • UI (or View)
  • Presentation
  • Service
  • Data Access
There is, of course, a domain layer as well, which may traverse some of these layers depending on how it is implemented. For our purposes here, we'll keep it simple and assume all we need are basic CRUD functions, so the domain objects are simply passed between layers.

Most UIs can be broken down into data elements and actions. The MVP pattern separates these nicely between a View and a Presenter. When I look at a mock-up of a UI, I can start to see what those data elements and actions are. Thus, my design work starts by defining the View and Presenter interfaces.

Suppose we have a UI that looks like this:

It's pretty simple. From this you can see the properties that a view would require quite easily. So, I'd define the view like so:

public interface GroceryListViewInterface
{
      List<Grocery> GroceryList {get;set;}
      Grocery SelectedGrocery {get;}
}

I'll also need to start defining what a Grocery is:

public class Grocery
{
     public int? Id {get;set;}
     public string Name {get;set;}
}

The actions that a presenter must perform are fairly evident as well:

public interface GroceryListPresenterInterface
{
     void Load();           // I've got to start somewhere
     void New();            // that's a toolbar option
     void Close();          // another toolbar option
     void Edit();           // this is a link in the grid
     void Delete();         // again, a link in the grid
     void ConfirmDelete();  // see below
}

Designing through Testing

Were I creating UML sequence diagrams in a tool like Visio, I would next consider what I must do in order to perform the functions in the presenter. It's no different here. But in this case, so that I have something useful that validates my code, I will design the service layer by building the unit tests for the presenter that I have just defined. This will require that I create the presenter class and provide default implementations for the methods (they all will throw the NotImplementedException) and I will start an interface for the service.

In the unit test for the presenter class, I will use a mocking framework to define the service. I use Moq, but there are others out there that would work as well. The advantage of a mocking framework is that you do not need to implement the class you are mocking; you merely need the interface. I will create a single test fixture for a class and then stub out the tests that I need. For brevity, I've only included a few in the following example:

[TestClass]
public class GroceryList_TestFixture
{
    private Mock<GroceryListServiceInterface> serviceMock;
    private Mock<GroceryListViewInterface> viewMock;

    #region Load Tests
    [TestMethod]
    public void Load_Should_Load_Existing_Groceries()
    {
          throw new NotImplementedException();
    }
    #endregion
   
    #region New Tests
    [TestMethod]
    public void New_Should_Add_A_Blank_Grocery()
    {
         throw new NotImplementedException();
    }
    #endregion
}

Notice the long names for the tests. That makes them easy to read, especially with the underscores. I also like to express them using a subjunctive voice, i.e. I say "should" instead of "will" or "can". Another thing to note is the use of regions. I only showed a single test for each method here, and that may be all that's required. However, when there are many scenarios, I like them well organized. Finally, since I haven't implemented any of the tests, they all throw an exception. This will ensure that the tests fail if I happen to run them. That frees me to stub out as many tests as I need without worrying that I might forget to implement them.

When defining these tests, I want to consider all the various scenarios that might occur in a method. Let's consider the ConfirmDelete() method. Whenever I delete something, I want to confirm that operation with the user. "Delete" to me should always ask the question of the user, "Do you really want to do this?" Since I cannot assume my presenter can open a modal dialog, I always have a follow up method with a name like "ConfirmDelete" to be used if the user confirms the action.
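The Delete/ConfirmDelete split might be sketched like this (the interface, class and member names are invented for illustration; a real presenter would call the service inside ConfirmDelete):

```csharp
// Delete only asks the question; ConfirmDelete does the actual work.
public interface ConfirmingViewInterface
{
    bool ConfirmationRequested { get; set; }
}

// A property-only view implementation, in the spirit of the mock application.
public class TestConfirmingView : ConfirmingViewInterface
{
    public bool ConfirmationRequested { get; set; }
}

public class DeletingPresenter
{
    private readonly ConfirmingViewInterface view;

    public bool Deleted { get; private set; }

    public DeletingPresenter(ConfirmingViewInterface view) { this.view = view; }

    // Delete never destroys anything; it only asks the view to pose
    // "Do you really want to do this?" to the user.
    public void Delete()
    {
        view.ConfirmationRequested = true;
    }

    // Called only if the user answers yes; a real presenter would call the
    // service's delete method here.
    public void ConfirmDelete()
    {
        Deleted = true;
    }
}
```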

Thinking about what happens during the ConfirmDelete() method, where the real work to delete the Grocery is done, a couple scenarios occur to me:
  1. When I confirm the deletion of a Grocery, the Grocery should be deleted.
  2. If an error occurs while confirming the deletion of a Grocery, I should see an error message.
More complicated applications would have more scenarios, but these will suffice. I will then stub out tests for each of these scenarios, using a test name that best describes the scenario:

[TestMethod]
public void ConfirmDelete_Should_Delete_A_Grocery_When_No_Errors_Occur()
{
     throw new NotImplementedException();
}

[TestMethod]
public void ConfirmDelete_Should_Show_A_Warning_When_An_Exception_Occurs()
{
     throw new NotImplementedException();
}

The next part of this step is where the real design work is done. Let's take the first test and work through it.

I'll first divide the test method into the three typical parts of a test. Different people have different names for these parts, but they are basically the same and fairly self explanatory:

[TestMethod]
public void ConfirmDelete_Should_Delete_A_Grocery_When_No_Errors_Occur()
{
    // setup
    // execute
    // validate
}

I setup the test by initializing the object I am going to test and by setting up the mocks. I know I'll need to instantiate the mocks and the presenter object for each test, and that effort is the same, so I'll go ahead and create a private method to do that. I like to return the interface instead of the actual object.

private GroceryListPresenterInterface SetupPresenter()
{
     serviceMock = new Mock<GroceryListServiceInterface>(MockBehavior.Strict)
            { CallBase = true };
     viewMock = new Mock<GroceryListViewInterface>(MockBehavior.Strict)
            { CallBase = true };
     return new GroceryListPresenter(
            serviceMock.Object, viewMock.Object);
}

You can see that the implementation of the presenter takes the service and the view as parameters on the constructor. I could have done this differently, but this is a simple approach that guarantees that the presenter will get its dependencies.
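That constructor shape might look like this (the interfaces are redeclared empty here so the sketch compiles on its own, and the null checks are my addition; the real interfaces are the ones defined earlier in the post):

```csharp
// Simplified re-declarations for a self-contained sketch.
public interface GroceryListServiceInterface { }
public interface GroceryListViewInterface { }

public class GroceryListPresenter
{
    private readonly GroceryListServiceInterface service;
    private readonly GroceryListViewInterface view;

    // Constructor injection: the presenter cannot exist without its
    // dependencies, so tests must supply them (real or mocked).
    public GroceryListPresenter(GroceryListServiceInterface service,
                                GroceryListViewInterface view)
    {
        if (service == null) throw new System.ArgumentNullException("service");
        if (view == null) throw new System.ArgumentNullException("view");
        this.service = service;
        this.view = view;
    }
}

// Trivial stand-ins for wiring the presenter up in a test.
public class StubGroceryListService : GroceryListServiceInterface { }
public class StubGroceryListView : GroceryListViewInterface { }
```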

Now that the presenter object is ready and the mocks have been instantiated, I can proceed with the setup portion of the test, which is ultimately the design of the method.

This method does not require much. All I need to do is get the selected Grocery and then pass it to the service through a delete method. So, I'll setup the mock objects to express this:

...
// setup
GroceryListPresenterInterface presenter = SetupPresenter();
viewMock.SetupGet( v => v.SelectedGrocery).Returns(new Grocery());
serviceMock.Setup( s => s.Delete(It.IsAny<Grocery>()));
...

See the Moq documentation for details on how the mock objects work.

That's about it. All I need to do now is execute the presenter method and validate the mocks:

// execute
presenter.ConfirmDelete();

// validate
viewMock.VerifyAll();
serviceMock.VerifyAll();

The next step will be to implement the test for the exception scenario. In that case, I would have the serviceMock throw an exception and express how it would be shown in the view, through some other method or property on the view. That means, of course, I will amend my view interface. That's the very kind of detail I want to discuss in writing these tests!
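That second test might be set up along these lines (ShowError is a hypothetical member I'm inventing to stand for however the view surfaces the warning; the real name would come out of the design conversation):

```csharp
...
// setup
GroceryListPresenterInterface presenter = SetupPresenter();
viewMock.SetupGet( v => v.SelectedGrocery).Returns(new Grocery());
serviceMock.Setup( s => s.Delete(It.IsAny<Grocery>()))
           .Throws(new Exception("something failed"));
viewMock.Setup( v => v.ShowError(It.IsAny<string>())); // hypothetical new view member

// execute
presenter.ConfirmDelete();

// validate
viewMock.VerifyAll();
serviceMock.VerifyAll();
...
```

With MockBehavior.Strict, the test fails unless the presenter actually catches the exception and routes it to the view.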

Once the tests for all the scenarios have been implemented - and, of course, they all fail - I will have accomplished an Outside-In approach to designing and developing using TDD principles. The service layer is now succinctly defined for this particular feature. I have the methods I need and no more. Even the domain objects have been designed without anything extra.


Building from the Outside-In

The next step is to implement the presenter itself. I can do this before I even consider the design of the service methods because of the tests I have written. I expect them all to eventually pass before I proceed with the service design. I might even move on to other UIs before designing the service. This is a significant part of the Outside-In approach. By doing this, I can discover flaws in my designs before I've gone very far at all. Since the service layer is just an interface, it is really quite easy to adjust. That holds true for the domain objects, as well.

After the presenter has been implemented and all tests have passed, I can proceed in the same manner with designing and building the service layer. After that, the data access layer is next. At that point I'll need to deal with how the domain objects will be persisted and queried. It is noteworthy that until this point, I will not have built a database at all. In fact, using the Code First features of Entity Framework (see Code First Development with Entity Framework 4) I am able to complete the data access layer without really building the database (you can have it automatically generated into SQL CE or MSDE, but that's a topic for another post).

There are many advantages to this approach for designing and building software:
  1. By starting with the requirements (the UI in this case) I start with the customer's perspective and build only what is necessary to fulfill those requirements.
  2. Missing functionality becomes more evident early in the process. In fact, functionality in the back-end is far less likely to be missed because I will not have defined what the backend looks like until I've completed the tests for the front end.
  3. Using unit tests as a low-level design tool provides me with a complete set of unit tests throughout the development process. Continuous refactoring is part of the TDD process; the unit tests make sure my code continues to work as I refactor.

Friday, June 8, 2012

Getting Started

I thought I'd start this blog by reflecting on what my intentions are. Software development is a very fluid profession - stuff changes all the time. Technology changes, requirements change and how we approach problems changes quite often. In dealing with this ever-changing environment, we stumble upon new ideas (some our own, but most at least inspired by the work of others) all the time. It is the reflections, lessons and raw knowledge that I encounter in those situations that I'd like to blog about.

The posts that follow will deal with both practical and academic matters. I intend to document techniques that I encounter or devise to overcome practical software development problems. These posts will be very specific and limited in scope, but I hope they will be of use to others who undoubtedly will encounter the same problems; or perhaps they will inspire somewhat different solutions. I also intend to document my thoughts on the practice of software development. These posts will cover how I develop software, the methods I use (as I develop those methods, of course), musings on concepts put forth by others, and what I hope will be insights useful to even more people. Well, at the very least, these posts will help me organize and document my thoughts even if no one else finds any benefit in them.