Friday 27 April 2012

Getting automation done - inside an iteration!

In my previous post I talked about what I believe to be the key to implementing a successful automation strategy: building on good foundations, using a process that encourages communication and collaboration between team members in order to define your requirements in a more automatable way. This process is known as Behaviour Driven Development (BDD). It defines a structure for your stories and, more importantly, your acceptance criteria which promotes re-use and makes the application of automation a lot easier in the long term.

When teams start to implement automation they often find it difficult to deliver completed automation tests within the same time frame as the developed code, for example a two week iteration. I think this tends to happen when the team is not working closely together to achieve the goal of automation, and you will tend to find that testers are “waiting for developers to complete the code” before they begin writing the tests.

In my mind one of the most important things you are trying to do when defining your scenarios is to think of as many possible ways in which the feature may be executed; you are trying to brainstorm all of the possible permutations of the feature and capture them in simple one-line scenario statements. Another important thing to note is that whatever scenarios you come up with initially do not have to remain fixed for the entire iteration; you should be able to manipulate them, and combine or split them where appropriate, when going through a formalisation phase – provided that you continue to communicate the changes with the rest of the team!

Laying the foundations

During the early stages of implementing automation there will be a fair amount of setup required in order to get just a single test running from end to end; some of the things you need to consider are:

  • Do you have a Continuous Integration (CI) server?
    This is an absolute must have. If you are not running your tests on a regular basis and immediately responding to broken builds then you will not be gaining the full benefit of implementing test automation – I would definitely recommend TeamCity by JetBrains for this
  • Is your product deployable automatically?
    One of the first questions you need to ask is whether your product can actually be automatically deployed to a test server during the CI build. If not, what are the steps that you will need to take in order to make this happen? Talk with the other developers on your team and plan to get this done as soon as possible
  • Do you have a dedicated build server?
    Acceptance Tests, especially UI based tests, can be impacted by conditions on the server. Simple things such as how long it takes to open IE, FF or Chrome can cause tests to break and give you a false negative. By running your acceptance tests on a separate server you are giving yourself the best chance of making your tests stable. If your tests break too often due to environment issues you can quickly lose the confidence of your development team and eventually they will no longer trust the build results.
  • Can all members of the team run tests at any time?
    Having thousands of tests running overnight on a remote server reporting failing tests is of little use if it takes a developer hours to recreate the problem. When building your test set consider all of the potential running modes, such as whether it is on the build server or a single developer trying to isolate a problem, and think about how you can automate the setup of the environment for running the tests in a couple of clicks
  • What is your data set management strategy?
    The nature of your problem domain will largely dictate this. You are generally aiming to produce tests which execute as quickly as possible but with as much coverage as possible. Deciding how you set up and tear down your data set will play a significant role in the time it takes to run each test. If all of your tests are date-dependent and require modifying the current system date it might make sense to organise your test steps to execute over a common timeline to get the most test coverage over the shortest test time – backing up and restoring a database can take a significant amount of time and can quickly become a non-starter. An alternative might be to make each test responsible for creating its test data and cleaning up after itself.
  • What is the skill set of the team?
    This will have a big impact on the tools that you might choose to implement your automation strategy. I prefer to write tests in the same programming language that is used by the production code, in my case C#, but you could look to Ruby or Python or some other language that your team feel comfortable taking on. My theory behind using the language of your production code is that you should already have “experts” in this language within your team who can provide support when building your automation framework, and you may also gain more traction and “buy in” from them. Using a different language can potentially further exacerbate the problem of separate test and development teams, but if your team is up for a bit of polyglot programming then go for it.
  • What automation tools do you want/need to use?
    This may well be led by the skill set of your team, but there is a vast array of options when it comes to choosing automation tools. In my experience, if you are automating web applications then you cannot go far wrong with Selenium 2 WebDriver, WatiN or WebAii; for Windows or Silverlight you could look at Project White; or if you have a fair bit of cash to throw behind it you could look at Coded UI and Visual Studio Lab Manager, or a multitude of other tooling options. In my next post I’ll talk about the tools I have been using.
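On the data set management point above, one option is to make each test own its data. Here is a rough sketch of that pattern – Python’s unittest and an in-memory SQLite database stand in for whatever stack you actually use, and the table and test names are invented for the example:

```python
import sqlite3
import unittest

class LeaveRequestTests(unittest.TestCase):
    """Each test owns its data: created in setUp, removed in tearDown."""

    def setUp(self):
        # An in-memory database stands in for a real test server here.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE leave_requests (employee TEXT, days INTEGER)")
        self.db.execute("INSERT INTO leave_requests VALUES ('alice', 5)")

    def tearDown(self):
        # Cleaning up after ourselves avoids a slow backup/restore
        # cycle between tests.
        self.db.close()

    def test_employee_has_pending_request(self):
        rows = self.db.execute(
            "SELECT days FROM leave_requests WHERE employee = 'alice'"
        ).fetchall()
        self.assertEqual(rows, [(5,)])

# Run the suite programmatically (unittest.main() would also work from a file)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeaveRequestTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because no test depends on data left behind by another, the suite can run in any order, on any machine, with no shared database to restore.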

These are just some of the things you might need to consider, and you will need to assess your own environment to identify any of the key blockers to making automation happen – once you start on the road of automation the overwhelming expectation from your bosses will be that “it just works” and provides value for money and a good return on investment, so you need to remove any impediments that might prevent this from happening as soon as you find them.

The importance of Given, When, Then

In my mind using Given, When, Then to describe a ubiquitous language provides the cornerstone for building automated tests, but the challenge tends to be getting everyone to talk in terms of that ubiquitous language and this can take longer than you might first expect. Even though the steps use simple words from the English language, combined with whatever terms are relevant to the business, the structure of the paragraph can still feel alien. There is still a need to “manipulate” the requirement into the “automatable language of the business”. What this means is that it may well take several iterations before everyone just “gets it” and you can become truly productive, because there is no more learning or debating about the structure. So do not get disheartened when it feels like you are not gaining traction; there will almost certainly be an uphill struggle to convince everybody that this is the way to go, but once you reach the summit it will be worth the effort.

So don’t expect everybody to just “get it”; as with any new technology, things take time and different people take different amounts of time to truly grasp the underlying concepts. Begin by drip feeding the concepts of GWT acceptance criteria, then look to automate a single end to end test using it and build from there.

I cannot stress enough the importance of using Given, When, Then when defining automated tests. Each step type plays a key role in enabling the test developer to break down the scenario into manageable, reusable fragments of functionality which can be combined to form numerous other scenarios to ensure greater coverage of the system: Given steps establish the context, When steps trigger the single event under test, and Then steps verify the observable outcome.
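As a rough illustration of how those step fragments get reused, here is a minimal hand-rolled sketch (in Python for brevity – in practice you would use SpecFlow, Cucumber or similar, and all names here such as LeaveWorld and run_scenario are invented for the example):

```python
class LeaveWorld:
    """Shared state that steps read and write during one scenario."""
    def __init__(self):
        self.allowance = 0
        self.approved = None

# Given: establish the context
def given_an_allowance_of(world, days):
    world.allowance = days

# When: the single event under test
def when_employee_requests(world, days):
    world.approved = days <= world.allowance

# Then: assert on the observable outcome
def then_request_is_approved(world, expected):
    assert world.approved == expected

def run_scenario(allowance, requested, expected):
    world = LeaveWorld()
    given_an_allowance_of(world, allowance)    # Given I have N days remaining
    when_employee_requests(world, requested)   # When I request M days of leave
    then_request_is_approved(world, expected)  # Then the request is approved/denied

# The same three steps cover multiple one-line scenarios:
run_scenario(allowance=10, requested=5, expected=True)
run_scenario(allowance=10, requested=15, expected=False)
```

Each new scenario is just a new combination of existing steps, which is exactly the re-use that makes a growing specification maintainable.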

Phases of the Iteration

In Agile we are looking to get things done and accepted as early as possible in order to gain feedback and continuously deliver working software without risk of destabilising the product. The following is a breakdown of some of the key stages prior to and throughout an iteration that help to make automation happen.

Pre-Iteration Planning

  1. Story Definition - Customer Team works with Stakeholders to get basic Story definition
    This happens prior to the iteration starting and should include sufficient acceptance criteria to estimate. At this point I would look to capture key information points in an informal manner that help to promote the right discussion of the story when moving into the elaboration phase

During the Iteration

  1. Story Elaboration - Product Owner, Testers and Developers work together
    During planning and the early stages of the iteration, scenarios are discussed in detail and evaluated for how best to accept the story – this requires an understanding of what unit, acceptance and end to end tests will be written. I would generally expect at least the high level scenario titles to be defined at this point to give a “feel” for the number of scenarios associated with the feature; we do not necessarily need the full GWT breakdown, and this could be done by smaller teams or pairs
  2. Get to work
    After working together to elaborate the stories the Product Owner focuses on getting answers to any initial questions raised. Taking each story, the team should start to whiteboard/blog/wiki designs, discussing implementation strategies to gain agreement on the right approach to deliver the requirements. After this, testers can set to work on creating “failing” acceptance tests, including developing the underlying supporting framework, and developers start writing “failing” unit tests and implementing the required business logic. You should be able to identify key interfaces that allow the developers and testers to work on separate parts of the same feature at the same time and then tie it all together as and when it makes sense to
  3. Review - Daily Stand Up
    As the understanding of requirements increases and code begins to be implemented, the failing tests should start to go green. Developers continue to implement scenarios to make more tests pass; testers begin to do more exploratory testing and communicate any undefined scenarios found with the Product Owner to determine whether or not the scenarios should be supported in the current version
  4. Revise
    As more tests go green the code should be modified and refactored where appropriate to ensure code quality is high and maintainable, whilst keeping tests green. The developers should be putting more effort into testing towards the end of the iteration; the code to implement a feature should be done as early as possible. If you are checking in code to start a new feature in the last few days of the iteration then you should seriously consider whether it will be feasible to complete that story end to end – would your time be better spent making sure the current product you have is truly “ready to ship”?
  5. Done
    All UI, acceptance and unit tests go green, manual exploratory tests have been done and application is considered “ready to ship”
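The “key interfaces” idea from step 2 can be sketched as follows: agree a contract up front, then let testers write acceptance tests against a stub while developers build the real implementation in parallel. This is a Python illustration only (the author’s stack is C#); LeaveService and both implementations are invented for the example:

```python
from abc import ABC, abstractmethod

class LeaveService(ABC):
    """The agreed interface: testers code against it while developers implement it."""
    @abstractmethod
    def remaining_allowance(self, employee):
        ...

# Tester's stub lets the acceptance test be written (and fail meaningfully
# against real code later) before the implementation exists.
class StubLeaveService(LeaveService):
    def remaining_allowance(self, employee):
        return 25

# Developer's implementation arrives later and must satisfy the same test.
class InMemoryLeaveService(LeaveService):
    def __init__(self, allowances):
        self._allowances = allowances

    def remaining_allowance(self, employee):
        return self._allowances[employee]

def acceptance_test(service):
    # Given an employee with a standard allowance
    # When we query their remaining allowance
    # Then it matches the contracted 25 days
    assert service.remaining_allowance("alice") == 25

acceptance_test(StubLeaveService())
acceptance_test(InMemoryLeaveService({"alice": 25}))
```

Because both sides code to the same interface, the pieces can be tied together whenever it makes sense, without either discipline blocking the other.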

Summary

When you first start to implement an automation strategy (especially on a brownfield project) try not to set yourself up for a fall by committing to too much; the introduction of automation will be an alien concept and a complete paradigm shift to most people. This will take time for them to “get their heads around”. Try to start by introducing the concept of automatable requirements, get the team talking in those business terms and look to configure your environment to support automation. Plan in the setup of your continuous integration and build server configurations and start to get your team up to speed on your chosen technologies.

Next Steps

Take a look at your current development environment: is it geared up to start automating immediately? If not, what is missing and what are you going to do about getting it sorted? In my next post I’ll take a bit more of a deep dive into tool sets of choice and how to apply them to create a maintainable test set.

Tuesday 24 April 2012

What is the key to successful test automation?

Wouldn’t it be nice if I had the definitive answer! Well, I certainly do have an answer, but whether it is something you agree with I shall leave up to you to decide. Here are my thoughts on it anyway:

I believe that the key to successful test automation begins not with thinking about “how” to automate but instead concentrating on “what” to automate.

Do not start by thinking about the tools you are going to use, whether it be Selenium, WatiN, WebAii, Project White, <InsertYourFavouriteTestToolHere>, but focus more on the input into the automation process. You have to look to the start of the development process, the requirements capture, and ensure that your approach to this provides you with a solid foundation for the creation of automated tests. If you are already on an existing project and you are looking to implement automation, try not to be overwhelmed by the sheer scale of the task. You can very easily be “scared off” implementing automation because it just looks too big, but as with anything in Agile, by taking small iterative steps and learning from your mistakes you can start to make a dent – and let’s face it, any automation (done right) must be better than no automation?

You do not need to commit to automating everything straight away but you can start to make your requirements more automatable. Avoiding the ‘overwhelm’ of automation is an important part of making the first steps into reducing your technical debt.

How do I make requirements more automatable?

There are numerous theories on how this can be done, but the approach I prefer is to use Behaviour Driven Development (BDD). It is something I truly believe can help to bridge the communication gap, reduce time spent maintaining requirement specs and increase the team’s overall understanding of the business requirements, and therefore lead to a better implemented product.

Now I do not profess to be an expert in BDD in any way, but this is my general understanding of the concepts:

“Software delivery is about writing software to achieve business outcomes” – Dan North

Dan North is generally considered to be the father of BDD and a simple Google search on his name will reveal numerous articles on the subject; I would encourage any reader of this article to take a look at his blog and specifically “Introducing BDD”. The concept behind BDD is that it takes the idea that you “can” turn an idea for a requirement into implemented, tested, production-ready code simply and effectively, and in order to achieve this any requirement needs to be specific enough so that “everyone” knows what is going on. The overall goal of BDD is to achieve a common definition of “Done” in order to avoid “that’s not what I asked for” or “I forgot to tell you about this other thing”.

BDD takes the general Agile concept of User Stories and looks to extend them with acceptance criteria defined in the form of scenarios; a typical BDD user story is defined as follows:

Title (One line describing the story)
Narrative:
In order to [benefit]
As a [role]
I want [feature]

Scenario 1: Title
Given [context]
And [some more context]
When [event]
Then [outcome]
And [some other outcome]

Scenario 2: …

By working together to define a user story and all of its associated scenarios the team gains more knowledge of the requirement and has one place (the user story) that can form the focal point for the definition of the feature. Any new scenarios that are thought of at any point prior to or during the iteration should be added to the user story, at the very least as a scenario title that can be expanded at implementation time.

Who writes the user story?

This seems to be a common question especially amongst new Agile teams and I believe the answer is that everyone writes the user story. That is not to say that everyone writes the story at the same time but that the story will evolve over time and become more detailed as it touches on each discipline within the development team.

  1. Product Owner communicates with the Stakeholders, helps them to frame the narrative (i.e. what is the high level feature?) and captures salient elements of that story in an informal way – the stakeholder does not always want or need to get bogged down in the language used to formally define a user story and scenarios
  2. Testers help to further define the scope of the story and extract acceptance criteria by determining which scenarios matter and which are less useful
  3. Developers may provide alternative approaches to delivering the story which in turn may influence the structure and focus of the story

More complex stories should involve whiteboard discussions to ensure that each member of the team has a common understanding of the story. Whether the story is elicited from end to end in a single meeting or over numerous time boxed iterative reviews the key to delivering a successful user story is communicating with the stakeholder to ensure that what you intend to deliver is what they expect you to deliver. Obviously in order to achieve this giving the team as much access to the stakeholder as possible would be ideal but where this is not possible then the Product Owner should take responsibility for ensuring that the stakeholder is kept up to date with any changes to the story definition.

Elaborating and formalising the story

The output from the story elicitation meetings will generally be in the form of a fairly loosely defined story with some informal concepts and ideas that capture the essence of the story and promote discussion about the key areas of the feature. The next phase is to begin taking those ideas and formalising them into well defined scenarios using a Given, When, Then (GWT) notation.

The goal of using GWT steps is to achieve a common language to describe the domain in which the feature exists. Having a structured common language makes automation far more achievable because it gets each team member talking in the same business terms, and it defines steps that may apply to numerous scenarios, which aids the longer term maintainability of the specifications. By defining the feature in business terms, and not being implementation specific, you can achieve scenarios that are robust on top of an ever changing system – the goal of the feature should not have to change even if the application is re-written from a web app to a WPF app. “How” you prove that the system still meets the expected behaviour may change, but the definition of “what” the system should do should remain fairly consistent, and this should be a design goal when creating scenarios for your user stories. The blog post “Whose domain is it anyway?” is a useful read to clarify the domain of a specification.
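As a sketch of keeping the “what” fixed while the “how” changes, the following fragment runs one business-level scenario against two interchangeable drivers (Python for brevity; both drivers and all names here are invented stand-ins, not real tooling):

```python
# The "what": a business-level scenario, written once.
# The "how": a driver object that knows how to reach the system.

class FakeWebDriver:
    """Stand-in for driving the feature through a web UI."""
    def submit_leave_request(self, days):
        # A real suite would click through browser pages here.
        return "approved" if days <= 10 else "rejected"

class FakeWpfDriver:
    """Stand-in for driving the same feature through a desktop (WPF) UI."""
    def submit_leave_request(self, days):
        # Same behaviour, reached through a different front end.
        return "approved" if days <= 10 else "rejected"

def scenario_short_request_is_approved(driver):
    # Given an employee with 10 days remaining
    # When they request 3 days of leave
    outcome = driver.submit_leave_request(3)
    # Then the request is approved
    assert outcome == "approved"

# The scenario definition survives a rewrite from web app to WPF app:
scenario_short_request_is_approved(FakeWebDriver())
scenario_short_request_is_approved(FakeWpfDriver())
```

Only the driver changes when the application is re-written; the scenario, expressed in business terms, stays untouched.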

Summary

Getting a handle on your requirements will play a major role in your ability to automate your acceptance tests and start on the road to reducing your technical debt, but once you have laid the foundations you will begin to see the opportunities to automate open up and become apparent in ways that you never thought would be possible.

Next Steps

Why not take either a new requirement or a relatively simple existing requirement and focus some energy on making it more automatable – can you apply the ideas promoted by BDD to your user stories? Have a play with different ways of expressing your scenarios in the Given, When, Then format, and then start to capture user stories in this format in your next elicitation or elaboration sessions.

In my next post I’ll discuss some practices to help getting automation done within the time scales of an iteration.

Monday 23 April 2012

Automated testing: you know you need it, but how do you do it?

The world is becoming more and more Agile and many companies are discovering the undeniable truth that Agile software development provides a cost effective solution for delivering working software. Adopting Agile processes within your company and changing the general ethos of your working environment can be harder than it might first appear. Sure, the introduction of daily stand ups, user stories, story points, planning sessions and retrospectives all help to improve the efficiency and communication within your company but there is one important part to the process that gets left out because it is deemed "too hard" to start doing immediately - and that is Automation.

Automation is a fundamental piece of the Agile development puzzle that often gets overlooked, and it is only when a development team is struggling to cover all of its regression tests and bugs are starting to be found by customers that companies generally look to do anything about it. Without automation, every piece of software that gets written, tested and shipped to the customer becomes another piece of technical debt that must be manually tested in order to ensure that any future work does not impact the original expected behaviour of that software. Most software development teams try to negate this by hand picking "just enough" regression tests to cover what they believe should have been impacted by any changes within the current release, and generally this works well enough until something "just gets missed".

Numerous studies have been made into the cost effectiveness of finding a bug as early as possible, and it should be no different for finding regression bugs. Without increasing your test team exponentially as your technical debt grows, it is inevitable that you will have to face the fact that you will need to automate your software testing; the question then becomes how?

Getting started with test automation

I believe a good place to start is to look at what your current testing practices look like against the automation triangle. A number of authors (including but not limited to Cohn, Meszaros and Crispin) have written about the concept of an automation triangle, and it is something that I wholly believe in. The image below shows some of the subtle variations of the automation triangle amongst authors, and I recommend that you do some further reading around the topic.

[Image: variations of the automation triangle amongst authors]

The base of the triangle represents your unit testing level, and this should contain the vast majority of your tests. The second level contains tests that check the service or API layer; this is where I believe you can obtain the most "bang for your buck" in testing, and yet it seems to be the area where most test teams are lacking. The third level of the automation triangle is UI automation tests, and finally at the fourth level there is still some need for exploratory manual testing – but the critical point here is that if an issue is found whilst manually testing the system then it should be captured as an automation test as soon as possible in order to minimise the technical debt.
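To illustrate why the service/API layer gives the most "bang for your buck", here is a tiny sketch of a service-level test exercising a business rule directly, with no browser in the loop (Python for brevity; the carry_over rule is an invented example, not a real LeaveWizard feature):

```python
def carry_over(remaining_days, cap=5):
    """Illustrative business rule: unused leave carries over, capped at `cap` days."""
    return min(remaining_days, cap)

def test_carry_over_is_capped():
    # Runs in microseconds, needs no deployed UI, and cannot be broken
    # by a cosmetic change to a web page.
    assert carry_over(3) == 3
    assert carry_over(8) == 5

test_carry_over_is_capped()
```

The same rule verified through the UI would mean logging in, navigating and scraping a page – slower, more brittle, and testing the same logic less directly.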

In my experience, most software development teams already do a lot of unit testing, which is ideal for working in small units and testing elements in isolation, but when it comes to automation of acceptance tests they are somewhat lacking. When the discussion of automating acceptance tests begins it almost certainly starts by looking at automating the user interface (UI) of the product – after all, this is what the customer sees, right? Unfortunately there is an implied cost to implementing UI automation tests that means they are generally the most brittle, the most costly to write and the longest to run. That is not to say that there is no place for automation of the UI – quite the contrary, there is immense value in performing UI tests – but these must be carefully chosen to ensure you are getting the most value in the least brittle way and the least time possible.

As companies start to implement automation tests, the trend to gravitate towards UI automation tests ends up making the automation triangle look a bit more like an hour-glass shaped automation rhomboid, with few if any service/API level tests. I believe the reason for this is that UI testing does not generally require a technical understanding of how the application works underneath. Tests can be written (using a huge variety of different types of tool) from the view of a system user, and as such only test things that the user can see through the UI of the system. This generally results in all test setup, execution and assertions being performed via the UI, which all takes time (too much time in most cases).

Typically the UI tests will be created using a point-and-click style of UI testing tool that allows the test team to learn and embrace a tool they can own and take control of, freeing themselves from some of the day to day grind of manual regression testing without the need for the development team's involvement. Unfortunately, choosing a UI automation test tool is not enough to ensure the successful implementation of an automation test strategy.

How can I implement a successful automation strategy?

In my opinion, communication and collaboration are among the most important aspects of Agile software development. Any tools that a development team chooses should first and foremost help to radiate knowledge and increase the overall ability of the team to interact and adapt to their ever changing environment in the quickest way possible. What I mean by this is that the process of determining what automation is necessary should be an integral part of the development team's process from the very start of the project and each subsequent iteration, instead of something that the test team goes off and just gets done.

Summary

I thoroughly believe that in order to get the most bang for your buck from your test automation strategy you need to implement a process that helps improve the communication and collaboration within your team, so that testers are working closely with developers at all levels of the automation triangle. The goal of this process is to make your developers better testers and your testers better developers. After all, each member of an Agile team should be cross functional – it may well be that your testers are your worst developers and your developers are your worst testers, but the key thing to strive for is that they work together in order to deliver a better quality product.

Next Steps

Why not have a think about different ways that you could try to improve the way in which your development team members (Product Owner, Developers and Testers) collaborate? In my next post I’ll look at what I believe is the key to implementing a successful automation test strategy.

Friday 20 April 2012

Getting the current framework SDK path from within MS Build

In LeaveWizard we currently use Linq to Sql, and to generate a data context we use SqlMetal as part of the pre-build process of our data access assembly. The XML looks something like this:

<Target Name="BeforeBuild">
  <Exec Command="&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\SqlMetal.exe&quot; /server:.\HEATHERSQL01 /Database:LeaveWizard_UnitTests /code:LeaveWizardDataContext.cs /context:LeaveWizardDataContext /namespace:FeatureBase.LeaveWizard.DataAccess /views /sprocs /functions /pluralize /timeout:1200" WorkingDirectory="$(ProjectDir)" />
</Target>

As you can see, the path to the SDK version number was hard coded, which was fine for a long time – until I configured a brand new build server which did not have v6.0A of the SDK installed; it had v7.1, so I needed a solution.

Thankfully it seems easy enough to do this, simply get the current framework SDK path location like so:

<Target Name="BeforeBuild">
  <GetFrameworkSdkPath>
    <Output TaskParameter="Path" PropertyName="WindowsSdkPath" />
  </GetFrameworkSdkPath>
  <Exec Command="&quot;$(WindowsSdkPath)bin\SqlMetal.exe&quot; /server:.\HEATHERSQL01 /Database:LeaveWizard_UnitTests /code:LeaveWizardDataContext.cs /context:LeaveWizardDataContext /namespace:FeatureBase.LeaveWizard.DataAccess /views /sprocs /functions /pluralize /timeout:1200" WorkingDirectory="$(ProjectDir)" />
</Target>

Job done!


“sgen.exe” error while setting up Team City Build Server

I’ve recently set up a local build server for LeaveWizard and came across an issue:

“C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(2249, 9): error MSB3086: Task could not find "sgen.exe" using the SdkToolsPath "" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed”

After a bit of Google searching the answer was pretty simple: just download the latest version of the Windows SDK and then install the .NET Development Tools, simples.

Update:

I came across the same issue whilst installing another build server, this time it was on a Windows 2008 Server. I installed the SDK as stated previously but I was still getting the problem. It seems that the registry key for v7.0A just did not exist. Further Googling identified that you can configure which SDK version should be used by performing the following steps:

Go to “Start | All Programs | Microsoft Windows SDK 7.1” and start the “Windows SDK 7.1 Command Prompt”.

Type:

cd Setup
WindowsSdkVer.exe -version:v7.1

Now re-run your build and hopefully the problem has gone away – if not, good luck finding the issue.

Life, Times and LeaveWizard

Well, it has been far too long since I last ventured into the world of blogging, but it seems that life suddenly got a little busy for me. What with my family growing with the addition of two fantastic boys, Jacob and Lucas, and the very latest arrival (just over 1 week old), my new beautiful baby girl Isabelle; co-ordinating NxtGenUG Southampton; software contracting with Fitness First and more recently Active Navigation; and working on a fantastic online leave management product with my good friend Plamen Balkanski – time just seems to slip away from me.

Plamen wrote a post a little while ago about our product LeaveWizard, for the first time since we’ve been developing it, and this prompted me to think about my own blog (albeit a couple of months later) and how I could resurrect it by starting to talk about the kind of things we are doing with LeaveWizard. Previously most of my posts have had a bit of a technical bias, and I am sure that there will be plenty more of that to come, but to kick the blog off again I thought I would just talk about what we’ve been doing with LeaveWizard.

So what is LeaveWizard anyway?

LeaveWizard is something that both Plamen and I are completely passionate about; we both had generally bad experiences when it came to holiday bookings. It always seemed like a hassle, going something a little like this:

  1. Go to the HR team to pick up a holiday card
  2. Determine what holiday allowance you actually have remaining (after a few discussions with the HR team about what leave you didn’t end up taking in the end)
  3. Fill in all the relevant details and dates
  4. Put the holiday card on your manager's desk for approval (it generally goes straight into their in-tray because they are very busy and never at their desk)
  5. Wait a couple of hours/days/weeks and ask your manager whether the holiday has been approved or not
  6. Your manager finally approves the holiday and passes it on to the secondary approver (once again going into an in-tray because they are not at their desk)
  7. Wait a couple more hours/days/weeks and ask the secondary approver if it has been approved or not
  8. The secondary approver finally approves the holiday and gets the card back to HR without actually telling you whether it was approved or not
  9. You then have to chase HR or the secondary approver to ask whether the holiday has been approved and you eventually get the confirmation you needed (this is normally the day before you are actually leaving for your holiday)
  10. At this point all is well from your point of view but more times than not it turns out that during the time it took to get your holiday approved, a couple of other people on the team also had their leave approved and they end up being on leave at the same time as you!!!
  11. This eventually causes a nightmare because you and this other guy should never be off at the same time, and you end up getting it in the neck because you didn’t collaborate over when you should or should not be on holiday!!

If this kind of thing sounds in any way familiar then you will totally understand why we created LeaveWizard; we just figured it shouldn’t be that hard. So we set about creating a tool that would make the process look a bit more like this:

  1. Log on to an online system that is available any time and anywhere across the globe
  2. Instantly have access to your current leave allowance
  3. Click a request leave button and enter your leave dates and submit your request (if too many people in your department are off on those dates already you are instantly notified)
  4. Your leave request is instantly sent to your manager via email who can quickly see who else is on leave at the same time and instantly approve the leave request
  5. You are sent an email stating “<ManagersName> has approved your leave and it has now been sent to <SecondaryApproversName> for secondary approval” so you know exactly what stage your leave request is at
  6. The secondary approver receives an email detailing the leave request and can also see who is on leave at the same time and simply clicks “Approve” directly from the email content
  7. You are then sent a confirmation email stating that your leave has been approved…woo hoo time to go on holiday!

Now I don’t know about you, but I know which process I prefer, which is why we created a tool to do exactly that – and a whole lot more. From requesting and approving overtime, to defining flexible work patterns and configuring time off in lieu, LeaveWizard has become an extremely flexible holiday management tool.

If you think that your company could benefit from using a tool like LeaveWizard to manage your staff holiday, absences and vacations then why not try it out – it is totally free to use for up to 5 users. I would also love to get your feedback on anything you would care to comment on.