We have already covered unit testing, which tests the various modules that make up your application. This chapter focuses on a different type of test, the acceptance test, which exercises the public interface of the API. Acceptance tests differ from unit tests in that they focus on what the end user will see; their aim is to check that the system complies with the business requirements. Unlike unit tests, they don't need to be written in the same programming language used to develop the API itself.
This chapter will cover two alternative tools that can be used for acceptance testing.
- The Frisby framework. *
- Using a Behaviour-Driven Development methodology.

The sections marked with an asterisk * should be considered essential content.
Acceptance testing will be carried out using the frisby package, which is built on the jasmine-node framework you have already used.
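Before opening the worksheet spec, it may help to see the general shape of a frisby test. The sketch below uses the classic frisby 0.x style that runs under jasmine-node; the base URL, route, and JSON shape are assumptions for illustration, not the worksheet's actual spec, and the test only runs against a live server.

```javascript
// A minimal frisby acceptance-test sketch (frisby 0.x, run with jasmine-node).
// The base URL and the expected response shape are assumptions only.
var frisby = require('frisby');

var baseUrl = 'http://localhost:8080'; // change to match your Cloud9 API

frisby.create('GET /lists returns a JSON array of lists')
  .get(baseUrl + '/lists')
  .expectStatus(200)                                        // request succeeds
  .expectHeaderContains('Content-Type', 'application/json') // JSON response
  .expectJSONTypes('*', {                                   // every element has a name
    name: String
  })
  .toss();                                                  // register with jasmine-node
```

Note that nothing runs until `toss()` is called: frisby builds up a description of the request and its expectations, then hands the whole thing to jasmine-node as a single spec.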
- Open the `shopping/test/todo-spec.js` script and read it, paying particular attention to the detailed comments.
- Start the API and check it is visible using Postman.
- Change the base URLs to match your Cloud9 API.
- Run the acceptance tests by entering `./node_modules/.bin/jasmine-node test/ --verbose`; this will output the test results to the terminal. Take a few moments to understand this output.
- Run the acceptance tests again. Why do they fail this time? Use the error trace to find out what failed and why.
- Create a new test, `DELETE /lists`, and add it at the start of the test suite. Eventually this should clear all the lists in the API and return a success message such as "lists deleted".
- Run your tests; they should fail (we have not written the new API feature yet!).
- Add the `--autotest --watch .` flags when you run the test script (see the previous worksheet); this will automatically run your acceptance tests every time you save a file.
- Implement `DELETE /lists`. The tests will pass to indicate success.
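Framework aside, the behaviour the new route needs is small: empty the collection and report success. The sketch below expresses that in plain JavaScript; the store shape and the message text are assumptions for illustration, and your actual route handler will wrap this logic in whatever server framework the API uses.

```javascript
// Framework-free sketch of the DELETE /lists behaviour: empty the
// collection and report success. Store shape and message text are
// assumptions, not the worksheet's actual API.
function deleteAllLists(store) {
  var count = Object.keys(store.lists).length; // how many lists existed
  store.lists = {};                            // remove every list
  return { status: 200, body: { message: 'lists deleted', removed: count } };
}

// Usage: simulate the request against an in-memory store.
var store = { lists: { shopping: ['bread'], chores: ['hoover'] } };
var res = deleteAllLists(store);
// res.status === 200, res.body.removed === 2, store.lists is now empty
```

Writing the behaviour as a plain function first also makes it easy to unit-test separately from the acceptance tests.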
Install the BDD tooling:

npm install --save-dev cucumber apickli

Behaviour-Driven Development is built around a Domain-Specific Language:

- User stories are captured as feature files written in Gherkin.
- Scenarios are added to each feature to describe concrete examples of its behaviour.
- Step definitions map each Gherkin step to executable test code, which Cucumber runs.
Each week you will be expected to complete a series of lab activities. You will be required to reflect on these in your assignment so make sure you keep records of what you have done. The supporting presentation can be found at https://goo.gl/Tx8nWx
As you begin each sprint you should aim to apply more of the knowledge and skills covered in the lectures. For this sprint you will need to include:
- The sprint planning and review meetings
- The daily standups
- Pair programming
- Version control including a good branching strategy
You will also be implementing acceptance testing at one of three levels:
1. You should work with a DSL to define your tests.
2. You should use an appropriate framework to define these as automated tests.
3. Finally, you should attempt to write step definitions to automatically convert the tests written in a DSL into automated tests.
Start by modelling the problem domain. This can be done either on paper or using a whiteboard. Make sure you include:
- The Entities
- The Relationships
- The Responsibilities
Now take each of your completed user stories and map them against this problem domain.
Before starting your next sprint, revisit each of the completed user stories and define each of them using a business-readable DSL such as Gherkin.
- Create a `features/` directory.
- Create a file with a `.feature` extension for each user story.
- At the top of each file, create a feature and add the user story.
- Now define a number of scenarios to clearly and unambiguously define all the tests you need to carry out.
- Under each scenario, list the steps required.
- Install and run a tool such as gherkindoc to generate your documentation site so it can be viewed through github.coventry.ac.uk.
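As an example, a `.feature` file for a hypothetical shopping-list story might look like the following (the story, scenario, and exact wording are invented for illustration, not taken from the worksheet):

```gherkin
Feature: Manage shopping lists
  As a shopper
  I want to delete all my lists
  So that I can start again with a clean slate

  Scenario: Deleting every list
    Given the API contains at least one list
    When I send a DELETE request to /lists
    Then the response status should be 200
    And the response should contain the message "lists deleted"
```

The feature header holds the user story; each scenario is one unambiguous test, and each Given/When/Then line is a step that will later be matched by a step definition.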
By this stage you have completed your first sprint, so you will need to hold a sprint review meeting. Make sure you have invited your client.
- The team:
  - displays the documentation site generated from the `.feature` files and recaps the tasks that were agreed on during the previous meeting.
  - demonstrates the product, showing that the agreed user story(s) have been completed and that the product is useful to the client.
- The client gives feedback and may be in a position to sign off the work carried out so far.
- The client and developers use the Kanban board to identify any issues in the sprint backlog that were not completed:
  - Issues are added to the issue tracker in GitHub.
  - These issues are added to the sprint backlog column on the Kanban board.
- The client and developers update the User Story Map:
  - Change the story priority based on the client's current requirements.
  - Decide what will be included in the next sprint.
- A new `.feature` file is created for each user story in the new sprint and the team works with the customer to write the scenarios and steps.
- The Kanban board is updated with the user stories from the new sprint.
During this sprint, try to implement each of the scenarios as an automated acceptance test. By the end of the sprint you should have one test per scenario and all the tests should pass.
As a bare minimum you will need to write the tests, but you should also have a go at writing a step-definition file which reads the `.feature` files and automatically generates and runs the acceptance tests.
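Conceptually, a step definition is just a regular expression paired with a function: Cucumber matches each Gherkin step against the registered patterns and calls the matching function with the captured groups, sharing state through a "world" object. The self-contained sketch below mimics that mechanism in plain JavaScript; the step text and handlers are invented, and a real project would register definitions through the cucumber package rather than this hand-rolled dispatcher.

```javascript
// Miniature illustration of how step definitions work: each definition
// pairs a regex with a function, and steps are dispatched by matching.
// Step text and handlers are invented for illustration only.
var definitions = [];

function defineStep(pattern, fn) {
  definitions.push({ pattern: pattern, fn: fn });
}

function runStep(text, world) {
  for (var i = 0; i < definitions.length; i++) {
    var match = text.match(definitions[i].pattern);
    if (match) {
      // call the handler with the captured groups, bound to the world
      return definitions[i].fn.apply(world, match.slice(1));
    }
  }
  throw new Error('Undefined step: ' + text);
}

// Two hypothetical definitions for a delete-all-lists scenario.
defineStep(/^the API contains (\d+) lists?$/, function (n) {
  this.listCount = Number(n);
});
defineStep(/^I delete all lists$/, function () {
  this.listCount = 0;
});

// Usage: run Gherkin-style steps against a shared world object.
var world = {};
runStep('the API contains 3 lists', world);
runStep('I delete all lists', world);
// world.listCount is now 0
```

This is the whole trick behind step-definition files: once the patterns exist, any scenario written in the DSL can be executed without further code changes.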
There are a number of supporting labs that will help you implement your acceptance tests.
- If you are creating a website, use the materials in the casperjs directory.
- If you are creating an API, use the materials in the frisbyjs directory.