This demo aims to show how state machines can be used to model reactive systems, in particular user interfaces. State machines have long been used for embedded systems, especially safety-critical software.
We will use a real case of a multi-step workflow (the visual interface has been changed, but the logic is the same). A user applies to a volunteering opportunity by navigating through a 5-step process, with a dedicated screen for each step. When moving from one step to the next, the data entered by the user is validated, then saved asynchronously.
That multi-step workflow will be implemented in two iterations:
- In the first iteration, we will do optimistic saves, i.e. we will move directly to the next step without waiting or checking for a confirmation message. We will also fetch data remotely and assume that the fetch will always be successful (call that an optimistic fetch). This will help us showcase the definition and behaviour of an extended state machine.
- In the second iteration, we will implement retries with exponential back-off for the initial data fetching. We will also implement a pessimistic save for the most 'expensive' step in the workflow. This will in turn serve to showcase a hierarchical extended state machine.
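The retry behaviour of the second iteration can be sketched independently of any framework. The function names (`fetchWithRetry`, `backoffDelays`) and default values below are illustrative, not part of cyclejs or state-transducer:

```javascript
// Illustrative sketch of exponential back-off for the initial data fetch.
function backoffDelays(maxRetries, baseDelayMs) {
  // The delay doubles with each retry: base, 2*base, 4*base, ...
  return Array.from({ length: maxRetries }, (_, i) => baseDelayMs * 2 ** i);
}

async function fetchWithRetry(fetchFn, maxRetries = 3, baseDelayMs = 100) {
  const delays = backoffDelays(maxRetries, baseDelayMs);
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fetchFn(); // success: stop retrying
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, delays[attempt]));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

In the second iteration, this retry logic will live inside the state machine itself (as a compound state), rather than in a standalone helper like this one.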
With those two examples, we will be able to conclude by recapitulating the advantages and trade-offs associated with using state machines for specifying and implementing user interfaces.
The implementation uses cyclejs as a framework, and state-transducer as a state machine library.
Here are the initial specifications for the volunteer application workflow, as gathered from the UX designers. Those initial specifications are light on details, and are simple lo-fi wireframes.
In addition, the following must hold:
- it should be possible for the user to interrupt their application at any time and continue it later from where they stopped
- user-generated data must be validated
- after entering all the necessary data for their application, the user can review it and decide to modify some of it, by returning to the appropriate screen (cf. pencil icons in the wireframe)
Modeling the user flow with an extended state machine
In the first iteration, the provided wireframes are refined into a workable state machine, which reproduces the provided user flow while addressing key implementation details (error flows, data fetching).
The behaviour is pretty self-explanatory. The machine moves from its initial state to the fetch state, which awaits a fetch event carrying the fetched data (previously saved application data). From there, the sequence of screens unfolds according to the user flow and the rules defined.
Note that we did not have to include processing of the fetch event inside our state machine. We could instead have fetched the relevant data first, and then started the state machine with an initial INIT event carrying the fetched data. Another option would be to start the state machine with an initial extended state which includes the fetched data.
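To make the idea of an extended state machine concrete, here is a minimal, framework-free sketch. The state and event names (`step1`..`step5`, `FETCH_OK`, `CONTINUE`) and the transition-table shape are hypothetical and do NOT reproduce the state-transducer configuration format:

```javascript
// Illustrative transition table with guards over the extended state.
const workflowMachine = [
  { from: 'init',  event: 'FETCH_OK', to: 'step1' },
  { from: 'step1', event: 'CONTINUE', to: 'step2', guard: data => data.step1Valid },
  { from: 'step2', event: 'CONTINUE', to: 'step3', guard: data => data.step2Valid },
  { from: 'step3', event: 'CONTINUE', to: 'step4', guard: data => data.step3Valid },
  { from: 'step4', event: 'CONTINUE', to: 'step5', guard: data => data.step4Valid },
];

// One transition step: find a matching, guard-passing transition,
// or stay in place when the event does not apply.
function transition(machine, controlState, extendedState, event) {
  const match = machine.find(
    t => t.from === controlState &&
         t.event === event.type &&
         (!t.guard || t.guard(extendedState))
  );
  return match ? match.to : controlState;
}
```

The guards are where the "validate then save" rule surfaces: a CONTINUE event only moves the machine forward when the current step's data passed validation.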
It is important to understand that the defined state machine acts as a precise specification for the reactive system under development. It is called a model of the reactive system for this reason. This model can double as an implementation of that reactive system, or rather as an implementation of the part of the reactive system it models. For instance, our model does not model the actual actions, nor the interfaced systems, e.g. HTTP requests, the network, etc. We have chosen to model only the input/output (event/action) relation, under the hypothesis that the interfaced systems can be tested separately, for instance during acceptance or integration tests.
So we went from an informal specification of our reactive system to a precise one. That is great and desirable, but we then have to actually check that we have done so without losing or adding requirements, i.e. that both specifications are equivalent. The only way to do this is by testing the model prior to using it, and those tests are necessarily validated manually (i.e. you are the tester), as only the informal specification (of which you are supposed to build an accurate mental model) can be used for such purposes.
The good news is that, while a part of the related testing process is manual, a part can be automatically generated. Typically, it is possible to generate test input sequences automatically, by using the model's specification of the inputs it accepts. It is not, however, possible in general to check the correctness of the outputs automatically. First of all, the derived input sequences might be wrong, if the model specifies them incorrectly. Second, if we had a computation which could accurately predict an output sequence for every input sequence (what is termed an oracle), then we could use that as the model, and there would be no need to test.
The bottom line is, we have informal UI requirements, we produce a state-machine-based detailed specification from that, and subsequently we generate input sequences and the corresponding output sequences, which we validate manually.
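The automatic generation of input sequences can be illustrated with a small sketch: a depth-first traversal of the transition graph in which each edge is taken at most once per path (one possible coverage criterion). The transition-table shape here is illustrative, not the library's actual format:

```javascript
// Enumerate input sequences by depth-first traversal of a transition table.
// A path ends when no unvisited outgoing edge remains, or maxDepth is hit.
function inputSequences(machine, initialState, maxDepth = 10) {
  const sequences = [];
  function walk(state, events, visited) {
    const next = machine.filter(t => t.from === state && !visited.has(t));
    if (next.length === 0 || events.length >= maxDepth) {
      if (events.length > 0) sequences.push(events);
      return;
    }
    for (const t of next) {
      // Branch on every unvisited outgoing edge, marking it as taken.
      walk(t.to, [...events, t.event], new Set(visited).add(t));
    }
  }
  walk(initialState, [], new Set());
  return sequences;
}
```

Running each generated sequence through the machine yields the corresponding output sequences, which is the part that must then be validated by hand.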
There are three ways to test the model:
- property-based testing
- contracts, applying to transitions, guards and states
- hand-validated tests
In this demo, we will rely on hand-validated tests:
- it is not clear what set of properties would guarantee a coverage of the system under development that generates trust
- a property of the system could be that the save data output is only emitted as an immediate consequence of a user input which moves the workflow to another step
- we could go on with more properties, but it is difficult to think of properties which would relate entire sequences of inputs and outputs in an exhaustive way. In fact, if we could, we would have an alternative formal specification for the reactive system under development
- we may easily encounter elements of a specification that we cannot (or will not) check by contracts (preconditions, post-conditions, and invariants). It may be too expensive, or too complicated, to program the boolean functions which represent these elements.
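For concreteness, here is what a contract expressed as a boolean function might look like. The property checked (a save is only emitted when the current step's data passed validation) and all the names are illustrative, not taken from the actual codebase:

```javascript
// A contract as a boolean function, to be checked after every transition:
// the machine never emits a SAVE action unless the current step is valid.
function checkSaveContract(extendedState, actions) {
  const emitsSave = actions.some(a => a.type === 'SAVE');
  return !emitsSave || extendedState.currentStepValid === true;
}
```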
So we will use hand-validated tests as a test method.
Test strategy
As mentioned previously, we will have a productivity cost related to human-based testing. We will mitigate that by:
- generating a set of input sequences which satisfy coverage criteria over a given threshold
- all paths in the graph derived from the model will be taken
- computing the associated outputs for each input sequence
- from the input/output relation, generating a BDD-style file (using Gherkin) which describes the test in human language
- this aims to enable faster validation by the manual tester, and easier communication with the non-technical domain people who participated in the specification, in case of doubt
- the set of validated tests can be cherry-picked, parameterized and reused to generate higher coverage of the extended state of the state machine model, this time with an oracle derived from the results of the non-parameterized version of the tests.
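The generation of a Gherkin file from an input/output relation can be sketched as follows; the step wording and the function name are illustrative:

```javascript
// Turn one input/output sequence pair into a Gherkin scenario for manual review.
function toGherkin(name, inputs, outputs) {
  const lines = [`Scenario: ${name}`];
  inputs.forEach((event, i) => {
    const keyword = i === 0 ? 'When' : 'And';
    lines.push(`  ${keyword} the user triggers ${event}`);
    lines.push(`  Then the system outputs ${outputs[i]}`);
  });
  return lines.join('\n');
}
```

The resulting text is what the manual tester (or a domain expert) reads and validates against their mental model of the informal specification.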
We will use the state machine as a model, and automatically generate input tests from it. The model-based testing procedure involves generating inputs, defining the oracle, turning abstract tests into concrete tests, and shrinking failing tests. Because we use an extended finite state machine, we should also add some data-flow testing, i.e. generators which randomize the data, or randomize between a predefined set of values.
Coming soon:
- Model Based Testing - An Evaluation
- advantages and disadvantages of MBT
- sometimes a behavioral model contains errors, so the model must be debugged prior to generating tests
- advantages of MBT, more succinctly presented
- model-based testing process
- 3.3.3 Function specifications - difference between model-driven (used for specification, i.e. implementation) and model-based (used for testing, for instance; does not need the same level of detail)
- Testing strategy; coverage, etc.
- in general a pretty clearly explained thesis, even if the decision-tables approach (2017 rally?) seems to me inferior to state machine modelling
We use the stream-oriented cyclejs framework to showcase our state machine library. To that purpose, we use makeStreamingStateMachine from our library to match a stream of events to a stream of actions. We then wire that stream of actions with the cyclejs sinks. In this iteration, we make use of two drivers: the DOM driver for updating the screen, and a domain driver for fetching data.
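To make the wiring concrete without depending on cyclejs or on the actual makeStreamingStateMachine signature, here is a dependency-free sketch in which streams are stood in by plain arrays and the machine by a plain function from events to actions:

```javascript
// Route the machine's actions to cyclejs-style sinks, one per driver.
// `fsm` maps one event to an array of actions; real code would use streams.
function computeSinks(fsm, events) {
  const actions = events.flatMap(event => fsm(event));
  return {
    DOM:    actions.filter(a => a.kind === 'render').map(a => a.vdom), // screen updates
    domain: actions.filter(a => a.kind === 'fetch'),                   // data requests
  };
}
```

The point of the sketch is the shape of the wiring: the machine is the single place where events become actions, and each sink only sees the actions addressed to its driver.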
The code is available in a dedicated branch. Check out the branch on your local machine, then type npm run start in the root directory to run the demo.