A methodology is a set of measurable and repeatable steps by which some objective can be met. In the case of software systems it needs to reflect the roles that people play in delivering a system and the artefacts that they produce, artefacts that can be used to measure the system against its requirements, whatever those may be. Measuring artefacts is essential for tracking progress towards the common goal: delivering a solution that meets a specific set of requirements. It follows that, in an ideal world, if we can measure the artefacts against a set of requirements we can determine whether the system does what it is supposed to do.
The methodology we describe herein does not deal with the wider issues of scope and requirements capture; these are best left to existing approaches such as TOGAF. Rather, we concentrate on the delivery of suitable artefacts, how they are used by different roles, how they can be used to measure the system at various stages from design to operation, and how they guide the construction of such a system.
The roles played out in the delivery of a software system start with the business stakeholder and involve business analysts, architects, implementers, project managers and operations managers. These roles reflect the tried and tested methodologies used in construction projects. In a construction project there is also a business stakeholder, namely the person or persons who expressed a need. There is an architect who captures those needs and writes them down as a set of drawings, there are structural engineers who add derived requirements for load-bearing issues, there are builders who construct, there are project managers who manage the construction and report to the business stakeholders, and there are the people who maintain the system day to day. The roles in a methodology for software construction are analogous, the aim being that what is delivered measurably meets the requirements.
To carry the analogy further we can list the roles played out in construction projects alongside the equivalent roles in software construction. In explaining what the equivalent software construction roles do, we also describe the relationships that they have with each other.
The business stakeholder, the operations manager and the business analyst work together to document the requirements. The business stakeholders articulate the business requirements for meeting the key business goals, and the operations manager articulates the requirements from a day-to-day operational perspective. The business analyst's role is to elicit requirements from the business stakeholder and the operations manager and record them in a way that enables the business stakeholder and operations manager to agree to them and the enterprise architect to model them.
The enterprise architect liaises with the business analyst to model the requirements and fashion them into two artefacts: a dynamic model that describes the flow between services or applications, and a static model that describes the data requirements that the dynamic model uses.
The technical architect liaises with the enterprise architect (often they are the same person but this is not always the case) to ensure that the technical constraints (e.g. what technologies can be employed, what the expected throughput might be and what the user response times should be) can be met.
The implementers liaise with both the technical and enterprise architects to construct the necessary pieces that make up the system, be they services, applications or other components.
The project manager manages the way in which all of the roles above meet the requirements agreed with the business stakeholders, and ensures that the system is delivered on time and on budget.
The business stakeholder and the operations manager also liaise with the enterprise architect to determine how the system is to demonstrate its acceptability to the users. This may be done in part by showing that the system implements the documented requirements and in part by user acceptance testing, the latter being better suited to issues such as usability and meeting non-functional requirements. For usability nothing can substitute for user testing, so no automated mechanism can be employed. It may, however, be possible to check non-functional requirements against some dynamic model of the system to determine consistency, and so show where the system as a whole may fail to meet one or more of these requirements, or under what circumstances such failure will occur.
We have already alluded to one artefact, namely the documented requirements. Whilst we do not focus on how these are gathered we do need to focus on what is recorded. By so doing we can use them to measure other artefacts as we drive towards the delivery of a system. Using the requirements as a benchmark for success will ensure that the delivered system does what it is supposed to do and so meets the overall business goals of a project.
Requirements are of two types, functional and non-functional, and come in three forms. There are functional requirements, such as “the system receives orders and payments and emits products for delivery based on the orders and payments” and “orders occur before payments, which occur before delivery”, and non-functional requirements, such as “the time taken between an order being placed and the goods delivered is not more than 3 days” and “the system needs to be built using JBoss ESB and Java”.
The non-functional requirements can be captured as a set of constraints or rules that the system needs to obey in order to deliver a good service. An example is the end-to-end time from a customer placing an order to the order being satisfied. A more granular example is the time it takes for a customer to place an order and receive a response that the order has been accepted. In either case these can be represented as rules using a language such as RuleML, although other rule and constraint languages may well suffice.
The functional requirements can be decomposed into static and dynamic functional requirements.
Static functional requirements deal with data: what is needed to describe a product, a customer, an order and so on. The dynamic functional requirements describe how such data, as artefacts in their own right, are exchanged between different parts of a solution: for example, how a product might be sent to a shipper as well as to a customer, and how a customer sends an order to an order system, which might give rise to a product being shipped to the customer.
The dynamic functional requirements build upon the static functional requirements to lay out responsibilities and to order the exchanges of artefacts between the different areas of responsibility. For example, an order system is responsible for fulfilling orders, a warehouse system is responsible for stocking and shipping products to a shipper or a customer, and a payment system is responsible for dealing with payments and updating the books and records. Each service (e.g. the order system, the warehouse system and the payments system) has specific responsibilities but needs to co-operate with the others to deliver a business goal such as selling products for money. The dynamic functional requirements describe, as examples or scenarios, who does what (their responsibility), whom they exchange information with (the message exchanges) and when they do so (the ordering of those message exchanges).
It is the responsibility of the business analysts to gather and document these requirements. In so doing they need to deliver them as concrete artefacts, and for that they need tool support; although it should be possible to do all of this with pen and paper, tools make their lives easier and more productive.
A tool for capturing the non-functional requirements should provide an editor that enables rules and constraints to be written in English and then turns them into RuleML. This would enable RuleML compliance processes to be used to determine whether any of the constraints or rules are inconsistent, and to provide feedback as to where such inconsistencies lie. For example, one might have a constraint that says the end-to-end time from the point a user buys a product to receiving the product is less than or equal to 2 days, whereas the point at which the order is acknowledged is less than or equal to 3 days. A consistency checker can easily pick up errors in which the wider constraint is breached by a narrower one.
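The kind of consistency check described above can be sketched in a few lines, assuming a simplified representation of the rules rather than full RuleML: each rule bounds the duration of a named interval, and a containment map records which intervals are sub-intervals of which. All names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class DurationRule:
    interval: str      # e.g. "order-to-delivery"
    max_days: float    # upper bound on the interval's duration

# Hypothetical containment: acknowledgement happens within the wider
# order-to-delivery interval.
within = {"order-to-acknowledgement": "order-to-delivery"}

rules = [
    DurationRule("order-to-delivery", 2),         # the wider constraint
    DurationRule("order-to-acknowledgement", 3),  # narrower scope, larger bound!
]

def inconsistencies(rules, within):
    """Report sub-intervals whose bound exceeds their enclosing interval's bound."""
    bound = {r.interval: r.max_days for r in rules}
    problems = []
    for inner, outer in within.items():
        if inner in bound and outer in bound and bound[inner] > bound[outer]:
            problems.append((inner, bound[inner], outer, bound[outer]))
    return problems

print(inconsistencies(rules, within))
# flags order-to-acknowledgement (3 days) as breaching order-to-delivery (2 days)
```

A real checker over RuleML would derive the containment relation from the rule structure itself; the point here is only that the narrower constraint breaching the wider one is mechanically detectable.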
Likewise the static functional requirements need to be supported with an editor to create forms that can be used to create example messages, perhaps saving the messages in different formats including XML. The XML messages then become artefacts in the same way as the non-functional RuleML artefacts.
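An example message artefact of this kind can be produced with nothing more than the standard library; the element and attribute names below are illustrative, not a prescribed schema.

```python
import xml.etree.ElementTree as ET

# Build an example order message as an element tree.
order = ET.Element("order", id="ORD-001")
customer = ET.SubElement(order, "customer")
ET.SubElement(customer, "name").text = "A. Buyer"
ET.SubElement(order, "item", sku="SKU-42", quantity="2")
ET.SubElement(order, "payment", method="card")

# Serialise to XML text: this is the artefact the analyst saves.
xml_text = ET.tostring(order, encoding="unicode")
print(xml_text)
```

In practice a forms-based editor would generate such messages from user input; the saved XML is what gets handed on to the architects.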
Finally, the dynamic functional requirements need tooling support to enable the business analysts to describe the flows between responsible services as sequence diagrams that can refer to the static functional requirement and non-functional requirement artefacts.
In short the base-artefacts from requirements gathering would be as follows:
- a set of RuleML rules and constraints
- a set of example XML messages
- a set of sequence diagrams over the messages, rules and constraints
Providing easy-to-use tools that support these artefacts enables the business analyst to gather requirements rapidly in a form that is easy to review with the business stakeholders, can be measured for consistency, and can be passed to the architects to draw up suitable blueprints that meet the requirements.
When the business analysts hand over the artefacts that they have created, the role of the architect is to create static models of the data and dynamic models of the behaviour that meet all of the requirements. Collectively we may consider these models the blueprint for the system. The artefacts that the business analysts hand over serve not only as input to the static and dynamic models but also as a means of testing their validity, and so of measuring them against the requirements.
A static model needs to describe the data that the system uses as input and emits as output. Such a model needs to constrain the permissible values as well as constraining the combinations of data elements needed. There are many tools available which do this today. Some are based on UML and some are based on XML Schema. The ones we like can do both and this provides a reference model for the data that can be tested against the described static functional requirements to ensure that the model captures everything that has been previously agreed.
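Real tooling would validate the example messages against an XML Schema or UML-derived model; as a standard-library-only sketch, the same idea can be shown by checking an example message against a hand-written description of the required elements. The schema dictionary is illustrative.

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a static model: root element -> required children.
required = {"order": ["customer", "item", "payment"]}

def validate(xml_text, required):
    """Return the list of required child elements missing from the message."""
    root = ET.fromstring(xml_text)
    missing = []
    for child in required.get(root.tag, []):
        if root.find(child) is None:
            missing.append(child)
    return missing  # an empty list means the example satisfies the model

good = "<order><customer/><item/><payment/></order>"
bad = "<order><item/></order>"
print(validate(good, required))  # []
print(validate(bad, required))   # ['customer', 'payment']
```

Running every example message from the requirements-gathering phase through such a check is exactly the measurement of one artefact (the model) against another (the examples).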
The artefacts from modelling are as follows:
- a data model that represents all of the documented requirements for data
- a dynamic model that represents all of the documented dynamic requirements
Validating models against requirements is not a one-time operation. It ensures that the delivered system represents all of the necessary data, but it also acts as a governance mechanism when the system runs, as data can be validated on the fly against the model; the model can likewise be used to control and guide further modifications of the system as it evolves.
The dynamic model needs to describe what data is input, and when, and what the observable consequences might be. For example a dynamic model might need to describe that an order is input and then a check made against the inventory to ensure the goods are available. If they are available then payment can be taken, and if payment is accepted then the goods are scheduled for delivery. Of course we might need to model this as a repeating process, in which a customer may order several goods in one go, or order again after each check against the inventory, and so on. Clearly any dynamic model needs to be able to describe repeating processes, things that happen in sequence and things that are conditional.
Validating dynamic models requires us to check our sequence diagrams against the dynamic model, to ensure that the model captures the described dynamic functional requirements of the system and so show that it behaves according to those requirements. As with the static data model this is a process that can be repeated to ensure that the system behaves correctly as it evolves and when it is operational. The key aspect of such a dynamic model is that it is unambiguous and can be understood by applications, which may verify models against input data from sequence diagrams as well as against messages that flow through a running system. This provides behavioural governance of a system throughout its life-cycle.
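To make this concrete, a dynamic model can be reduced, for the purposes of a sketch, to a finite-state machine over message names, and a sequence-diagram trace checked against it. The states and messages below are illustrative and are not WS-CDL syntax.

```python
# Illustrative dynamic model: state -> {message: next_state}.
model = {
    "start":   {"order": "ordered"},
    "ordered": {"stock-check": "checked"},
    "checked": {"payment": "paid"},
    "paid":    {"delivery": "done"},
}

def conforms(trace, model, start="start"):
    """Return True if every message in the trace is permitted, in order."""
    state = start
    for message in trace:
        allowed = model.get(state, {})
        if message not in allowed:
            return False
        state = allowed[message]
    return True

assert conforms(["order", "stock-check", "payment", "delivery"], model)
assert not conforms(["payment", "order"], model)  # payment before order is rejected
```

The same `conforms` check works whether the trace comes from a sequence-diagram artefact at design time or from messages observed in the running system, which is what makes the governance continual rather than one-off.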
WS-CDL provides a suitable language for describing dynamic models. It makes no assumptions about the underlying technology that might be used to build systems, it is unambiguous and it is neutral in its description so that it can describe how disparate services (i.e. legacy and new services) need to work together in order to deliver some business goal.
The dynamic model only describes the order of the messages (data) that are exchanged between different services; so it would describe the exchange of an order followed by the exchange of a payment and the subsequent credit check, and so on. It does not describe how credit checking is performed, nor does it describe how an order is processed. Rather it describes when and what messages are exchanged.
To implement, or construct, the services we can use the unambiguous dynamic model to determine what the interfaces of each service should be. You can think of these as the functional contract that each service presents. We can also use the dynamic model to lay out the behaviour of the services, and so the order in which the functions can be invoked. Collectively this provides a behavioural contract to which the services must adhere, ensuring that they do the right thing at the right time with respect to the dynamic model. Using the dynamic model in this way ensures structural and behavioural conformance to the model, and so to the requirements against which the model has been validated.
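The derivation of functional contracts can be sketched as follows, assuming the dynamic model has been flattened into a list of interactions, each naming a sender, a receiver and a message; a service's interface is then simply the set of messages it must be able to receive. The interaction list is illustrative.

```python
# Illustrative interactions extracted from a dynamic model:
# (sender, receiver, message).
interactions = [
    ("customer",     "order-system",   "order"),
    ("order-system", "warehouse",      "stock-check"),
    ("customer",     "payment-system", "payment"),
    ("warehouse",    "customer",       "delivery"),
]

def interfaces(interactions):
    """Derive each service's functional contract: the messages it receives."""
    contract = {}
    for sender, receiver, message in interactions:
        contract.setdefault(receiver, set()).add(message)
    return contract

print(interfaces(interactions))
# e.g. the order-system must offer an operation that accepts "order"
```

The behavioural contract is the combination of this structural view with the ordering information retained in the dynamic model itself, as in the state-machine sketch earlier.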
Contractually the business logic needs only to receive messages, process them and ensure that the messages that need to be sent are available. The service contracts ensure that messages are received and sent as appropriate.
Validation of messages against the static model while the system is operational can be added as configuration rather than in code, and validation against the dynamic model can be added similarly. This ensures continual governance of running systems against models that have been validated against requirements, and so shows that the system itself conforms to the requirements. This level of governance is akin to the continual inspection that architects, business stakeholders and other parties may apply to construction projects.
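One way this configuration-over-code idea can look, as a minimal sketch: which checks run against a message is driven by a configuration mapping, so governance can be switched on per deployment without touching the business logic. The validator names and checks are illustrative placeholders for real static- and dynamic-model validation.

```python
# Placeholder checks standing in for real model validation.
def check_static(message):
    """Stand-in for validating a message against the static data model."""
    return "order" in message

def check_dynamic(message):
    """Stand-in for validating message ordering against the dynamic model."""
    return True

VALIDATORS = {"static": check_static, "dynamic": check_dynamic}

# Deployment configuration: which validators are active.
config = {"validators": ["static", "dynamic"]}

def validate(message, config):
    """Run every configured validator; the business logic never changes."""
    return all(VALIDATORS[name](message) for name in config["validators"])

assert validate({"order": "ORD-001"}, config)
assert not validate({}, config)
```

Disabling a validator is then an edit to `config`, not to code, which is what keeps the governance mechanism separate from the system's business logic.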
Summary of the Methodology
The artefacts, the roles involved in their production, and the way in which the artefacts are used are summarised in the two tables below:
Table 1: Artefacts and their uses
Table 2: Artefacts and roles