Friday, 12 December 2008

Offshoring and Specification - Improving the Global Delivery Model

Offshoring has been with us for at least 10 years. Initially much of it was targeted at testing. A lot has been written about offshoring and about the role specification plays in achieving a successful offshoring delivery model. So this post is about that relationship: how an engine with a global delivery capability (and Cognizant is a fine example of this) can be made even better.

Offshoring is often done on fixed-price terms. The financial size of the project and the way in which the contract and subsequent remuneration are managed rely heavily on specifications, governance of the delivery and change management. To date most offshoring companies have similar, albeit subtly different, models for doing this.

The advent of UML as a language for modeling and specification has certainly helped in understanding what should be delivered. The use of programme management and enterprise architecture functions to govern delivery with clients is a common feature. Many offshoring companies, and indeed those that were local Systems Integrators, have to a large extent pioneered the roles that we play, from programme management roles to enterprise architect roles. Governance and supporting frameworks have been developed to help the process and have been used to great effect. TOGAF-certified architects and MSP-certified programme managers are in demand. Of course this does not mean TOGAF and MSP are the only games in town; I only mention them because they are well known.

The fundamental cost in offshoring the Software Development Life Cycle (SDLC) is one of people. And because people are involved, the removal of ambiguity in specifications, where possible, is a critical success factor in both protecting the margins of offshoring companies and protecting the delivery of solutions to customers.

Not much has changed in the specifications themselves, in the methods used to create them, or in the way specifications are used to guide the SDLC. Typically success is characterised by a model in which an Enterprise Architect works alongside Business Analysts onshore to frame a solution, while the Programme Manager liaises with the customer and the Enterprise Architect to ensure a suitable roadmap for phased delivery of the solution. The same two roles work together on the contract details, which reflect the roadmap. The programme (that is, the collection of projects needed to deliver the solution) kicks off, and the Enterprise Architect and Programme Manager (often with counterpart roles on the client side) ensure the programme is well governed, meeting all the targets, adhering to the standards of communication and delivery needed, and managing changes.

Within the process that is enacted, the specification of the solution becomes a key part of both governance and change management. The specification is the document (or documents) that details what needs to be built in order to deliver the solution. Of course the entire process is larger than just this piece, but for the most part the rest of the process deals with change management and seeks to ensure alignment of the solution to the customer's needs throughout the life of the programme. Hence change management: what we think is correct at the outset is never quite what is needed over the life of a programme; things change.

The specifications delivered to an offshore engine need to make clear what functions need to be written, what inputs they might have and what outputs they might deliver. The specification also needs to make clear the order in which functions can be used. I have not mentioned data explicitly because it has no meaning without the functions. A piece of data is just that until we decide to do something with it. What we do with it is essentially a function.
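To make that concrete, here is a minimal sketch, in Java, of the kind of thing a specification must pin down: each function, its inputs and outputs, and the permitted ordering. The service and type names (OrderService, QuoteRequest and so on) are invented purely for illustration and come from no real specification.

// Hypothetical service contract sketch: functions, inputs, outputs and
// permitted ordering made explicit. All names are illustrative only.
public interface OrderService {

    // Step 1: must be called before placeOrder for the same customer.
    Quote requestQuote(QuoteRequest request);

    // Step 2: only valid after requestQuote has returned a Quote.
    OrderConfirmation placeOrder(Quote acceptedQuote);

    // May be called at any time after placeOrder.
    OrderStatus checkStatus(String orderId);
}

// The input and output types are part of the specification too.
record QuoteRequest(String customerId, String productCode, int quantity) {}
record Quote(String quoteId, double price) {}
record OrderConfirmation(String orderId, String quoteId) {}
record OrderStatus(String orderId, String state) {}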

The challenge in creating and delivering good specifications is to ensure that they are internally consistent, that they are externally complementary (by which we mean that two pieces of software will work together, or interoperate), and that they are aligned to the business goals of the customer.

The way in which we meet the first challenge, internal consistency, is largely through governance and, within it, review. The way we meet the second challenge is the same. The way we meet the third challenge is through transparency with the customer over the SDLC and the solutioning process, and of course through continued governance.

Of course when things change we need to assess the impact of that change on projects already in flight. Again this is done through review under governance.

The pattern here is that the key challenges are met with people-based processes. And the problem with that is ambiguity. The specifications are often ambiguous internally and even more so externally; not because people are not trying, but because the level of abstraction of the specifications is not amenable to adequately describing what is needed.

Take UML State Machines and WSDL contracts as an example. We might have 100 services to create. They all have WSDL contracts. Some of the services will be stateless (typically data look-ups, technical services and so on); some will be stateful (this doesn't mean they hold state within the interface, but it may well mean that they have knowledge of one or more business transactions, and so state relative to the business transaction becomes very important). If 20 of these services need to collaborate, then the UML State Machines need to reflect that collaboration. The way we ensure it works is to check each State Machine against what we think are the related State Machines. As an example, look at the two below: they should be complementary, but are they?
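To make the manual check a little more concrete, here is a deliberately simplified sketch in Java. The buyer and seller machines, and the message names, are invented for illustration; each machine is reduced to a set of send/receive transitions, and the check only asks whether every message one side sends is received somewhere by the other. A real behavioural check (for example one based on session types) would also verify ordering.

import java.util.*;

// Simplified sketch of checking two state machines for complementarity.
// A transition is: from-state, send or receive, message name, to-state.
// The machines and messages are invented for illustration.
public class ComplementarityCheck {

    record Transition(String from, boolean send, String message, String to) {}

    public static void main(String[] args) {
        List<Transition> buyer = List.of(
            new Transition("Start", true, "Order", "AwaitingAck"),
            new Transition("AwaitingAck", false, "Ack", "Done"));

        List<Transition> seller = List.of(
            new Transition("Idle", false, "Order", "Processing"),
            new Transition("Processing", true, "Ack", "Idle"));

        // Every message the buyer sends must be received somewhere in the
        // seller's machine, and vice versa. Ordering is ignored here.
        System.out.println("Seller handles all buyer sends: "
            + messages(seller, false).containsAll(messages(buyer, true)));
        System.out.println("Buyer handles all seller sends: "
            + messages(buyer, false).containsAll(messages(seller, true)));
    }

    static Set<String> messages(List<Transition> machine, boolean send) {
        Set<String> result = new HashSet<>();
        for (Transition t : machine) if (t.send == send) result.add(t.message);
        return result;
    }
}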





Life got a little better when BPMN came along. Its level of abstraction is higher. But the problem is that a BPMN model very quickly becomes unmanageable and unintelligible, because the complexity is delivered as links across services (the roles or actors). Imagine a BPMN diagram for a seemingly straightforward problem. I show one below, and you see immediately that the complexity makes it unreadable (and yes, I know it is small, but the point is that the picture is clearly complex).



So how do we do it better? How can we deliver better specifications, and so improve the global delivery model and leverage offshoring at an industrial scale, without needing so many heavyweight people-based processes? In short, how can we automate some of this and use computational power to make our lives easier?

The answer is to use testable architecture. In my next blog I shall show how it works, how it can be applied and the benefits that ensue. I shall leave you now with a simple business view of testable architecture for a simple example, generated from a language that is unambiguous and really does specify at the right level of abstraction:

Friday, 21 November 2008

Some fun stuff

I lived and worked in the US from November 1985 to August 1987. During that time I probably learned more about IT than at any other time. I worked on some cool projects, met some cool people and had a great time.

I spent many a Saturday night on the lower east side of Manhattan in a small bar where I listened and generally lost myself to "Joey Miserable and the Worms". See the link to the left.

Normally I blog on tech stuff, but this time, for fun and in memory of and affection for those times, I leave you with Joey Miserable and the Worms and Worm Opus.

Tuesday, 18 November 2008

Enterprise Transformation


Overview

Transformation seems to be a bit of a buzzword these days. I have seen an increase in bids that involve, or are centred around, transformation. There is much written about transformation available on the web, and most of it is in the same vein: IT is an enabler, integration is costly, and SOA provides some way to reduce this cost and so increase the agility of the underlying IT assets. What I want to do is provide more detail on why this is so.

Enterprise Transformation and IT
So what is transformation in an enterprise and IT context?

Transformation at an enterprise level is about process, people and organisational structure. IT is only an enabler, its role is to automate where possible and assist where full automation is not possible.

For transformation to succeed it must have a business context. When we transform we know where we start from and where we need to go. Transformation is better achieved, and returns greater value, if it delivers return on investment along the way and becomes a programme of many projects that implement the transformation in alignment with business imperatives. If there are no business imperatives that can be articulated then there is no reason to transform.

Enterprises engage in transformational programmes of change because they want to improve their own efficiency and because they wish to re-align their business with what they see as future market conditions. Of course the market changes, and so the ability to continually adapt to change becomes ever more important to the place an enterprise needs to go. Equally, an enterprise's notion of efficiency also changes over time. Some processes that previously would be in-house become commodities and can be outsourced. But to achieve this the enterprise needs a framework in which the processes and the IT landscape sit that can support major adaptation.

The people in this transformation, as they have always been, are very flexible. Transformation can be painful, but in the grand scheme of things people adapt and perform in line with the business imperatives that drive the transformation. The inhibitor to transformation is more often than not IT. Computer systems and software are not good at adapting to new situations, and this is the bigger challenge for any transformation.

At an enterprise level, transformational change requires re-alignment of process, people, organisational structure and supporting IT to business imperatives. It isn’t just about IT, it isn’t just about people, it isn’t just about process or organisational structure. It is about business imperatives, business context and the way in which business goals can be achieved. Processes are simply a way of ensuring that all of the moving parts (people and IT) do what they need to do at the right time. People and organisational structure are about the authority and roles needed to achieve business goals. IT is simply a way to automate and assist people, through encoding of some or all of the processes, to ensure greater efficiency. Given it is all about business goals and involves all of these moving parts, it should come as little surprise that this is what Enterprise Architects do. They don’t do technology, they do transformation.

I've worked on process modelling since the late 1980's. First with Sema Group and then with Object Design for IBM on Fidelio (the code name for Flowmark, which is now part of WebSphere). It became apparent then that business processes would be very important to enterprises as IT became more prevalent in our society. The downside was always the integration of components, not the process topology. Designing a process as a model is the easy bit, but enacting it over disparate resources is always the more costly exercise. Those of us that worked on process modelling back in those days knew that integration, and reducing its footprint in the equation, would always be the holy grail. If we could make the cost of integration low enough and the time taken to integrate short enough, then IT would no longer be the impediment.

The role of the Enterprise Architect
Good enterprise architects have an understanding of the technology drivers that enable transformation to be effective and, at the same time, an understanding of the organisational structure and roles that need to be in place to enact transformation. They don’t have all of the answers, but they think at a similar level to a CXO, which is why they often face off to them. They facilitate the reshaping of an organisation, an enterprise if you will, with the CXO driving and the enterprise architect providing grounded, forward-thinking advice and guidance on how transformation can be supported by IT in a holistic way commensurate with the business goals.

The role of Models, SOA, BPM and Business Rules
Having a good understanding of IT and future IT trends is one of the key skills that good enterprise architects bring to the table. The advent of SOA, the resurgence of BPM and the newer resurgence of Business Rules make transformational change much easier to achieve and much easier to drive. SOA cuts the cost of integration to more manageable levels through the use of standards and standards-compliant tooling. BPM provides a framework for process encoding that gives structural clarity to how an enterprise will work (across the IT and people divide). And Business Rules provide the controls for steering the enterprise ship as it sails towards its archipelago of business goals.

But of course none of this is any good unless we can capture where we are and where we need to go. Having a profound understanding of the AS-IS landscape (by which I mean the process, people, organisation structure and IT) is the base for transformation. Such an understanding needs to minimise ambiguity and maximise clarity. Good models become essential. Good models that can be tested become highly valuable, and good models that can be reasoned over become a differentiator.

Having that AS-IS state allows the CXO team to better understand where they are and so construct similar models that describe where they want to go.

The journey to a TO-BE landscape cannot always be determined in advance; it is a journey. Being able to flex and adapt the enterprise without being hidebound by IT enables the CXO to try things out rapidly. In much the same way as we use agile methods to develop solutions in the face of inexact requirements, and in much the same way as we might use CMMI to drive iterative improvements, we can apply the same techniques to the enterprise as a whole and so continually adapt it to improve efficiency and meet business goals.

Where to start
The first step of this journey is to move towards service-enabling the enterprise: capturing the key processes and automating where we can, picking low-hanging fruit to turn an AS-IS landscape into a more agile AS-IS landscape. Picking the low-hanging fruit can often help to pay for the enablement over time. When enough enablement has been done the transformational journey can begin in earnest, because at that point enough of the enterprise is flexible and agile enough to be flexed.

A danger in picking low-hanging fruit is that it can silo enterprise transformation thinking. Looking at one process is never sufficient, as it does not engender any synergy; a single process view does little to help transformation. Rather, a more global approach is needed. Capturing the AS-IS and then transforming to a TO-BE state over time requires a global model approach which is neutral to the technology and crosses process boundaries. Testable architecture concepts help to capture a global model by allowing an AS-IS description (and the journey to a TO-BE state) to be written formally as a collaboration in which processes may collaborate at the top, middle or bottom of the process graphs. This helps to rationalise the IT real estate and encourages synergy.

Conclusions
Consider the humble Ford car manufacturing plant and the American football team. In the case of the former, they can re-purpose a plant to manufacture one car type or another in 24 hours. They have delivered adaptability for their business. In the case of the latter, the coach has a play book and a set of behaviours (the players). The goal is to win the game and the objective is to make a first down each time, or stop the other team, or take a field goal. The coach matches behaviours to plays at each stage of the game, demonstrating adaptability.

Thus for me the key to enterprise transformation is to free the enterprise from the moribund pace of change of IT through service enablement, focussing on early ROI to justify the enablement, and with the longer-term vision of CXOs to have a flexible and adaptable enterprise for the future. This is why global models are important (and so testable architecture, delivered first through CDL tools). This is why SOA is important. This is why BPM is important, and this is why Business Rules are important. They all help in enabling adaptability at differing levels of an enterprise: at the top, in the middle and at the bottom.

And finally, the guardians of technology enablement are not the vendors (they have little insight into how transformations are performed); that role rests with the systems integrators leading the industrialisation of IT.


References:

IT for Enterprise Agility

BPM is Not the Same as BPR

Building the Agile Enterprise

Friday, 12 September 2008

Finding the right services for a problem

Almost certainly this will be the start of a stream of blog entries. I am still mulling over the plethora of techniques we are told about and still trying to see if any of them really help, or if it is a case of good design principles rehashed yet again. Perhaps if I come up with something concrete we will call it YASIM (Yet Another Service Identification Methodology). Anyway, here goes part 1.

One of the most difficult things to do in SOA-based projects is finding the right services. There is much fear, uncertainty and doubt about this. To meet the challenge many vendors and integrators are seeking to create methodologies to support the process. Consequently there are lots of methodologies about logical designs and how to derive services from them. Some use data flow techniques, others use process modeling techniques, and all have some taxonomy that classifies services (albeit with different names) as "business", "data", "technical" and "utility".

The data flow and modeling techniques are intended to tease out services that are business aligned by looking at the data (data flow) or the process. Services are derived from such views and then classified.

Data modeling apart (largely because it is not the way in which the prevailing winds blow), the prevailing approach relies on some notion of activity grounded in a role. BPMN is a typical graphical language used for this. The technique is fine for the most part, although in the case of BPMN scalability is an issue because it puts roles at the front and obscures the process as a result; nonetheless it works for the most part in deriving services.

Of course deriving what services are needed is only one step in the process of delivery. What is needed in support of this is to identify existing services that might fulfil the need. In a greenfield site this is not a problem, but as SOA adoption grows one of the promised benefits is reuse. So how do we ensure reuse and what does it mean?

Finding a service that has both the correct method signatures and the correct behavior, the keystone of reuse, is for the most part an aspiration. It would be wonderful if we could ask a service repository for a service with methods "foo(FooXML)" and "bar(BarXML)" in which "foo" is always called before "bar" or in which order is irrelevant and for the repository to tell us what matches our query.
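No repository I know of supports such a query today, but a sketch of the kind of query we would like to make might look like the following. Every API name here (ServiceRepository, findServices and so on) is hypothetical; none of it corresponds to a real registry product.

import java.util.List;

// Hypothetical sketch of a behaviour-aware repository query. No real registry
// API is used here; every name below is invented for illustration.
public class RepositoryQuerySketch {

    record Operation(String name, String inputType) {}

    // A behavioural constraint: "first" must always be invoked before "second".
    record OrderingConstraint(String first, String second) {}

    interface ServiceRepository {
        List<String> findServices(List<Operation> operations,
                                  List<OrderingConstraint> ordering);
    }

    public static void main(String[] args) {
        // Stub repository standing in for the behaviour-aware registry we
        // would like to have; it just pretends one service matches.
        ServiceRepository repository = (ops, ordering) -> List.of("QuoteService v2");

        // "Find me services offering foo(FooXML) and bar(BarXML),
        //  where foo is always called before bar."
        List<String> matches = repository.findServices(
            List.of(new Operation("foo", "FooXML"),
                    new Operation("bar", "BarXML")),
            List.of(new OrderingConstraint("foo", "bar")));

        matches.forEach(System.out::println);
    }
}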

The reason we cannot do this sort of search is partly that the behavior of a service is not captured in the repository, and partly that the architecture description (data flow model or process model) does not support any formal linkage between itself and the service descriptions in the repository. So we are left to figure it out, and the best we can do is find that a service has a "foo" that takes a "FooXML" and a "bar" that takes a "BarXML" as input (and similarly for the outputs).

I was at a recent Scribble meeting at Imperial College and I saw that the future is now. The Pi4 Technologies Foundation, along with Imperial College, Queen Mary College and a few others, has been working on a language called Scribble. It is a "son of" WS-CDL: a much cleaner curly-brace notation for describing the dynamic behavior of an architecture. Most importantly it has a behavioral type system. The demonstration showed that an encoding of WS-CDL can be rendered into Scribble and then the type system can be used to check for behavioral conformance of JBossESB actions. If you change the WS-CDL after having generated the JBossESB (directly or by hand), you can see areas of conformance and non-conformance. Finally we have the linkage between the intent of the architect and the implementation contract of a service in a true SOA platform.

I am looking forward to playing with this when it is released as part of project Overlord, and I expect it to make reuse the norm rather than the exception, and to do so at a fraction of the cost.

Monday, 21 July 2008

Mulling over architecture

I've been mulling in no specific direction today. Letting the wind take me to wherever it wishes and that often leads me to think more clearly.

I was talking to my dad over the weekend. He was involved in avionics most of his working life and was the examiner for the Institute of Quality Assurance in the UK. We were talking about the Farnborough Airshow, which I attended on Friday, and about fly-by-wire. The European Fighter Aircraft (Typhoon) is fly-by-wire, as is the A380. I don't know about anyone else, but the standard of software development would make me concerned if I had to fly in one of those. I try to forget it all when I board an aircraft so I don't worry.

It turns out that much of what they do to increase the reliability of fly-by-wire is to use redundancy. Triple systems are one approach: the components are developed independently and then monitored. When they diverge, majority voting is used to determine the likely correct behavior. It is statistical, because there is no guarantee that the behavior of the two that agree is correct; it is simply the case that two agreeing is likely to lead to correctness.
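A toy sketch of the majority-voting idea follows. There is nothing avionics-grade about it, and the three "channels" are just placeholder functions standing in for independently developed implementations of the same control law.

import java.util.function.IntUnaryOperator;

// Toy sketch of triple redundancy with majority voting. Three independently
// developed channels compute the same value; if two agree, that value wins.
public class MajorityVoter {

    static int vote(int a, int b, int c) {
        if (a == b || a == c) return a;   // a agrees with at least one other
        if (b == c) return b;             // b and c agree
        throw new IllegalStateException("No two channels agree");
    }

    public static void main(String[] args) {
        // Placeholder "channels": independently developed implementations
        // of the same function would sit here in a real system.
        IntUnaryOperator channel1 = x -> x * 2;
        IntUnaryOperator channel2 = x -> x * 2;
        IntUnaryOperator channel3 = x -> x * 2 + 1; // a diverging channel

        int input = 21;
        int result = vote(channel1.applyAsInt(input),
                          channel2.applyAsInt(input),
                          channel3.applyAsInt(input));
        System.out.println("Voted output: " + result); // 42, by majority
    }
}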

In high-availability systems of this nature they use multiple compilers too, and they might choose to use different chip sets.

Given my stance on top-down it is heartening to see that project definition is what often guides these systems. No code is cut until after project definition, which in turn provides the specification of the system. So what happens when they change things? They negotiate the change and provide a plan for its introduction. It cannot be done on the fly, because the complexity of the change often militates against a more agile approach. So they simulate, they test, and so on. What they have are very good governance processes which document the change, the impact and the resulting tests prior to productionisation. All of this is similar to the approaches we use in commerce-oriented solutions, except that we do it without simulation and without testing the architecture in any way.

The role of project definition is to provide the requirements against which testing can be measured and against which simulation can occur. The simulation provides a first step towards testable architecture, ensuring that the overall design is commensurate with the requirements.

As an aside, High Integrity Software development became a real vogue in the 1980's. One of the earliest proponents of the formal methods which have underpinned High Integrity Software was Tony Hoare (at Elliott Brothers from 1960 until 1968), who oddly enough was at Elliott Brothers during my dad's tenure (John Talbot, 1961 until 1965 and then again 1968 until retirement in the 1990's).

So what does this mean for the world of software and commerce? Formal methods are valuable but not a panacea. They need to be introduced by stealth, with their benefits laid out. They need to be employed early. As Anthony Hall elaborates:

"It is well known that the early activities in the lifecycle are the most important. According to the 1995 Standish Chaos report , half of all project failures were because of requirements problems. It follows that the most effective use of formal methods is at these early stages: requirements analysis, specification, high-level design. For example it is effective to write a specification formally rather than to write an informal specification then translate it .It is effective to analyse the formal specification as early as possible to detect inconsistency and incompleteness. Similarly, defining an architecture formally means that you can check early on that it satisfies key [functional and non-functional] requirements such as [message order and content], security [and performance SLA's]."

Formal methods have been used in avionics for some time, hence Tony Hoare's involvement at Elliott Brothers. They are for the most part hidden and become a function of the tools that are used for design. The use of the Z notation in the early 1980's found popular acclaim in Ada-based systems. The problem was its cryptic nature and therefore the lack of skills in using it. Which is why stealth is a good thing: tie it up in a tool and make it easy (just like type systems in programming languages).

We have a much clearer formal understanding today of distributed computing through the work of Tony Hoare and Robin Milner. What is needed are tools to help us define architectures that remove the ambiguity of human translation and provide a mechanism for the analysis that is needed, hence the cry for formalism. The pi4soa tool suite is but a start. It can become more refined and integrated with other tools (such as Archimate and the specific tools that support Archimate). Architecture tooling and tooling for design is not the most popular of directions because of the lack of runtime scale for remuneration. But such tools are much needed, as in the end they will enable solutions to be built faster, with higher quality and at lower cost, whilst remaining suitably agile and aligned to the business; and this is what formalism (suitably hidden) can provide.

Tuesday, 1 July 2008

The Industrialisation of IT

Possibly the most important invention that gave rise to the industrial revolution was the micrometer. The inventor of the micrometer was William Gascoigne in the 17th century. It was directly responsible for the engineering discipline used in constructing the steam engine and in constructing the Enfield rifle that was used during the civil war in the United States.

What the micrometer did was remove ambiguity. It gave rise to a language of design that enabled precision engineering which by extension gave rise to industrialisation with bullets being made in one place and gun barrels in another.

What has this got to do with CDL?

CDL is possibly as important in the industrialisation of IT. It gives us a language of precision for a system of services, which in turn ensures that services are precise (by design) and so interoperate properly, just as the micrometer did for Enfield and Stephenson's Rocket.

In classic engineering today, simulation is used along with some formal mathematics to test out a design to ensure that it will work. In CDL the same principle is used to simulate and to test, so that before a line of code is cut the CDL description is shown to be valid against the requirements and to be correct in computational terms (i.e. free from livelocks, deadlocks and race conditions).
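As a rough illustration of what "correct in computational terms" means mechanically, here is a toy reachability check over an invented transition system. It is not how the CDL tooling actually works, but it shows the idea that a deadlock is a reachable state which is not final and yet has no way forward.

import java.util.*;

// Toy deadlock check: explore all reachable states of a labelled transition
// system and flag any non-final state with no outgoing transitions.
// The states and transitions are invented for illustration.
public class DeadlockCheck {

    public static void main(String[] args) {
        Map<String, List<String>> transitions = Map.of(
            "Start",       List.of("QuoteSent"),
            "QuoteSent",   List.of("OrderPlaced", "Rejected"),
            "OrderPlaced", List.of(),            // deadlock: not final, no moves
            "Rejected",    List.of("End"),
            "End",         List.of());
        Set<String> finalStates = Set.of("End");

        Deque<String> toVisit = new ArrayDeque<>(List.of("Start"));
        Set<String> seen = new HashSet<>();
        while (!toVisit.isEmpty()) {
            String state = toVisit.pop();
            if (!seen.add(state)) continue;
            List<String> next = transitions.getOrDefault(state, List.of());
            if (next.isEmpty() && !finalStates.contains(state)) {
                System.out.println("Deadlock at state: " + state);
            }
            toVisit.addAll(next);
        }
    }
}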

Testable architecture, along with a language of discourse that is precise, removes the ambiguity between implementation and requirements. It enables industrialisation and facilitates offshoring of implementation in the same way that Enfield used the micrometer, and the precision it gave to design, to manufacture solutions in different locations and yet ensure that things work when they are put together.

Monday, 23 June 2008

Response to Luis's comments in my last blog entry

Firstly thank you for the comment Luis.

It has been a little while since I last blogged. In that time the latest Pi4SOA release came out. Some of its capabilities are listed below in the narrative.

To get the latest release follow the instructions at the end.

On to my response to Luis's comments.

Luis wrote:

Very good article indeed. It clearly describes the main doubts most people have regarding the real added value of CDL and how to addressed it. It is clear vendors are more worried about short term ROI other than best practices and real heterogeneous enterprise architecture; evidence of this is how vendors sell “SOA Suites” as a product / package and not as an architecture approach or practice as it should be (this is probably one of the reasons why many SOA projects end up in DOA –dead on arrival– instead of addressing the real business needs and technological value)

One suggestion; it would be nice to mention how and when CDL can be plug together with other technologies such as BPMN, SCA and BPEL on the different stages of a SOA project and the roll it has one the solution is in place (e.g. in the governance).


Here is my response:

The Pi4SOA tool suite currently supports BPMN. It is simply an export format: CDL is exported to BPMN. However CDL is richer than BPMN, so the export is not free from loss of the expressions in CDL. It may nevertheless serve as review collateral.

BPEL is also supported. You can generate BPEL just as you can generate Java. The downside is that BPEL is more restrictive than CDL, because CDL is wider than classic web services and provides message exchange patterns that BPEL cannot handle but which are applicable to SOA (i.e. first-class notifications). The BPEL generation is still prototypical as it does not garner much demand. However I think that Overlord will change that.

As to SCA, no current bindings exist, but I do not think that SCA in and of itself is really a target for CDL, as it sits two levels below rather than one level below. One might of course make the association of SCA to ESB, in which case the latest release supports JBossESB out of the box. The problem with ESBs is that there is no real standard; rather, an ESB is a collection of integrated components (Message Bus, Orchestration, Registry, Business Rules and adapters).

Governance is all about control and traceability. This is exactly what CDL provides as an SOA blueprint for a dynamic model. It is the language of architecture in which Java, C# and BPEL provide business logic inside services and in which CDL describes the collaborative behavior of the services as peers. It provides traceability back to requirements (example messages and sequence diagrams) today, and in the future will also deal with the non-functionals (SLAs, WS-Policy statements and so on). It provides total coherence with implementation through Java generation, UML (XMI model) generation and BPEL generation, as well as providing runtime checking of behavior against the description. So it provides top-to-bottom alignment from requirements to implementation. In this way it provides a governance platform that can help to manage the complexity of change over a large real estate of services. Which is why Cognizant and Redhat are interested in CDL.

Downstream we might try to join it up with ArchiMate, which would provide the route to requirements from the business. But that is some time away and work has yet to start. If anyone is interested then contact me at the Foundation (see below).

To use the latest release follow the steps below:

1. Go to www.pi4soa.org
2. Select "Download->Browse All Files"
3. Press the green "Download" button
4. Select the appropriate platform and download (See below)

pi4soa2.0.0.CR1-eclipse3.3.2-linux.tar.gz
pi4soa2.0.0.CR1-eclipse3.3.2-macosx.zip
pi4soa2.0.0.CR1-eclipse3.3.2-win32.zip
pi4soa2.0.0.CR1-Release-Note.pdf
pi4soa2.0.0.CR1-withsrc.zip
pi4soa2.0.0.CR1.zip
pi4soa.sar

I also have lots of examples which I am happy to share. To get them you will need to email me at pi4tech (steve@pi4tech.com).

Wednesday, 7 May 2008

Show me the money in CDL

Summary
The value of CDL is that it takes us from art to engineering by ensuring good governance that is supported by tools throughout. It gets you to a solution much faster and more accurately, with higher quality. It does this through formal validation of CDL descriptions against requirements, through monitoring of services against CDL descriptions, and through directed guidance of service construction via generation based on a CDL description. The formal tie-in of a description to requirements, and the direct linkage to monitoring and generation, is what provides quality and time to market, coupled with the level at which CDL as a description language operates.

WS-CDL (CDL) has been around now for some time. The Pi4SOA tool suite has found great favour among vertical standards bodies (ISDA, ISO, TWIST, HL7 and many others). Researchers continue to plough a furrow towards ever more interesting analysis of CDL as well as looking towards a son-of-CDL. With Redhat's announcement of their SOA governance project (Overlord), and the central role played by CDL and the Pi4SOA tool suite in it, one could be forgiven for assuming that SOA platform vendors and tool vendors would show interest in CDL and be actively looking at working on it and providing it to their communities.

So why is this not the case?

Some of it is still rooted in the adverse and misleading publicity that CDL received at the hands of some analysts and many large vendors. The deliberate confusion created in the minds of the population at large through the BPEL vs CDL debate caused much damage and was founded on many, many fallacies and misunderstandings. But perhaps the fundamental reason for the lack of tools from vendors is rooted in the classic question "show me the money".

Despite some large corporates asking for CDL tooling, vendors have not really provided anything at all. And so we are left with a few tools, largely open source, from smaller players.

The SOA platform vendors make their money on runtimes, not on design, so they are focussed on execution and not description. This is why VCs often say you cannot make money from tools. It is why Eclipse is open source. So Java, BPEL, J2EE application servers, ESBs and all of that stuff is all about execution and the scale that runtime brings to revenue for vendors.

CDL is a description language, not an executable language. At first glance it is not clear where there is any revenue. It has no runtime component. Its value at first glance is in clean, concise, accurate descriptions of System Architectures in terms of their observable behavior and message exchanges. So it is all about design and not execution.

One might argue that WSDL is also a description and yet has value. But that value is entirely driven by its use in execution. To have Web Services a WSDL description is needed. The attendant software to type-check, marshal and unmarshal operations and data, and to ensure correct message exchange patterns are adhered to, is all part of the runtime machinery needed to use WSDL.

WSDL has another value in that it is a language to describe the technical contract of a single service. It might not capture behaviour, but it does capture the functional footprint and data types that a service needs to be useful at all. But this value, as a contract, is not why SOA Platform vendors invest in WSDL. The reason they invest is that they can clearly see the value in their runtime environments and so the revenue that accrues.

So what of CDL? As with WSDL, CDL is a very good language for describing all of the contracts needed for a set of services. It describes the collaborative contract that is, in effect, a System Architecture description. But that, as far as SOA Platform vendors would be concerned, does not generate sufficient revenue to justify investment.

On the other hand there are many UML vendors who provide design-time tools that users do buy. So there is a value. IBM has Rational, No Magic has MagicDraw, and so on. But it is not a big-revenue-ticket business. It is niche.

CDL can play a role in the execution phase, and this is where the key revenue value can be clearly seen. If we have a System Architecture description that describes the externally observable behavior of services, then we can monitor the runtime services against that description. This technique is like BAM, but it is really BAM with teeth, as we can measure what we observe against what we expect. The result is monitoring, or governing, services such that we can determine where problems exist, based on an artefact that also governs the change of the system as a whole and may be used to drive evolution. This is why CDL is a key part of Redhat's SOA governance project, Overlord.
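A bare-bones sketch of that monitoring idea follows. The expected exchanges are hand-written and invented here; in practice the monitor would be driven from the CDL description itself rather than a hard-coded table.

import java.util.*;

// Bare-bones sketch of "BAM with teeth": compare observed message exchanges
// against the exchanges the architecture description says are allowed next.
// The expected model here is hand-written for illustration; in practice it
// would be derived from the CDL description.
public class ChoreographyMonitor {

    private final Map<String, Set<String>> allowedNext;
    private String currentState = "Start";

    ChoreographyMonitor(Map<String, Set<String>> allowedNext) {
        this.allowedNext = allowedNext;
    }

    void observe(String exchange) {
        if (allowedNext.getOrDefault(currentState, Set.of()).contains(exchange)) {
            currentState = exchange; // states named after the last valid exchange
        } else {
            System.out.println("Violation: " + exchange
                + " not permitted after " + currentState);
        }
    }

    public static void main(String[] args) {
        ChoreographyMonitor monitor = new ChoreographyMonitor(Map.of(
            "Start",        Set.of("RequestQuote"),
            "RequestQuote", Set.of("Quote"),
            "Quote",        Set.of("PlaceOrder", "Reject")));

        monitor.observe("RequestQuote"); // conforms
        monitor.observe("PlaceOrder");   // violation: Quote was skipped
    }
}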

From a user perspective, that is some entity wishing to deliver a solution, the value is more profound. On one side, CDL provides possibly the very first formalised way of delivering an artefact, the System Architecture description, that can be validated against requirements and so yields testable architecture. This artefact represents the TO-BE state of a system. On the other side, a CDL description can be used to describe, in much the same way, the AS-IS state of the system. In this case the artefact can be validated by automated observation (monitoring) to ensure that it truly represents the AS-IS state. When there is no variance in the monitoring we have a stable AS-IS description.

So CDL provides a means of having testable System Architectures for a TO-BE description, a means of defining an AS-IS description, and a means of validating that description against an existing IT real estate.

Clearly the difference between the AS-IS and TO-BE states represents the scope of work to be carried out. Because CDL is a formal description, analysis to identify the gap between AS-IS and TO-BE is much easier and faster, and can be supported by tools to ensure that the gap is correctly identified. Furthermore, the changes to the AS-IS state can be managed over the lifetime of a programme of work with continuous, real-time governance, through monitoring, to ensure that the solution delivered is correct at all times.
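In crude terms the gap analysis can be thought of as a difference over the two formal descriptions. A toy sketch of that idea, with invented interaction names, follows; a real analysis over CDL would of course compare behaviour, not just the names of interactions.

import java.util.*;

// Toy gap analysis: the scope of work is what the TO-BE description needs
// that the AS-IS description does not already provide, plus what must be
// retired. Interaction names are invented for illustration.
public class GapAnalysis {
    public static void main(String[] args) {
        Set<String> asIs = Set.of("RequestQuote", "PlaceOrder", "FaxInvoice");
        Set<String> toBe = Set.of("RequestQuote", "PlaceOrder",
                                  "SendInvoice", "TrackDelivery");

        Set<String> toBuild = new TreeSet<>(toBe);
        toBuild.removeAll(asIs);
        Set<String> toRetire = new TreeSet<>(asIs);
        toRetire.removeAll(toBe);

        System.out.println("To build:  " + toBuild);   // [SendInvoice, TrackDelivery]
        System.out.println("To retire: " + toRetire);  // [FaxInvoice]
    }
}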

The value of CDL, in a nutshell, as has been blogged before, is that it takes us from art to engineering by ensuring good governance that is supported by tools throughout. It gets you to a solution much faster and more accurately, with higher quality.

As Redhat starts to use the monitoring aspects of CDL, SOA vendors will inevitably move towards it, and I would anticipate SOA governance and BAM settling on CDL.

Sunday, 16 March 2008

The end of coding as we know it?

I was talking to a friend and colleague, who shall remain nameless, about the use of models as a principal means of deriving applications. Oddly enough, the day before, I was also talking to one of my new colleagues at Cognizant about something not dissimilar. In the former case there is at least one (and probably many) organisation that now seeks to reduce the coding burden and has made efforts to turn its coding shops into testing shops with a little coding on the side. In the latter case we were talking about the real IP being in the process models and not the coding.

Clearly there is much work in MDA circles to solve these problems. After all, there have been attempts at moving to executable UML. Few, if any, have really succeeded, and those that come close tend to do so based on a more siloed view of an application. There are also initiatives within OMG to codify process models based on BPDM and relate them back to UML. This latter move is of considerable interest because on the one hand it recognises that UML today does not facilitate the encoding of business processes, and on the other it recognises the need for some description of the peered observable behavior of a set of roles, which we might call a choreography description.

Can we really move towards a world in which models drive everything else, and do so automatically? And if we can, what do these models need to provide? What are the requirements?

I would contend that any such model, we might call it a dynamic blueprint for a SOA, needs to fulfill at least the following requirements:

  1. A dynamic model MUST be able to describe the common collaborative behaviors of a set of peered roles.
  2. A dynamic model MUST NOT dictate any one physical implementation.
  3. A dynamic model MUST be verifiable against requirements.
  4. An implementation of a dynamic model MUST be verifiable against that dynamic model.
  5. A dynamic model MUST be verifiable against liveness properties (freedom from deadlocks, livelocks and race conditions).
  6. A dynamic model MAY be shown to be free from livelocks, deadlocks and race conditions.
  7. A dynamic model MUST be able to be simulated based on a set of input criteria.
  8. A dynamic model MUST enable generation of role-based state behaviours to a range of targets including but not limited to UML activity diagrams, UML state charts, Abstract WS-BPEL, WS-BPEL, WSDL, Java and C#.
Let me examine what these really mean and then I shall summarise what I think the implications are for the software market as a whole.

Requirement 1 really states that any model must be able to describe the way in which services (which might be the embodiment of a role) exchange information and the ordering rules by which they do so. The type of information exchanged might be given as a static model (requirement 2). The ordering rules would be the conditional paths, loops and parallel paths that constitute the collaborative behavior of the model with respect to the peered roles.
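As a minimal data-structure sketch of what requirement 1 asks for (all role, message and construct names below are invented, and this is not the WS-CDL metamodel): interactions between peered roles, typed by the static model, composed under explicit ordering constructs.

import java.util.List;

// Minimal sketch of a dynamic model in the sense of requirement 1: peered
// roles exchanging typed information under explicit ordering rules.
// All names here are invented for illustration.
public class DynamicModelSketch {

    record Role(String name) {}
    record Interaction(Role from, Role to, String informationType) {}

    // Ordering constructs: single interaction, sequence, choice, parallel.
    sealed interface Step permits Single, Sequence, Choice, Parallel {}
    record Single(Interaction interaction) implements Step {}
    record Sequence(List<Step> steps) implements Step {}
    record Choice(List<Step> alternatives) implements Step {}
    record Parallel(List<Step> branches) implements Step {}

    public static void main(String[] args) {
        Role buyer = new Role("Buyer");
        Role seller = new Role("Seller");

        Step model = new Sequence(List.of(
            new Single(new Interaction(buyer, seller, "QuoteRequest")),
            new Single(new Interaction(seller, buyer, "Quote")),
            new Choice(List.of(
                new Single(new Interaction(buyer, seller, "Order")),
                new Single(new Interaction(buyer, seller, "Decline"))))));

        System.out.println(model);
    }
}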

Requirement 2 simply states that a static information or data model is required, and that this can either be in place when creating a dynamic model or be created alongside the dynamic model, which would provide context for the information types. When we iterate between sequence diagrams and static data models today this is essentially what we do anyway. The difference is that the dynamic model is also complete, unlike the sequence diagrams, which provide context only for the scenario that they represent.

Requirement 3 says that any dynamic model should not dictate any specific physical implementation; that is, it should not require a solution to be hub-and-spoke, peered, hierarchical and so on. It should be capable of being implemented in a range of physical architectures which are independent of the dynamic model.

Requirements 4 to 6 say that a model must be subject to, and support, various forms of automatic verification, just as programming languages are today when we compile them and the compiler picks up errors and so prevents the code from being made executable. In the case of a dynamic model we would want to ensure that it meets a set of requirements for the domain it represents. This might be achieved by validating a dynamic model against an agreed set of messages and an agreed set of sequence diagrams which collectively describe one or more use cases. On the other hand we would want to use a validated dynamic model, which as a result of validation we know meets our requirements, to verify that an implementation of that model conforms to the model. That is, there must not exist, in any observed execution of the implementation across all of its constituent services, any set of observable exchanges or conditions that cannot be directly mapped to the dynamic model. Putting it another way, we want to use the dynamic model as input to some form of runtime governance applied to the behavior of our set of peered services. The requirements that mention liveness, livelocks and so on are really no different from saying that in any programming language it is illegal to access an array of 10 elements by writing x = array[11]. The difference is that we are looking to prevent badly formed and potentially disastrous problems arising in a distributed system, not in a single application as compilers do. Model checking applied to a dynamic model for distributed systems is one way of ensuring that this does not happen, in much the same way that type checking prevents errors at a localised application level. I mentioned something akin to this in my blog on the workshop I attended, which featured "OO Languages with session types".

Requirement 7 states that a model must be able to be simulated. What this means in practice is that if a dynamic model captures the collaborative behavior of a set of peered roles, then we must be able to provide such a model with some input data and see the dynamic model activated. For example, if a dynamic model starts with the offering of a product then we must be able to direct it to some product information and see the exchanges that then occur. Equally, if we introduce a number of bidders in an auction system we need to be able to enact the choreography.

Requirement 8 is all about reviewability, if such a word exists. Simply stated, it is the ability to generate or display a dynamic model in a way that reviewers can understand, comment on and so sign off on.

If we had a language that we could use to describe such dynamic models (and of course I would contend that WS-CDL is a good starting point, along with BPDM, and if you are interested in the future then look at Scribble too), then what does this mean for the software market as a whole? In simple terms it changes the shape and size of delivery and has an impact on testing. It compresses things.

On the one hand we can view the dynamic model as UML artefacts empowering implementors. If we know that the dynamic model is correct with respect to the requirements, and we know that it is correct with respect to any unintended consequences (aka liveness), then we can be sure that the implementors will have a precise and correct specification in UML of what they should write on a per-service/role basis, and so ensure that the implemented services will not have any integration problems. It makes it much more efficient to outsource development, because the dynamic modeling can be done close to the domain and the development can be done where it is most cost-effective to do so: hooray for offshore development. Coupled with the ability to use the dynamic model as input for testing, it also becomes possible to verify that a service is playing the correct role as it is being executed in testing.

On the other hand, if we can generate Java do we really need coders? And this is the dilemma: can we really do without coders? If the high-level dynamic model only deals with the externally observable behavior then somehow we still need the internal behavior (the business logic). If the internal behavior can be described fully in UML and code subsequently generated, we can indeed generate everything. The dynamic model of the system as a whole plus the UML models for service business logic combine to provide a two-step high-level description of a system in which no code needs to be specified at all. So no coders? Of course it sounds too good to be true. Where does the code go? Where are the actions specified? In UML this could be done using an appropriate action language, something that has found its way into UML 2.0 but has yet to be fully formed as a concrete language. Someone still has to write this stuff, so the coders just move up the stack and become more productive, as they did with the onset of OO languages as a whole.

Is this the end of coding as we know it? One thing is for sure: it is not the end right now. However far and fast the growing wave towards modeling, as opposed to coding, takes us, for me at least it cannot go all the way. The action semantics still need to be coded or written textually, and that is really coding by another name. I remember in the early 1990's much was made of visual programming languages. None of them made it.

What it is all about is structure: making the structure of things visible and so easier to manipulate. That is a huge leap forward, because we simply have never had anything that enforces structure for a distributed system before.

In the grand scheme of things the very fact that we can articulate such possibilities (the end of coding as we know it) means that our industry as a whole is maturing and our understanding of the complexity inherent in distributed systems (SOA and the rest) is becoming clearer every day. It does not mean that we are there yet but because we can now think about the requirements of a language needed to describe such structure we are at least on the right path.

Monday, 25 February 2008

General news

It is not often I blog on something quite so general, and to keep your interest I do intend to do a major piece on SOA for business and management rather than just the usual techie dimension. It just takes a while to get my thoughts in order to do it.

On a more general topic, I have changed jobs. Having been an entrepreneur for the last 10 years I have decided that I wish to devote time and energy to doing what I can in a larger organisation, but one that is moving fast enough that I can use my entrepreneurial skills. To this end I am now working for Cognizant as the lead architect for their Advanced Solutions Practice in Europe. All very exciting stuff, because these guys are really growing fast, and that gives me a whole lot of problems based on methodology and the technology support for it to focus on.

I'm still very much Pi4 Tech and still associated with Hattrick (non-exec on the board).

I'm looking forward to bringing the experiences from my new role at Cognizant into the blogosphere.

Wednesday, 13 February 2008

Workshop on Web services, business processes and infrastructure

I attended a really good workshop last week and thought that I should probably blog a bit about what it was all about. There were some really good presentations. Apologies to the authors of the couple I missed; I cannot blog about those. But here goes ....

There was a lot on global models during the course of the two days, and a lot on session types (these are the types that WS-CDL was based upon; they provide a behavioral type-checking facility to determine liveness properties, i.e. does it deadlock or livelock and so on).

Several papers stood out for me.

There was one from Andy Gordon on F#, a formally grounded language, used here for encoding data centre run books, similar in intent to that used at one of my old companies, Enigmatec Corporation Ltd. What Andy has done is formalise a language for managing data centres, which, when you step back and think about managing such assets, really does have to be precise, so formalisation is really important.

There was one from Joshua Guttman on global models (using WS-CDL) and security. Great paper because it added contextual security based on process. Something I have longed to see but have never had the ammunition to consider.

There was one from Mark Little about distribution and scalability, which was very enlightening and showed many of the issues that concern today's web-based applications and what we need to focus on to describe and manage transactions in an unreliable context (see the link to Mark in my links).

There was one on Sock and Jolie from Claudio Guidi and colleagues, Sock being the formal model and Jolie an implementation that enables complex systems of services to be enacted using a curly-brace language (Jolie). They gave a great demo too.

There were a couple on session-based languages that really stood out. The first was on work on OO languages with session types, and the second was from Ray Hu on distributed Java. The latter, which I have seen before, added a few JARs which abstract communication away. Coupled with what is akin to an interface in Java, but is actually a session type documenting behavior between processes, it provided really good session typing and checking to ensure things work correctly between processes. For me this fits the bill as regards language extensions to Java that make Java a good end-point language with a strong notion of contract.

One paper dealt with a topic very close to my heart, which is how we can use a global model (aka WS-CDL) to find services that meet the necessary behavioral footprint expressed as roles in WS-CDL. If we could do this, then when we write down our SOA blueprint, for both existing systems and extensions we wish to make, we could ensure a higher degree of reuse: not just at a functional matching level but at a behavioral level. The work is by Mario Bravetti and I hope very much that we can get Mario involved in the Foundation to help us move this to a reality for many people.

On the more commercially oriented side there was one presentation from Matthew Rawlings and one from Dave Frankel. These two dealt with the realities of modeling and documenting standards that are very complex. Matthew is well known as an architect and deep thinker in financial services, and Dave is well known as one of the key people behind UML.

The final paper I wanted to mention was given by my long-time colleague, Gary Brown. The Foundation, which was started by Gary and me, has moved on. WS-CDL is where it is, but we embarked (well, less me and much more Gary and Kohei Honda) on looking at how better to describe global models. And so Scribble was born. Gary presented Scribble and in particular showed a simple HelloWorld process and how it is represented in WS-CDL and in Scribble. Scribble was great! It is early days for Scribble, and I am aware that Scribble will try to be compatible with WS-CDL (so don't wait), but it is so clearly the way to go and I look forward to using it when it has all been done.

Thanks to the organisers (Marco Carbone, Nobuko Yoshida and Kohei Honda). It was very stimulating indeed.

Wednesday, 6 February 2008

Orchestration and Choreography, WS-BPEL and WS-CDL

I am somewhat compelled to write on the topic of WS-BPEL and WS-CDL. This is a topic that has surrounded the development of WS-CDL since its inception, and alas there has been a great deal of myth-making in the telling of the story by large vendors and analysts alike. For those that are interested and of open mind, I hope this helps position the two.

Rather than trawl back through history I shall focus forward and look at problems and solutions instead as I think these are much more important than the politics of standardisation and the great debate of orchestration vs choreography.

Fundamentally it is all about abstraction and expression. Since the days of Alan Turing and Alonzo Church, computer science has looked at higher and higher levels of abstraction in which all that went before can be incorporated in some way into what is on offer. We see the demise of assembler and the rise of structured programming languages. We see the demise of structured programming languages and the rise of object-oriented languages. Now of course I egg it up, and many of the older languages are still used and are very useful, but I play to the main gallery in this particular blog.

Abstraction that is well founded (and so has some formal underpinnings) is a very useful concept in academia, and one might be forgiven for wondering what benefit this brings to business. Generally it boils down to two things. Firstly, as the level of abstraction increases (and yet the expression remains the same), so the speed of delivery of an executable increases.

This was true with the move from C to C++ and then to Java and C#. Secondly, as the level of abstraction rises, so the gap between the requirements and the solution becomes smaller, and in this way the alignment of solution to requirements (or, as is often said, IT to business) becomes cleaner and crisper. The message is that abstraction really does matter, really does impact business, and is not just an academic nicety.

So where does this fit with WS-BPEL and WS-CDL? After all, you probably think that they are the same. If you look at what they do, how they do it and how they describe things, you will very quickly understand that the levels of abstraction they deal in are different. If you imagine wanting to build a car, and you go out and select all of the tools and technology to do so, it might at the high end look a little similar in abstraction to WS-BPEL: a bunch of parts that are delivered as units that are orchestrated into ever higher units, which are orchestrated and delivered as a service. If you wanted to build a motorway (interstate highway) system you would not use the same tools and technologies that you use to build cars. Road systems are not orchestrated; rather, they have some rules that govern them, but they essentially act as channels between peers (roads between cities or towns or villages). WS-CDL describes interactions between peers without describing how those peers do their business internally. It describes a blueprint of communication. WS-BPEL simply does not describe what WS-CDL does, and WS-CDL does not describe what WS-BPEL does. They are tools for different purposes. Just like cars and roads, they are complementary. Obviously you can drive a car off road: you can use WS-BPEL without WS-CDL. Obviously you can use a road without a car, and so WS-CDL can be used without WS-BPEL. But magic happens when roads and cars come together, just like WS-CDL and WS-BPEL.

Now the tools we actually use for building cars can be used to build roads. We could use a spanner, a hammer and a screwdriver to help dig. But it makes more sense to use a spade or a digger. And this is the point. We can all still use assembler but for the most part we choose not to, because the level of abstraction does not match the problem we try to solve.

At this point I want to look to others to support the differences and the complementary nature of WS-BPEL and WS-CDL. Here are two interesting takes on it.

A recent report from the Burton Group stated:

"Right now developers working on complex service-oriented architecture implementations face a Hobson's choice between an existing standard that doesn't meet their needs and an emerging standard that they can't use yet, according to a Burton Group report released this week."

and went on to say:

"Despite its limitations, Howard sees BPEL with its vendor tools support continuing to play a role in SOA development. Eventually, however, BPEL sub-processes will play a subordinate role in the larger choreography and WS-CDL with vendor support will be the big picture standard."

Others, in response to the above, said:

"I think it is out of confusion that people get hung up on the equality of BPEL and WS-CDL. They address very different perspectives in a service-oriented approach (i.e. orchestration and choreography). Therefore to say BPEL is limited, is true if you are considering choreography but not so if you take what it aims to fulfil - orchestration, the executable process. Let us move on, and stick with the fact that they complement each other..."

So I am not alone in thinking that WS-CDL is important and I am not alone in thinking that WS-CDL worked or could work rather well with WS-BPEL and that they are in fact different beasts doing different jobs which themselves are complementary.

If you are still not convinced then let me point you to an example in WS-CDL and see if you can encode the same example in WS-BPEL as efficiently. You can certainly create systems using just WS-BPEL, but the problem is that as soon as you need to deliver more than one service, collaboration is needed, and this is where WS-CDL works well. If you try to solve collaborative service problems using WS-BPEL you will find that the complexity and subsequent management start to impinge on success. You will end up with many moving parts (one WS-BPEL per service and one set of partner links for each service that talks to another, and so on) and many files. With WS-CDL you will end up with one file for the collaboration and can use this to generate the necessary WS-BPEL files, ensuring that they all match up. Doing it without WS-CDL you will be forced to match all of the WS-BPEL files by hand - time consuming and error prone.
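To make the "one global description, many local orchestrations" point a little more concrete, here is a minimal sketch in Java. It is purely illustrative - it is not WS-CDL tooling and the types are invented for the example - but it shows the essence of projecting a single global ordering of interactions onto each participant as a local skeleton of send and receive actions, which is what generating the per-service artefacts from one choreography description amounts to.

import java.util.*;

// A minimal sketch of generating per-service behaviour from a single global
// description. The types here are illustrative, not WS-CDL syntax.
public class Projection {

    // One step in the global description: "from" sends "message" to "to".
    record Interaction(String from, String to, String message) {}

    // Project the global ordering onto each role as a local sequence of
    // send/receive actions - the skeleton each local orchestration must follow.
    static Map<String, List<String>> project(List<Interaction> global) {
        Map<String, List<String>> local = new LinkedHashMap<>();
        for (Interaction i : global) {
            local.computeIfAbsent(i.from(), r -> new ArrayList<>())
                 .add("send " + i.message() + " to " + i.to());
            local.computeIfAbsent(i.to(), r -> new ArrayList<>())
                 .add("receive " + i.message() + " from " + i.from());
        }
        return local;
    }

    public static void main(String[] args) {
        List<Interaction> choreography = List.of(
            new Interaction("Buyer", "Seller", "Order"),
            new Interaction("Seller", "Buyer", "OrderAck"),
            new Interaction("Buyer", "Seller", "Payment"));
        project(choreography).forEach((role, steps) ->
            System.out.println(role + ": " + steps));
    }
}

Because every local skeleton is derived from the same global description, the parts match up by construction rather than by hand.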

As one well known computer scientist once said (quoted in Jon Bentley's Programming Pearls), "I'd rather write programs to write programs" - the very essence of abstraction.

And as John Koisch (HL7) told me today, WS-BPEL is a very good white box service language and WS-CDL is a very good black box system language.

It really is akin to the difference between assembler and, say, Java. You can do it, but should you do it that way when better ways exist that will help you deliver faster and with a greater quality of result?

I would welcome further comments.

Wednesday, 30 January 2008

SOA Governance: WS-CDL meets Policy

Governance seems to be a pretty hot topic right now. Everyone seems to be focusing on it. IBM, Oracle and Microsoft have portals dedicated to it and offer solutions. I noticed that Mark Little, of Red Hat and JBoss fame, has been blogging on it recently, "Tooling and Governance". All very interesting, in particular Mark Little's blog.

The OMG has been firing up on the topic of governance too. Dr Said Tabet, co-founder of RuleML, gave a talk recently at the OMG conference entitled "Regulatory Compliance: Key to the success of BPM and SOA". This talk was interesting for a number of reasons. Back in 2004 Dr Tabet, Prof Chakravarthy, Dr Brown and I gave a position paper at a W3C workshop on policy, "A generalized RuleML-based Declarative Policy specification language for Web Services". In this paper, written before WS-CDL came into being, we postulated the idea of policy and WSDL and how they relate. Our thoughts have since clarified and the link between WS-CDL and RuleML is more meaningful, more valuable and more appropriate; more generally it is the link between a global description of an SOA solution and policy statements attached to that description.

All of this is great stuff but I feel a little bemused that few if any (none that I could find) have really gone back to basics to understand, let alone explain, what governance means. Instead we are left with market-speak such as "SOA governance is an extension of IT governance".

What we need to do first of all is understand just what we might mean by governance. Then we might understand what it is we need, rather than join the bandwagon and talk about SOA governance in either a purely technical way or in an abstract way that is not related to requirements and solutions.

It is surprisingly difficult to arrive at a consensus definition of what governance means. Online dictionary definitions do little to shed light on what we might mean. Having searched the web, the one I liked most, which conveniently fits what I want to talk about, is "The systems and processes in place for ensuring proper accountability and openness in the conduct of an organization's business." I was very glad to see that the problem of understanding what is meant by governance is not confined to me alone. One of the early movers in the space, Actional, thought so too.

I like this definition because as a serial entrepreneur I have been subjected to governance as it pertains to the running of a company and this pretty much describes one of the key responsibilities of a board of directors.

A system of processes would imply that there are documented procedures that one should adhere to. Accountability might be seen as a measure against those documented processes and openness might be seen as a means of conveying the levels of compliance (an interesting word given the talk Dr Tabet gave) to those processes.

How can we apply this to an SOA solution? Obviously for a people organisation records are kept and managers manage according to some rules. Should rules be broken documentation is provided to show where, how and why the rules were broken and if needed corrective action can be instigated. As people we are good at doing this sort of thing because we are flexible and can deal with missing information much better than computer systems.

Computer systems like clarity and ambiguity is not something that they deal with very well at all. So we need some way of describing unambiguously what the procedures and the rules might be against which we can govern. Without unambiguous procedures and rules we have no hope of governing anything computationally at all.

Governance, and how it may be applied to an SOA solution, must to my mind deal with some understanding of what the solution is supposed to do. There is little point governing a solution that fails to meet its requirements, because the governance will be abstract and have no grounding in the business to which it is being applied.

If a business fails to meet its basic requirements, regardless of what they are, the board of directors would be failing in their duty of governance if they did not take action to steer the company in a direction in which it did meet its requirements, however that is achieved.

It seems to me that it would be a good idea to be able to demonstrate that an SOA solution meets its requirements.


Requirements applied to any system (SOA is irrelevant here) are often divided into functional and non-functional. Functional requirements tend to give rise to data models (i.e. a bid in an auction and all of its attributes) and to behavior (i.e. the auction system accepts bids and notifies bidders, and accepts items for auction and notifies sellers). The former is often captured in a model using UML or XML Schema and so on. The latter is often captured as a set of functional signatures (i.e. newBid(Bid), notify(LeadingBid), notify(LoosingBid)) and sequence diagrams showing how things plug together.
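To make that concrete, here is a minimal Java sketch of the behavioural side of the auction example. The type and method names simply mirror the illustrative signatures above (including their spelling); they are not from any real auction API.

// Illustrative data types for the auction example; the names mirror the signatures in the text.
record Bid(String bidder, String item, double amount) {}
record LeadingBid(Bid bid) {}
record LoosingBid(Bid bid) {}   // spelling kept from the example signatures above

// The behavioural side of the functional requirements: what the auction
// service accepts and what notifications it emits.
interface AuctionService {
    void newBid(Bid bid);
    void notify(LeadingBid leading);
    void notify(LoosingBid loosing);
}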

Non-functional requirements are constraints over the functional requirements. They might relate to the technology used (i.e. use JBossESB, WS-Transaction, WS-RM and WS-Security) or they might relate to performance (i.e. a business transaction will always complete within 2 minutes). In all cases we can think of these as policy statements about a solution. Those that relate to performance are dynamic policy statements and those that do not are static policy statements.

Good governance of an SOA solution is governance that can show, for any transaction across it, that the solution complies with the requirements. That is, it does what it is supposed to do. More specifically we can state that for any transaction across multiple services in an SOA solution all messages flowing between services are valid, meaning that they are valid against some model of the data and that they are in the correct order (i.e. payment follows ordering). Furthermore the flow of messages completes at the user within the agreed SLA for that user (i.e. it all took less than 2 minutes).
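A minimal sketch of that check in Java - the types are invented for illustration and validation of the message content against the data model is left out - compares an observed transaction trace against the agreed ordering and the end-to-end SLA:

import java.time.Duration;
import java.util.List;

public class TransactionGovernance {

    // One observed message in a transaction, with the time it was seen.
    record Observed(String type, long timestampMillis) {}

    // Check ordering against the agreed sequence and the end-to-end SLA.
    static boolean compliant(List<Observed> trace, List<String> expectedOrder, Duration sla) {
        if (trace.size() != expectedOrder.size()) return false;
        for (int i = 0; i < trace.size(); i++) {
            if (!trace.get(i).type().equals(expectedOrder.get(i))) return false; // wrong order
        }
        long elapsed = trace.get(trace.size() - 1).timestampMillis()
                     - trace.get(0).timestampMillis();
        return elapsed <= sla.toMillis();                                        // within SLA
    }

    public static void main(String[] args) {
        var expected = List.of("Order", "Payment", "Delivery");
        var trace = List.of(new Observed("Order", 0),
                            new Observed("Payment", 30_000),
                            new Observed("Delivery", 90_000));
        System.out.println(compliant(trace, expected, Duration.ofMinutes(2))); // true
    }
}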


Figure 1: Example of a sequence diagram

Today's governance solutions lack a description of what is supposed to happen. They have no notion of correct order. This is probably why most solutions are siloed and concentrate on individual services in an SOA solution rather than looking at the big picture.

If you imagine a set of services that make up an SOA solution, each may have constraints upon it. The auction system may have a constraint that bidders are notified when outbid within 30 seconds, payment processing completes within 1 minute after the auction finishes, and so on. The entire process, however, is rarely described and so the effect that individual policy statements on each service might have across the entire solution is lost. There is nowhere to say it and no description of the entire system against which to attach it.

One might suggest that BAM solves such problems. But it does not, because it has no view of the underlying requirements of the system and so cannot determine correctness of behavior.

If we have a description of an SOA solution in terms of how services interact (the messages they exchange and the conditions under which those exchanges occur), and that description is unambiguous, then we can start to see how governance can be achieved. Such a description is the basis for the procedures against which we measure governance. Governance is always measured, otherwise we cannot say we have good or bad governance. If such a description existed we could attach policy statements to the description. Some might be static and some dynamic. What it gives is an overall context for governance and enables us to say that a transaction was correct and, importantly, that another is not and examine why it is not.
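As a minimal sketch of the idea, static and dynamic policy statements can be attached to named interactions in the global description. The interaction names and policy types below are invented for illustration; they echo the auction constraints above rather than any product API.

import java.time.Duration;
import java.util.Map;

// A sketch of attaching policy statements to named interactions in a global description.
public class PolicyAttachment {

    sealed interface Policy permits StaticPolicy, DynamicPolicy {}
    // Static: constrains how the exchange is built (e.g. required technology).
    record StaticPolicy(String requirement) implements Policy {}
    // Dynamic: constrains observable behaviour at runtime (e.g. a deadline).
    record DynamicPolicy(Duration deadline) implements Policy {}

    public static void main(String[] args) {
        Map<String, Policy> attachments = Map.of(
            "PlaceBid",       new StaticPolicy("must use WS-Security"),
            "NotifyOutbid",   new DynamicPolicy(Duration.ofSeconds(30)),
            "ProcessPayment", new DynamicPolicy(Duration.ofMinutes(1)));
        attachments.forEach((interaction, policy) ->
            System.out.println(interaction + " -> " + policy));
    }
}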



Figure 2: Example of a description of an auction system (top level only)

This is why WS-CDL as a description is so powerful. It can be validated against sequence diagrams to ensure it models the functional requirements. It can be used to drive construction and with policy attachments can ensure that the right technologies are employed. During runtime it can be used to ensure that messages are correctly sequenced and that messages are flowing within allowable tolerances based on attached dynamic policies.



Figure 3: Example of policy attachment (to the bidding process)

Attaching policies to WS-CDL provides an overall governance framework in which we can be sure that exceptions are better identified and the solution as a whole steered in a more effective manner to achieve good governance across the board. Without it, or at least without an overall description against which policies are attached, we are simply navigating without a compass or stars to guide us. Governance then becomes a matter of guesswork rather than something concrete that we can measure.

Friday, 25 January 2008

Is the Pi4SOA toolsuite supportive of RM-ODP?

I have been looking at RM-ODP of late. John Koisch recommended I look at it. I must admit I find it very interesting indeed. Here are some extracts that I want to blog about:

RM-ODP defines five viewpoints. A viewpoint (on a system) is an abstraction that yields a specification of the whole system related to a particular set of concerns. The five viewpoints defined by RM-ODP have been chosen to be both simple and complete, covering all the domains of architectural design. They are, the enterprise, information, computational, engineering and technology viewpoints.

RM-ODP also provides a framework for assessing system conformance. The basic characteristics of heterogeneity and evolution imply that different parts of a distributed system can be purchased separately, from different vendors. It is therefore very important that the behaviours of the different parts of a system are clearly defined, and that it is possible to assign responsibility for any failure to meet the system's specifications. RM-ODP goes further and defines where these conformance points would sit.

What is interesting here is to match up what we are trying to do at the Pi4 Technologies Foundation with WS-CDL-based tooling. Of RM-ODP's five viewpoints the one of interest here is the "computational viewpoint". It is defined as:

"the computational viewpoint, which is concerned with the functional decomposition of the system into a set of objects that interact at interfaces - enabling system distribution; "

The key phrase is "a set of objects that interact at interfaces" because this is exactly what WS-CDL can express, and it does so in a way that is both architecturally and service neutral, from a global perspective. Thus WS-CDL would appear to be a really good fit as a language to support the "computational viewpoint".

The other area of interest lies in the notion of conformance and essentially service substitutability, the ability to "purchase separately" different parts of the system whilst maintaining overall behavioral correctness - the system as a whole continues to do what it is supposed to do from a computational viewpoint. Again this is very much where the pi4soa tool suite and the work in the Pi4 Technologies Foundation is heading.

The ability to test a computational viewpoint against higher order viewpoints is the basis of ensuring that the computational viewpoint is correct. In the pi4soa tool suite this is done using example messages from the information viewpoint (as information types) and scenarios (sequence diagrams) that represent flows. The model is checked for conformance; conformance in this case means the model meets the requirements used to check it. Once this is done we can ensure that the behaviour of the participating services is correct, by monitoring them if they already exist and by generating their state behaviours if they do not.

In short the pi4soa tool suite provides a sound basis for a computational viewpoint language, and provides much in the way of automated conformance checking both at design time and at runtime. Thus the tool suite supports RM-ODP.

Comments please .....

Sunday, 20 January 2008

A Methodology for SOA

Methodologies

A methodology is a set of measurable and repeatable steps by which some objective can be met. In the case of software systems this needs to reflect an understanding of the roles that people play in the delivery of a system and of the artefacts that they produce, artefacts which can be used to measure a system against its requirements whatever they may be. The measurement of artefacts is essential in being able to track progress towards achieving a common goal, which is to deliver a solution that meets a specific set of requirements. Immediately we can see that, in an ideal world, if we can measure the artefacts against a set of requirements we can determine if the system does what it is supposed to do.

The methodology we describe herein does not deal with the wider issues of scope and requirements capture; these are best left to existing approaches such as TOGAF. Rather we concentrate on the delivery of suitable artefacts, how they are used by different roles, how they can be used to measure the system at various stages from design to operation, and how they guide the construction of such a system.

Roles

The roles that are played out in the delivery of a software system start with the business stake-holder and involve business analysts, architects, implementers, project managers and operations managers. These roles are a reflection of tried and tested methodologies that are used in construction projects. In a construction project there is also a business stake-holder, namely the person or persons that expressed a need. There is an architect who captures those needs and writes them down as a set of drawings, there are structural engineers who add derived requirements for load bearing issues, there are builders who construct, there are project managers who manage the construction and report to the business stake-holders, and there are the people who maintain the system day to day. The roles for a methodology for software construction are analogous. The aim is that what is delivered measurably meets the requirements.

To carry the analogy further we can list those roles played out in construction projects and list the equivalent roles in software construction.



In explaining what the equivalent software construction roles do, we describe the relationships that they have with each other.

The business stake-holder, the operations manager and the business analyst work together to document the requirements. The business stake-holder articulates the business requirements for meeting the key business goals and the operations manager articulates the requirements from a day to day operational perspective. The business analyst's role is to elicit requirements from the business stake-holder and the operations manager and record them in a way that enables the business stake-holder and operations manager to agree to them and the enterprise architect to model them.

The enterprise architect liaises with the business analyst to model the requirements and fashion them into two artefacts: a dynamic model that describes the flow between services or applications, and a static model that describes the data requirements that the dynamic model uses.

The technical architect liaises with the enterprise architect (often they are the same person but this is not always the case) to ensure that the technical constraints (e.g. what technologies can be employed, what the expected throughput might be and what the user response times should be) can be met.

The implementers liaise with both the technical and enterprise architects to construct the necessary pieces that make up the system be they services, applications and so on.

The project manager has to manage the way in which all of the roles above meet the requirements agreed with the business stake holders and to ensure that the system is delivered on time and on budget.

The business stake-holder and the operations manager also liaise with the enterprise architect to determine how the system is to demonstrate its acceptability to the users. This may be done in part by showing that the system implements the documented requirements and in part by user acceptance testing, the latter being better suited to issues such as usability and meeting non-functional requirements. In the case of usability issues nothing can substitute for user testing and so no automated mechanism can be employed. Whereas it may be possible to check non-functional requirements in the context of some dynamic model of the system to determine consistency and so show where the system as a whole may fail to meet one or more of these requirements, or under what circumstances such failure will occur.

Artefacts

We have already alluded to one artefact, namely the documented requirements. Whilst we do not focus on how these are gathered we do need to focus on what is recorded. By so doing we can use them to measure other artefacts as we drive towards the delivery of a system. Using the requirements as a benchmark for success will ensure that the delivered system does what it is supposed to do and so meets the overall business goals of a project.

Requirements
Requirements are of two types, functional and non-functional, and (since, as we shall see, functional requirements further divide into static and dynamic) come in three forms. There are functional requirements, such as “the system receives orders and payments and emits products for delivery based on the orders and payments” and “orders occur before payments which occur before delivery”, and non-functional requirements, such as “the time taken between an order being placed and the goods being delivered is not more than 3 days” and “the system needs to be built using JBoss ESB and Java”.

The non-functional requirements can be captured as a set of constraints or rules that the system needs to obey in order to deliver a good service. An example is the end to end time from a customer placing an order to the order being satisfied. A more granular example is the time it takes for a customer to place an order and receive a response that the order has been accepted. In either case these can be represented as rules using a language such as RuleML, although other rule and constraint languages may well suffice.

The functional requirements can be decomposed into static and dynamic functional requirements.

Static functional requirements deal with data and what is needed to describe a product, a customer, an order and so on. The dynamic functional requirements describe how such data, as artefacts in their own right, are exchanged between different parts of a solution. For example how a product might be sent to a shipper as well as to a customer, and how a customer sends an order to an order system which might give rise to a product being shipped to the customer.

The dynamic functional requirements build upon the static functional requirements to lay out responsibilities and to order the exchanges of artefacts between the different areas of responsibility. For example an order system is responsible for fulfilling orders, a warehouse system is responsible for stocking and shipping products to a shipper or a customer, and a payment system is responsible for dealing with payments and updating the books and records. Each service (e.g. the order system, the warehouse system and the payments system) has specific responsibilities but needs to co-operate with the others to deliver a business goal such as selling products for money. The dynamic functional requirements describe, as examples or scenarios, who does what (their responsibility), whom they exchange information with (the message exchanges) and when they do so (the ordering of those message exchanges).

It is the responsibility of the business analysts to gather and document these requirements. In so doing they need to deliver them as concrete artefacts, and to do so they need tool support; although it should be possible to do all of this with paper and pen, tools make their lives easier and more productive.

A tool to enable the non-functional requirements to be captured should provide an editor that enables rules and constraints to be written in English and then turns them into RuleML. This would enable RuleML-compliant processors to be used to determine if any of the constraints or rules are inconsistent and to provide feedback as to where such inconsistencies lie. For example one might have a constraint that says the end to end time from the point at which a user buys a product to receiving the product is less than or equal to 2 days, whereas the time until the order is acknowledged is less than or equal to 3 days. A consistency checker can easily pick up errors in which the wider constraint is breached by a narrower one.
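A minimal sketch of that kind of check, written in Java rather than RuleML and with invented types, applied to the nested timing constraints above:

import java.time.Duration;

// A sketch of the consistency check described above: a narrower step in the
// process must not be allowed more time than the wider end-to-end constraint.
public class ConstraintConsistency {

    record TimeConstraint(String scope, Duration limit) {}

    // The acknowledgement happens inside the end-to-end flow, so its limit
    // must be no greater than the end-to-end limit.
    static boolean consistent(TimeConstraint wider, TimeConstraint narrower) {
        return narrower.limit().compareTo(wider.limit()) <= 0;
    }

    public static void main(String[] args) {
        var endToEnd = new TimeConstraint("order placed to goods received", Duration.ofDays(2));
        var ack      = new TimeConstraint("order placed to order acknowledged", Duration.ofDays(3));
        System.out.println(consistent(endToEnd, ack)); // false - the 3 day limit breaches the 2 day one
    }
}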

Likewise the static functional requirements need to be supported with an editor to create forms that can be used to create example messages, perhaps saving the messages in different formats including XML. The XML messages then become artefacts in the same way as the non-functional RuleML artefacts.

And finally the dynamic functional requirements need tooling support to enable the business analysts to describe the flows between responsible services as sequence diagrams that can refer to the static functional requirement and non-functional requirement artefacts.

In short the base-artefacts from requirements gathering would be as follows:

  • a set of RuleML rules and constraints

  • a set of example XML messages

  • a set of sequence diagrams over the messages, rules and constraints


Providing easy to use tools that support these artefacts enables the business analyst to gather requirements rapidly in a form that is easy to review with the business stake-holders, can be measured for consistency and can be passed to the architects to draw up suitable blueprints that meet the requirements.

Models
When the business analyst hands over the artefacts that they have created, the role of the architect is to create static models of the data and dynamic models of the behaviour that meet all of the requirements. Collectively we may consider these models as the blueprint for the system. The artefacts that the business analyst hands over to the architect serve not only as input to the static and dynamic models created but also as a means of testing their validity and so measuring them against the requirements.

A static model needs to describe the data that the system uses as input and emits as output. Such a model needs to constrain the permissible values as well as constraining the combinations of data elements needed. There are many tools available which do this today. Some are based on UML and some are based on XML Schema. The ones we like can do both and this provides a reference model for the data that can be tested against the described static functional requirements to ensure that the model captures everything that has been previously agreed.
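As a small illustration of what testing against the static functional requirements can mean in practice, an example message artefact can be validated against the XML Schema form of the data model. A minimal Java sketch using the standard javax.xml.validation API (the file names are purely illustrative):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Validate an example message artefact against the static data model.
// "order.xsd" and "example-order.xml" are illustrative file names.
public class MessageValidation {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("order.xsd"));
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("example-order.xml")));
        System.out.println("example message conforms to the static model");
    }
}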

The artefacts from modelling are as follows:

  • a data model that represents all of the documented requirements for data

  • a dynamic model that represents all of the documented dynamic requirements


Validating models against requirements is not a one time operation. It not only ensures that the system, when delivered, represents all of the necessary data, but also acts as a governance mechanism when the system runs, as data can be validated on the fly against the model, and it can be used to control and guide further modifications of the system as it evolves.

The dynamic model needs to describe what data is input and when, and what the observable consequences might be. For example a dynamic model might need to describe that an order is input and then a check is made against the inventory to ensure the goods are available. If they are available then payment can be taken, and if payment is accepted then the goods are scheduled for delivery. Of course we might need to model this as a repeating process in which a customer may order several goods in one go or after each check against the inventory, and so on. Clearly any dynamic model needs to be able to describe repeating processes, things that happen in sequence and things that are conditional.

Validating dynamic models requires us to check our sequence diagrams against the dynamic model to ensure that the dynamic model captures the described dynamic functional requirements of the system, and so show that the model behaves according to these requirements. As with the static data model this is a process that can be repeated to ensure that the system behaves correctly as it evolves and when it is operational. The key aspect of such a dynamic model is that it is unambiguous and can be understood by applications, which may verify models against input data from sequence diagrams as well as against messages that flow through a running system. This provides behavioural governance of a system through its life-cycle.
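A deliberately reduced sketch of the idea in Java: the dynamic model is flattened into the transitions it allows between message types, and a scenario (a sequence diagram trace) is checked as a valid path through it. Real tooling works on a far richer model than this; the names are invented for the example.

import java.util.*;

// A very reduced sketch of checking a scenario (sequence diagram) against a
// dynamic model flattened into allowed transitions between message types.
public class ScenarioConformance {

    static boolean conforms(List<String> scenario, Map<String, Set<String>> allowedNext, String start) {
        String current = start;
        for (String step : scenario) {
            if (!allowedNext.getOrDefault(current, Set.of()).contains(step)) return false;
            current = step;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> model = Map.of(
            "start",          Set.of("Order"),
            "Order",          Set.of("InventoryCheck"),
            "InventoryCheck", Set.of("Payment", "Order"),   // order again or pay
            "Payment",        Set.of("Delivery"));
        System.out.println(conforms(List.of("Order", "InventoryCheck", "Payment", "Delivery"), model, "start")); // true
        System.out.println(conforms(List.of("Order", "Payment"), model, "start"));                               // false
    }
}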

WS-CDL provides a suitable language for describing dynamic models. It makes no assumptions about the underlying technology that might be used to build systems, it is unambiguous and it is neutral in its description so that it can describe how disparate services (i.e. legacy and new services) need to work together in order to deliver some business goal.

Services
The dynamic model only describes the order of the messages (data) that are exchanged between different services, so it would describe the exchange of an order followed by the exchange of a payment and the subsequent credit check and so on. It does not describe how credit checking is performed nor does it describe how an order is processed. Rather it describes when and what messages are exchanged.

To implement, or construct, the services we can use the unambiguous dynamic model to determine what the interfaces of each service should be. You can think of these as the functional contract that each service presents. We can also use the dynamic model to lay out the behaviour of the services and so the order in which the functions can be invoked. Collectively this provides a behavioural contract to which the services must adhere to ensure that they do the right thing at the right time with respect to the dynamic model. Using the dynamic model in this way ensures structural and behavioural conformance to the model and so to the requirements against which the model has been validated.
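A minimal Java sketch of what might be derived for one service in the order example (all names invented for illustration): the interface is the functional contract and the state-based guard enforces the behavioural contract, i.e. the order in which the operations may be invoked.

// A sketch of what is derived from the dynamic model for one service: the
// functional contract (the operations it must expose) and a simple guard for
// the behavioural contract (the order in which they may be invoked).
public class OrderServiceContract {

    // Functional contract derived from the interactions the service takes part in.
    interface OrderService {
        void receiveOrder(String order);
        void receivePayment(String payment);
    }

    // Behavioural contract: payment may only be processed after an order.
    enum State { AWAITING_ORDER, AWAITING_PAYMENT, COMPLETE }

    static class GuardedOrderService implements OrderService {
        private State state = State.AWAITING_ORDER;

        public void receiveOrder(String order) {
            if (state != State.AWAITING_ORDER) throw new IllegalStateException("order out of sequence");
            state = State.AWAITING_PAYMENT;
        }

        public void receivePayment(String payment) {
            if (state != State.AWAITING_PAYMENT) throw new IllegalStateException("payment before order");
            state = State.COMPLETE;
        }
    }

    public static void main(String[] args) {
        OrderService svc = new GuardedOrderService();
        svc.receiveOrder("order-1");
        svc.receivePayment("payment-1"); // correct order, no exception
    }
}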

Contractually the business logic need only receive messages, process them and ensure that the messages that need to be sent are available. The service contracts ensure that messages are received and sent as appropriate.

Validation of messages against the static model whilst the system is operational can be added as configuration rather than in code, and validation against the dynamic model can be added similarly. This ensures continual governance of running systems against models which have been validated against requirements, and so shows that the system itself conforms to the requirements. This level of governance is akin to the continual inspection that architects, business stake-holders and other parties may apply to construction projects.

Summary of the Methodology

The artefacts, the roles that are involved in their production and the way in which the artefacts are used are summarised in the two tables below:



Table 1: Artefacts and their uses



Table 2: Artefacts and roles