Wednesday, April 27, 2011

Architectural Granularity

There's an old joke about defining granularity:

Granularity is a bit like pornography: it's hard to define, but you know it when you see it.

The whole topic of granularity of software components is important to the Software Architect for a number of reasons:
  • If you are creating too many fine-grained components then you are probably not doing architecture but design.
  • As I discussed previously, getting components at the right level of granularity is important when it comes to placing those components (on nodes) and satisfying non-functional requirements such as availability and performance.
  • The granularity of components impacts performance. Too many fine-grained components will probably result in increased traffic on the network because the client using the component must make several queries to complete a task or get the information it needs.
  • Generally speaking, the coarser-grained a component, the more likely it is to have a direct correlation to something the business finds useful (e.g. a 'Reservation' component that handles all aspects of allowing a customer to reserve a room is more useful than one that just 'finds a vacant room').
The terms “fine-grained”, “medium-grained” and “coarse-grained” are frequently used to describe architectural components (or indeed services), but there seems to be no common definition of them, which leads to confusion and ambiguity in their use. Whilst there is no agreed definition of what these terms mean, there does seem to be a consensus that granularity is about more than just the number of operations on a component's interface. Everware-CBDI (see their June 2005 edition, which is only available via subscription) suggests that other factors might be (and I'm simplifying here):
  • The number of components (C) in the directed acyclic graph (DAG) that are invoked through a given operation on that component's interface.
  • The function points (F) for each component.
  • The number of database tables (D) created, read, updated or deleted.
So one measure of granularity (G) might be:

G = C + F + D

Put simply, if a component is self-contained (C = 1), has a single function point (F = 1) and updates a single entry in a database (D = 1), it will have a granularity of '3'. Such a 'component' is going to be something like a class 'Integer' with a single operation such as 'increment', which adds one to the previous value. Whilst such a component may be infinitely reusable, it is not particularly useful from an architectural (or business) standpoint. We could legitimately call such a component 'fine-grained'. At the other extreme, a CRM system will probably have C, F and D each equal to several thousand, giving it a granularity in the tens of thousands. Again, from an architectural standpoint, this is not very useful; there is not much we can do with such a 'component' other than observe that it is (very) coarse-grained. I suggest that we architects are more likely to be interested in creating, and (re)using, components that sit somewhere in between these extremes (maybe in the low hundreds using this measure of granularity).
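
To make the arithmetic concrete, here is a minimal sketch in Java of the measure above; the class, the example numbers and the threshold values are my own illustrative assumptions, not definitions from Everware-CBDI.

    // Illustrative only: the thresholds below follow the rough regions discussed
    // above and are assumptions, not definitions from Everware-CBDI.
    public class GranularityMeasure {

        // G = C + F + D
        public static int granularity(int components, int functionPoints, int dbTables) {
            return components + functionPoints + dbTables;
        }

        public static String classify(int g) {
            if (g < 10) return "fine-grained";      // e.g. the 'Integer' class above (G = 3)
            if (g < 1000) return "medium-grained";  // the 'low hundreds' region architects care about
            return "coarse-grained";                // e.g. a whole CRM system (G in the tens of thousands)
        }

        public static void main(String[] args) {
            int g = granularity(12, 85, 9); // hypothetical 'Reservation' component
            System.out.println("G = " + g + " (" + classify(g) + ")"); // G = 106 (medium-grained)
        }
    }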

What we are actually talking about here is what ZapThink refer to as 'functional granularity' (as opposed to 'interface granularity'), that is, the granularity of the component itself rather than the granularity of any interfaces the component may expose. For the architect it is getting this functional granularity right that is most important. The coarser the (functional) granularity of the component, the more likely it is to have a useful business context. So a 'Reservation' component that deals with all aspects of reserving a room for a customer (i.e. takes the dates, room preferences, hotel location, customer loyalty status etc) and finds a suitable room is a useful component at both an architectural level (i.e. it addresses the points I started with above) as well as a business level (i.e. it is understandable by the business and can therefore be mapped to business users' requirements).

When decomposing systems into components, working out what level of granularity you need is a key aspect of creating the system's architecture, so that you end up with the right number of components that are useful in both an architectural and a business context.

Wednesday, April 20, 2011

EA Wars

Lots of great discussion in the blogosphere right now on the relevance of Enterprise Architecture in the brave new world of the 'extended' enterprise and whether architecture is something that is planned or 'emerges'. This is largely prompted, I suspect, by the good folk at ZapThink asking Why Nobody is Doing Enterprise Architecture (possibly to plant seeds of doubt in the minds of CIOs and send them rushing to one of their "licensed ZapThink architecture courses"). For a nice, succinct and totally dismissive riposte to the ZapThink article check out David Sprott's blog entry here. For a more reasoned and skeptical discussion on whether emergent architecture actually exists (I think it does, see below) read Richard Veryard's blog entry here.

However... despite some real FUD'iness on the part of ZapThink, there are some elements in their argument that definitely ring true and which I have observed at a number of clients I have worked with over the last few years. In particular:
  • Emergence (i.e. "the way complex systems and patterns arise out of a multiplicity of relatively simple interactions") is, like it or not, a definite factor in the architecture of modern-day enterprises. This is especially true when the human dimension is factored into the mix. The current Gen Y and upcoming Gen V are not going to hang around while the EA department figure out how to factor their 10th generation iPhone, which offers 3-D holographic body-time, into an EA blueprint. They are just going to bypass the current systems and use it regardless. The enterprise had better quickly figure out the implications of such devices (whatever they turn out to be) or risk becoming a technological backwater.
  • EA departments seem to very quickly become disconnected from both the business which they should be serving and the technicians whom they should be governing. One is definitely tempted to ask "who is governing the governors?" when it comes to the EA department. Accountability in many organisations definitely seems to be lacking. This feels to me like another example of the gapology that seems to be increasingly apparent in such organisations.
  • Even though we are better placed than ever to capture good methodological approaches to systems development, I still see precious little adoption of true systems development lifecycles (SDLCs) in organisations. Admittedly, methods have had very bad press over the years: they are often seen as being an unnecessary overhead and, with the rise of agile, have been pushed into the background as something an organisation must have in order to be ISO 9000 compliant or whatever, while everyone really just gets on with it and ignores all that stuff.
  • Finally, as with many things in IT, the situation has been confused by having multiple and overlapping standards and frameworks in the EA space (TOGAF, Zachman and MODAF to name but three). Whilst none of these may be perfect, I think the important thing is to go with one and adapt it to what works for your organisation. What we should not be doing is inventing more frameworks (and standards bodies to promote them). As with developing an EA itself, the approach an EA department should take to selecting an EA framework is to start small and grow on an as-needs basis.

Friday, April 15, 2011

Applying Architectural Tactics

The use of architectural tactics, as proposed by the Software Engineering Institute, provides a systematic way of dealing with a system's non-functional requirements (sometimes referred to as the system's quality attributes, or just qualities). These can be both runtime qualities such as performance, availability and security, as well as non-runtime qualities such as maintainability, portability and so on. In my experience, dealing with both functional and non-functional requirements, as well as capturing them using a suitable modelling tool, is something that is not always handled very methodically. Here's an approach that tries to enforce some architectural rigour using the Unified Modeling Language (UML) and any UML-compliant modelling tool.
 
Architecturally, systems can be decomposed from an enterprise or system-wide view (i.e. people, processes, data and IT systems), to an IT system view, to a component view and finally to a sub-component view, as shown going clockwise in Figure 1. These diagrams show how an example hotel management system (something I've used before to illustrate some architectural principles) might eventually be decomposed into components and sub-components.
Figure 1: System Decomposition
This decomposition typically happens by considering what functionality needs to be associated with each of the system elements at different levels of decomposition. So, as shown in Figure 1 above, first we associate ‘large-grained’ functionality (e.g. we need a hotel system) at the system level and gradually break this down to finer and finer grained levels until we have attributed all functionality across all components (e.g. we need a user interface component that handles the customer management aspects of the system).

Crucially, from the point of view of deployment of components, we need to have decomposed the system to at least the sub-component level in Figure 1, so that we have a clear idea of each of the types of component (i.e. do they handle user input or manage data etc) and know how they collaborate with each other in satisfying use cases. There are a number of patterns which can be adopted for doing this. For example, the model-view-controller pattern shown in Figure 2 is a way of ascribing functionality to components in a standard way, using rules for how these components collaborate. This pattern has been used for the sub-component view of Figure 1.
Figure 2: Model-View-Controller Pattern
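
To make the pattern a little more concrete, here is a minimal sketch of how the 'Customer Manager' sub-components from Figure 1 might be split along model-view-controller lines; the class and method names are hypothetical and purely for illustration.

    // Hypothetical Customer Manager sub-components following model-view-controller:
    // the model manages the data, the view handles user input/output, and the
    // controller coordinates the two according to the pattern's collaboration rules.
    class CustomerModel {                 // 'manages data'
        private String name;
        String getName() { return name; }
        void setName(String name) { this.name = name; }
    }

    class CustomerView {                  // 'handles user input/output'
        void display(CustomerModel model) {
            System.out.println("Customer: " + model.getName());
        }
    }

    class CustomerController {            // coordinates model and view
        private final CustomerModel model;
        private final CustomerView view;

        CustomerController(CustomerModel model, CustomerView view) {
            this.model = model;
            this.view = view;
        }

        void renameCustomer(String newName) {
            model.setName(newName);       // update the data...
            view.display(model);          // ...then refresh what the user sees
        }
    }

    class MvcExample {
        public static void main(String[] args) {
            CustomerController controller =
                new CustomerController(new CustomerModel(), new CustomerView());
            controller.renameCustomer("A. N. Other"); // prints "Customer: A. N. Other"
        }
    }
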
So far we have shown how to decompose a system based on functional requirements and thinking about which components will realise those requirements. What about non-functional requirements though? Table 1 shows how non-functional requirements can be decomposed and assigned to architectural elements as they are identified. Initially, non-functional requirements are stated at the whole-system level, but as we decompose into finer-grained architectural elements (AKA components) we can begin to think about how those elements also support particular non-functional requirements. In this way non-functional requirements get decomposed and associated with each level of system functionality. Non-functional requirements would ideally be assigned as attributes to each relevant component (preferably inside our chosen UML modelling tool) so that they do not get lost or forgotten (see the sketch after Table 1).

Table 1
System Element | Non-Functional Requirement
Hotel System (i.e. including all actors and IT systems) | The hotel system must allow customers to check in 24 hours a day, 365 days a year. (Note: this is typically the level of precision at which non-functional requirements are initially stated; further analysis is usually needed to provide measurable values.)
Hotel Management System (i.e. the hotel IT system) | The hotel management system must allow the front-desk clerk to check in a customer 24 hours a day, 365 days a year, with 99.99% availability.
Customer Manager (i.e. a system element within the hotel’s IT system) | The customer manager system element (component) must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year, with 99.99% availability.
Customer Manager Interface (i.e. the user interface that belongs to the Customer Manager system element) | The customer manager interface must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year, with 99.99% availability.
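
As a small illustration of the point about not losing non-functional requirements during decomposition, here is a sketch of carrying them as simple attributes on each architectural element, mirroring the rows of Table 1; the classes are my own invention, not a feature of any particular UML tool.

    // Illustrative only: architectural elements annotated with the non-functional
    // requirements assigned to them at each level of decomposition (cf. Table 1).
    import java.util.ArrayList;
    import java.util.List;

    class ArchitecturalElement {
        final String name;
        final List<String> nonFunctionalRequirements = new ArrayList<String>();
        final List<ArchitecturalElement> children = new ArrayList<ArchitecturalElement>();

        ArchitecturalElement(String name) { this.name = name; }
    }

    class NfrDecompositionExample {
        public static void main(String[] args) {
            ArchitecturalElement hotelSystem = new ArchitecturalElement("Hotel Management System");
            hotelSystem.nonFunctionalRequirements.add("Front-desk check-in 24x365, 99.99% availability");

            ArchitecturalElement customerManager = new ArchitecturalElement("Customer Manager");
            customerManager.nonFunctionalRequirements.add("Create/read/update customer details 24x365, 99.99% availability");

            hotelSystem.children.add(customerManager);
        }
    }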

Once it is understood which non-functional requirements each component needs to support, we can apply the approach of architectural tactics proposed by the Software Engineering Institute (SEI) to determine how to handle those non-functional requirements.

An architectural tactic represents “codified knowledge” on how to satisfy non-functional requirements by applying one or more patterns or reasoning frameworks (for example queuing or scheduling theory) to the architecture. Tactics show how (the parameters of) a non-functional requirement (e.g. the required response time or availability) can be addressed through architectural decisions to achieve the desired capability.
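
As a tiny example of what applying a reasoning framework can look like, here is a sketch (my own illustration, not from the SEI material) using the elementary M/M/1 queuing result, where the mean response time is 1 / (service rate − arrival rate), to check a response-time requirement against a component's expected load.

    // Illustrative use of elementary queuing theory (M/M/1) as a reasoning framework.
    // Assumes Poisson arrivals and exponential service times.
    class ResponseTimeCheck {

        // Mean response time for an M/M/1 queue: 1 / (mu - lambda), valid only when lambda < mu.
        static double meanResponseTimeSeconds(double arrivalsPerSecond, double servedPerSecond) {
            if (arrivalsPerSecond >= servedPerSecond) {
                throw new IllegalArgumentException("Unstable: arrival rate must be below service rate");
            }
            return 1.0 / (servedPerSecond - arrivalsPerSecond);
        }

        public static void main(String[] args) {
            // e.g. 8 requests/sec arriving at a component that can serve 10 requests/sec
            System.out.println(meanResponseTimeSeconds(8, 10) + " seconds on average"); // 0.5
        }
    }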

In the example we are focusing on in Table 1 we need some tactics that allow the desired quality attribute of 99.99% availability (which corresponds to a downtime of 52 min, 34 sec per year) to be achieved by the customer manager interface. A detailed set of availability tactics can be found here but for the purposes of this example availability tactics can be categorized according to whether they address fault detection, recovery, or prevention. Here are some potential tactics for these:
  • Applying good software engineering practices for fault prevention, such as code inspections, usability testing and so on, to the design and implementation of the interface.
  • Deploying components on highly-available platforms which employ fault detection and recovery approaches such as system monitoring, active failover etc.
  • Developing a backup and recovery approach that allows the platform running the user interface to be replaced within the target availability times.
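
As a quick sanity check on the downtime figure quoted above, here is a minimal sketch that converts an availability target into an allowed downtime budget per year (the arithmetic is standard; the code is just my illustration).

    // Converts an availability target into allowed downtime per (365-day) year.
    class DowntimeBudget {

        static double allowedDowntimeMinutesPerYear(double availability) {
            return (1.0 - availability) * 365 * 24 * 60;
        }

        public static void main(String[] args) {
            double minutes = allowedDowntimeMinutesPerYear(0.9999);
            int wholeMinutes = (int) minutes;
            int seconds = (int) Math.round((minutes - wholeMinutes) * 60);
            // Prints: 99.99% availability allows 52 min 34 sec downtime per year
            System.out.println("99.99% availability allows " + wholeMinutes + " min " + seconds + " sec downtime per year");
        }
    }
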
As this example shows, not all non-functional requirements can be realised suitably by a component alone; sometimes full realisation can only be achieved when that component is placed (deployed) onto a suitable computer platform. Once we know which non-functional requirements need to be realised by which components, we can then think about how to package those components together to be deployed onto an appropriate computer platform which supports those non-functional requirements (for example a platform that will support 99.99% availability and so on). Figure 3 shows how this deployment can be modelled in UML, adopting the Hot Standby Load Balancer pattern.
Figure 3: Deployment View
Here we have taken one component, the ‘Customer Manager’, and shown how it would be deployed with other components (a ‘Room Manager’ and a ‘Reservation Manager’) that support the same non-functional requirements onto two application server nodes. A third UML element, an artefact, packages together like components via a UML «manifest» relationship. It is the artefact that actually gets placed onto the nodes. An artefact is a standard UML element that “embodies or manifests a number of model elements. The artefact owns the manifestations, each representing the utilization of a packageable element”.

So far all of this has been done at a logical level; that is, there is no mention of technology. However, moving from a logical level to a physical (technology-dependent) level is a relatively simple step. The packaging notion of an artefact can equally be used for packaging physical components; for example, in this case the three components shown in Figure 3 above could be Enterprise Java components or .NET components.
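
For instance, if Enterprise Java were chosen, the logical 'Customer Manager' component might be realised physically along the lines of the following sketch; the interface, class and method names (and the stateless session bean choice) are purely hypothetical.

    // Hypothetical physical realisation of the logical 'Customer Manager' component
    // as an Enterprise Java (EJB 3.x) stateless session bean, run inside an EJB container.
    import javax.ejb.Stateless;

    interface CustomerManager {           // the interface the logical component exposes
        void updateCustomerDetails(String customerId, String details);
    }

    @Stateless
    public class CustomerManagerBean implements CustomerManager {
        @Override
        public void updateCustomerDetails(String customerId, String details) {
            // create/read/update customer details (no delete), as per the
            // functional and non-functional requirements assigned earlier
        }
    }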

This is a simple example to illustrate three main points:
  1. Architecting a system based on functional and non-functional requirements.
  2. Use of a standard notation (i.e. UML) and modelling tool.
  3. Adoption of tactics and patterns to show how a system's qualities can be achieved.
None of it is rocket science, but it's something you don't see done much.

Wednesday, April 13, 2011

Oops There Goes Our Reputation

I'm guessing that up to two weeks ago most people, like me, had never heard of a company called Epsilon. Now, unfortunately for them, too many people know of them for all the wrong reasons. If you have signed up to any services from household names such as Marks and Spencer, Hilton, Marriott or McKinsey you will probably have had several emails in the last two weeks advising you of a security breach which led to "unauthorized entry into Epsilon's email system". Unfortunately, because Epsilon is a marketing vendor that manages customer email lists for these and other well-known household brands, chances are your email address has been obtained through this unauthorised entry as well. Now, it just might be a pure coincidence, but in the last two weeks I have also received emails from the Chinese government inviting me to a conference on some topic I've never heard of, from Kofi Annan, ex-Secretary General of the United Nations, and from a lady in Nigeria asking for my bank account details so she can deposit $18.4M into the account and leave the country!

According to the information on Epsilon's web site, the information that was obtained was limited to email addresses and/or customer names only. So, should we be worried by this, and what are the implications for the architecture of such systems?

I think we should be worried for at least three reasons:
  1. Whilst the increased spam that seems inevitable following an incident such as this is mildly annoying, a deeper concern is how the criminal elements who now have information on the places I do business on the web could put this information together to learn more about me and possibly construct a more sophisticated phishing attack. Unfortunately it's not only the good guys that have access to data analytics tools.
  2. Many people probably use a single password to access multiple web sites. The criminals who now have your email address, as well as knowledge of which sites you do business at, only have to crack one of these to potentially gain access to multiple sites, some of which may hold more sensitive information.
  3. Finally, how come information I entrusted to well-known (and by implication 'secure') brands and their web sites has been handed over to a third party without me even knowing about it? Can I trust those companies not to be doing this with more sensitive information, and should I be withdrawing my business from them? This is a serious breach of trust and I suspect that many of these brands' reputations will have been damaged.
So what are the impacts to us as IT architects in a case like this? Here are a few:
  1. As IT architects we make architectural decisions all the time. Some of these are relatively trivial (I'll assign that function to that component etc) whereas others are not. Clearly, decisions about which parts of the system to entrust personal information to are not trivial. I always advocate documenting significant architectural decisions in a formal way, where all the options you considered are captured as well as the rationale and implications behind the decision you made (see the sketch after this list). As our systems get ever more complex and distributed, the implications of particular decisions become harder to quantify. I wonder how many architects consider the implications for a company's reputation of entrusting even seemingly low-grade personal information to third parties?
  2. It is very likely that incidents such as this are going to result in increased legislation covering personal information, much as the Payment Card Industry (PCI) standards already cover payment data. This will demand more architectural rigour, as new standards essentially impose new constraints on how we design our systems.
  3. As we trundle slowly towards a world where more and more of our data is held in the cloud using a so-called multi-tenant deployment model, it's surely only a matter of time before unauthorised access to one of our cloud data stores results in access to many other data sources and a wealth of our personal information. What is needed here is new thinking around patterns of layered security that are tried and tested and, crucially, which can be 'sold' to consumers of these new services so they can be reassured that their data is secure. As Software-as-a-Service (SaaS) takes off and new providers join the market, we will increasingly need to be reassured that they can be trusted with our personal data. After all, if we cannot trust existing, large corporations how can we be expected to trust new, small startups?
  4. Finally, I suspect that it is only a matter of time before legislation aimed at systems designers themselves is enacted that makes us as IT architects liable for some of those architectural decisions I mentioned earlier. I imagine there are several lawyers already engaged by the parties whose customers' email addresses were obtained and whose trust and reputation with those customers may now be compromised. I wonder if some of those lawyers will be thinking about the design of such systems in the first place and, by implication, the people who designed those systems?
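
Referring back to point 1 above, here is a minimal sketch (my own illustration, not a formal template) of what capturing a significant architectural decision, together with the options considered, the rationale and the implications, might look like.

    // Illustrative structure for recording a significant architectural decision
    // with the options considered, the rationale and the implications.
    import java.util.Arrays;
    import java.util.List;

    class ArchitecturalDecision {
        final String subject;
        final List<String> optionsConsidered;
        final String decision;
        final String rationale;
        final String implications;

        ArchitecturalDecision(String subject, List<String> optionsConsidered,
                              String decision, String rationale, String implications) {
            this.subject = subject;
            this.optionsConsidered = optionsConsidered;
            this.decision = decision;
            this.rationale = rationale;
            this.implications = implications;
        }
    }

    class DecisionExample {
        public static void main(String[] args) {
            ArchitecturalDecision decision = new ArchitecturalDecision(
                "Where to hold customer email addresses",
                Arrays.asList("Keep in-house", "Outsource to a third-party marketing vendor"),
                "Outsource to a third-party marketing vendor",
                "Lower cost and specialist campaign tooling",
                "Customer data leaves our security boundary; reputational risk if the vendor is breached");
        }
    }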

Tuesday, April 5, 2011

Open vs. Closed Architectures

There has been much Apple bashing of late, in cyberspace as well as in the 'dead-wood' parts of the press, to the extent that some people are now turning on those who own one of Apple's wunder-devices (an iPad), accusing them of being "selfish elites". Phew! I thought it was a typically British trait to knock anything and anyone that was remotely successful, but it now seems that the whole world has it in for Mr Jobs' empire.

Back in the pre-google days of 1994 Umberto Eco declared that "the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach -- if not the kingdom of Heaven -- the moment in which their document is printed."

The big gripe most people have with Apple is their closed architecture, which controls not only who is allowed to write apps for their OSs but also who can produce devices that actually run those OSs (er, that would be Apple). It's one of life's great anomalies that Apple is so successful at building products with closed architectures when most everyone would agree that open architectures and systems are ultimately the way to go as, in the end, they lead to greater innovation, wider usage and, presumably, more profit for those involved. The classic case of an open architecture leading to widespread usage is that of the original IBM Personal Computer. Because IBM wanted to fast-track its introduction, many of the parts were, unusually for IBM, provided by third parties, including, most significantly, the processor (from Intel) and the operating system (from the fledgling Microsoft). This, together with the fact that the technical information on the innards of the computer was made publicly available, essentially made the IBM PC 'open'. This more than anything gave it an unprecedented penetration into the marketplace, allowing many vendors to provide IBM PC 'clones'.

There is of course a 'dark side' to all of this. Thousands of vendors all providing hardware add-ons and extensions, as well as applications, resulted in huge inter-working problems which, in the early days at least, required you to be something of a computer engineer if you wanted to get everything working together. This is where Apple stepped in. As Umberto Eco said, Apple guides the faithful every step of the way. What they sacrifice in openness and choice they gain in everything working out of the box, sometimes in three simple steps.

So, is open always best when it comes to architecture or does it sometimes pay to have a closed architecture? What does the architect do when faced with such a choice? Here's my take:
  • Know your audience. The early PCs, like it or not, were bought by technophiles who enjoyed technology for the sake of technology. The early Macs were bought by people who just wanted to use computers to get the job done. In those days both had a market.
  • Know where you want to go. Apple stuck solidly with creating user-friendly (not to mention well-designed) devices that people would want to own and use. The plethora of PC providers (which there soon were) by and large couldn't give a damn about design. They just wanted to sell as many devices as possible and let others worry about how to stitch everything together. This in itself generated a huge industry which, in a strange self-fulfilling way, led to more devices and world domination for the PC, and left Apple in a niche market. Openness certainly seemed to be paying.
  • Know how to capitalise on your architectural philosophy. Ultimately, openness leads to commoditization. When anyone can do it, price dominates and the cheapest always wins. If you own the space then you control the price. Apple's recent success has come not from capitalising on an open architecture but from capitalising on good design, which has enabled it to create high-value, desirable products, showing that good design trounces an open architecture.
So how about combining the utility of an open architecture with the significance of a well-thought-through architecture to create a great design? Which, funnily enough, is what Dan Pink meant by this:

Significance + Utility = Design

Huh, beaten to a good idea again!