Tuesday, May 31, 2011

The Legacy Issue

Much of the work we do as architects involves dealing with the dreaded "legacy systems". Of course legacy actually means the last system built, not necessarily one that is 5, 10, 20 or more years old. As soon as a system goes into production it is basically "legacy". As soon as new features get added that legacy system gets harder to maintain and more difficult to understand; entropy (in the sense of disorder or randomness) sets in.

Apple have recently been in the news again for the wrong reasons because some of the latest iPods do not work with previous versions of Mac OS X. Users have been complaining that they are being forced to upgrade to the latest version of OS X in order to get their shiny new iPods to work. To make matters worse, however, Apple do support the relatively ancient Windows XP. Apple have always taken a fairly hard line when it comes to legacy, not supporting backwards compatibility particularly well when their OS gets upgraded. The upside is that the operating system does not suffer from the "OS bloat" that Windows seems to (the last version of OS X actually had a smaller footprint than the previous one).

As architects it is difficult to focus both on maintaining legacy systems and on figuring out how to replace them. As Seth Godin says: "Driving with your eyes on the rearview mirror is difficult indeed". At some point you need to decide whether it is better to abandon the legacy system and replace it, or to soldier on supporting an ever harder to maintain system. There comes a point where the effort and cost of maintaining legacy is greater than that needed to replace the system entirely. I'm not aware of any formal methods that would help answer this particularly hard architectural question, but it's one I think any architect should try to answer before embarking on a risky upgrade programme that involves updating existing systems.
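For what it's worth, one informal way to at least frame the decision is a simple break-even comparison: let legacy maintenance costs grow each year (entropy again) and see when the cumulative spend overtakes a one-off replacement plus cheaper upkeep. This is only a toy sketch, not a formal method; all the names, numbers and the growth model are my own hypothetical assumptions.

```python
# Toy break-even sketch (all figures hypothetical): cumulative cost of
# maintaining a legacy system, whose annual upkeep grows as entropy sets in,
# versus a one-off replacement cost plus lower ongoing upkeep.

def breakeven_year(maintain_cost, growth_rate, replace_cost, new_upkeep, horizon=20):
    """Return the first year in which cumulative replacement spend undercuts
    cumulative legacy maintenance, or None if it never does within the horizon."""
    legacy_total = 0.0
    replace_total = replace_cost  # one-off cost paid up front
    for year in range(1, horizon + 1):
        # Legacy upkeep compounds year on year as the system decays
        legacy_total += maintain_cost * (1 + growth_rate) ** (year - 1)
        replace_total += new_upkeep
        if replace_total < legacy_total:
            return year
    return None

# Hypothetical numbers: 100k/yr legacy upkeep growing 15% a year,
# 400k to replace, 40k/yr to maintain the new system.
print(breakeven_year(100_000, 0.15, 400_000, 40_000))  # prints 5
```

With those made-up figures the replacement pays for itself in year five; the point of the sketch is only that the decision can be roughed out on the back of an envelope before committing to a risky programme.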

Thursday, May 26, 2011

Ethics and Architecture

If you've not seen the BBC2 documentary All Watched Over By Machines of Loving Grace catch it now on the BBC iPlayer while you can (it doesn't work outside the UK unfortunately). You can see a preview of the series (another two to go) on Adam Curtis' (the film maker's) web site here. The basic premise of the first programme is as follows.

Back in the 1950s a small group of people took up the ideas of the novelist Ayn Rand, whose philosophy of Objectivism advocated reason as the only means of acquiring knowledge and rejected all forms of faith and religion. They saw themselves as a prototype for a future society where everyone could follow their own selfish desires. One of the Rand 'disciples' was Alan Greenspan. Cut to the 1990s, where several Silicon Valley entrepreneurs, also followers of Rand's philosophy, believed that the new computer networks would allow the creation of a society where everyone could follow their own desires without there being any anarchy. Alan Greenspan, now Chairman of the Federal Reserve, also became convinced that the computers were creating a new kind of stable capitalism, and convinced President Bill Clinton of a radical approach to cutting the United States' huge deficit. He proposed that Clinton cut government spending and reduce interest rates, letting the markets control the fate of the economy, the country and ultimately the world. Whilst this approach appeared to work in the short term, it set off a chain of events which, according to Curtis' hypothesis, led to 9/11, the Asian financial crash of 1997/98, the current economic crisis and the rise of China as a superpower that will soon surpass the United States. What happened was that the "blind faith" we put in the machines that were meant to serve us led us to a "dream world" where we trusted the machines to manage the markets for us, but in fact they were operating in ways we could not understand, with outcomes we could never predict.

So what the heck has this got to do with architecture? Back in the mid-80s when I worked in Silicon Valley I remember reading an article in the San Jose Mercury News about a programmer who had left his job because he didn't like the uses that the software he'd been working on was being put to (something of a military nature I suspect). Quite a noble act you might think (though given where he worked I suspect the guy didn't have too much trouble finding another job pretty quickly). I wonder how many of us really think about the uses to which the software systems we are working on will be put?

Clearly if you are working on the control software for a guided missile it's pretty clear cut what the application is going to be used for. But what if you are creating some piece of generic middleware? Yes, it could be put to good use in hospital information systems or food-aid distribution systems, but the same software could be used for the ERP system of a tobacco company or for controlling surveillance systems that "watch over us with loving grace".

Any piece of software can be used for both good and evil, and the developers of that software can hardly have it on their conscience to worry about what that end use will be. Just like nuclear power leads to both good (nuclear reactors; okay, okay, I know that's debatable given what's just happened in Japan) and bad (bombs), it is the application of a particular technology that decides whether something is good or bad. However, here's the rub. As architects aren't we the ones who are meant to be deciding how software components are put together to solve problems, for both better and for worse? Is it not within our remit, therefore, to control those 'end uses' and to walk away from those projects that will result in systems being built for bad rather than good purposes? We all have our own moral compass and it is up to us as individuals to decide which way we point it. From my point of view I would hope that I never got involved in systems that in any way lead to an infringement of a person's basic human rights, but how do I decide or know this? I doubt the people that built the systems that are the subject of the Adam Curtis films ever dreamed they would be used in ways which have almost led to the economic collapse of our society. I guess it is incumbent on all of us to research and investigate as much as we can the systems we find ourselves working on, and to decide for ourselves whether we think we are creating machines that watch over us with "loving grace" or which are likely to have more sinister intents. As ever, Arthur C. Clarke predicted this several decades ago, and if you have not read his short story Dial F for Frankenstein now might be a good time to do so.

Thursday, May 19, 2011

The Essence of Being an Architect

There are many skills frameworks out there that tell us what skills we should have for 'doing architecture'. My company has (at least) one and I'm sure yours does as well. There are also organisations that specialise in creating such frameworks (check out the Skills Framework for the Information Age for example). Whilst there are some very specific skills that software architects need (developing applications using xyz programming language, building systems using a particular ERP package and so on), which come and go as technology evolves, there are some enduring skills which I believe all architects must acquire as they progress in their careers. These are what I refer to as the essence of being an architect. Seth Godin recently posted a blog entry called What's high school for? in which he listed 10 things all schools should be teaching, things that should sit above any of the usual stuff kids get taught (maths, chemistry, history etc). A sort of list of meta-skills, if you like. Borrowing from, and extending, this list gives me my list of essential architect skills.
  1. How to focus intently on a problem until it's solved. There is much talk these days about how the internet, the TV networks and the print media are leading to a dumbed-down society in which we have an inability to focus on anything for longer than 30 minutes. Today's business problems are increasingly complex and often require prolonged periods of time to really focus on what the problem is before charging in with a (software) solution. Unfortunately the temptation is often to provide the cheapest or the quickest-to-market solution. You need to fight against these pressures and stay focused on the problem until it is solved.
  2. How to read critically. As architects we cannot hope to understand everything there is to know about every software product, technology or technique that is out there. Often we need to rely on what vendors tell us about their products. Clearly there is a danger here that they tell us what we, or our clients, want to hear, glossing over faults or features that are more 'marketecture' than architecture. Learn how to read vendor product descriptions and whitepapers with a critical eye and ask difficult questions.
  3. The power of being able to lead groups of peers without receiving clear delegated authority. The role of an architect is to build solutions by assembling components in new and interesting ways. You are the person who needs to both understand what the business wants and how to translate those 'wants' into technology. Business people, by and large, cannot tell you how to do that. You need to lead your peers (both business people and technologists) to arrive at an effective solution.
  4. How to persuasively present ideas in multiple forms, especially in writing and before a group. Obvious really: you can have the greatest idea in the world, but if you cannot present it properly and effectively it will stay just that, an idea.
  5. Project management, self-management and the management of ideas, projects and people. How to manage your own and others' time to stay focused and deliver what the client wants in a timely fashion.
  6. An insatiable desire (and the ability) to learn more. Forever! This job cannot be done without continuous learning and acquiring of knowledge. Everyone has their own learning style and preferences for how they acquire knowledge; find out what your style is and deploy it regularly. Don't stick to IT: I've discussed the role of the versatilist extensively (see here for example). Be 'V' shaped not 'T' shaped.
  7. The self-reliance that comes from understanding that relentless hard work can be applied to solve problems worth solving. Belief in one's ideas, and the ability to deploy them when all around you are doubting you, is probably one of the hardest skills to acquire. There is a fine balance between arrogance and self-belief. In my experience this is not an easily repeatable skill. Sometimes you will be wrong!
  8. Know how to focus on what is important and to ignore what is not. If you have not heard of Parkinson's Law of Triviality take a look at it.
  9. Know who the real client is and focus on satisfying him/her/them. There can be lots of distractions in our working lives, and I'm not just talking about twittering, blogging (sic) and the rest of the social networking gamut. Projects can sometimes become too inward focused and lose track of what they are meant to be delivering. We live in a world where numbers have achieved ascendancy over purpose. We can sometimes spend too much time measuring, reviewing and meeting targets rather than actually doing. I love this quote from Deming: "If you give a manager a numerical target, he'll make it, even if he has to destroy the company in the process". There is little merit in a well executed project that no one wants the output from.
  10. Use software/system delivery lifecycle (SDLC) processes wisely. SDLCs are meant to be enablers but can end up being disablers! Always customise the SDLC to fit the project, not the other way around.
If all of this seems hard work that's because it is. As Steven Pressfield says in his book The War of Art: "The essence of professionalism is the focus upon the work and its demands, while we are doing it, to the exclusion of all else".

Friday, May 13, 2011

Sketching with the UML

In his book UML Distilled – A Brief Guide to the Standard Object Modeling Language Martin Fowler describes three ways he sees UML being used:
  • UML as sketch: Developers use UML to communicate some aspect of a system. These sketches can be used in both forward (i.e. devising new systems) as well as reverse (i.e. understanding existing systems) engineering. Sketches are used to selectively talk about those parts of the system of interest. Not all of the system is sketched. Sketches are drawn using lightweight drawing tools (e.g. whiteboards and marker pens) in sessions lasting anything from a few minutes to a few hours. The notation and adherence to standards is non-strict. Sketching works best in a collaborative environment. Sketches are explorative.
  • UML as a blueprint: In forward engineering the idea is that developers create a blueprint that can be taken and built from (possibly with some more detailed design first). Blueprints are about completeness. The design/architecture should be sufficiently complete that all of the key design decisions have been made, leaving as little to chance as possible. In reverse engineering, blueprints aim to convey detailed information about the system, sometimes graphically. Blueprints require more sophisticated modelling tools rather than drawing tools. Blueprints are definitive.
  • UML as a programming language: Here developers draw UML diagrams that are compiled down to executable code. This mode requires the most sophisticated tooling of all. The idea of forward and reverse engineering does not exist because the model is the code. Model Driven Architecture (MDA) fits this space.
In practice there is a spectrum of uses of UML which I've shown below in Figure 1. In my experience very few organisations are up at level 10 and I would say most are in the range 3 - 5. I would classify these as being those folk who use UML tools to capture key parts of the system (either in a forward or reverse engineering way) and export these pictures into documents which are then reviewed and signed-off.
Figure 1
An interesting addition to the 'UML as sketch' concept is that at least two vendors that I know of (IBM and Sparx) are offering 'sketching' capabilities within their modelling tools. In Rational Software Architect the elements in a sketch include shapes such as rectangles, circles, cylinders, stick figures that represent people, and text labels. Check out this YouTube video for a demo. Unlike other types of models, sketches have only a few types of elements and only one type of relationship between elements. Also, sketches do not contain multiple diagrams; each sketch is an independent diagram. You can, however, move from a sketch to a more formal UML diagram, or create a sketch out of an existing UML diagram, allowing you to work with a diagram in a more abstract way.

Below in Figure 2 is an example of a sketch for my example Hotel Management System, which I've used a few times now to illustrate some architectural concepts, drawn using the current version of Rational Software Architect. I guess you might call this an Architecture Overview, used to show the main system actors and architectural elements.

Figure 2
I guess the ability to sketch with modelling tools could be a bit of a double-edged sword. On the one hand it means sketches are at least captured in a formal modelling environment, which means they can be kept in one centralised repository and maintained more effectively. It also means they can potentially be turned into more formal diagrams, thus providing a relatively automated way of moving along the scale shown in Figure 1. The downside might be that sketching is as far as people go, and they never bother to provide anything more formal. I guess only time will tell whether this kind of capability gains much traction amongst developers and architects alike. For my part I would like to see sketching in this way as a formal part of a process which encourages architects and developers to create models using this approach, get them roughly right, and then turn them into a more formal and detailed model.

Tuesday, May 10, 2011

Change Cases and the Limits of Testing

This recent blog entry from Seth Godin on "the culture of testing" set me thinking about software architecture and testing. Is it possible to 'over test' applications or systems? Is there a point at which you need to stop testing and let your software 'go free' so users can 'complete' testing themselves? Let's be clear here:
  • Software needs to be tested rigorously. No one wants to fly on an airplane where the on-board flight control software has not been fully tested.
  • I'm also not talking about the well established practice of releasing beta versions of software where you get together a bunch of early adopters to complete testing for you and iron out the "last few bugs".
  • Testing software against known requirements is most definitely a good thing to do and I'm not advocating just running your system tests against a subset of those functional and non-functional requirements.
Where it gets interesting is when you don't overly constrain the architecture, so that users can take the resulting application or system and evolve it (test it) in new and interesting ways. When defining an architecture we usually talk about it having to address functional as well as non-functional requirements, but there is a third, often overlooked, class of requirement referred to as change cases or future requirements. Change cases describe potential new requirements for a system, or modifications to existing requirements. Change cases usually come from the specifier of the system (e.g. "in three years' time we want to introduce a loyalty scheme for our regular guests"). Some change cases, however, are not envisaged in advance, and it's only when users of the application get hold of it that they explore and find new ways of using it that may not have originally been thought of. Such applications need to be carefully architected, and tested, so that these change cases can be discovered without, of course, breaking the application altogether.
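To make the loyalty-scheme change case a little more concrete, here is a minimal sketch of one way an architecture might leave a seam open for it: the billing core depends only on a pricing-policy abstraction, so when the loyalty scheme arrives in three years it plugs in as a new policy rather than surgery on tested code. All the class and function names here (PricingPolicy, bill, and so on) are my own hypothetical illustration, not part of any real hotel system.

```python
# Sketch of an extension point for a known change case: the billing core
# depends on an abstraction, and the future loyalty scheme slots in behind it.
from typing import Protocol


class PricingPolicy(Protocol):
    def price(self, base_rate: float, nights: int) -> float: ...


class StandardPricing:
    """Today's behaviour: straightforward rate times nights."""
    def price(self, base_rate: float, nights: int) -> float:
        return base_rate * nights


class LoyaltyPricing:
    """The change case, arriving years later: a discount for regular guests."""
    def __init__(self, discount: float = 0.10):
        self.discount = discount

    def price(self, base_rate: float, nights: int) -> float:
        return base_rate * nights * (1 - self.discount)


def bill(base_rate: float, nights: int, policy: PricingPolicy) -> float:
    # The core billing path never changes when a new policy is introduced.
    return round(policy.price(base_rate, nights), 2)


print(bill(80.0, 3, StandardPricing()))  # prints 240.0
print(bill(80.0, 3, LoyaltyPricing()))   # prints 216.0
```

The design choice being illustrated is simply that a change case, once written down, earns a named seam in the architecture, which is rather cheaper than discovering the need for one after the system has shipped.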

So, by all means ship systems or applications that do what they say on the tin, but also let users do other, possibly more interesting, things with them that may lead to new and innovative uses that the original specifiers of the software had not thought about.