Friday, November 30, 2012

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful technology we were developing? When I look back at that time (the early 80s) such "powerful technology" looks positively antiquated - we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask then, and it is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others either forget to ask or purposely avoid because the answer is "too hard". Questions like:
  • So you expect 10,000 people to use your website, but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you're giving your workforce mobile devices that can be used to access your sales systems, but what happens when one of your employees leaves their tablet on a plane, train or taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems, but what in-house skills do we have to manage and operate this new software?
  • Etc.
In many ways these are the easy questions. For a slightly harder one, consider this question posed by Nicholas Carr in this blog post:
"So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?"
Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:
"minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements."  
Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the science fiction genre this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics, which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in the short story Runaround, are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
As stated here these laws are beautifully concise and seemingly unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with the ambiguities that could arise from the conflicts between these laws.
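
Just how quickly the implementation gets murky is easy to show. Below is a minimal sketch, in Python, of the three laws treated as ordered architectural constraints and applied to something like Carr's bridge scenario. Everything in it (the Action structure, the idea of scoring harm as a number, the specific values) is an illustrative assumption of mine, not Asimov's specification or anything Google has published.

    # Purely illustrative: Asimov's three laws as ordered constraints.
    # All names and the harm-scoring approach are assumptions for this sketch.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Action:
        name: str
        harm_to_humans: float  # estimated harm to humans (0.0 = none)
        obeys_orders: bool     # consistent with the orders humans have given?
        harm_to_robot: float   # estimated harm to the machine itself

    def choose_action(candidates: List[Action]) -> Action:
        # First Law dominates: minimise harm to humans.
        # Second Law breaks ties: prefer actions that obey orders.
        # Third Law breaks remaining ties: preserve the machine.
        return min(
            candidates,
            key=lambda a: (a.harm_to_humans, not a.obeys_orders, a.harm_to_robot),
        )

    # Carr's bridge dilemma: every available action harms a human, so the
    # ordering of the laws resolves nothing - someone has to put numbers in.
    swerve = Action("swerve off the bridge", harm_to_humans=0.9,
                    obeys_orders=True, harm_to_robot=1.0)
    carry_on = Action("carry on towards the children", harm_to_humans=0.95,
                      obeys_orders=True, harm_to_robot=0.0)
    print(choose_action([swerve, carry_on]).name)

The point, of course, is that the moment you type something like harm_to_humans=0.9 you have made a moral judgement on somebody's behalf; that is precisely the kind of requirement architects of autonomous systems are going to have to own.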

So back to the point of this post. As our systems become ever more complex and encroach on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand the impact on humanity not just of the systems we are building but also of the systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:
"If you don't know what the software you're using is for, then you're not using it but being used by it."
In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions about some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it's already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends, but now it's a huge interconnected world of relationships, likes and dislikes, photographs, timelines and goodness knows what else that can be 'mined' for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is how much mining of personal data it is right to do. Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family, this story probably had a fairly harmless outcome; but what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgement based on our own values and beliefs, and we should take care in judging others who create systems we do not agree with or think are "wrong". What is important, however, is to always question the motives and reasons behind the systems we build, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.


Monday, November 19, 2012

Is the Raspberry Pi the new BBC Microcomputer?


There has been much discussion here in the UK over the last couple of years about the state of tech education and what should be done about it. The concern is that our schools are not doing enough to create the tech leaders and entrepreneurs of the future.
  
The current discussion kicked off in January 2011 when Microsoft's director of education, Steve Beswick, claimed that there was much "untapped potential" in how teenagers use technology in UK schools. Beswick said that a Microsoft survey had found that 71% of teenagers believed they learned more about information technology outside of school than in formal information and communication technology (ICT) lessons. An interesting observation, given that one of the criticisms often levelled at these ICT classes is that they just teach kids how to use Microsoft Office.

The discussion moved on in August 2011, this time to the Edinburgh International Television Festival, where Google chairman Eric Schmidt said he thought education in Britain was holding back the country's chances of success in the digital media economy. Schmidt said he was flabbergasted to learn that computer science was not taught as standard in UK schools, despite what he called the "fabulous initiative" in the 1980s when the BBC not only broadcast programmes for children about coding but also shipped over a million BBC Micro computers into schools and homes.

January 2012 saw even the Education Secretary, Michael Gove, say that the ICT curriculum was "a mess" and must be radically revamped to prepare pupils for the future (Gove suspended the ICT curriculum in September 2012). All well and good, but as some have commented, "not everybody is going to need to learn to code, but everyone does need office skills".

In May 2012 Schmidt was back in the UK, this time at London's Science Museum, where he announced that Google would provide the funds to support Teach First - a charity which puts graduates through a six-week training programme before deploying them to schools, where they teach classes over a two-year period.

So, what now? With the new ICT curriculum not due out until 2014, what are the kids who are about to start their GCSEs to do? Does it matter that they won't be able to learn ICT at school? The Guardian's John Naughton proposed a manifesto for teaching computer science in March 2012 as part of his paper's digital literacy campaign. As I've questioned before, should it be the role of schools to teach the very specific programming skills being proposed - skills that might be out of date by the time the kids learning them enter the workforce? Clearly something needs to be done; otherwise, as my colleague Dr Rick Robinson asks, where will the next generation of technology millionaires come from?

Whatever shape the new curriculum takes, one example (one that Eric Schmidt himself used) of a success story in the learning of IT skills is that of the now almost legendary BBC Microcomputer, a project started 30 years ago this year. For those too young to remember, or who were not around in the UK at the time, the BBC Microcomputer got its name from a project devised by the BBC to enhance the nation's computer literacy. The BBC wanted a machine around which it could base a series called The Computer Programme, showing how computers could be used not just for programming but also for graphics, sound and vision, artificial intelligence and controlling peripheral devices. To support the series the BBC drew up a spec for a computer that viewers could buy to put into practice what they were watching. The machine was built by Acorn, and you can read the spec here.

The BBC Micro was not only a great success in terms of the television programme, it also helped spur on a whole generation of programmers. On turning the computer on you were faced with the screen on the right. The computer would not do anything unless you fed it instructions using the BASIC programming language, so you were pretty much forced to learn programming! I can vouch for this personally: although I had just entered the IT profession at the time, this was in the days of million-pound mainframes hidden away in back rooms, guarded jealously by teams of computer operators who only gave access via time-sharing, for minutes at a time. Having your own computer which you could tap away on and get instant results was, for me, a revelation.

Happily it looks like the current gap in the IT curriculum may be about to be filled by the humble Raspberry Pi computer. The idea behind the Raspberry Pi came from a group of computer scientists at the University of Cambridge's Computer Laboratory back in 2006. As Eben Upton, founder and trustee of the Raspberry Pi Foundation, said:
"Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on".
Out of this concern at the lack of programming and computer skills in today's youngsters was born the Raspberry Pi computer (see left), which began shipping in February 2012. Whilst the on-board processor and peripheral controllers on this credit-card-sized, $25 device are orders of magnitude more powerful than anything the BBC Micro and Commodore 64 machines had, in other ways this computer is even more basic than any of those machines. It comes with no power supply, screen, keyboard, mouse or even operating system (Linux can be installed via an SD card). There is quite a learning curve just to get up and running, although what the Raspberry Pi has going for it that the BBC Micro did not is the web, with its already large number of help pages, ideas for projects and even the odd Raspberry Pi Jam (get it?). Hopefully this means these ingenious devices will not become just another piece of computer kit lying around in our school classrooms.
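
To give a flavour of how small that first step into "creating something" can be, here is a minimal sketch, in Python, of the sort of first project a beginner might try on the Pi: blinking an LED wired to one of the board's GPIO pins. The pin number, the wiring and the use of the RPi.GPIO library are my assumptions for illustration rather than anything the Foundation prescribes.

    # A minimal first project on the Raspberry Pi: blink an LED.
    # Assumes an LED (with a resistor) is wired to GPIO pin 17 - an
    # illustrative choice, not a requirement.
    import time
    import RPi.GPIO as GPIO

    LED_PIN = 17

    GPIO.setmode(GPIO.BCM)         # use Broadcom pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)  # drive the pin as an output

    try:
        for _ in range(10):
            GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)   # LED off
            time.sleep(0.5)
    finally:
        GPIO.cleanup()  # release the pin when done

Trivial, perhaps, but it is the modern equivalent of typing your first few lines of BASIC into a BBC Micro and seeing something happen.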


The Computer Literacy Project (CLP), which was behind the idea of the original BBC Micro and "had the grand ambition to change the culture of computing in Britain’s homes", produced a report in May of this year called The Legacy of the BBC Micro which, amongst other things, explores whether the CLP had any lasting effect on the culture of computing in Britain. The full report can be downloaded here. One of the recommendations from the report is that "kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future". Thirty years ago this support was provided by the BBC as well as schools. Whether the same could be done today, with schools that seem to be largely results-driven and a BBC that seems to be imploding in on itself, is difficult to tell.

And so to the point of this post: is the Raspberry Pi the new BBC Micro - will it spur on a generation of programmers the way the BBC Micro spurred on the one that spread its wings and went on to create the tech boom (and let's not forget the odd bust) of the last 30 years? More to the point, is that what the world needs right now? Computers are getting far smarter "out of the box". IBM's recent announcements of its PureSystems brand promise a "smarter approach to IT" in terms of installation, deployment, development and operations. Who knows what stage so-called expert integrated systems will be at by the time today's students begin to hit the workforce in 5 - 10 years' time? Does the Raspberry Pi have a place in this world - a world where many, if not most, programming jobs continue to be shipped to low-cost regions, currently the BRIC and MIST countries and soon, I am sure, the largely untapped African continent?

I believe that, to some extent, the fact that the Raspberry Pi is a computer, and that with a bit of effort you can program it, is largely an irrelevance. What's important is that the Raspberry Pi ignites an interest in a new generation of kids and gets them away from just consuming computing (playing games, reading Facebook entries, browsing the web and so on) to actually creating something instead. It's this creative spark that is needed now and as we move forward; no matter what computing platforms we have in 5, 10 or 50 years' time, we will always need creative thinkers to solve the world's really difficult business and technical problems.

And by the way my Raspberry Pi is on order.

Saturday, November 10, 2012

Architects Don't Code

WikiWikiWeb was one of the first wiki experiences I, and I suspect many people of a certain age, ever had. It was created by Ward Cunningham for the Portland Pattern Repository, a fantastic source of informal guidance and advice by experts on how to build software. It contains a wealth of patterns (and antipatterns) on pretty much any software topic known to man, and a good few that are fast disappearing into the mists of time (TurboPascal, anyone?).

For a set of patterns related to the topics I cover in this blog, go to the search page, type 'architect' into the search field and browse through some of the 169 (as of this date) topics found. I was doing just this the other day and came across the ArchitectsDontCode pattern (or possibly antipattern). The problem statement for this pattern is as follows:
"The System Architect responsible for designing your system hasn't written a line of code in two years. But they've produced quite a lot of ISO9001-compliant documentation and are quite proud of it."
The impact of this is given as:
"A project where the code reflects a design that the SystemArchitect never thought of because the one he came up with was fundamentally flawed and the developers couldn't explain why it was flawed since the SystemArchitect never codes and is uninterested in implementation details."
Hmmm, pretty damning for System Architects. Just for the record such a person is defined here as being:
"[System Architect] - A person with the leading vision, the overall comprehension of how the hardware, software, and network fit together."
The meaning of job titles can of course vary massively from one organisation to another; what matters is the role itself and what the person does in that role. It is often the case that any role with 'architect' in the title is much denigrated by developers - especially, in my experience, on agile projects - who see such people as an overhead who contribute nothing to a project but reams of documents, or worse, UML models, that no one reads.

Sometimes software developers, by which I mean people who actually write code for a living, can take a somewhat parochial view of the world of software. In the picture below their world is often constrained to the middle Application layer; that is to say, they are developing application software, maybe using two or three programming languages, with a quite clear boundary and set of requirements (or at least requirements that can be fairly easily agreed through appropriate techniques). Such software may of course run into tens of thousands of lines of code and have several tens of developers working on it. There needs therefore to be someone who maintains an overall vision of what the application should do. Whether that person has the title of Application Architect, Lead Programmer or Chief Designer does not really matter; what matters is that they look after the overall integrity of the application. Such a person on a small team may indeed do some of the coding, or at least be very familiar with the current version of whatever programming language is being used.


In the business world of bespoke applications, as opposed to 'shrink-wrapped' applications, things are a bit more complicated. New applications need to communicate with legacy software and often require middleware to aid that communication. Information will exist in a multitude of databases and may need some form of extract, transform and load (ETL) and master data management (MDM) tooling to get access to and use that information, as well as analytics tools to make sense of it. Finally there will be business processes, existing or yet to be built, which coordinate and orchestrate activities across a whole series of new and legacy applications as well as manual processes. All of these require software or tooling of some sort and similarly need someone to maintain overall integrity. This I see as being the domain, or area of concern, of the Software Architect. Does such a person still code on the project? Well maybe, but on the typical projects I see it is unlikely such a person has much time for this activity. That's not to say, however, that she does not need some level of (current) knowledge of how all the parts fit together and what they do. No mean task on a large business system.

Finally, all this software (business processes, data, applications and middleware) has to be deployed onto actual hardware (computers, networks and storage). Whilst the choice and selection of such hardware may fall to another specialist role (sometimes referred to as an Infrastructure or Technical Architect), there is another level of overall system integrity that needs to be maintained. Such a role is often called the System Architect or maybe the Chief Architect. At this stage it is quite possible that the background of such a person has never involved coding to any great degree, so such a person is unlikely to write any code on a project - and quite rightly so! This is often not just a technical role that worries about systems development but also a people role that worries about satisfying the numerous stakeholders that such large projects have.

Where you choose to sit in the above layered model, and what role you take, will of course depend on your experience and personal interests. All the roles are important, and each must work with the others if systems that depend on so many moving parts are to be delivered on time and on budget.