Computing ethics met computing sustainability at the weekend when we had lunch with Don Gotterbarn and Sylvia Nagl. Don was the lead author on most of what we recognise as computing’s codes of ethics (see my notes) while Sylvia’s work in complex systems is pushing the limits of computing. Don was in London on ACM/IFIP business so we grabbed the chance (Lesley and I were at the end of 36 hours travelling so my notes are somewhat cryptic).
Don’s premise is quite simple – if you’re going to build something, think about the impacts. With Simon Rogerson he developed the Software Development Impact Statement (SoDIS), a model for identifying the ethical dimensions of a software development project and ways to mitigate its potential negative consequences. The approach was distilled from 28 codes of ethics and is based on considering harm. It initially considered only harm caused by the computer; this was broadened to the responsibilities of the professional, and the questions are now much wider still.
Don sees professional judgement, rather than a set of rules, as the basis for ethical behaviour. The big mistake, he says, is treating ethical principles as religious commandments. He gives many examples where professionals are forced to rank their priorities – and the ranking changes with the situation. Checklist approaches to ethics don’t work.
So, what is the relationship between Don’s ethics and sustainability? We attempted to draw a Venn diagram and failed: is ethics the superset? Is sustainability? Are they largely overlapping sets? Or are they the same thing seen through different lenses?
Don’s suggested approach to integrating sustainability is to include “environment” as a stakeholder. The structured question “Is there harm to the <environment>?” would then prompt consideration of sustainability. Unfortunately, Lesley and I have tried that with limited success (OK, it didn’t work at all). We found it was too much of a blank piece of paper – our students just left it blank. Don believes this could be solved with more active prompting and questioning, and he gives examples of how he challenges students with consequences they hadn’t thought of. Unfortunately the rest of the world doesn’t have Don in their classroom (or development team) facilitating the discussion. I also worry about the ability of computing professionals to spot unexpected consequences of incremental and perhaps insidious change in fields increasingly remote from their core expertise.
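To make the mechanics of that suggestion concrete, here is a minimal sketch of structured prompting with “environment” added to the stakeholder list. This is my own illustration, not Don’s SoDIS tooling, and the stakeholder and question lists are invented:

```python
# Hypothetical sketch: generating SoDIS-style structured questions,
# with "the environment" added as an explicit stakeholder.
STAKEHOLDERS = ["users", "developers", "the wider public", "the environment"]

QUESTION_TEMPLATES = [
    "Could this task cause harm to {stakeholder}?",
    "Could this task shift costs or risks onto {stakeholder}?",
]

def prompts_for(task: str) -> list[str]:
    """Expand every question template for every stakeholder of one task."""
    return [
        f"[{task}] " + template.format(stakeholder=s)
        for s in STAKEHOLDERS
        for template in QUESTION_TEMPLATES
    ]

if __name__ == "__main__":
    for question in prompts_for("an example task"):
        print(question)
```

The environment prompts are generated just like the others – and, as we found, they are just as easy to answer with a blank.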
So here’s the challenge: somehow we need a balance between checklists and rules (which we know don’t work) and a blank piece of paper with an instruction to ‘use professional judgement’ (which we also know doesn’t work).
Don gives a useful example here: a remote car-starting device – a button to press as you approach your car so that it is ready to run by the time you reach it. This would be reasonably easy to program and to model. In UML terms we would have an actor and a use case involving “unlock” to which we add “start”; a defined system boundary makes it easy to write the code. How do we get the developer to think about the consequences of this? Outside the US most cars are manual – remotely starting one that has been left in gear is going to run someone over.
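To see how small the gap is between the narrow use case and the wider consequence, here is a sketch in code – the class and the guard are entirely hypothetical, not something we discussed – of the feature coded to its system boundary versus the same feature once the world outside that boundary is considered:

```python
# Hypothetical sketch of the remote-start use case.
class Car:
    def __init__(self, in_gear: bool = False):
        self.in_gear = in_gear   # e.g. a manual car left in first gear
        self.locked = True
        self.running = False

def remote_start_naive(car: Car) -> None:
    """Exactly what the use case asks for: unlock, then start."""
    car.locked = False
    car.running = True           # a manual car left in gear lurches forward

def remote_start_guarded(car: Car) -> None:
    """The same feature once consequences beyond the system boundary are considered."""
    car.locked = False
    if car.in_gear:
        raise RuntimeError("transmission may be in gear; refusing remote start")
    car.running = True
```

The guard is a one-line change; the hard part is getting the developer to ask the question that makes it necessary.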
Is it simply a matter of having a bigger bounding box in UML – could we introduce a new symbol, the dotted world box? How much bigger? I retold the tale of the London sewers, where they doubled the engineering requirements, but Don questions whether such inbuilt redundancy is always a good thing in software – particularly with regard to the complexity of the logic. He says that “how do we know when we’re done?” has long been the problem – how big a blank piece of paper are we giving people? He sees the answer as a series of concentric rings of diminishing impact: you work outward through the rings until you reach “the edge of your intellect”. I like this, but it doesn’t solve the problem of the student/developer who is happy to have reached the edge of their intellect/impact in the first ring.
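Read as a procedure – and this is my interpretation, with invented ring names, prompts and stopping rule – the concentric rings might look something like this, including the failure mode of stopping at the first quiet ring:

```python
# Hypothetical sketch of working outward through concentric rings of impact.
RINGS = [
    ("direct users", "Could the feature injure, mislead or exclude a user?"),
    ("bystanders", "Who is affected without ever using the system?"),
    ("society", "What breaks if this is adopted at scale?"),
    ("environment", "What are the energy, material and disposal costs?"),
]

def work_the_rings(answer) -> list[str]:
    """Ask each ring's question, working outward, and stop at the first ring
    where nothing comes to mind – 'the edge of your intellect'."""
    concerns = []
    for ring, question in RINGS:
        response = answer(ring, question)
        if not response:
            break                # a developer content with the first ring stops here
        concerns.append(f"{ring}: {response}")
    return concerns

if __name__ == "__main__":
    # A toy answerer that only sees harm in the innermost ring.
    print(work_the_rings(lambda ring, q: "lock-out bug" if ring == "direct users" else ""))
```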
Sylvia would like to see more explicit models brought into the consideration, and I think this is the answer. Perhaps if we prompted people to think of their computing developments in terms of biological models, the sustainability impacts would be more obvious without it becoming a box-ticking exercise. Perhaps the biomimicry and autopoiesis work could be useful here.
Don says that ethical consideration works best when people are trained and can think outside the box. I think the trick is to incorporate tools that help people reach this state.
March 7th, 2008 at 1:24 pm
[…] This focus brings the NZCS Code of Ethics to the fore. It is poised to become central to our careers. Don recognises that it is not very tractable (test yourself: write down the principles of the code before refreshing yourself here). NZCS is organising working groups, so I hope to see an opportunity for people to participate in this. We should also take the opportunity to make sustainability explicit – it’s currently at best only implied in the NZ code and all other codes of practice (see earlier posts 1, 2). […]