Saturday, 21 May 2016

Who is responsible for how knowledge is used?

We have a duty to use knowledge responsibly

~

In the last post I ended up perilously close to an unhelpful, fatalistic conclusion. I'm sorry, reader. My only intent was to counter something I often come across in my day-to-day goings on: unbridled optimism and/or pessimism about purported future states of the world in which certain kinds of knowledge have been implemented. It's more often optimism than pessimism, but catastrophic imaginings do occur.

My purpose was simply to highlight that when we think about implementing innovations we need to take a broad-minded approach. By that I mean it is important to think beyond the contexts most relevant to the idea whose transformative potential we are imagining, and to foreshadow how our innovation might affect other contexts.

It boils down to our having a responsibility to use knowledge in a way that doesn't knowingly do harm. I am going to attach a caveat to what I mean by 'knowingly' so that deliberate ignorance is not a defence. By 'knowingly', I am invoking the requirement that reasonable steps have been taken to foreshadow possible domains of impact and then to investigate them. This caveat covers the microbeads case we've been thinking through. It seems to me that, even allowing for the privileged vantage of hindsight, environmental impacts should reasonably have been investigated before the innovation was enacted in the form of cosmetics.

What about something ubiquitous? What about computation? I am not going to try to give an authoritative summary of the history of computing. Suffice it to say that this is an ongoing set of innovations that continue to transform the world in which we live. Their applications facilitate both war and peace, environmental destruction and conservation, and all manner of other opposing aims. Does it make sense to hold the implementers and/or users of computing systems accountable for the harms caused by their applications? Perhaps, but we are talking about long chains of innovations comprising many, many actors; where would responsibility start and stop? Can we even engage in a meaningful audit of benefits and harms in this case? I think not. The issue here is not just one of scale; it relates to how we attribute credit or blame for particular kinds of beneficial or adverse impacts that result from an individual innovation.

The fact that innovations in computation can be pressed into the service of both peaceful and violent ends is not the fault of all the developers, implementers or users of computing technology who have played some role in their development. In part, it is a measure of the usefulness of the innovation: it highlights how people with diverse goals can and do make use of the same innovation to fulfil them. But suggesting that the implementers (and/or users) of knowledge, rather than its producers, should carry the weight of responsibility for outcomes overlooks the fact that producers of knowledge act with specific goals in mind, and that these may be beneficent or maleficent. It also overlooks the instances where knowledge producers have an accurate idea of the impacts of applying their knowledge, and may therefore bear some responsibility for how it is used.

A further difficulty with this stance is that, as I noted in the last post, 'good' intentions and 'good' outcomes are not always aligned, nor easily alignable, especially when we are talking about complex processes. Consequently, my argument suffers from the standard criticisms of duty-based ethics: namely, that there are limits to how far a duty-based ethics can bring about a just society, and that at least some responsibility should be attributed on the basis of the consequences of one's actions rather than simply the intent.
