Saturday, 28 May 2016

Knowledge translation as not using knowledge


Using knowledge well might mean not using it at all.
~
We have a duty to use knowledge to address challenges facing society. We also, as I argued in my last post, have a duty to think through the consequences of knowledge use. 

This project of using knowledge well has been the core aim of the rapidly burgeoning fields of KERTI (knowledge exchange, research translation and implementation). KERTI can be viewed as a kind of social movement. It is driven by a collective sense that we are not making the best use of what we know in order to improve the world around us. KERTI functions as a kind of 'call to action' to champion certain ways of creating and implementing new knowledge.

One possible reason for the rise of KERTI relates to the reduction in permanent employment opportunities in research. This, coupled with increasing costs of scientific research and competition over scarce resources, creates pressures to show benefit for one's research over unrealistically short time frames. In this light, KERTI can be thought of as a symptom of systemic problems in research governance and funding.

On a more optimistic note, KERTI can be viewed as part of the specialisation process: the expansion of academic knowledge and practices creates an opportunity for people to hone their skills in helping knowledge to be used well, generating specialist knowledge of its own kind along the way.

While I support initiatives to make better use of knowledge and count myself a member of the KERTI movement, I also find it a curious phenomenon. Let me explain why:

Patience

KERTI is a fairly recent activity in the history of science. Plato, Aristotle, Newton, Einstein, Darwin, the Curies and Galileo didn't have dedicated KERTI teams at their disposal (well, maybe Socrates did). Still, their innovations found their way into the scientific canon. Did this take time? Yes. Did their ideas face resistance? Yes. Nevertheless, those ideas have come to occupy fairly central roles in modern scientific understandings. My broad position on this is to take a realist approach: if enough awareness is raised in relation to an idea that is true and useful, political processes and players will have a limited ability to suppress it in the long term. In this way, I am attributing some agency to the quality of an idea and relying on an intrinsic pragmatism in humans. The qualifier of 'awareness' is important because I am not discounting certain impediments to an idea getting a fair airing. For example, ideas articulated by men have been more likely to achieve the prominence necessary to become established. But, even here, this is an issue of who gets credit for an idea rather than whether the idea itself takes hold. If the idea is both true and useful, it will be 'discovered' by more than one person.

This is a tough criterion to impose on knowledge, though - a kind of post-hoc survival-of-the-fittest argument - and one more easily applied to the really big ideas that anchor our understanding of the world than to the middling-level, context-dependent ideas that most of us engage with in the specialities we contribute to.

It is commonly cited that effective 'translation' of innovations takes approximately 17 years. (As the authors of this paper highlight, there are good reasons for viewing this figure with some scepticism. Still, I think we can take it as a rough approximation, i.e., the usual rate of uptake of an innovation is a matter of years or decades rather than days, weeks, months or centuries.) Statistics like this are often given in the context of suggesting how difficult and/or time-consuming it is for good ideas to make it into practical application. But how long should an innovation take to be applied? How can we say that 17 years is too long to wait? Maybe it's not long enough? Perhaps it depends on the specific innovation?

Of course, we do not want to hold beneficial interventions back from serving beneficial goals. But let's say, for example, there exists a life-saving medication that could have saved millions of lives over the 17-year period it takes to go through clinical trials, safety certifications and registration processes. Have those millions of people been harmed? Well, not necessarily. Until we know that a treatment is efficacious and that its benefits will outweigh its possible harms, we cannot say that these people are actually being harmed by missing out on it. Really, until we have this kind of information, the treatment can't be said to exist. What exists, instead, is the promise of a treatment. A tantalising promise, perhaps, but a promise nonetheless.

If the argument is that the intervention is needed to avoid almost certain catastrophe and no alternative exists, then the case might shift in favour of fast-tracking implementation. However, even when fast-tracking Ebola vaccine development, checks and balances cannot be altogether cast aside. These tests are part of what defines the result as a treatment, properly speaking. This gestures towards a larger point: effective implementation of knowledge requires decisions not to implement, or to delay implementation, while due diligence is undertaken.

Democracy

My second critique is what I will call the technocrat's dilemma. This is the problem faced when implementing contentious solutions to contentious problems, e.g., harm-reductionist approaches to illicit drug use or efforts to mitigate climate change. These are issues where research evidence can be counterintuitive or unpleasant, and so social, psychological and political pressures turn against the ideas being presented. It is incredibly frustrating when political-ideological or social-psychological forces undermine researchers' dedicated and well-thought-out elucidations of problems and solutions - when you really 'know' what the issue or solution is but are rendered impotent in your attempts to convince others. KERTI presents itself as a potential solution to this problem. Some researchers think that opposition to scientific research betrays a lack of understanding that could be overcome by education (what's called the 'deficit approach'). Others believe that, given appropriately democratic preconditions for knowledge production, consensus solutions can be found (this approach goes by various labels, such as 'participatory action research' or 'co-design'/'co-creation').

Sometimes the problem is a lack of understanding, or a lack of early engagement with stakeholders; sometimes, though, different views exist that are irreconcilable. Allowing alternative views to co-exist, even when that means the weight of opinion occasionally goes against us, is the price we pay for democracy.
~
These issues raise questions about what can be controlled, and what should be controlled, with respect to knowledge use. They also highlight how knowledge use cannot be disentangled from ethical considerations or from the political and economic contexts in which we operate. These limitations on our control set limits on our responsibility. However, they do not absolve us of the imperative to act dutifully. This discussion also underscores a point that is rarely articulated by KERTI practitioners: using knowledge well doesn't mean implementing knowledge indiscriminately. In some circumstances, using knowledge well might mean not using knowledge at all.

Thanks to JERE.be @JERECartoons for this image.

Saturday, 21 May 2016

Who is responsible for how knowledge is used?

We have a duty to use knowledge responsibly

~

In the last post I ended up perilously close to an unhelpful, fatalistic conclusion. I'm sorry, reader. My only intent was to counter what I often come across in my day-to-day goings-on: unbridled optimism and/or pessimism regarding purported future states of the world in which certain kinds of knowledge have been implemented. More often it's optimism than pessimism, but catastrophic imaginings do occur.

My purpose was simply to highlight that when we think about implementing innovations we need to take a broad-minded approach. By that I mean it is important to think beyond the contexts immediately relevant to the idea whose transformative potential we are imagining, and to foreshadow how our innovation might impact on other contexts.

It boils down to our having a responsibility to use knowledge in a way that doesn't knowingly do harm. I am going to attach a caveat to what I mean by 'knowingly' so that deliberate ignorance is not a defence. By knowingly, I am invoking the requirement that reasonable steps have been taken to foreshadow possible domains of impact and then to investigate them. This caveat covers the microbeads case we've been thinking through. It seems to me that, even allowing for the privileged vantage of hindsight, environmental impacts should reasonably have been investigated before the innovation was enacted in the form of cosmetics.

What about something ubiquitous? What about computation? I am not going to try to give an authoritative summary of the history of computing. Suffice it to say that this is an ongoing set of innovations that continues to transform the world in which we live. Their applications facilitate both war and peace, environmental destruction and conservation, and all manner of other opposing aims. Does it make sense to hold the implementers and/or users of computing systems accountable for the harms that are caused by their applications? Perhaps - but we're talking about long chains of innovations comprising many, many actors; where would responsibility start and stop? Can we even engage in a meaningful audit of benefits and harms in this case? I think not. The issue here is not just one of scale but relates to how we attribute credit or blame for particular kinds of beneficial or adverse impacts that result from an individual innovation.

The fact that innovations in computation can be pressed into the service of both peaceful and violent ends is not the fault of all the developers, implementers or users of computing technology who have played some role in their development. In part, it is a measure of the usefulness of the innovation: it highlights how people with diverse goals can and do make use of the same innovation to fulfil them. But suggesting that the implementers (and/or users) of knowledge, and not its producers, should carry the weight of responsibility for outcomes overlooks the fact that producers of knowledge act with specific goals in mind, and that these may be beneficent or maleficent. It also overlooks the instances where knowledge producers may have an accurate idea about the impacts of applying their knowledge and therefore may bear some responsibility for how it is used.

A further difficulty with this stance is that, as I noted in the last post, 'good' intentions and 'good' outcomes are not always aligned, nor easily alignable - especially when we are talking about complex processes. Consequently, my argument suffers from the standard criticisms of duty-based ethics, namely: there are limits to how far a duty-based ethics can bring about a just society, and at least some responsibility should be attributed based on the consequences of one's actions rather than simply the intent.

Wednesday, 18 May 2016

Can we know what knowledge will do?



There are inherent limitations in our ability to predict what will happen when knowledge gets used.
~
In this post, I will follow the theme of my last post on truth and utility to examine the question of whether we can identify knowledge that will be useful.

In my post on microbeads, we looked at how knowledge was utilised by examining the issue retrospectively. That is to say, from the vantage of the present, we could look back on past events to make judgments about the utility of microbead related knowledge. I showed that the knowledge base had developed over time and in such a way that harms were created - and then also hopefully resolved.

From the vantage of the present, it seems it would have been nice to foresee, and mitigate, the negative consequences of microbeads in advance. However, the task of identifying the utility of knowledge prospectively is not so easy. It is easy, perhaps, to anticipate (or imagine) the 'good' things that would follow if only people would adopt or act on the ideas we work so hard to create. It may be just as easy to anticipate the terrible things that might follow from something that, for whatever reason, we just don't like the sound of. But the future is inherently uncertain, and outcomes are rarely clear-cut. This uncertainty sets limits on what we can reliably predict will transpire. The unforeseen implications of our activities, good, bad or otherwise, are often referred to as unintended consequences.

While the future is always uncertain, it is not necessarily radically uncertain. Setting aside the problems of deriving knowledge based on inductive reasoning, I can be pretty sure that I will wake up tomorrow in the same house that I go to bed in tonight. Based on this proposition, and others like it, I can make a number of predictions about what is going to happen to me tomorrow and be pretty sure that most of them will come true. In general, I can rely on inertia; i.e., that in the absence of an external force, many of the activities and processes that impact on me will stay largely the same from day to day. I also know certain things about the problem at hand (e.g., I am not prone to sleepwalking, I am not a doctor who is on call and might thus be awoken before midnight to go into work, the prospect of fire or flood causing evacuation is fairly unlikely, etc.). So, I can make the prediction about where I will wake up because I have enough information to do so and because I am making a prediction about a relatively passive and uncontentious process.

However, predicting the future impact of a piece of knowledge, even in fairly simple scenarios, is inherently more complicated than the example I have just given. First, we are not making a prediction about an issue unlikely to change but the opposite: we are trying to predict whether an action will cause the changes we want to occur without generating adverse effects that undermine its benefit. The question we are posing is thus based on a (hoped-for) perturbation to the current order. Second, the prediction is not confined to the context in which we want to have impact but extends to all future contexts in which our knowledge could possibly be used. We don't want to dwell on this complexity too much, because it leads us to an absurd conclusion that engenders an unhelpful fatalism. To take the proposition too seriously would mean gathering a massive amount of information about every context an idea might affect, just to have a chance of predicting (and avoiding) possible future deleterious impacts before acting.

Saturday, 7 May 2016

Truth and utility

True (adjective): "in accordance with fact or reality" http://www.oxforddictionaries.com/

Useful (adjective): "able to be used for a practical purpose or in several ways" http://www.oxforddictionaries.com/

What is true is not always useful; what is useful is not always true.


~
The statement "people are either over six feet tall or not over six feet tall" is an example of a true statement. It is, in fact, an example of a tautology. Is it useful? Probably not.

Statements perpetuating negative stereotypes of religious or cultural groups are examples of claims that may be useful but are not true - propaganda, to be precise. Such statements may be useful for people who want to achieve particular political and economic ends by playing on people's anxieties about their own material and existential security.


Of course, it's not that simple. Even though I cannot think of a use for the proposition about people's heights, I can't altogether rule out the possibility that it could be useful in some context. Tautologies, as a class of propositions, are extremely important to logicians, mathematicians and computer scientists. Indeed, they have been critical to the formation of modern computing.
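
To make the point concrete (this is my own sketch, not part of the original argument), the height statement can be written out in a few lines of Python; the function names and the 182.88 cm threshold are illustrative only:

    def over_six_feet(height_cm):
        # Illustrative predicate; six feet is roughly 182.88 cm.
        return height_cm > 182.88

    def tautology_holds(height_cm):
        # "P or not P" evaluates to True whatever P turns out to be.
        p = over_six_feet(height_cm)
        return p or not p

    # The statement holds for any height we care to test...
    assert all(tautology_holds(h) for h in (150.0, 182.88, 200.0))
    # ...which is exactly why, on its own, it tells us nothing about anyone's actual height.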

As for the case of propaganda, we can argue that ascribing such propositions the quality of usefulness is also questionable: they are certainly not useful for those people who are harmed by their consequences, in terms of prejudice and marginalisation. They are probably also not useful for the people who are being encouraged to harbour hatred.

What if, instead of propaganda, we think of more positive examples of the usefulness of propositions that are not true? What about the diverse sources of inspiration many influential thinkers draw on - sources like art, literature, music, imagination or nature? Here, elements of human experience where it is not even sensible to discuss 'truth' can be useful in generating associations and stimulating thinking that leads to the production of true propositions. In cases where a truth value is questionable, as where people indulge in a degree of mysticism or magical thinking to inform their views, it is helpful to remember that the production of true propositions is not contingent on the truth of the arguments that give rise to them: true conclusions can arise from false premises. (A valid syllogism built on the false premise 'all cats are dogs' and the premise 'all dogs are mammals' still delivers the true conclusion 'all cats are mammals'.)

Perspective, context, and happenstance shape the class of things that are true and the class of things that are useful. Both classes grow over time, as new truths come to light (and old ones are debunked) and new ideas evolve. True and useful propositions inform one another - they are entwined in their causes and consequences. But, while some ideas are both true and useful, truth and usefulness are not the same thing.


