
Genetic Risk Regulation: Society & Ethics

Published: Apr 14, 2010

Introduction

In recent years genetic engineering has reached the threshold of becoming a practical technology in agriculture and medicine. As it has done so, it has posed many complex questions regarding risk, regulation, societal structures and ethics. But these questions are apt to be treated too much in isolation, within their separate disciplines. There has been a general recognition of the need to integrate risk assessment with other disciplines, and this paper is an attempt to do so, by considering the interrelation of these facets. It arises from the membership of its three authors of an expert working group on the ethics of genetic engineering, run by the Society, Religion and Technology Project of the Church of Scotland, of which I am Director. I am a chemist and former nuclear inspector, now turned ethicist; John Eldridge is a sociologist with a special interest in media and risk studies, including the BSE situation; and Joyce Tait works in environmental management, with a background in risk perception and regulation. We write in the context of an analysis of the two different types of regulation applied to the potential risks of applying genetic engineering to agriculture: the precautionary and the reactive.

Risk Regulation

The reactive or preventive approach to risk regulation was, and still is, typical of our approach to most of the risks that arise from modern technology. An industry is controlled by a system which has been set up in response to statistically proven adverse impacts that have arisen in earlier generations of products, impacts often brought to light only after one or more serious incidents. Any new products and processes are screened to ensure that they do not give rise to any similar hazards. The regulatory system is built up slowly, in a piecemeal fashion, as new generations of products exhibit different hazards. Decisions about the need for regulation, and the level required, are taken in relation to the benefits to society, compared with the costs to industry. The approach is reactive in that it does not attempt to predict the existence of novel hazards for which there is no concrete statistical evidence. A precautionary approach, on the other hand, is one where an industry and its products are controlled by a system set up to avoid potential hazards, predicted in advance of the development of the products, before there is any empirical evidence of their existence. From the perspective of the policy analyst, the precautionary approach is only justified on one or both of the following grounds:
> where the complexity or scale of a set of environmental interactions makes it impossible to identify cause-effect relationships - for example, where large numbers of pollutants or organisms interact with one another;
> with very new technologies, where we have no previous experience on the basis of which to predict their impact on ecosystems. A precautionary approach may be justified on grounds of uncertainty until a large and reassuring body of data has been accumulated and we can begin to treat the technology as familiar.
The regulation of genetic engineering represented a major departure from these previous approaches, in that a precautionary approach was adopted early on. This was accepted by both regulators and industry in the early stages of biotechnology regulation, primarily because they wished to provide public reassurance rather than because they perceived any real hazards from the technology. As the genetic research community has become more confident in its understanding of the behaviour of genetically modified organisms, the picture has changed. While there remain many, especially in environmental groups, who wish the precautionary approach to continue, it is no longer welcomed by professional risk regulators, who regard it as non-scientific compared with the reactive approach, or by industry, which sees it as imposing extra costs and delays in bringing new products to market, and as increasing the level of uncertainty in industry’s operating environment. As increasing numbers of products have come near to marketing, pressure from industry has been growing for a relaxation of the precautionary system adopted by the European Union. A report by the House of Lords Select Committee on Science and Technology in 1993 accepted the view of industry that the (precautionary) regulatory regime was unscientific because it failed to discriminate between activities which involve real risks and those which do not. It concluded that any regulation which reduces competitiveness should be reviewed critically, particularly when it cannot be justified on scientific or public interest grounds. This report was influential in achieving a relaxation of the precautionary approach throughout the EU for genetically modified crop plants, which were considered among the safest of the products then reaching the marketing stage of development.

Sociological Aspects

We now want to explore this dichotomy between “scientific” and “non-scientific” approaches to risk regulation, by looking at the assumption of rationality inherent in the perception of the scientific/preventive approach, and the implication that the precautionary approach is less than rational. This assumption hides several important sociological and ethical factors which underlie the way risk is being assessed. Firstly, it should be challenged conceptually. There are many different meanings that can be attached to the term “rationality”. Max Weber draws a distinction between instrumental rationality - reasoning based on what you can measure - and value rationality - reasoning based on holding particular values. Instrumental rationality is the more familiar. It adopts a calculative approach to problems, to establish the best means to achieve given ends. It is not concerned with the value of the ends themselves; its task is simply one of clarification and of attempting to establish the interrelationship between facts. Value rationality refers to behaviour that is logically consistent in relation to a particular value position which someone holds. An example would be the standpoint that rejects genetic engineering as intrinsically unacceptable, no matter what the consequences. According to its own lights, it can be thoroughly “rational”.

At first sight, the reactive approach to risk regulation seems to be a case of instrumental rationality, while the precautionary approach has elements of value rationality. It is not that one is rational and the other irrational. What we have are competing rationalities, typical of the different interest groups of a plural world. So the fact that the precautionary approach is regarded as “non-scientific” should not be used to imply, as some in industry and in the scientific and political communities would appear to do, that the precautionary approach is “irrational”. No, we would say it is reason used on a different basis. But we would also go a step further. In the real world, the instrumental approach is not as straightforwardly rational as it might seem. Genetic engineering has not appeared out of a vacuum; like anything else, it has a context. There are stakeholders - commercial, scientific, environmental, political or whatever - all of whom have values invested in the ends that they seek. And the significance of the consequences and probabilities of a release have to be interpreted and evaluated, involving value judgements drawn from outside science, with ethical, political and economic dimensions.

We would also challenge the ideological use of the notion of rationality as a tool of power by one group, defining all the opposing views as “irrational”. Because reason has such powerful positive connotations, there is a temptation to “smuggle it in” in an ideological way, during public debate or the privacy of committee discussions, to endorse one’s own position, and to attack and marginalise the positions of others. Ulrich Beck has introduced us to the notion of the Risk Society, to describe the emergence of a society which has been undermining its own processes through the unintended consequences of industrialisation. This is not limited to particular risks of certain technologies, each requiring its individual solution; it is systemic, a property of industrial society itself.
In the light of this insight, we would argue, instead of the “growth and progress” models of industrial society, for a more precautionary approach to social change as a whole, which recognises that mistakes will be made and that we are on a learning curve. There are unanticipated consequences which, with forethought, might have been anticipated. We need to take more account of the rationality which comes from the different values expressed in society - not merely those of a dominant elite of decision makers drawn primarily from particular groups (of whatever variety).

Ethical and Value Questions

This brings us to the ethical and value questions themselves. In so brief a paper, space does not suffice to examine in depth how we apply ethics to the area of risk in genetic engineering. In the area of risk regulation, two ethical issues and their corollaries seem to stand out:

> Are we affecting the environment or human health in a way that we cannot predict adequately enough to know whether or not our intervention will turn out to be harmful?
> The corollary: what represents adequacy, in this sense, and who decides it?
> Are we subjecting particular groups in society to an unjust share of the burden of any risk, including future generations and other countries?
> The reverse of this: are the benefits primarily, or even exclusively, for certain groups in society, or are they truly shared out generally to most people? Who will not benefit?

The two approaches to risk regulation both have their underlying ethical assumptions. The reactive approach to risk regulation says that the only risks a society should have to regulate are those it can reasonably foresee as a result of past evidence. Technology is seen as a fundamentally desirable thing for society and should be given its head. Its expected benefits to society as a whole - in economics, employment, quality of life - are deemed greater than the risks incurred overall, and than the higher risks certain individuals and groups are likely to suffer. It is thus a largely utilitarian view: the greater good of the many justifies the risks that the few may bear. It is ethically wrong to hold back, through unnecessary fears, from bringing in those benefits, provided we have acted in the light of the knowledge we have. We only respond to what it is rational to respond to. The driving force borrows much from the Enlightenment project of man’s mastery over nature through our ability to reason, and from the optimistic notion of progress through scientific and technological advancement. The importance of this as an ethic in itself is often missed by many of those who subscribe to it. Indeed, it is close to being an inherent ethical principle.

The precautionary approach is also derived from experience, but it interprets that experience in a different way. It focusses on the fact that past events teach us that we can never be certain of the outcome of technology. Things nearly always go wrong that no one anticipated at the start. It reasons that we have only a limited right to subject society to these unknown, or at least ill-defined, risks and dangers. If these are big enough, we should try to anticipate them in legislation. In the limit, we should not take those risks at all unless and until we judge that we are able to. We should not wait until something has gone wrong, for which we then have real “scientific” data, because by then someone would already have suffered unnecessarily, when we could have seen what precautions to take first. It has resonances with post-modernism, rejecting any single integrating vision such as progress; and also with feminist and ecological perspectives, focussing on the connectedness of the way things are and espousing an ethic of co-operation rather than of intervening in order to dominate. It draws on Christian ethical insights, in the latter respect, but in several other ways also:

> in the sense of humility in recognising both the finiteness and the fallibility of humanity,
> in contrast to the hubris that has sometimes disfigured the human progress motif,
> in highlighting the plight of the groups most vulnerable to the particular risks,
> setting greater store by the need to protect them than by the broader benefits to society. [This goes to the heart of the age-old ethical problem of classical utilitarianism, which does not address adequately how to provide justice for the few.]
> in setting more value on the longer-term future than on the short-term present.

Conclusions

From this analysis we conclude that it is not that we have two regulatory approaches, one rational and the other irrational, but rather that both express different forms of rationality, and both are undergirded by ethical frameworks and principles, though of a very different nature, particularly over perceived priorities. The problem is then how to deal with the dilemma this presents for the contemporary debate on genetic engineering and the release of genetically modified organisms, and with the conflicting pressures to relax regulation or to consider ever more possibilities of hazards. We have sought to show that the way in which reactive risk regulation has been argued for, making claims for its scientific rationality, can be contested on other rational grounds, and that it can have its own ethical and political baggage attached to it. It is not that there is no place for it, but the argument for its universal application no longer holds. The terms of the debate have changed. Where there are indeed sound data for a well-established range of risks, the reactive approach to regulation will, of course, continue to play its role. But where there are not, the moral onus on a risk society as a whole implies a more precautionary approach, taking on board a much wider set of values, from a wider set of people, than has typically been the case up till now.

To address this wider consideration, what we feel is needed is a statutory national ethical commission on biotechnology, where potential new and radical technologies are made public at an early stage, and given interim intellectual property protection while opportunity is given for the public to lodge objections (or voice support) and to appeal. Care would be needed to maintain consistency and openness in the value judgements being made. Once a wealth of good experience is established for a technology, such a level of scrutiny would not be needed. But if something is not done to enable such a public debate, and to take seriously the ethical and value considerations being expressed, the controversies we see - in militant forms in some cases - will continue without any hope of resolution, and, in the long run, the bioindustry risks losing part of its public for the wrong reasons.
