November 22, 2012 by David Taylor
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. However, scientific endeavour is replete with argument, claim and counterclaim as data are acquired, a process that eventually leads to a consensus on their interpretation.
In some cases, though, policy makers cannot afford to wait until agreement has been reached before taking action. Consequently a “weight of evidence” approach is used to assess the merits of all the information produced, often over many years, by different scientists of varying abilities using a range of methodologies, in order to reach a consensus view that can inform policy. This is not necessarily easy, and it is important to evaluate all the relevant evidence dispassionately, an issue that has recently been the subject of a European Commission expert committee report by SCENIHR.
Unfortunately, in the last two decades there has been an increasing and unhelpful tendency for some stakeholders to “play the man and not the ball”. In other words, some people, when discussing scientific controversies, do not attempt to argue about the validity of the scientific data but instead seek to exclude data from the assessment by undermining confidence in the reputation of the scientist, e.g. by insinuating that the scientist is biased in favour of the organization funding the research. It is rare for any evidence of bias to be produced, the damage being done by the use of generalized innuendo.
Responsible scientists consider such criticisms to be without merit, unless evidence of unprofessional activity has been produced. This was recently emphasized at the conclusion of a conference on risk assessment organized jointly by the EFSA and DG SANCO: Professor Anthony Hardy, Chair of EFSA’s Scientific Committee, said: “It doesn’t matter who generates scientific evidence as long as it stands up to examination and meets accepted scientific criteria.” David Byrne, a former EU Commissioner for Health, added: “Science is the lifeblood of risk assessment. If the scientific method is sound the outcome is trusted.”
For those who wish to look at this in more detail, I have recently produced a briefing paper, “Weight of Evidence Assessments – Fraud, Bias and Incompetence” (Version 2.5), which discusses some of the things that need to be considered when evaluating data in the published literature.