Monday, December 12, 2016

THE POST-TRUTH ERA

The Oxford Dictionaries Word of the Year 2016 is post-truth.

Every year Oxford Dictionaries selects a word or expression that has "attracted a great deal of interest during the year to date"; for 2016, the selected word is post-truth. What is "post-truth"? "Rather than simply referring to the time after a specified situation or event – as in post-war or post-match – the prefix in post-truth has a meaning more like 'belonging to a time in which the specified concept has become unimportant or irrelevant'", explains Oxford Dictionaries. So, post-truth refers to an epoch, our own, in which truth has supposedly become irrelevant. This idea immediately raises two further questions.

The first one is an old question, which has been reverberating since that famous day in Jerusalem, during the Jewish Passover, when the fifth Roman prefect of Judaea, Pontius Pilate, replied to Jesus, "What is truth?" ("Quid est veritas?"). For centuries, scholars have debated the nature of truth, and even attempting to summarize this debate would sound arrogant. "If there is such a thing as truth – wrote I.B. Singer, concluding his admirable short story "A Crown of Feathers" – it is as intricate and hidden as a crown of feathers". Truth is never elsewhere – completely out of our reach – yet it is always a bit beyond ourselves. It is a horizon, which gives meaning and limit; like the horizon, it can never be grasped: when you move ahead, it moves ahead too, always with you, always away from you. Yet, the idea of a post-truth epoch does not imply any judgment about the question "what is truth?" – or even whether there is a truth – rather it implies that this very question has become totally irrelevant. Who really cares today "what is truth"? As Ralph Keyes put it, "In the post-truth era we don't just have truth and lies, but a third category of ambiguous statements that are not exactly the truth but fall short of a lie. Enhanced truth it might be called".

The second question stems from the first and is its obvious corollary. If truth is irrelevant, what is relevant then? In other words, what is "enhanced truth"? Enhanced truth – call it truth 3.0 – is narrative. People are not interested in truth but in stories. This is effectively demonstrated by a recent BuzzFeed News analysis which found that "top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined". False election stories spread by hoax sites generated 8,711,000 shares, reactions, and comments on Facebook, while news distributed by authoritative and verified sources generated a total of 7,367,000 shares, reactions, and comments. Researchers found that hyperpolarized and hyperpartisan information is more effective in delivering messages than neutral, fact-based information.

Yet, assuming that the "populace" confuses narrative with truth would be a tragic misunderstanding.  Such a misconception would reveal a snobbish, pre "post-truth", way of reasoning. People simply don't care – or care much lesser than in the past -  of truth. They enjoy stories, which are much more amusing, exciting, and meaningful. Do you remember Descartes' Meditation, "I shall consider that the heavens, the earth, colours, figures, sound, and all other external things are nought but… illusions and dreams… I shall consider myself as having no hands, no eyes, no flesh, no blood, nor any senses, yet falsely believing myself to possess all these things’?  Nice statement, isn't it? Male adolescents of the past, when they "discovered"  philosophy, often used this statement to impress their girl-friends, pretending looking "very profound". Then, when they had to date the girls, they became immediately oblivious of hyperbolic doubts, looking eagerly at their watch. Methodological skepticism cant' afford abandoning lecture halls,  in real life it unavoidably becomes  a parody.



"Fake news, and the proliferation of raw opinion that passes for news, is creating confusion, punching holes in what is true, causing a kind of fun-house effect that leaves the reader doubting everything, including real news". This is the point. We live in post-truth epoch, because we live in an epoch that has made skepticism and cynicism commonplace. Mass skepticism is the almost unavoidable consequence of information overload, which is due to the digital revolution. It looks like as though there were today no alternative but between skepticism and gullibility. Who would ever prefer to pass himself off as a gullible person? Much better looking skeptical.  Yet, notwithstanding global 3.0 skepticism, truth always takes its revenge. Fake news draw their strength from the seeds of truth that they unavoidably conceal to be trusted. No narrative is pleasant and convincing without a kernel of truth. 

This is the main lesson for those who work in public communication and try to debunk false messages. Always search for the kernel of truth concealed in the falsehood, and when you find it, address it first and effectively if you want to be trusted.


Thursday, November 24, 2016

Who is more "scientific"?

In January 2016, Mark Zuckerberg posted on his Facebook page a photo of himself holding his baby daughter with the caption "Doctor's visit – time for vaccines!" Zuckerberg's post ignited a lively discussion. Pro-vaccination and anti-vaccination people took the opportunity to comment and to turn on each other. Zuckerberg's post soon became an open online forum on vaccines. Overall, approximately 1,400 comments were posted. These comments – triggered by the same stimulus and hosted on the same Facebook page – represented a unique "natural" experiment on the rhetoric and sentiments involved in the vaccination debate.

A team of scholars from the University of New South Wales (UNSW) in Sydney, Australia, and La Sierra University in Riverside, California, analyzed the language of both parties using a text analysis program, the Linguistic Inquiry and Word Count (LIWC). Through the LIWC, researchers categorized words and sentences according to a set of psychological variables. The study – A comparison of language use in pro- and anti-vaccination comments in response to a high profile Facebook post – was published in the October issue of Vaccine, and its findings are quite interesting.
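
To give a concrete idea of what this kind of analysis does, here is a minimal, purely illustrative sketch in Python: it counts how often words from a few hand-made psychological categories appear in a comment. The mini-dictionary below is a hypothetical stand-in, not the actual (proprietary) LIWC lexicon, which covers dozens of categories and thousands of words.

    # Toy LIWC-style word counting (hypothetical mini-dictionary, not the real LIWC lexicon)
    from collections import Counter
    import re

    CATEGORIES = {
        "anxiety": {"worried", "afraid", "fear", "scared", "risk"},
        "science": {"study", "evidence", "research", "data", "trial"},
        "family":  {"child", "baby", "parent", "son", "daughter"},
    }

    def category_profile(text):
        """Return the share of words in the text that fall into each category."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for word in words:
            for category, vocabulary in CATEGORIES.items():
                if word in vocabulary:
                    counts[category] += 1
        total = max(len(words), 1)
        return {category: counts[category] / total for category in CATEGORIES}

    print(category_profile("I'm worried about my baby; is there any evidence of risk?"))
    # -> anxiety and family words dominate, science words are rare

Comparing such word-category profiles between pro- and anti-vaccination comments is, in essence, what the study did on a much larger and more rigorous scale.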

Its main conclusion concerns the degree of anxiety and emotional involvement shown by pro-vaccination comments. Rather counter-intuitively, people who supported vaccination were more prone to post emotional messages, poor in logical and scientific content. In comparison, anti-vaccination messages were more rational, more logically structured, and richer in scientific content. One of the authors noted, "skeptical comments (…) focus on health, biology, and research, they may be particularly compelling for parents who are uncertain about what decision to make about childhood vaccination and are seeking more information (…) This concerns us because the scientific evidence is very clear in demonstrating the safety and benefits of vaccines". Here is the paradox: pro-vaccination people defend their (scientifically grounded) point of view with emotional and non-scientific arguments, while anti-vaccination individuals defend their (anti-scientific) position with well-structured, logical, and apparently evidence-based discourse.
   
The researchers suggested that vaccination supporters defend their case poorly because they tend to become overzealous and do not master scientific arguments well enough. I partly disagree with this conclusion. To be sure, as the debate between pro-vaccination and anti-vaccination groups becomes more and more polarized, it is understandable that emotional arguments become prevalent among pro-vaccination people, but this does not explain the opposite process among anti-vaccination individuals, who seem to become more rational and less emotionally involved.


If there is something that this debate clearly demonstrates, it is that both parties tend to play the game of the other side. Anti-vaccination people pretend to be rational and scientific, while pro-vaccination people "discover" sentiment and try to evoke fear in their audience. I don't think this happens only for trivial or accidental reasons. I suspect instead that such a "mimetic fight" provides important clues about how scientific and health communication works in other circumstances as well. In brief, the discovery of this bizarre mirror game is likely to be more significant than the researchers suspected, and it deserves to be studied in more depth.

Monday, October 31, 2016

SCIENTIFIC ILLITERACY

Sci Ed is a PLOS blog devoted to "Diverse perspectives on science and medicine". Mike Klymkowsky, professor of Molecular, Cellular, and Developmental Biology at the University of Colorado Boulder, has recently posted on Sci Ed a very interesting article on science literacy, "Recognizing scientific literacy & illiteracy". Inspired by the National Academies report "Science Literacy: Concepts, Contexts, and Consequences", Professor Klymkowsky poses a provocative question: "can we recognize a scientifically illiterate person from what they write or say?". This is a somewhat mischievous question, because it implicitly suggests that distinguishing between scientifically literate and illiterate persons is not that easy.

What makes for scientific literacy is not what one knows, but how one knows it. Science – argues Mike Klymkowsky – is more a perspective than a specific body of knowledge. The article lists two main criteria for assessing scientific literacy. First, scientific literacy implies the capacity to understand scientific questions and to recognize what an adequate answer should contain (which is not, mind you, the "right answer" but the "right format of the answer").

Second, scientific literacy means the capacity "to recognize the limits of scientific knowledge; this includes an explicit recognition of the tentative nature of science, combined with the fact that some things are, theoretically, unknowable scientifically". Science is made of local perspectives; any perspective that aims to be universal and total cannot be scientific (which does not imply that it is wrong or false, but simply that it belongs to a different register).

Finally, Mike Klymkowsky addresses an important issue, namely "scientific illiteracy in the scientific community". Paradoxically, it is not rare for the scientific community itself to show forms of scientific illiteracy. How is that possible? Mike Klymkowsky thinks that it is chiefly due to the "highly competitive, litigious, and high stakes environment" in which most scientists operate. This situation often leads them to make claims that are overarching and self-serving. In other words, driven by an overly competitive environment, scientists tend to draw unjustified conclusions from their empirical findings in order to best market their results.

The article ends by posing the question of "how to best help our students avoid scientific illiteracy". The conclusion is that there is no clear answer to this question, except to try to establish "a culture of Socratic discourse (as opposed to posturing)". Such a culture could be summarized – according to the author – as an ongoing attempt to understand "what a person is saying, what empirical data and assumptions it is based on, and what does it imply and or predict".

Curiosity and ongoing inquiry can help to prevent scientific illiteracy, yet there are two other aspects of the Socratic approach that are even more essential to scientific discourse. They are self-irony and a sense of transcendence. These two elements are strictly interlaced, because they are both rooted in the deep conviction that truth is always a bit beyond human reach. Socrates is not a relativist – as some commentators have erroneously argued – rather, he is aware that humans can approach truth only asymptotically. This awareness prevents any form of scientific arrogance, the real origin of scientific illiteracy.

Thursday, October 13, 2016

Mandatory Vaccinations

In the October 6 issue of The New York Times, Christopher Mele devotes an interesting article to risk communication in emergencies. In a nutshell, his argument is that – if one aims to communicate risks – one also needs to evoke fear. Mele's point of departure is the recent evacuation of Florida, Georgia, North Carolina and South Carolina residents due to Hurricane Matthew. He reports that "even after all of the best practices in emergency communications are exhausted, 5 percent of the population will most likely remain in harm's way, experts and researchers said". Actually, this figure is likely to be over-optimistic: during Hurricane Sandy in 2012, for instance, 49% of coastal residents who were under mandatory evacuation did not evacuate. In 2014, a team from Rutgers, the State University of New Jersey, led by Dr. Cara Cuite, carried out a study on "Best Practices in Coastal Storm Risk Communication", concluding that effective communication should "stress the possibility that people who do not evacuate could be killed". This is better done with implicit messages than with direct, explicit ones. For instance, if authorities ask people who do not evacuate to fill out a form on how to notify their next of kin, they communicate very effectively the actual possibility "that people who do not evacuate could be killed", without the need to warn them explicitly.

Another important lesson concerns semantics, namely the specific words chosen to communicate. In most cases, mandatory evacuation is excluded, since there is no way to enforce it. Yet experts know very well that "a voluntary evacuation will have a lower rate of compliance than one labeled mandatory". It is then critical to avoid the expression "voluntary evacuation" and to "make it clear that residents are being ordered to leave, even if no one is going to remove them forcibly from their homes".

Two general rules concerning risk communication can be drawn from Mele’s arguments, and they could also be adopted in other situations, notably outbreaks.


First, in contrast to the standard risk communication account, one should focus on emotional responses rather than on mere rationality. Risk communicators often aim to raise awareness and to provide the public with information, which is in principle laudable and would be effective in an ideal world ruled only by rational choices. Unfortunately, people very rarely make choices on a rational basis, even when they pretend to. As a matter of fact, in real life "pure rationality" does not exist; it is a fictional concept. Mental processes are an inextricable mix of logical arguments, emotional reactions, implicit and explicit memories, automatisms, conscious and unconscious processes. Very rarely – if ever – does an action follow a rational decision; more often the so-called "rational decision" is a post-hoc rationalization, used to justify decisions made in more or less irrational ways. There is little one can do to prevent this mechanism, notably in emergencies, when people are asked to make quick and momentous decisions. Among emotions, fear plays a pivotal role as one of the basic emotions that drive human behavior. There are two opposite mistakes one can make in risk communication: over-stimulating fear, but also over-reassuring people. Fear must be fine-tuned.

Second, if being able to deal with emotions is critical in risk communication, then two variables become paramount: timing and words. Timing is essential because human emotions fluctuate continuously within each individual and change over time. The same message can evoke completely different reactions according to the emotional context of the receiver, and consequently a message can have very different effects depending on the moment in which it is delivered. There is no such thing as "the right message"; rather, there is "the right message at the right moment". Words are also very important. I say "words" and not "contents", because I am referring to the very terms used rather than to the concepts underlying them. Words unavoidably evoke specific emotional reactions, which are – be careful – culture-bound and context-dependent (that is, one should avoid the mistake of thinking that the same words evoke the same reactions always, everywhere, and in everybody).

The word "mandatory" is a good example. At least in our society, if something is "mandatory", for most people it is also important, while, if it is "voluntary", it is not (or less so). So labeling something as "mandatory" does not necessarily imply that one is going to enforce it compulsorily. The term "mandatory" can also be used to convey the importance of an action. This is well illustrated by the policy – wrong in communication terms – of making most vaccinations "voluntary". To be sure, in democratic societies it is largely unthinkable to vaccinate people, and notably children, compulsorily; yet the issue at stake is only partly the balance between voluntariness, persuasion, soft coercion, and compulsion. The words chosen by regulators also communicate the importance of a public measure. Health authorities and policy makers should pay more attention to the communicational implications of wording, even when choices seem to concern purely technical and normative aspects.

Friday, September 30, 2016

Vaccines and Alternative Medicine

The State of Vaccine Confidence 2016: Global Insights Through a 67-Country Survey is the title of a study just published in EBioMedicine. A team from the London School of Hygiene & Tropical Medicine interviewed 65,819 respondents across 67 countries about their attitudes towards vaccines. The study is "the largest survey on confidence in immunization to date".
Researchers submitted four statements to their sample, asking people to rate on a Likert scale their degree of agreement or disagreement. The four statements were:
1) "Vaccines are important for children to have."
2) "Overall I think vaccines are safe."
3) "Overall I think vaccines are effective."
4) "Vaccines are compatible with my religious beliefs."

The majority of interviewees thought that vaccines are important for children but, rather contradictorily, they showed lower confidence in vaccine effectiveness and, above all, in safety. Everywhere, education increased confidence in vaccine importance and effectiveness, but not in safety. Religion did not play a major role in either vaccine acceptance or hesitancy, with the exception of Mongolia, where 50.5% of respondents said vaccines were not compatible with their religion (Buddhism), which is rather odd considering that other Buddhist countries were aligned with the average results (around 8-10% of people considering vaccines hardly compatible with their religion).
Interestingly enough, European countries showed the lowest confidence in vaccine safety, with France the least confident: 41% of respondents in France disagreed with the assertion that vaccines are safe (on average, 12% of respondents in other nations disagreed with this statement). The authors noted that "France recently has experienced 'anxiety' about suspected but unproven links between the hepatitis B vaccine and multiple sclerosis and, separately, the human papillomavirus vaccine and side effects like fatigue in girls". This element is certainly important, yet it hardly explains the findings.
To make the survey more meaningful, I would suggest comparing it with figures on the prevalence of Complementary and Alternative Medicine (CAM). Although in economic terms the largest CAM market is still the US, the fastest-growing market is the European one. Moreover, there is a significant difference between the US and EU markets: while in the US the lion's share is largely taken by chiropractic, in the EU homeopathic and herbal remedies account for the largest part of the market. Homeopathy is particularly popular in France, where it is the leading alternative therapy. The cost of homeopathic products is partially covered by the French National Health System, and the percentage of the French population habitually or sporadically using homeopathy grew from 16% (1982) to 62% (2004). This is mirrored by the attitude of health care professionals. Homeopathy is taught in all major French medical schools and in schools of pharmacy, dentistry, veterinary medicine, and midwifery. According to a 2004 survey, 95% of GPs, dermatologists, and pediatricians consider homeopathy effective and are willing to prescribe it, or co-prescribe it with conventional medicine. Another survey showed that 94.5% of French pharmacists advise pregnant women to prefer homeopathic products because they are "safer".


Concerns about the safety of medical products are thus wider than vaccine hesitancy, and vaccine hesitancy is probably only the tip of an iceberg. Further research is certainly required to better understand the social, psychological, and economic dynamics that underlie this phenomenon. Yet one element is already self-evident: appealing to scientific arguments to convince people to vaccinate is a pure waste of time if, at the same time, the whole social fabric welcomes pseudo-scientific practices among recognized medical treatments.

Monday, September 12, 2016

Selective Gullibility

PLOS Currents Outbreaks has just published a study, "Lessons from Ebola: Sources of Outbreak Information and the Associated Impact on UC Irvine and Ohio University College Students", authored by a team of researchers from the University of California, Irvine, led by Miryha G. Runnerstrom. The authors carried out an online survey of 797 undergraduates at the University of California, Irvine (UCI) and Ohio University (OU) during the peak of the 2014 Ebola (EVD) outbreak. The researchers aimed to identify the main sources of information about the outbreak and their impact in four areas: knowledge, attitudes, beliefs, and stigma.

The results are rather interesting. Students' main sources of information were news media (34%) and social media (19%). As one might expect, only a few students searched for information on official government and public health institution websites. However, this small minority (11%) was better informed and had more positive attitudes towards those infected. The authors conclude that "information sources are likely to influence students’ knowledge, attitudes, beliefs, and stigma relating to EVD", which is undoubtedly true, but it is more a truism than an actual conclusion. Actually, the study tells us something more.

There are at least three thought-provoking elements that emerge from this survey. The first one concerns risk perception. The large majority of participants (81%) perceived a low personal risk of contracting the infection. They were definitely right: the total number of Ebola cases in the US in 2014 was 11, which means that the theoretical risk for an individual in the US of being infected by Ebola was about 11 in 319,000,000 – even lower than the risk of being killed by a meteorite, calculated in 2014 by earth sciences professor Stephen A. Nelson to be 1 in 1,600,000. Yet, during the 2014 Ebola outbreak, alarm spread all over the US, and public health communicators openly spoke of hysterical media coverage. Eric Boehlert, an influential blogger and writer for Media Matters for America, wrote, "It's almost like they're crossing their fingers for an outbreak (…) CNN actually invited onto the network a fiction writer who wrote an Ebola thriller in the 1980s to hype unsubstantiated fears about the transmission of the virus. CNN's Ashleigh Banfield speculated that "All ISIS would need to do is send a few of its suicide killers into an Ebola-affected zones and then get them on some mass transit, somewhere where they would need to be to affect the most damage." And colleague Don Lemon lamented that government officials seemed "too confident" they can contain the Ebola scare". This is the first interesting element: notwithstanding the media hype, most students did not perceive a real health risk to themselves.
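
A quick back-of-the-envelope check of this comparison, using only the figures cited above (and keeping in mind that it sets a one-year outbreak count against a lifetime meteorite estimate, so it is purely illustrative):

    # Back-of-the-envelope comparison of the two risks cited above
    ebola_cases_us_2014 = 11
    us_population = 319_000_000

    ebola_risk = ebola_cases_us_2014 / us_population   # ~3.4e-8
    meteorite_risk = 1 / 1_600_000                     # Nelson's 2014 estimate

    print(f"Ebola (US, 2014): about 1 in {round(1 / ebola_risk):,}")              # ~1 in 29,000,000
    print("Meteorite: 1 in 1,600,000")
    print(f"The meteorite figure is ~{meteorite_risk / ebola_risk:.0f} times higher")  # ~18x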

The second interesting element concerns knowledge. The researchers complain that half of the sample (51%) showed poor or inexact knowledge of the disease, its means of transmission, its contagiousness, and its symptoms. I would argue differently. It seems to me that – considering the inaccurate and overemphatic media coverage – the fact that 49% of participants had a sufficiently correct knowledge of the infection and its dynamics is a pleasant surprise. It confirms that sensationalist information reaches its target but does not necessarily penetrate it. This immediately raises a question: why does this happen? What are the main variables at stake in determining the outcome of a sensationalist information campaign?



The third element that emerges from this study provides a clue to answering this question. It concerns misinformation. Asked whether they agreed with the statement "Ebola is a government conspiracy created to get rid of a particular race", 89% of students answered that they disagreed. Yet approximately one third of all participants thought that "There is a cure for Ebola but the government is keeping it from the public". This answer is particularly astonishing considering that they were university students, that is to say, a population that should in principle possess the intellectual means to debunk trivial conspiracy theories. What does it mean? It probably means that distrust of politicians is at such a level that people look for any excuse to accuse them. In other words, this is a phenomenon that has little to do with public health information. As a general rule, a piece of information becomes credible, ceteris paribus, to the extent that it meets people's expectations and imagination. If people wish to see evidence of politicians' dishonesty, any piece of information cooked with this ingredient immediately becomes palatable. But this does not imply that other pieces of information are passively accepted.

This study confirms the universal tendency towards selective gullibility. We are always ready to believe what confirms our beliefs and narratives, especially our biases, even when it appears incredible. Ironically enough, cynics are the most prone to this peculiar form of self-deception.

Thursday, September 1, 2016

Science Communication and The Impostor Syndrome

"Why scientists are losing the fight to communicate science to the public" is the title chosen by Richard Grant for his editorial on science communication, published on The Guardian of August 23.  At the same time, Pediatrics, the official journal of the American Academy of Pediatrics, published the results of a US wide survey among pediatricians focusing on parents' refusal to vaccinate their children. The surveyed population was a population already surveyed in 2006, so allowing a temporal comparison.

The results are appalling: in a decade, the proportion of pediatricians reporting parental vaccine refusals increased from 74.5% to 87.0%. Pediatricians reported that parents are increasingly refusing vaccinations because they believe them to be unnecessary; moreover, 75.0% of pediatricians reported that parents delay vaccines because of concern about discomfort, and 72.5% indicated that parents delay because of concern about the burden on the immune system. In brief, not only has a decade of public health communication aimed at increasing vaccine acceptance failed to achieve this goal – at least in the US – but there is even the suspicion that it has been counterproductive.

Richard Grant mentions two main reasons why scientists are losing the fight to communicate science: first, scientists tell people how to live their lives, and second, they don't listen to people. Commenting on Grant's article, Ian Mackay, in his blog Virology Down Under, adds a third, maybe deeper, reason: scientists are usually "supercilious". Of course, among scientists there are people of all kinds: disdainful and humble, arrogant and unpretentious. That's right, but what Mackay means is a bit subtler than a naïve psychological profile of the average scientist. Mackay argues that "imposter syndrome for an academic can drive the need to sound just as supercilious as our peers". The "impostor syndrome" is a term that describes the peculiar condition of successful individuals who, notwithstanding their successes and despite their true competence, fear being exposed as impostors. Although they are truly high-achieving, they have the inner sensation of being deceivers. In other words, according to Mackay, scientists are arrogant insofar as they need to reassure themselves. Arrogance is their way of saying, "I'm part of the (scientific) tribe, I'm not a fraud".

Mackay's argument is sound and, as a psychiatrist, I tend to agree with him. Yet I would like to go a bit further. During my clinical practice, I happened to meet patients suffering from delusional beliefs. In a dialogue with someone who is suffering from delusions, one can try to keep the conversation away from delusional contents, at least during the initial interview and the early treatment. Yet this strategy cannot be followed forever; sooner or later the moment arrives when one has to discuss the patient's delusions. When this occurs, it is not rare for the patient to pose a simple question: "do you believe me?" We all know that questions are never simple questions. Yet questions are also simple questions, and psychiatrists should avoid playing the old, ridiculous game of interpreting questions or answering with another question. Answering questions is a matter of professional honesty and respect for the patient.

When a delusional patient asked me whether I believed his narrative, I used to reply, "No, I think you are wrong. That said, I'm aware that there are more things in heaven and earth than are dreamt of in my science. So, at least inside this room, I'm suspending any definitive judgment and I will listen to you without too many preconceptions. I will accept the, theoretical, possibility that you are right". Was it only a rhetorical expedient? No, it wasn't. Of course, there was a lot of rhetoric in my statement, yet it would not have worked if it had been only rhetoric. At the end of the day, either the psychiatrist truly accepts the risk of becoming insane with his patient, or he will be forever precluded from really understanding the patient's inner world.

Similarly, scientists involved in science communication should sincerely accept the possibility that they are wrong; this is the only hope they have of changing people's minds. Is there any prerequisite for adopting this strategy? Yes, there is. You should feel solid enough in your knowledge and your worldview to accept a true, actual challenge. After all, this is the implicit, reassuring message that you give to the other person when you accept their extreme challenge.

So, arrogance is not only the reason why scientists often dictate rules and don't listen to people; it is also a sign that they are not as confident in "their" science as they pretend to be. Is this only due to the "impostor syndrome"?

Wednesday, August 24, 2016

Many a mile comes plague

August ends with bad news. Twelve Greek municipalities - Farkadona, Trikala, Palamas, Tempe, Achaean and Thebes (central Greece); Evrotas and Andravida-Kyllini (Peloponnesus); Chalcis (Euboea); Marathon (Attica); and Lagada and Pylaia (Thessaloniki region) - have banned blood donations because of malaria.

Greece was declared free of malaria by the World Health Organization in 1974. From 1974 to 2010, an average of 39 cases per year – mainly imported – were reported. In 2016, 65 new cases of malaria have been detected. Most of them (50) concern immigrants coming from the Indian subcontinent or African states. Eleven cases involve Greek citizens returning from malaria-affected countries. Yet there are also four domestic cases. Imported cases are not, per se, worrisome (e.g., the UK had 1,400 cases of malaria in 2015, all imported). Domestic cases, on the contrary, imply the actual presence of the malaria parasite in a given region. Greek and international newspapers commented that malaria's return – after 40 years – is probably due to the fact that the Greek public authorities can no longer afford mosquito-spraying programs. In other words, malaria should be counted among the deleterious consequences of the eurozone crisis. Some media also suggested that malaria was probably brought in by the large influx of migrants who have entered the country in recent years. "Many a mile comes plague", to quote Shelley.

No doubt, poor economic conditions are likely to be one of the major causes of the reemergence of malaria in Greece. The suspension of mosquito-spraying programs is likely to have had a negative impact, together with an overall deterioration of public health and social wellbeing. It is also probable that the malaria parasite was reintroduced by immigrants (although genotypic data would be necessary to state this with certainty). There is, however, an important piece of information that most commentators omit to mention.

Most municipal spraying schemes to combat mosquito-borne diseases were cut back in early 2010, and the current figures are not the worst on record. The Hellenic Center for Disease Control and Prevention reported 96 malaria cases (54 imported, 42 domestic) in 2011; 93 (73 imported, 20 domestic) in 2012; 25 (22 imported, 3 domestic) in 2013; 38 (all imported) in 2014; and 85 (79 imported, 6 domestic) in 2015. Where, then, is the rationale for the current emergency? If anything, a crisis should have been declared during the 2011 outbreak. Instead, one had to wait until 2016 (4 cases of domestic transmission against 42 in 2011) to see a health crisis officially declared and blood donations suspended in twelve municipalities. Rather bizarre, isn't it?

Would it be too mischievous to suspect that the malaria emergency is being declared now also because of the refugee crisis? A health emergency attributed to refugees is probably one of the cards played in the complex negotiations between the European Union and Turkey, which directly involve the Greek borders.

One of the main problems with risk communication in public health and, notably, in outbreaks is that politicians can hardly resist the temptation to use communicable diseases as political weapons, to blackmail other countries and domestic political opponents. To be sure, one is not morally entitled to call it "bioterrorism"; nevertheless, it would be worth coining an appropriate word to stigmatize such a vile habit.

Monday, August 15, 2016

Crying Wolf

The Olympic Games have started. After more than a week of competition, Zika infection no longer seems to be a major concern for anyone. Yet only a few weeks ago, Zika seemed to threaten the very existence of the Olympic Games. Distinguished epidemiologists and public health experts even suggested postponing or relocating the Games, and some athletes announced that they would not participate because of the risk of being infected. Other athletes decided to freeze their sperm as a precautionary measure. The entire world was looking at the Olympic Games in Rio with trepidation and alarm.

The pendulum started to swing back in June, when the World Health Organization finally realized that August is midwinter in Rio, that is to say, it is not the mosquito season. So, on June 14 the WHO declared "that there is a very low risk of further international spread of Zika virus as a result of the Olympic and Paralympic Games as Brazil will be hosting the Games during the Brazilian winter when the intensity of autochthonous transmission of arboviruses, such as dengue and Zika viruses, will be minimal and is intensifying vector-control measures in and around the venues for the Games which should further reduce the risk of transmission". This official statement did not revoke the alert but merely downgraded the emergency level. As a matter of fact, the WHO still recommended that "countries with travelers to and from the Olympic and Paralympic Games should ensure that those travelers are fully informed on the risks of Zika virus infection, the personal protective measures that should be taken to reduce those risks, and the action that they should take if they suspect they have been infected. Countries should also establish protocols for managing returning travelers with Zika virus infection based on WHO guidance". Were these recommendations truly necessary? Actually, on June 9, the European Centre for Disease Prevention and Control (ECDC) had already circulated an Olympics risk assessment stating something rather different from the WHO declaration. According to the ECDC, gastrointestinal infections were by far the main risk for travelers to the Olympic Games, while the risks related to Zika infection were considered almost negligible. The ECDC's approach was confirmed by the US CDC – on July 13 – in its own 2016 Olympic and Paralympic Games risk assessment, which concluded that the worst-case scenario of Zika contagion would likely lead to a small number of cases, if any, in only 4 of the 206 participating countries and that, consequently, "attendance at the Games does not pose a unique or substantive risk for mosquito-borne transmission of Zika virus in excess of that posed by non-Games travel".

Finally, on July 26 – only nine days before the opening of the Games – the Annals of Internal Medicine published a model that drastically revised the original catastrophic predictions. Considering, inter alia, that Rio is far from the epicenter of the Zika outbreak, the study argued that the 2016 Olympics definitely represented a low-risk event for Zika infection, disease, and transmission. The authors wrote, "Our calculation provides worst-case estimates of travel-associated Zika risk by assuming that visitors encounter the same infectious exposures as local residents. Under these pessimistic conditions, we estimate that an individual traveler's probability of acquiring infection in Rio de Janeiro ranges from 1 in 56 300 to 1 in 6200".


Should we have waited for the calculation provided by this new model? Probably not. Already in July 2015, The Lancet Infectious Diseases had published a study devoted to dengue transmission during the 2014 FIFA World Cup in Brazil. Dengue is a disease very close to Zika, and it is transmitted by the same mosquito that transmits Zika. Brazil has one of the highest rates of dengue infection in the world, and in July 2014 the Football World Cup was held in Rio. Just as today, the public health authorities were worried about the risk of an outbreak; yet of the million foreign tourists who went to Brazil for the sporting event, only three contracted dengue (and none of them in Rio!). It would have been enough to project these data onto the current Zika outbreak to understand that the Zika risks were negligible.

Actually, extrapolating the dengue data from the 2014 FIFA World Cup to the Zika situation at the 2016 Olympic Games, the worst-case scenario is 3.2 Zika infections per 100,000 tourists, while the much more likely scenario is 1.8 cases per million tourists. Indeed, on 10 May 2016, John McConnell, editor of The Lancet Infectious Diseases, wrote, "unless new data emerge before August, we can say that compared with the risks usually associated with travel, such as gastrointestinal infections (on which we have written previously), traffic accidents, and falls, Zika virus represents a minimal threat to games visitors". Should public authorities have waited until August 2016 to reach the same conclusions? Couldn't they have learned from dengue transmission? Is it really possible that as late as June 2016 the WHO was still seriously warning tourists who were about to travel to Brazil for the Olympic Games?
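
To put those extrapolated figures on the same "one in N" scale used by the Annals model, a line or two of arithmetic is enough (only the numbers quoted above are used here):

    # Convert the extrapolated rates into "1 in N" form and compare with the Annals range
    worst_case_per_100k = 3.2      # extrapolated worst case: infections per 100,000 tourists
    likely_per_million = 1.8       # extrapolated likely scenario: infections per million tourists

    print(f"Worst case: about 1 in {round(100_000 / worst_case_per_100k):,} tourists")   # ~1 in 31,250
    print(f"Likely:     about 1 in {round(1_000_000 / likely_per_million):,} tourists")  # ~1 in 555,556

    # Annals of Internal Medicine worst-case range: 1 in 56,300 to 1 in 6,200 per traveler
    annals_low, annals_high = 1 / 56_300, 1 / 6_200
    print(f"Annals range: {annals_low:.1e} to {annals_high:.1e} per traveler")

The extrapolated worst case (about 3.2 per 100,000, i.e. roughly 1 in 31,000) falls within the Annals worst-case range, which makes the point all the more striking: the information needed to reach the same conclusion was available well before the Games.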

Since the SARS outbreak in 2003, scientists, public health authorities, and the media have been systematically overestimating the risk of emerging epidemics. Zika and the 2016 Olympic Games are just the latest episode in an ongoing saga of risk assessment and communication errors. Given the recurrent nature of these episodes, it is likely that the errors are systematic in nature. It is more and more urgent to understand their origin in order to prevent them. Crying wolf is not only the worst way to communicate risks; it also paves the way for catastrophic failures when risks eventually materialize.