Thursday, November 24, 2016

Who is more "scientific"?

In January 2016, Mark Zuckerberg posted on his Facebook page a photo of himself holding his baby daughter with the caption “Doctor’s visit – time for vaccines!” Zuckerberg's post ignited a lively discussion. Pro-vaccination and anti-vaccination people took the opportunity to comment and to turn on each other. Zuckerberg's post soon became an open online forum on vaccines. Overall, approximately 1,400 comments were posted. These comments, triggered by the same stimulus and hosted on the same Facebook page, represented a unique "natural" experiment on the rhetoric and sentiments involved in the vaccination debate.

A team of scholars from the University of New South Wales (UNSW) in Sydney, Australia, and La Sierra University in Riverside, California, analyzed the language of both parties using a text analysis program, the Linguistic Inquiry and Word Count (LIWC). Through the LIWC, the researchers categorized words and sentences according to psychological variables. The study, "A comparison of language use in pro- and anti-vaccination comments in response to a high profile Facebook post", was published in the October issue of Vaccine, and its findings are quite interesting.
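LIWC itself is a proprietary tool, but the dictionary-based word-counting idea behind it can be sketched in a few lines. This is a minimal illustration only: the category word lists below are invented for the example and are not the actual LIWC lexicon.

```python
# Minimal sketch of dictionary-based text analysis in the spirit of LIWC.
# The category lexicons here are invented for illustration, NOT the real LIWC dictionary.
import re
from collections import Counter

CATEGORIES = {
    "anxiety":   {"worried", "fear", "afraid", "risk", "scared"},
    "cognitive": {"because", "evidence", "think", "therefore", "study"},
}

def categorize(text):
    """Return, for each category, the share of words matching its word list."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, lexicon in CATEGORIES.items():
            if w in lexicon:
                counts[cat] += 1
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: counts[cat] / total for cat in CATEGORIES}

scores = categorize("I think vaccines work because the evidence is clear.")
```

Real LIWC output is similar in spirit: for each comment, a percentage of words falling into each psychological category, which the researchers then compared across the pro- and anti-vaccination groups.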

Its main conclusion concerns the degree of anxiety and emotional involvement shown in pro-vaccination comments. Rather counter-intuitively, people who supported vaccination were more prone to post emotional messages, poor in logic and scientific content. By comparison, anti-vaccination messages were more rational, more logically structured, and richer in scientific content. One of the authors noted: “skeptical comments (…) focus on health, biology, and research, they may be particularly compelling for parents who are uncertain about what decision to make about childhood vaccination and are seeking more information (…) This concerns us because the scientific evidence is very clear in demonstrating the safety and benefits of vaccines". Here is the paradox: pro-vaccination people defend their (scientifically grounded) point of view with emotional and non-scientific arguments, while anti-vaccination individuals defend their (anti-scientific) position with well-structured, logical, and apparently evidence-based discourse.
   
The researchers commented that vaccination supporters are ill-equipped to defend their position because they tend to become overzealous and are not capable of mastering scientific arguments. I partly disagree with this conclusion. To be sure, as the debate between pro-vaccination and anti-vaccination groups becomes ever more polarized, it is understandable that emotional arguments become prevalent among pro-vaccination people, but this does not explain the opposite process among anti-vaccination individuals, who seem to become more rational and less emotionally involved.


If there is something that this debate clearly demonstrates, it is that both parties tend to play the other side's game. Anti-vaccination people pretend to be rational and scientific, while pro-vaccination people "discover" sentiment and try to evoke fear in their audience. I don't think this happens for trivial or accidental reasons. I suspect instead that such a "mimetic fight" provides important clues about how scientific and health communication could work in other circumstances as well. In short, the discovery of this bizarre mirror game is likely to be more significant than the researchers suspected, and it deserves to be studied in more depth.

Monday, October 31, 2016

SCIENTIFIC ILLITERACY

Sci Ed is a PLOS Blog devoted to "Diverse perspectives on science and medicine". Mike Klymkowsky, professor of Molecular, Cellular, and Developmental Biology at the University of Colorado Boulder, has recently posted on Sci Ed a very interesting article dedicated to science literacy, "Recognizing scientific literacy & illiteracy". Inspired by a National Academies report, “Science Literacy: concepts, contexts, and consequences”, Professor Klymkowsky poses a provocative question: "can we recognize a scientifically illiterate person from what they write or say?". This is a somewhat mischievous question, because it implicitly suggests that the distinction between scientifically literate and illiterate persons is not that easy to draw.

What makes scientific literacy is not what one knows, but how one knows it. Science, argues Mike Klymkowsky, is more a perspective than a specific body of knowledge. The article lists two main criteria for assessing scientific literacy. First, scientific literacy implies the capacity to understand scientific questions and to recognize what an adequate answer should contain (which is, note well, not the "right answer" but the "right format of the answer").

Second, scientific literacy means the capacity "to recognize the limits of scientific knowledge; this includes an explicit recognition of the tentative nature of science, combined with the fact that some things are, theoretically, unknowable scientifically". Science is made of local perspectives; any perspective that claims to be universal and total cannot be scientific (which does not imply that it is wrong or false, but simply that it belongs to a different register).

Finally, Mike Klymkowsky addresses an important issue, namely "scientific illiteracy in the scientific community". Paradoxically enough, it is not rare for the scientific community itself to show forms of scientific illiteracy. How is this possible? Mike Klymkowsky thinks it is chiefly due to the "highly competitive, litigious, and high stakes environment" in which most scientists operate. This situation often leads them to make claims that are over-arching and self-serving. In other words, driven by an overly competitive environment, scientists tend to draw unjustified conclusions from their empirical findings in order to best market their results.

The article ends by posing the question of "how to best help our students avoid scientific illiteracy". The conclusion is that there is no clear answer to this question beyond trying to establish "a culture of Socratic discourse (as opposed to posturing)". Such a culture could be summarized, according to the author, as an ongoing attempt to understand "what a person is saying, what empirical data and assumptions it is based on, and what does it imply and or predict".

Curiosity and ongoing inquiry can help prevent scientific illiteracy, yet there are two other aspects of the Socratic approach that are even more essential to scientific discourse: self-irony and a sense of transcendence. These two elements are strictly interlaced, because both are rooted in the deep conviction that truth is always a bit beyond human reach. Socrates is not a relativist, as some commentators have erroneously argued; rather, he is aware that humans can approach truth only asymptotically. This awareness prevents any form of scientific arrogance, the real origin of scientific illiteracy.

Thursday, October 13, 2016

Mandatory Vaccinations

In the Oct 6 issue of the New York Times, Christopher Mele devotes an interesting article to risk communication in emergencies. In a nutshell, his argument is that, if one aims to communicate risks, one also needs to evoke fear. Mele’s point of departure is the recent evacuation of Florida, Georgia, North Carolina and South Carolina residents due to Hurricane Matthew. He reports that “even after all of the best practices in emergency communications are exhausted, 5 percent of the population will most likely remain in harm’s way, experts and researchers said”. Actually, this figure is likely to be over-optimistic: during Hurricane Sandy in 2012, 49% of coastal residents who were under mandatory evacuation did not evacuate.

In 2014, a team from Rutgers, the State University of New Jersey, led by Dr. Cara Cuite, carried out a study on “Best Practices in Coastal Storm Risk Communication”, concluding that effective communication should “stress the possibility that people who do not evacuate could be killed”. This is better done with implicit messages rather than direct, explicit ones. For instance, if authorities ask people who do not evacuate to fill out a form on how to notify their next of kin, they communicate very effectively the actual possibility “that people who do not evacuate could be killed” without the need to warn them explicitly.

Another important lesson concerns semantics, that is, the specific words chosen to communicate. In most cases, mandatory evacuation is excluded, since there is no way to enforce it. Yet experts know very well that “a voluntary evacuation will have a lower rate of compliance than one labeled mandatory”. It is therefore critical to avoid the expression “voluntary evacuation” and to “make it clear that residents are being ordered to leave, even if no one is going to remove them forcibly from their homes”.

It is possible to elicit from Mele’s arguments two general rules concerning risk communication, which could be adopted also in other situations, notably in outbreaks. 


First, in contrast to the standard risk communication account, one should focus on emotional responses rather than on mere rationality. Risk communicators often aim to raise awareness and to provide the public with information, which is in principle laudable and would be effective in an ideal world ruled only by rational choices. Unfortunately, people very rarely make choices on a rational basis, even when they pretend to. As a matter of fact, in real life “pure rationality” does not exist; it is a fictional concept. Mental processes are an inextricable mix of logical arguments, emotional reactions, implicit and explicit memories, automatisms, and conscious and unconscious processes. Very rarely, if ever, does an action follow a rational decision; more often the so-called “rational decision” is a post-hoc rationalization, used to justify decisions made in more or less irrational ways. There is little one can do to prevent this mechanism, notably in emergencies, when people are asked to make quick and momentous decisions. Among emotions, fear plays a pivotal role as one of the basic emotions that drive human behavior. There are two opposite mistakes one can make in risk communication: overstimulating fear and over-reassuring people. Fear must be fine-tuned.

Second, if the ability to deal with emotions is critical in risk communication, then two variables become paramount: timing and words. Timing is essential because human emotions fluctuate continuously within each individual and change over time. The same message can evoke completely different reactions according to the emotional context of the receiver, and consequently a message can have very different effects depending on the moment in which it is delivered. There is no such thing as “the right message”; rather, there is “the right message at the right moment”.

Words are also very important. I say “words” and not “contents”, because I am referring to the very terms used rather than to the concepts underlying them. Words unavoidably evoke specific emotional reactions, which are, be careful, culture-bound and context-dependent (that is, one should avoid the mistake of thinking that the same words evoke the same reactions always, everywhere, and in everybody). The word “mandatory” is a good example. At least in our society, if something is “mandatory”, for most people it is also important, while if it is “voluntary”, it is not (or less so). So labeling something as “mandatory” does not necessarily imply that one is going to enforce it compulsorily. The term “mandatory” can also be used to convey the importance of an action. This is well illustrated by the (communicationally) wrong policy of making most vaccinations “voluntary”. To be sure, in democratic societies it is largely unthinkable to vaccinate people, and notably children, compulsorily; yet the issue at stake is only in part the balance between voluntariness, persuasion, soft coercion, and compulsion. The words chosen by regulators also communicate the importance of a public measure. Health authorities and policy makers should pay more attention to the communicational implications of wording, even when choices seem to concern purely technical and normative aspects.

Friday, September 30, 2016

Vaccines and Alternative Medicine

The State of Vaccine Confidence 2016: Global Insights Through a 67-Country Survey is the title of a study just published in EBioMedicine. A team from the London School of Hygiene & Tropical Medicine interviewed 65,819 respondents across 67 countries about their attitudes towards vaccines. The study is “the largest survey on confidence in immunization to date”.
The researchers submitted four statements to their sample, asking people to specify their degree of agreement or disagreement on a Likert scale. The four statements were:
1) "Vaccines are important for children to have."
2) "Overall I think vaccines are safe."
3) "Overall I think vaccines are effective."
4) "Vaccines are compatible with my religious beliefs."
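Tallying this kind of survey is straightforward. A minimal sketch, assuming a 5-point Likert coding where 1-2 means disagreement (the response data below is made up for illustration, not taken from the study):

```python
# Sketch: share of respondents who disagree with each survey statement,
# assuming a 5-point Likert coding (1-2 = disagree, 4-5 = agree).
# The responses below are invented for illustration.
from collections import defaultdict

responses = [  # (statement_id, likert_score)
    (1, 5), (1, 4), (1, 2),
    (2, 1), (2, 2), (2, 4),
]

def disagreement_rates(data):
    totals, disagree = defaultdict(int), defaultdict(int)
    for stmt, score in data:
        totals[stmt] += 1
        if score <= 2:
            disagree[stmt] += 1
    return {s: disagree[s] / totals[s] for s in totals}

rates = disagreement_rates(responses)
```

The study's headline figures (e.g. the 41% disagreement on safety in France, discussed below) are rates of exactly this kind, computed per statement and per country.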

The majority of respondents thought that vaccines are important for children but, rather contradictorily, showed lower confidence in vaccine effectiveness and, above all, in safety. Everywhere, education increased confidence in vaccine importance and effectiveness, but not in safety. Religion did not play a major role in either vaccine acceptance or hesitancy, with the exception of Mongolia, where 50.5% of respondents said vaccines were not compatible with their religion (Buddhism); this is rather odd, considering that other Buddhist countries were aligned with average results (around 8-10% of people thinking that vaccines are hardly compatible with their religion).
Interestingly enough, European countries showed the lowest confidence in vaccine safety, with France the least confident: 41% of respondents in France disagreed with the assertion that vaccines are safe (on average, 12% of respondents in other nations disagreed with this statement). The authors noted that "France recently has experienced 'anxiety' about suspected but unproven links between the hepatitis B vaccine and multiple sclerosis and, separately, the human papillomavirus vaccine and side effects like fatigue in girls". This element is certainly important, yet it hardly explains the findings.
To make the survey more meaningful, I would suggest comparing it with figures on the prevalence of Complementary and Alternative Medicine (CAM). Although in economic terms the largest CAM market is still the US, the fastest-growing market is the European one. Moreover, there is a significant difference between the US and EU markets: while in the US the lion's share goes to chiropractic, in the EU homeopathic and herbal remedies account for the largest part of the market. Homeopathy is particularly popular in France, where it is the leading alternative therapy. The cost of homeopathic products is partially covered by the French National Health System, and the percentage of the French population habitually or sporadically using homeopathy grew from 16% (1982) to 62% (2004). This is mirrored by the attitude of health care professionals. Homeopathy is taught in all major French medical schools, as well as in schools of pharmacy, dentistry, veterinary medicine, and midwifery. According to a 2004 survey, 95% of GPs, dermatologists, and pediatricians consider homeopathy effective and are willing to prescribe it, alone or alongside conventional medicine. Another survey showed that 94.5% of French pharmacists advise pregnant women to prefer homeopathic products because they are "safer".


Concerns about the safety of medical products are thus wider than vaccine hesitancy, and vaccine hesitancy is probably only the tip of an iceberg. Further research is certainly required to better understand the social, psychological, and economic dynamics underlying this phenomenon. Yet one element is already self-evident: appealing to scientific arguments to convince people to vaccinate is a pure waste of time if, at the same time, the whole social fabric welcomes pseudo-scientific practices among recognized medical treatments.

Monday, September 12, 2016

Selective Gullibility

PLOS Currents Outbreaks has just published a study, "Lessons from Ebola: Sources of Outbreak Information and the Associated Impact on UC Irvine and Ohio University College Students", authored by a team of researchers from the University of California, Irvine, led by Miryha G. Runnerstrom. The authors carried out an online survey of 797 undergraduates at the University of California, Irvine (UCI) and Ohio University (OU) during the peak of the 2014 Ebola (EVD) outbreak. The researchers aimed to identify the main sources of information about the outbreak and their impact in four areas: knowledge, attitudes, beliefs, and stigma.

The results are rather interesting. Students' main sources of information were news media (34%) and social media (19%). As one might expect, only a few students sought information on official government and public health institution websites. However, this small minority (11%) was better informed and had more positive attitudes towards those infected. The authors conclude that "information sources are likely to influence students’ knowledge, attitudes, beliefs, and stigma relating to EVD", which is undoubtedly true, but it is more a truism than an actual conclusion. Actually, the study tells us something more.

There are at least three thought-provoking elements that emerge from this survey. The first concerns risk perception. The large majority of participants (81%) perceived a low personal risk of contracting the infection. They were definitely right: the total number of Ebola cases in the US in 2014 was 11, which is to say that the theoretical risk for an individual in the US of being infected by Ebola was about 11 in 319,000,000, even lower than the risk of being killed by a meteorite, calculated in 2014 by earth sciences professor Stephen A. Nelson to be 1 in 1,600,000. Yet during the 2014 Ebola outbreak, alarm spread all over the US, and many commentators openly spoke of hysterical media coverage. Eric Boehlert, an influential blogger and writer for Media Matters for America, wrote: "It's almost like they're crossing their fingers for an outbreak (…) CNN actually invited onto the network a fiction writer who wrote an Ebola thriller in the 1980s to hype unsubstantiated fears about the transmission of the virus. CNN's Ashleigh Banfield speculated that "All ISIS would need to do is send a few of its suicide killers into an Ebola-affected zones and then get them on some mass transit, somewhere where they would need to be to affect the most damage." And colleague Don Lemon lamented that government officials seemed "too confident" they can contain the Ebola scare". This is the first interesting element: notwithstanding the media hype, most students did not perceive a true health risk to themselves.
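The back-of-the-envelope arithmetic behind that comparison is easy to check, using only the figures quoted in the post:

```python
# Quick check of the risk comparison: 11 US Ebola cases in 2014 against
# a US population of roughly 319 million, versus Nelson's meteorite estimate.
ebola_cases = 11
us_population = 319_000_000

ebola_risk = ebola_cases / us_population   # ~3.4e-8, i.e. about 1 in 29 million
meteorite_risk = 1 / 1_600_000             # Nelson's 2014 estimate

# The individual Ebola risk was indeed well below the meteorite estimate,
# by roughly a factor of 18.
times_lower = meteorite_risk / ebola_risk
```

In other words, an individual's odds of being infected with Ebola in the US that year were about 1 in 29 million, an order of magnitude below the already-negligible meteorite figure.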

The second interesting element concerns knowledge. The researchers lament that half the sample (51%) showed poor or inexact knowledge of the disease, its means of transmission, its contagiousness, and its symptoms. I would argue differently. It seems to me that, considering the inaccurate and overblown media coverage, the fact that 49% of participants had sufficiently correct knowledge of the infection and its dynamics is a positive surprise. It confirms that sensationalist information reaches its target but does not necessarily penetrate it. This immediately raises a question: why does this happen? What are the main variables at stake in determining the outcome of a sensationalist information campaign?



The third element, which emerges from this study, provides a clue to answering this question. It concerns misinformation. Asked whether they agreed with the statement “Ebola is a government conspiracy created to get rid of a particular race”, 89% of students answered that they disagreed. Yet approximately one third of all participants thought that “There is a cure for Ebola but the government is keeping it from the public”. This answer is particularly astonishing considering that the respondents were university students, that is to say, a population that should in principle possess the intellectual means to debunk trivial conspiracy theories. What does it mean? It probably means that distrust of politicians is at such a level that people look for any excuse to accuse them. In other words, this is a phenomenon that has little to do with public health information. As a general rule, a piece of information becomes credible, ceteris paribus, to the extent that it meets people's expectations and imagination. If people wish to see evidence of politicians' dishonesty, any piece of information cooked with this ingredient immediately becomes palatable. But this does not imply that other pieces of information are passively accepted.

This study confirms the universal tendency towards selective gullibility. We are always ready to believe what confirms our beliefs and narratives, especially our biases, even when it is apparently incredible. Ironically enough, cynics are the most prone to this peculiar form of self-deception.