Lying by explaining: an experimental study

Abstract

A widely accepted view holds that an intention to deceive is not necessary for lying. Proponents of this view, so-called non-deceptionists, argue that lies are simply insincere assertions. We conducted three experimental studies with false explanations, the results of which put some pressure on non-deceptionist analyses. We present cases of explanations that one knows to be false and compare them with analogous explanations that differ only in involving a deceptive intention. The results show that lay people distinguish between such false explanations and are more likely to classify as lies those explanations made with the intention to deceive. Non-deceptionists fail to distinguish between such cases and wrongly classify both as lies. This novel empirical finding indicates the need to supplement non-deceptionist definitions of lying, at least in some cases, with an additional condition, such as an intention to deceive.

GenAI against humanity: nefarious applications of generative artificial intelligence and large language models

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we’ll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of GenAI’s potential nefarious applications affect us all. This article serves both as a synthesis of rigorous research on the risks of GenAI and the misuse of LLMs, and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, along with some ways we can prepare for them.

How denialist amplification spread COVID misinformation and undermined the credibility of public health science

Abstract

Denialist scientists played an outsized role in shaping public opinion and determining public health policy during the recent COVID pandemic. From early on, the amplification of researchers who denied the threat of COVID undermined public health policy. The forces that amplify denialists include (1) motivated amplifiers seeking to protect their own interests by supporting denialist scientists, (2) conventional media outlets giving disproportionate time to denialist opinions, (3) promoters of controversy seeking to gain traction in an ‘attention economy,’ and (4) social media creating information silos in which denialists can become the dominant voice. Denialist amplification poses an existential threat to science relevant to public policy. It is incumbent on the scientific community to create a forum that accurately captures its collective perspective on public health policy, one that is open to dissenting voices but prevents the artificial amplification of denialists.

The Role of Materiality in an Era of Generative Artificial Intelligence

Abstract

The introduction of generative artificial intelligence (GenAI) tools like ChatGPT has raised many challenging questions about the nature of teaching, learning, and assessment in every subject area, including science. Natural science is unique among disciplines because its ontological and epistemological understanding of nature is fundamentally rooted in our interaction with material objects in the physical world. GenAI, powered by statistical probabilities derived from a massive corpus of text, is devoid of any connection to the physical world. The use of GenAI thus raises concerns about our connection to reality and its effect on science education. This paper emphasizes the importance of materiality (or material reality) in shaping scientific knowledge and argues for its recognition in the era of GenAI. Drawing on the perspectives of new materialism and science studies, the paper highlights how materiality forms an indispensable aspect of human knowledge and meaning-making, particularly in the discipline of science. It further explains how materiality is central to the epistemic authority of science and cautions against outputs generated by GenAI that lack contextualization in a material reality. The paper concludes by providing recommendations for research and teaching that recognize the role of materiality in the context of GenAI, specifically in practical work, scientific argumentation, and learning with GenAI. As we navigate a future dominated by GenAI, understanding how the epistemic authority of science arises from our connection to the physical world will become a crucial consideration in science education.

Orchestrating the climate choir: the boundaries of scientists’ expertise, the relevance of experiential knowledge, and quality assurance in the public climate debate

Abstract

Scientific knowledge is at the heart of discussions about climate change. However, it has been proposed that the apparent predominance of climate science in the societal debate should be reconsidered and that a more inclusive approach is warranted. Further, the introduction of new communication technology has made the information environment more fragmented, possibly endangering the quality of societal deliberation on climate change. Using focus group methodology, this paper explores how climate scientists, climate journalists, and citizens perceive scientific experts’ mandate when they communicate publicly, the role of experiential knowledge in discussions of climate-related issues, and whom the three actors prefer to guard the quality of the climate information exchanged in the public sphere. The findings show that scientific experts are perceived to carry a high degree of legitimacy, but only within their own narrow specialty, while experiential knowledge was seen as more useful in applied domains of science than in arcane research fields. In the new media landscape, journalists are still generally preferred as gatekeepers by all three actor types.

Provider Perspectives on Multi-level Barriers and Facilitators to PrEP Access Among Latinx Sexual and Gender Minorities

Abstract

Although pre-exposure prophylaxis (PrEP) is a highly effective HIV prevention intervention, inequities in access remain among Latinx sexual and gender minorities (LSGM). There is also a gap in the PrEP literature regarding providers’ perspectives on access inequities. This qualitative case study sought to explore barriers and facilitators to PrEP engagement in a community-based integrated health center primarily serving Latinx populations in Northern California. We conducted in-depth, semi-structured interviews with providers (9 of 15) involved in PrEP services and engaged in a constructivist grounded theory analysis consisting of memoing, coding, and identifying salient themes. Three participants worked as medical providers, three as outreach staff, and one each in planning, education, and research. The analysis surfaced four themes: geopolitical differences, culture as barrier, clinic as context, and patient strengths and needs. Participants referenced a lack of resources to promote PrEP, as well as the difficulties of working within an institution that still struggles with cultural and organizational mores that deprioritize sexual health. Another barrier was that sexual health is positioned outside patients’ immediate needs owing to structural factors, including poverty, documentation status, and education. Participants, however, observed that peer-based models were conducive to better access to PrEP, emboldening patients’ decision-making processes and allowing them to build stronger community ties. These data underscore the need for interventions to help reduce sexual stigma, promote peer support, and ameliorate structural barriers to sexual healthcare among LSGM.

On inscription and bias: data, actor network theory, and the social problems of text-to-image AI models

Abstract

Text-to-image generation platforms are a type of generative artificial intelligence that can produce novel and realistic images from a text prompt. However, these systems also raise social and ethical issues related to the data they rely on. Therefore, this review essay explores how data influence these issues, and how to address them, using Bruno Latour’s concept of inscription. Inscription is the process of encoding the values and interests of the actors involved in the creation and use of a technology into the technology itself. Using inscription as a theoretical and analytical tool, this work analyzes the data sources, data processing, data representation, and data interpretation of these systems, and reveals how they shape the images they generate and the potential biases and harms they may cause. The essay thus offers a new perspective on the ethical discussion of generative AI models, especially text-to-image models, by bridging the gap between the technical and sociological perspectives on these issues, a gap largely overlooked in the existing literature. It also provides novel and practical recommendations for the developers, users, and regulators of these technologies, based on the findings and implications of the analysis.

The persuasive effects of social cues and source effects on misinformation susceptibility

Abstract

Although misinformation exposure takes place within a social context, significant conclusions have been drawn about misinformation susceptibility through studies that largely examine judgements in a social vacuum. Bridging the gap between social influence research and the cognitive science of misinformation, we examine the mechanisms through which social context impacts misinformation susceptibility across 5 experiments (N = 20,477). We find that social cues only impact individual judgements when they influence perceptions of wider social consensus, and that source similarity only biases news consumers when the source is high in credibility. Specifically, high and low engagement cues (‘likes’) reduced misinformation susceptibility relative to a control, and endorsement cues increased susceptibility, but discrediting cues had no impact. Furthermore, political ingroup sources increased susceptibility if the source was high in credibility, but political outgroup sources had no effect relative to a control. This work highlights the importance of studying cognitive processes within a social context, as judgements of (mis)information change when embedded in the social world. These findings further underscore the need for multifaceted interventions that take account of the social context in which false information is processed to effectively mitigate the impact of misinformation on the public.
