Epistemic Insights as Design Principles for a Teaching-Learning Module on Artificial Intelligence

Abstract

In a historical moment in which Artificial Intelligence and machine learning have come within everyone’s reach, science education needs to find new ways to foster “AI literacy.” Since the AI revolution has not only introduced extremely performant tools but is also driving a radical change in how we conceive and produce knowledge, what is needed is not only technical skills but also instruments to engage, cognitively and culturally, with the epistemological challenges that this revolution poses. In this paper, we argue that epistemic insights can be introduced in AI teaching to highlight the differences between three paradigms: the imperative procedural, the declarative logic, and the machine learning based on neural networks (in particular, deep learning). To do this, we analyze a teaching-learning activity designed and implemented within a module on AI for upper secondary school students in which the game of tic-tac-toe is addressed from these three alternative perspectives. We show how the epistemic issues of opacity, uncertainty, and emergence, which the philosophical literature highlights as characterizing the novelty of deep learning with respect to other approaches, allow us to build the scaffolding for establishing a dialogue between the three different paradigms.

GenAI against humanity: nefarious applications of generative artificial intelligence and large language models

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we’ll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities, or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of GenAI’s potential nefarious applications affect us all. This article serves both as a synthesis of rigorous research on the risks of GenAI and the misuse of LLMs and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, and some ways we can prepare for them.

How denialist amplification spread COVID misinformation and undermined the credibility of public health science

Abstract

Denialist scientists played an outsized role in shaping public opinion and determining public health policy during the recent COVID pandemic. From early on, amplification of researchers who denied the threat of COVID shaped public opinion and undermined public health policy. The forces that amplify denialists include (1) motivated amplifiers seeking to protect their own interests by supporting denialist scientists, (2) conventional media outlets giving disproportionate time to denialist opinions, (3) promoters of controversy seeking to gain traction in an ‘attention economy,’ and (4) social media creating information silos in which denialists can become the dominant voice. Denialist amplification poses an existential threat to science relevant to public policy. It is incumbent on the scientific community to create a forum that accurately captures its collective perspective on public health policy, one that is open to dissenting voices but prevents artificial amplification of denialists.

Advancing Sponge City Implementation in China: The Quest for a Strategy Model

Abstract

The unbridled expansion of urban development in China has created unsustainable challenges in the management of urban rainwater. In response, the Chinese government has endorsed sponge city (SPC) theory as a sustainable urban development model that aims to enhance urban planning, construction, and sustainable wastewater management. However, despite the issuance of policies and regulations, the envisioned SPC goals remain difficult to achieve in current implementations. This review paper proposes an idealized SPC strategy model that can be adopted by pilot cities in China. This model was developed by thoroughly analyzing policy requirements and in-field achievements, evaluating diverse implementation scenarios, and contrasting the outcomes in three different pilot cities in China. The demonstrated success of sponge city construction has highlighted the potential to simultaneously achieve multiple objectives, including conserving urban water resources, enhancing urban water quality, ensuring water safety, and revitalizing urban water ecosystems. This review supports the use of a planning approach that integrates the drainage division, aligns with project-specific conditions, and emphasizes the importance of low-impact development (LID) facility placement within drainage zones. Consequently, this study calls for exploring the impact of catchment topography on LID performance. Finally, the results of this study highlight the necessity of investigating precipitation variations among LID facilities during rainfall events and exploring cost-effective material alternatives to improve the effectiveness of SPC implementations.

Lying by explaining: an experimental study

Abstract

The widely accepted view states that an intention to deceive is not necessary for lying. Proponents of this view, the so-called non-deceptionists, argue that lies are simply insincere assertions. We conducted three experimental studies with false explanations, the results of which put some pressure on non-deceptionist analyses. We present cases of explanations that one knows are false and compare them with analogous explanations that differ only in having a deceptive intention. The results show that lay people distinguish between such false explanations and classify those made with the intention to deceive as lies to a higher degree. Non-deceptionists fail to distinguish between such cases and wrongly classify both as lies. This novel empirical finding indicates the need for supplementing non-deceptionist definitions of lying, at least in some cases, with an additional condition, such as an intention to deceive.

The Role of Materiality in an Era of Generative Artificial Intelligence

Abstract

The introduction of generative artificial intelligence (GenAI) tools like ChatGPT has raised many challenging questions about the nature of teaching, learning, and assessment in every subject area, including science. Unlike other disciplines, natural science is unique because the ontological and epistemological understanding of nature is fundamentally rooted in our interaction with material objects in the physical world. GenAI, powered by statistical probability arising from a massive corpus of text, is devoid of any connection to the physical world. The use of GenAI thus raises concerns about our connection to reality and its effect on science education. This paper emphasizes the importance of materiality (or material reality) in shaping scientific knowledge and argues for its recognition in the era of GenAI. Drawing on the perspectives of new materialism and science studies, the paper highlights how materiality forms an indispensable aspect of human knowledge and meaning-making, particularly in the discipline of science. It further explains how materiality is central to the epistemic authority of science and cautions against outputs generated by GenAI that lack contextualization in material reality. The paper concludes by providing recommendations for research and teaching that recognize the role of materiality in the context of GenAI, specifically in practical work, scientific argumentation, and learning with GenAI. As we navigate a future dominated by GenAI, understanding how the epistemic authority of science arises from our connection to the physical world will become a crucial consideration in science education.

Orchestrating the climate choir: the boundaries of scientists’ expertise, the relevance of experiential knowledge, and quality assurance in the public climate debate

Abstract

Scientific knowledge is at the heart of discussions about climate change. However, it has been proposed that the apparent predominance of climate science in the societal debate should be reconsidered and that a more inclusive approach is warranted. Further, the introduction of new communication technology has made the information environment more fragmented, possibly endangering the quality of societal deliberation on climate change concerns. Using focus group methodology, this paper explores how climate scientists, climate journalists, and citizens perceive scientific experts’ mandate when they communicate publicly, the role of experiential knowledge in discussions of climate-related issues, and who the three actors prefer to guard the quality of the climate information exchanged in the public sphere. The findings show that scientific experts are perceived to carry a high degree of legitimacy, but only within their own narrow specialty, while experiential knowledge was seen as more useful in applied domains of science than in arcane research fields. In the new media landscape, journalists are still generally preferred as gatekeepers by all three actor types.

Provider Perspectives on Multi-level Barriers and Facilitators to PrEP Access Among Latinx Sexual and Gender Minorities

Abstract

Although pre-exposure prophylaxis (PrEP) is a highly effective HIV prevention intervention, inequities in access remain among Latinx sexual and gender minorities (LSGM). There is also a gap in the PrEP literature regarding providers’ perspective on access inequities. This qualitative case study sought to explore barriers and facilitators to PrEP engagement in a community-based integrated health center primarily serving Latinx populations in Northern California. We conducted in-depth, semi-structured interviews with nine of the 15 providers involved in PrEP services and engaged in a constructivist grounded theory analysis consisting of memoing, coding, and identifying salient themes. Three participants worked as medical providers, three as outreach staff, and one each in planning, education, and research. The analysis surfaced four themes: geopolitical differences, culture as barrier, clinic as context, and patient strengths and needs. Participants referenced a lack of resources to promote PrEP, as well as the difficulties of working within an institution that still struggles with cultural and organizational mores that deprioritize sexual health. Another barrier is related to sexual health being positioned outside of patients’ immediate needs owing to structural barriers, including poverty, documentation status, and education. Participants, however, observed that peer-based models, which emboldened patients’ decision-making processes, were conducive to better access to PrEP and allowed patients to build stronger community ties. These data underscore the need for interventions to help reduce sexual stigma, promote peer support, and ameliorate structural barriers to sexual healthcare among LSGM.

On inscription and bias: data, actor network theory, and the social problems of text-to-image AI models

Abstract

Text-to-image generation platforms are a type of generative artificial intelligence that can produce novel and realistic images from a text prompt. However, these systems also raise social and ethical issues related to the data they rely on. Therefore, this review essay explores how data influence these issues and how to address them using Bruno Latour’s concept of inscription. Inscription is the process of encoding the values and interests of the actors involved in the creation and use of a technology into the technology itself. Using inscription as a theoretical and analytical tool, this work analyzes the data sources, data processing, data representation, and data interpretation of these systems, and reveals how they shape the images they generate and the potential biases and harms they may cause. Thus, this essay offers a new perspective on the ethical discussion of generative AI models, especially text-to-image models, by bridging the gap between the technical and sociological perspectives on these issues, a gap that has been largely overlooked in the existing literature. It also provides novel and practical recommendations for the developers, users, and regulators of these technologies, based on the findings and implications of the analysis.