Towards an Epistemology of ‘Speciesist Ignorance’

Abstract

The literature on the epistemology of ignorance already discusses how certain forms of discrimination, such as racism and sexism, are perpetuated by the ignorance of individuals and groups. However, little attention has been given to how speciesism—a form of discrimination on the basis of species membership—is sustained through ignorance. Of the few animal ethicists who explicitly discuss ignorance, none have related this concept to speciesism as a form of discrimination. It is crucial to explore this connection, I argue, because ignorance is both an integral part of the injustice done to animals and an obstacle to improving their treatment. In order to adequately criticize sustained structural speciesism and injustices towards animals, I develop an epistemological account of ‘speciesist ignorance’. I begin by defining and distinguishing between individual and group-based accounts of speciesist ignorance. I argue that humans, taken as a group, enjoy a position of privilege which allows them to comfortably remain ignorant of their participation in collective wrongdoings towards animals. Additionally, I point out that speciesist ignorance is structurally encouraged, and that it thereby maintains the dominant view that the human-animal relationship, as it stands, is just. In sum, this article lays the groundwork for a social epistemology of speciesist ignorance. In particular, it informs further debate about individual and institutional epistemic duties to inquire into speciesism and to inform the public, about the moral culpability of ignorant actions, and about effective animal advocacy and policy that actively rejects speciesist ignorance.

Wisdom in the Age of AI Education

Abstract

The disruptive potential of artificial intelligence (AI) augurs a requisite evolutionary concern for artificial wisdom (AW). However, given both a dearth of institutionalized scientific impetus and culturally subjective understandings of wisdom, there is currently no consensus surrounding its future development. This article provides a succinct overview of wisdom within various cultural traditions to establish a foundational common ground for both its necessity and global functioning in the age of AI. This is followed by a more directed argument in favor of pedagogical practices that inculcate in students a theoretical/practical wisdom in support of individual/collective critical capacities directed at democratic planetary stewardship in the age of AI education. The article concludes with a distilled synthesis of wisdom philosophies as principles that establish a framework for the development of a new planetary ethics built upon a symbiotic relationship between humans, technology, and nature.

Environmental epistemology

Abstract

We argue that there is a large class of questions—specifically, questions about how to epistemically evaluate environments—that currently available epistemic theories are not well suited to answering, precisely because these questions are not about the epistemic state of particular agents or groups. For example, if we critique Facebook for being conducive to the spread of misinformation, then we are not thereby critiquing Facebook for being irrational, or lacking knowledge, or failing to testify truthfully. Instead, we are saying something about the social media environment. In this paper, we first propose that a new branch of epistemology—Environmental Epistemology—is needed to address these questions. We argue that environments can be proper objects of epistemic evaluation, and that there are genuine epistemic norms that govern environments. We then provide a positive account of these norms and conclude by considering how recognition of these norms may require us to rethink longstanding epistemic debates.

Academic capture in the Anthropocene: a framework to assess climate action in higher education

Abstract

Higher education institutions have a mandate to serve the public good, yet in many cases fail to adequately respond to the global climate crisis. The inability of academic institutions to commit to purposeful climate action through targeted research, education, outreach, and policy is due in large part to “capture” by special interests. Capture involves powerful minority interests that exert influence and derive benefits at the expense of a larger group or purpose. This paper makes a conceptual contribution to advance a framework of “academic capture” applied to the climate crisis in higher education institutions. Academic capture results from three contributing factors: increasing financialization, the influence of the fossil fuel industry, and the reticence of university employees to challenge the status quo. The framework guides an empirical assessment evaluating eight activities and related indices of transparency and participation based on principles of climate justice and the growing democracy-climate nexus. The framework can be a helpful tool for citizens and academics to assess the potential for academic capture and the capacity for more just and democratic methods of climate action in higher education. We conclude with a series of recommendations on how to refine and apply our framework and assessment in academic settings. Our goal is to further the discussion on academic capture and to continue developing tools that transform higher education institutions into places of deep democracy and innovative climate education, research, and outreach to meet the challenges of the Anthropocene.

The animal agriculture industry, US universities, and the obstruction of climate understanding and policy

Abstract

The 2006 United Nations report “Livestock’s Long Shadow” provided the first global estimate of the livestock sector’s contribution to anthropogenic climate change and warned of dire environmental consequences if business as usual continued. In the subsequent 17 years, numerous studies have attributed significant climate change impacts to livestock. In the USA, one of the largest consumers and producers of meat and dairy products, livestock greenhouse gas emissions remain effectively unregulated. What might explain this? Similar to fossil fuel companies, US animal agriculture companies responded to evidence that their products cause climate change by minimizing their role in the climate crisis and shaping policymaking in their favor. Here, we show that the industry has done so with the help of university experts. The beef industry awarded funding to Dr. Frank Mitloehner from the University of California, Davis, to assess “Livestock’s Long Shadow,” and his work was used to claim that cows should not be blamed for climate change. The animal agriculture industry is now involved in multiple multi-million-dollar efforts with universities to obstruct unfavorable policies as well as to influence climate change policy and discourse. We trace how these efforts have downplayed the livestock sector’s contributions to the climate crisis, minimized the need for emission regulations and other policies aimed at internalizing the costs of the industry’s emissions, and promoted industry-led climate “solutions” that maintain production. We study this phenomenon by examining the origins, funding sources, activities, and political significance of two prominent academic centers: the CLEAR Center at UC Davis, established in 2018, and AgNext at Colorado State University, established in 2020. We also examine the influence and industry ties of the programs’ directors, Dr. Mitloehner and Dr. Kimberly Stackhouse-Lawson. We developed 20 questions to evaluate the nature, extent, and societal impacts of the relationship between individual researchers and industry groups. Using publicly available evidence, we document how the ties between these professors, centers, and the animal agriculture industry have helped maintain the livestock industry’s social license to operate, not only by generating industry-supported research but also by supporting public relations and policy advocacy.

Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration

Abstract

There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially given the value of public attitudes in influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is the first to explore public perceptions of and attitudes towards AI in Defence. A series of four focus groups was conducted with 20 members of the UK public, aged between 18 and 70, to explore their perceptions and attitudes towards AI use in general contexts and, more specifically, applications of AI in Defence settings. Thematic analysis revealed four themes and eleven sub-themes, spanning the role of humans in the system, the ethics of AI use in Defence, trust in AI versus trust in the organisation, and gathering information about AI in Defence. Participants demonstrated a variety of misconceptions about the applications of AI in Defence, with many assuming that a variety of different technologies involving AI are already in use. This highlighted a confluence of information from reputable sources with narratives from the mass media and conspiracy theories. The study demonstrates gaps in knowledge and misunderstandings that need to be addressed, and offers practical insights for keeping the public reliably, accurately, and adequately informed about the capabilities, limitations, benefits, and risks of AI in Defence.

The adaptive community-response (ACR) method for collecting misinformation on social media

Abstract

Social media can be a major accelerator of the spread of misinformation, thereby potentially compromising both individual well-being and social cohesion. Despite significant recent advances, the study of online misinformation is a relatively young field facing several (methodological) challenges. In this regard, the detection of online misinformation has proven difficult, as online large-scale data streams require (semi-)automated, highly specific, and therefore sophisticated methods to separate posts containing misinformation from irrelevant posts. In the present paper, we introduce the adaptive community-response (ACR) method, an unsupervised technique for the large-scale collection of misinformation on Twitter (now known as ‘X’). The ACR method builds on previous findings showing that Twitter users occasionally reply to misinformation with fact-checking by referring to specific fact-checking sites (crowdsourced fact-checking). In a first step, we captured such misinforming but fact-checked tweets. In a second step, these tweets were used to extract specific linguistic features (keywords), which enabled us, in a third step, to also collect those misinforming tweets that were never fact-checked. We first present a mathematical framework for our method, followed by an explicit algorithmic implementation. We then evaluate ACR on the basis of a comprehensive dataset consisting of more than 25 million tweets belonging to more than 300 misinforming stories. Our evaluation shows that ACR is a useful extension to the field’s pool of methods, enabling researchers to collect online misinformation more comprehensively. Text similarity measures clearly indicated correspondence between the claims of false stories and the ACR tweets, even though ACR performance was heterogeneously distributed across the stories. A baseline comparison to the fact-checked tweets showed that the ACR method detects story-related tweets to a comparable degree while being sensitive to different types of tweets: fact-checked tweets tend to be driven by high outreach (as indicated by a high number of retweets), whereas the sensitivity of the ACR method extends to tweets exhibiting lower outreach. Taken together, ACR’s value as a methodological contribution to the field rests on (i) its adoption of prior, pioneering research in the field, (ii) a well-formalized mathematical framework, and (iii) an empirical foundation via a comprehensive set of indicators.
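The abstract does not reproduce the formal framework, so the following is a minimal, self-contained Python sketch of the three-step pipeline it describes. Everything specific here is an assumption for illustration: the fact-checking domain list, the frequency-ratio keyword scoring, and all function names are placeholders, not the authors' implementation (which operates on large-scale Twitter data streams within a formal mathematical framework).

```python
import re
from collections import Counter

# Hypothetical fact-checking domains; the paper's actual list is not given here.
FACT_CHECK_DOMAINS = {"snopes.com", "politifact.com", "factcheck.org"}

def is_fact_check_reply(reply_text):
    """A reply counts as crowdsourced fact-checking if it links to a known site."""
    domains = re.findall(r"https?://(?:www\.)?([\w.-]+)", reply_text)
    return any(d in FACT_CHECK_DOMAINS for d in domains)

def seed_tweets(tweets, replies):
    """Step 1: capture misinforming tweets that received a fact-checking reply."""
    return [t for t in tweets
            if any(is_fact_check_reply(r) for r in replies.get(t["id"], []))]

def extract_keywords(seeds, background, top_k=5):
    """Step 2: extract keywords over-represented in the seed tweets relative
    to a background sample (a stand-in for the paper's linguistic features)."""
    def tokenize(text):
        return re.findall(r"[a-z']+", text.lower())
    seed_counts = Counter(w for t in seeds for w in tokenize(t["text"]))
    bg_counts = Counter(w for t in background for w in tokenize(t["text"]))
    n_seed = sum(seed_counts.values()) or 1
    n_bg = sum(bg_counts.values()) or 1
    # Relative frequency in the seeds divided by smoothed background frequency.
    score = {w: (c / n_seed) / ((bg_counts[w] + 1) / n_bg)
             for w, c in seed_counts.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]

def collect(tweets, keywords, min_hits=2):
    """Step 3: collect tweets matching the keywords, including tweets
    that were never fact-checked."""
    kw = set(keywords)
    return [t for t in tweets
            if len(kw & set(re.findall(r"[a-z']+", t["text"].lower()))) >= min_hits]

# Toy demonstration of the pipeline on three invented tweets.
tweets = [
    {"id": 1, "text": "Vaccines contain microchips, wake up!"},
    {"id": 2, "text": "Lovely weather in town today"},
    {"id": 3, "text": "The microchips in vaccines track your location"},
]
replies = {1: ["Debunked: https://snopes.com/fact-check/microchips"]}

seeds = seed_tweets(tweets, replies)        # step 1: only tweet 1
keywords = extract_keywords(seeds, tweets)  # step 2
matches = collect(tweets, keywords)         # step 3: tweets 1 and 3
```

The frequency-ratio scoring above is merely one plausible instantiation of "extracting specific linguistic features"; the point of the sketch is the pipeline shape, in which the third step recovers a misinforming tweet (tweet 3) that no one fact-checked.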

Epistemic Insights as Design Principles for a Teaching-Learning Module on Artificial Intelligence

Abstract

In a historical moment in which Artificial Intelligence and machine learning have come within everyone’s reach, science education needs to find new ways to foster “AI literacy.” Since the AI revolution is not only a matter of introducing extremely performant tools but is driving a radical change in how we conceive of and produce knowledge, what is needed is not only technical skills but also instruments to engage, cognitively and culturally, with the epistemological challenges that this revolution poses. In this paper, we argue that epistemic insights can be introduced in AI teaching to highlight the differences between three paradigms: the imperative-procedural, the declarative-logic, and machine learning based on neural networks (in particular, deep learning). To do this, we analyze a teaching-learning activity designed and implemented within a module on AI for upper secondary school students, in which the game of tic-tac-toe is addressed from these three alternative perspectives. We show how the epistemic issues of opacity, uncertainty, and emergence, which the philosophical literature highlights as characterizing the novelty of deep learning with respect to other approaches, allow us to build the scaffolding for establishing a dialogue between the three paradigms.
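The module's actual materials are not given in the abstract, but the contrast between the first two paradigms can be made concrete with a small, purely illustrative Python sketch of the question "has player p won?"; the third paradigm would not encode this rule at all but would learn it from labelled example boards, which is where opacity and emergence enter.

```python
# Winning lines of a 3x3 board indexed 0-8: rows, columns, diagonals.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def has_won_imperative(board, p):
    """Imperative-procedural paradigm: prescribe the steps, one cell at a time."""
    for line in WIN_LINES:
        complete = True
        for i in line:
            if board[i] != p:
                complete = False
                break
        if complete:
            return True
    return False

def has_won_declarative(board, p):
    """Declarative-logic paradigm: state the winning condition as one formula."""
    return any(all(board[i] == p for i in line) for line in WIN_LINES)

board = ["X", "X", "X",
         "O", "O", ".",
         ".", ".", "."]
assert has_won_imperative(board, "X") and has_won_declarative(board, "X")
```

A deep-learning treatment, by contrast, would replace both functions with a network trained on labelled boards, whose internal decision rule cannot be read off in the same way.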

How denialist amplification spread COVID misinformation and undermined the credibility of public health science

Abstract

Denialist scientists played an outsized role during the recent COVID pandemic: from early on, amplification of researchers who denied the threat of COVID shaped public opinion and undermined public health policy. The forces that amplify denialists include (1) motivated amplifiers seeking to protect their own interests by supporting denialist scientists, (2) conventional media outlets giving disproportionate time to denialist opinions, (3) promoters of controversy seeking to gain traction in an ‘attention economy,’ and (4) social media creating information silos in which denialists can become the dominant voice. Denialist amplification poses an existential threat to science relevant to public policy. It is incumbent on the scientific community to create a forum that accurately captures scientists’ collective perspective on public health policy, one that is open to dissenting voices but prevents the artificial amplification of denialists.