Towards an Epistemology of ‘Speciesist Ignorance’

Abstract

The literature on the epistemology of ignorance already discusses how certain forms of discrimination, such as racism and sexism, are perpetuated by the ignorance of individuals and groups. However, little attention has been given to how speciesism—a form of discrimination on the basis of species membership—is sustained through ignorance. Of the few animal ethicists who explicitly discuss ignorance, none have related this concept to speciesism as a form of discrimination. However, it is crucial to explore this connection, I argue, as ignorance is both an integral part of the injustice done to animals and an obstacle to improving their treatment. In order to adequately criticize sustained structural speciesism and injustices towards animals, I develop an epistemological account of ‘speciesist ignorance’. I begin by defining and distinguishing between individual and group-based accounts of speciesist ignorance. I argue that humans, taken as a group, enjoy a position of privilege, which allows them to comfortably remain ignorant of their participation in collective wrongdoings towards animals. Additionally, I point out that speciesist ignorance is structurally encouraged and thereby maintains the dominant view that the human–animal relationship, as it stands, is just. In sum, this article lays the groundwork for a social epistemology of speciesist ignorance. In particular, it informs further debate about individual and institutional epistemic duties to inquire into speciesism and to inform the public, about the moral culpability of ignorant actions, and about effective animal advocacy and policy that actively reject speciesist ignorance.

The potential of generative AI for personalized persuasion at scale

Abstract

Matching the language or content of a message to the psychological profile of its recipient (known as “personalized persuasion”) is widely considered to be one of the most effective messaging strategies. We demonstrate that the rapid advances in large language models (LLMs), like ChatGPT, could accelerate this influence by making personalized persuasion scalable. Across four studies (consisting of seven sub-studies; total N = 1788), we show that personalized messages crafted by ChatGPT exhibit significantly more influence than non-personalized messages. This was true across different domains of persuasion (e.g., marketing of consumer products, political appeals for climate action), psychological profiles (e.g., personality traits, political ideology, moral foundations), and when only providing the LLM with a single, short prompt naming or describing the targeted psychological dimension. Thus, our findings are among the first to demonstrate the potential for LLMs to automate, and thereby scale, the use of personalized persuasion in ways that enhance its effectiveness and efficiency. We discuss the implications for researchers, practitioners, and the general public.

The animal agriculture industry, US universities, and the obstruction of climate understanding and policy

Abstract

The 2006 United Nations report “Livestock’s Long Shadow” provided the first global estimate of the livestock sector’s contribution to anthropogenic climate change and warned of dire environmental consequences if business as usual continued. In the subsequent 17 years, numerous studies have attributed significant climate change impacts to livestock. In the USA, one of the largest consumers and producers of meat and dairy products, livestock greenhouse gas emissions remain effectively unregulated. What might explain this? Similar to fossil fuel companies, US animal agriculture companies responded to evidence that their products cause climate change by minimizing their role in the climate crisis and shaping policymaking in their favor. Here, we show that the industry has done so with the help of university experts. The beef industry awarded funding to Dr. Frank Mitloehner from the University of California, Davis, to assess “Livestock’s Long Shadow,” and his work was used to claim that cows should not be blamed for climate change. The animal agriculture industry is now involved in multiple multi-million-dollar efforts with universities to obstruct unfavorable policies as well as influence climate change policy and discourse. We trace how these efforts have downplayed the livestock sector’s contributions to the climate crisis, minimized the need for emission regulations and other policies aimed at internalizing the costs of the industry’s emissions, and promoted industry-led climate “solutions” that maintain production. We study this phenomenon by examining the origins, funding sources, activities, and political significance of two prominent academic centers, the CLEAR Center at UC Davis, established in 2018, and AgNext at Colorado State University, established in 2020, as well as the influence and industry ties of the programs’ directors, Dr. Mitloehner and Dr. Kimberly Stackhouse-Lawson.
We developed 20 questions to evaluate the nature, extent, and societal impacts of the relationship between individual researchers and industry groups. Using publicly available evidence, we documented how the ties between these professors, centers, and the animal agriculture industry have helped maintain the livestock industry’s social license to operate not only by generating industry-supported research, but also by supporting public relations and policy advocacy.

Academic capture in the Anthropocene: a framework to assess climate action in higher education

Abstract

Higher education institutions have a mandate to serve the public good, yet in many cases fail to adequately respond to the global climate crisis. The inability of academic institutions to commit to purposeful climate action through targeted research, education, outreach, and policy is due in large part to “capture” by special interests. Capture involves powerful minority interests that exert influence and derive benefits at the expense of a larger group or purpose. This paper makes a conceptual contribution to advance a framework of “academic capture” applied to the climate crisis in higher education institutions. Academic capture is the result of three contributing factors: increasing financialization, the influence of the fossil fuel industry, and the reticence of university employees to challenge the status quo. The framework guides an empirical assessment evaluating eight activities and related indices of transparency and participation based on principles of climate justice and the growing democracy-climate nexus. The framework can be a helpful tool for citizens and academics to assess the potential for academic capture and the capacity for more just and democratic methods of climate action in higher education. We conclude with a series of recommendations on how to refine and apply our framework and assessment in academic settings. Our goal is to further the discussion on academic capture and continue to develop tools that transform higher education institutions into places of deep democracy and innovative climate education, research, and outreach to meet the challenges of the Anthropocene.

Environmental epistemology

Abstract

We argue that there is a large class of questions—specifically questions about how to epistemically evaluate environments—that currently available epistemic theories are not well-suited for answering, precisely because these questions are not about the epistemic state of particular agents or groups. For example, if we critique Facebook for being conducive to the spread of misinformation, then we are not thereby critiquing Facebook for being irrational, or lacking knowledge, or failing to testify truthfully. Instead, we are saying something about the social media environment. In this paper, we first propose that a new branch of epistemology—Environmental Epistemology—is needed to address these questions. We argue that environments can be proper objects of epistemic evaluation, and that there are genuine epistemic norms that govern environments. We then provide a positive account of these norms and conclude by considering how recognition of these norms may require us to rethink longstanding epistemic debates.

Wisdom in the Age of AI Education

Abstract

The disruptive potential of artificial intelligence (AI) augurs a requisite evolutionary concern for artificial wisdom (AW). However, given both a dearth of institutionalized scientific impetus and culturally subjective understandings of wisdom, there is currently no consensus surrounding its future development. This article provides a succinct overview of wisdom within various cultural traditions to establish a foundational common ground for both its necessity and global functioning in the age of AI. This is followed by a more directed argument in favor of pedagogical practices that inculcate students with a theoretical/practical wisdom in support of individual/collective critical capacities directed at democratic planetary stewardship in the age of AI education. The article concludes with a distilled synthesis of wisdom philosophies as principles that establish a framework for the development of a new planetary ethics built upon a symbiotic relationship between humans, technology, and nature.

Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration

Abstract

There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public views these applications, especially given the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is the first to explore public perceptions of and attitudes towards AI in Defence. A series of four focus groups was conducted with 20 members of the UK public, aged between 18 and 70, to explore their perceptions of and attitudes towards AI use in general contexts and, more specifically, applications of AI in Defence settings. Thematic analysis revealed four themes and eleven sub-themes, spanning the role of humans in the system, the ethics of AI use in Defence, trust in AI versus trust in the organisation, and gathering information about AI in Defence. Participants demonstrated a variety of misconceptions about the applications of AI in Defence, with many assuming that a variety of technologies involving AI are already in use. This highlighted a confluence of information from reputable sources with narratives from the mass media and conspiracy theories. The study demonstrates gaps in knowledge and misunderstandings that need to be addressed, and offers practical insights for keeping the public reliably, accurately, and adequately informed about the capabilities, limitations, benefits, and risks of AI in Defence.