Genomic surveillance for antimicrobial resistance — a One Health perspective

Abstract

Antimicrobial resistance (AMR) — the ability of microorganisms to adapt and survive under diverse chemical selection pressures — is influenced by complex interactions between humans, companion and food-producing animals, wildlife, insects and the environment. To understand and manage the threat posed to health (human, animal, plant and environmental) and security (food and water security and biosecurity), a multifaceted ‘One Health’ approach to AMR surveillance is required. Genomic technologies have enabled monitoring of the mobilization, persistence and abundance of AMR genes and mutations within and between microbial populations. Their adoption has also allowed source-tracing of AMR pathogens and modelling of AMR evolution and transmission. Here, we highlight recent advances in genomic AMR surveillance and the relative strengths of different technologies for AMR surveillance and research. We showcase recent insights derived from One Health genomic surveillance and consider the challenges to broader adoption in both developed and low- and middle-income countries.

“That’s just like, your opinion, man”: the illusory truth effect on opinions

Abstract

With the expansion of technology, people are constantly exposed to an abundance of information. It is therefore vital to understand how people assess the truthfulness of such information. One indicator of perceived truthfulness seems to be whether the information is repeated. That is, people tend to perceive repeated information, regardless of its veracity, as more truthful than new information, a phenomenon known as the illusory truth effect. In the present study, we examined whether such an effect is also observed for opinions and whether the manner in which the information is encoded influences the illusory truth effect. Across three experiments, participants (n = 552) were presented with a list of true information, misinformation, general opinion, and/or social–political opinion statements. First, participants were instructed either to indicate whether each presented statement was a fact or an opinion based on its syntactic structure (Exp. 1 & 2) or to assign each statement to a topic category (Exp. 3). Subsequently, participants rated the truthfulness of various new and repeated statements. Results showed that repeated information, regardless of its type, received higher subjective truth ratings when participants simply encoded it by assigning each statement to a topic. However, when general and social–political opinions were encoded as opinions, we found no evidence of such an effect. Moreover, we found a reversed illusory truth effect for general opinion statements when considering only information that was encoded as an opinion. These findings suggest that how information is encoded plays a crucial role in how its truth is evaluated.

Mathematical Model and AI Integration for COVID-19: Improving Forecasting and Policy-Making

Abstract

In this work, a new susceptible–exposed–infectious–recovered (SEIR) compartmental model is proposed that additionally accounts for media influence, allowing more precise quantification of coronavirus disease 2019 (COVID-19) dynamics. In the proposed model, first-order ordinary differential equations (ODEs) are used to formulate the basic reproduction number, and a genetic algorithm (GA) is used to estimate it. The inclusion of climatic parameters, governmental impact, and human behavioral response to the disease helps capture the dynamics of transmissibility, underscoring the significance of these factors in sharpening the outcomes. In addition, future trends in new normalized confirmed COVID-19 cases are predicted using a long short-term memory (LSTM) model, which helps in evaluating and adjusting the preventive actions currently in place. The robustness of the proposed model is assessed with five different error functions across five different countries. The experimental results show that the proposed model has a smaller prediction deviation and outperforms state-of-the-art COVID-19 models.
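
The abstract does not reproduce the extended model’s equations. As a point of reference, a minimal sketch of the classical SEIR system that such a model builds on, written with the conventional (assumed, not the authors’) parameter names β (transmission rate), σ (incubation rate) and γ (recovery rate), is:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\frac{\beta S I}{N}, &\qquad
\frac{dE}{dt} &= \frac{\beta S I}{N} - \sigma E,\\[2pt]
\frac{dI}{dt} &= \sigma E - \gamma I, &\qquad
\frac{dR}{dt} &= \gamma I,
\end{aligned}
\qquad N = S + E + I + R, \qquad R_0 = \frac{\beta}{\gamma}.
```

For this baseline system the basic reproduction number is simply β/γ; a genetic algorithm, as described in the abstract, would then search the parameter space for values that best reproduce observed case counts, with terms such as media influence plausibly entering through the transmission rate.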

Twitter’s pulse on hydrogen energy in 280 characters: a data perspective

Abstract

Uncovering the public discourse on hydrogen energy is essential for understanding public behaviour and the evolving nature of conversations over time and across different regions. This paper presents a comprehensive analysis of a large multilingual dataset on hydrogen energy collected from Twitter using selected keywords and spanning a decade (2013–2022). The analysis explores various aspects, including the temporal and spatial dimensions of the discourse, factors influencing Twitter engagement, user engagement patterns, and the interpretation of conversations through hashtags and n-grams. By delving into these aspects, the study offers valuable insights into the dynamics of public discourse surrounding hydrogen energy and the perceptions of social media users.
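
The abstract does not detail the extraction pipeline. A minimal sketch of the kind of hashtag and n-gram counting such an analysis typically relies on is given below; the example tweets are placeholders, not data from the study.

```python
# Minimal hashtag and bigram frequency sketch over a handful of placeholder tweets.
import re
from collections import Counter

tweets = [
    "Green #hydrogen could decarbonise heavy industry #energytransition",
    "New electrolyser project announced today #hydrogen #cleanenergy",
]

hashtag_counts = Counter()
bigram_counts = Counter()

for text in tweets:
    # Count hashtags case-insensitively.
    hashtag_counts.update(tag.lower() for tag in re.findall(r"#\w+", text))
    # Strip hashtags, tokenise on word characters, then count adjacent word pairs.
    tokens = [t.lower() for t in re.findall(r"\b\w+\b", re.sub(r"#\w+", " ", text))]
    bigram_counts.update(zip(tokens, tokens[1:]))

print(hashtag_counts.most_common(3))
print(bigram_counts.most_common(3))
```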

“Things fall apart, the center cannot hold”: fractionalized and polarized party systems in Western democracies

Abstract

Jean Blondel made many lasting contributions to comparative politics, not least his classification of party systems in Western democracies. Yet in the five decades since Blondel’s original contribution, party competition has been transformed by multiple developments, including changes at the grassroots in the electorate, in the intermediary organizations connecting citizens and the state, and at the apex in legislatures and government. Does Blondel’s typology of party systems remain relevant today, or does it require substantial revision? And does party system fragmentation predict ideological polarization? Part I sets out the theoretical framework. Part II compares trends from 1960 to 2020 in party system fragmentation in a wide range of democracies, measured by the effective number of parties in the electorate and in parliament. Not surprisingly, the effective number of electoral parties (ENEP) has generally grown across Western democracies. This does not imply, however, that party systems are necessarily more polarized ideologically. Part III examines polarization in party systems across Western democracies, measured by standard deviations around the mean of several ideological values and issue positions in each country. The findings suggest that party system fractionalization and polarization should be treated as two distinct and unrelated dimensions of party competition. The conclusion reflects on the broader implications of the findings for understanding party polarization and threats of backsliding in democratic states.
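
The abstract does not spell out the index, but the effective number of parties is conventionally computed with the Laakso–Taagepera formula, using vote shares for the electoral version (ENEP) and seat shares for the parliamentary version (ENPP):

```latex
\mathrm{ENEP} = \frac{1}{\sum_{i=1}^{n} v_i^{2}}, \qquad
\mathrm{ENPP} = \frac{1}{\sum_{i=1}^{n} s_i^{2}},
```

where v_i and s_i are party i’s shares of the vote and of parliamentary seats. For example, four parties with equal vote shares give ENEP = 4, whereas shares of 0.7, 0.1, 0.1 and 0.1 give ENEP ≈ 1.9, reflecting one dominant party.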

Computational philosophy: reflections on the PolyGraphs project

Abstract

In this paper, we situate our computational approach to philosophy relative to other digital humanities and computational social science practices, based on reflections stemming from our research on the PolyGraphs project in social epistemology. We begin by describing PolyGraphs. An interdisciplinary project funded by the Academies (BA, RS, and RAEng) and the Leverhulme Trust, it uses philosophical simulations (Mayo-Wilson and Zollman, 2021) to study how ignorance prevails in networks of inquiring rational agents. We deploy models developed in economics (Bala and Goyal, 1998), and refined in philosophy (O’Connor and Weatherall, 2018; Zollman, 2007), to simulate communities of agents engaged in inquiry, who generate evidence relevant to the topic of their investigation and share it with their neighbors, updating their beliefs in light of the evidence available to them. We report some novel results concerning the prevalence of ignorance in such networks. In the second part of the paper, we compare our own practice to other related academic practices. We begin by noting that, in digital humanities projects of certain types, the computational component does not appear to directly support the humanities research itself; rather, the digital and the humanities are simply grafted together, not fully intertwined and integrated. PolyGraphs is notably different: the computational work directly supports the investigation of the primary research questions, which themselves belong decidedly within the humanities in general, and philosophy in particular. This suggests an affinity with certain projects in the computational social sciences. But despite these real similarities, there are differences once again: the computational philosophy we practice aims not so much at description and prediction as at answering the normative and interpretive questions that are distinctive of humanities research.
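
The abstract describes agents who experiment, share evidence with neighbors, and update their beliefs; the sketch below is a minimal, self-contained illustration of that Bala–Goyal/Zollman-style dynamic, not the PolyGraphs codebase itself. The network topology, parameter values and update rule are illustrative assumptions.

```python
# Toy network-epistemology simulation: agents on a ring choose between a known
# baseline action (success rate P_BAD) and an uncertain action (true rate P_GOOD),
# experiment when they think the uncertain action is better, pool neighbors'
# evidence, and update. Ignorance "prevails" when the network settles on the baseline.
import random

N_AGENTS = 10
P_GOOD, P_BAD = 0.55, 0.50     # true success rates (assumed for illustration)
TRIALS_PER_ROUND = 10
ROUNDS = 200

credences = [random.random() for _ in range(N_AGENTS)]   # estimated rate of the uncertain action
evidence = [[1, 1] for _ in range(N_AGENTS)]              # [successes, failures] pseudo-counts

def neighbours(i):
    """Ring topology: an agent pools evidence from itself and its two adjacent agents."""
    return {(i - 1) % N_AGENTS, i, (i + 1) % N_AGENTS}

for _ in range(ROUNDS):
    # 1. Agents who currently favour the uncertain action run trials on it.
    results = []
    for i in range(N_AGENTS):
        if credences[i] > P_BAD:
            s = sum(random.random() < P_GOOD for _ in range(TRIALS_PER_ROUND))
            results.append((i, s, TRIALS_PER_ROUND - s))
    # 2. Each experimenter's evidence is pooled by itself and its neighbours.
    for i, s, f in results:
        for j in range(N_AGENTS):
            if i in neighbours(j):
                evidence[j][0] += s
                evidence[j][1] += f
    # 3. Agents re-estimate the uncertain action's success rate from pooled evidence.
    for j in range(N_AGENTS):
        s, f = evidence[j]
        credences[j] = s / (s + f)

believers = sum(c > P_BAD for c in credences)
print(f"{believers}/{N_AGENTS} agents end up favouring the genuinely better action")
```

Runs of the sketch in which few or no agents end up favouring the better action correspond, in miniature, to the prevalence of ignorance the project investigates.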
