Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration

Abstract

There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is the first to explore public perceptions of and attitudes towards AI in Defence. A series of four focus groups were conducted with 20 members of the UK public, aged between 18 and 70, to explore their perceptions and attitudes towards AI use in general contexts and, more specifically, applications of AI in Defence settings. Thematic analysis revealed four themes and eleven sub-themes, spanning the role of humans in the system, the ethics of AI use in Defence, trust in AI versus trust in the organisation, and gathering information about AI in Defence. Participants demonstrated a variety of misconceptions about the applications of AI in Defence, with many assuming that a variety of different technologies involving AI are already being used. This highlighted a confluence of information from reputable sources with narratives from the mass media and conspiracy theories. The study demonstrates gaps in knowledge and misunderstandings that need to be addressed, and offers practical insights for keeping the public reliably, accurately, and adequately informed about the capabilities, limitations, benefits, and risks of AI in Defence.

Integration of SWAT, SDSM, AHP, and TOPSIS to detect flood-prone areas

Abstract

Floods are among the most destructive natural hazards worldwide, capable of causing heavy human and financial losses. In this study, a flood risk map with improved accuracy was produced by combining the SWAT, SDSM, AHP, and TOPSIS models. Such a map identifies areas with flood potential, enabling managers and officials to adopt sound policies that control and reduce flood-related human and financial losses. Using the SWAT and SDSM models, the future runoff of the Kashkan basin in Lorestan Province, Iran, was simulated for the period 2049 to 2073, and simulated runoff was examined for return periods of 2, 5, 10, 25, 50, and 100 years. Based on the results, RCP2.6 emerged as the most dangerous scenario for this watershed, with a forecast runoff of 7715 cubic meters per second. According to the resulting flood risk map, sub-basins 22, 24, 28, and 32, representing the cities of Khorram Abad and Poldakhter, were identified as the flood-prone areas of the study region. Simulation of precipitation and of maximum and minimum temperature for the basin over the period 2006 to 2100 indicated that the maximum temperature may warm by 1.3–3 °C while the minimum temperature may cool by 1–2 °C, and that rainfall over the entire basin may decrease by 54 to 120 mm. The methods used in this study can also be applied to detect flood-prone areas in other parts of the world exposed to sudden floods driven by climate change.
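
The abstract does not give the computational details of the AHP–TOPSIS ranking step; the sketch below is a minimal illustration of how AHP-derived weights and TOPSIS might be combined to rank sub-basins by flood potential. The criteria, weights, and sub-basin values are assumptions for illustration, not figures from the study.

```python
# Minimal TOPSIS sketch for ranking sub-basins by flood potential.
# Criteria, weights, and values below are illustrative assumptions,
# not figures taken from the study.
import numpy as np

# Rows: hypothetical sub-basins; columns: criteria
# (peak runoff m3/s, slope %, drainage density km/km2, land-cover score).
benefit = np.array([True, True, True, True])   # all treated as "higher = more flood-prone"
weights = np.array([0.45, 0.25, 0.20, 0.10])   # e.g. an AHP priority vector (assumed)

X = np.array([
    [7715.0, 12.0, 2.1, 0.8],   # "sub-basin 22" (values assumed)
    [5200.0,  9.0, 1.7, 0.6],   # "sub-basin 24"
    [6100.0, 15.0, 2.4, 0.7],   # "sub-basin 28"
    [4300.0,  7.0, 1.2, 0.5],   # "sub-basin 32"
])

# 1) Vector-normalize each criterion, 2) apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 3) Ideal best/worst per criterion (max for benefit criteria, min otherwise).
ideal_best = np.where(benefit, V.max(axis=0), V.min(axis=0))
ideal_worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) Distances to the ideals and closeness coefficient (higher = more flood-prone).
d_best = np.linalg.norm(V - ideal_best, axis=1)
d_worst = np.linalg.norm(V - ideal_worst, axis=1)
closeness = d_worst / (d_best + d_worst)

for name, c in zip(["sub-basin 22", "sub-basin 24", "sub-basin 28", "sub-basin 32"], closeness):
    print(f"{name}: closeness = {c:.3f}")
```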

The adaptive community-response (ACR) method for collecting misinformation on social media

Abstract

Social media can be a major accelerator of the spread of misinformation, thereby potentially compromising both individual well-being and social cohesion. Despite significant recent advances, the study of online misinformation is a relatively young field facing several (methodological) challenges. In this regard, the detection of online misinformation has proven difficult, as online large-scale data streams require (semi-)automated, highly specific and therefore sophisticated methods to separate posts containing misinformation from irrelevant posts. In the present paper, we introduce the adaptive community-response (ACR) method, an unsupervised technique for the large-scale collection of misinformation on Twitter (now known as 'X'). The ACR method is based on previous findings showing that Twitter users occasionally reply to misinformation with fact-checking by referring to specific fact-checking sites (crowdsourced fact-checking). In a first step, we captured such misinforming but fact-checked tweets. These tweets were used in a second step to extract specific linguistic features (keywords), enabling us, in a third step, to also collect those misinforming tweets that were not fact-checked at all. We initially present a mathematical framework of our method, followed by an explicit algorithmic implementation. We then evaluate ACR on the basis of a comprehensive dataset consisting of >25 million tweets belonging to >300 misinforming stories. Our evaluation shows that ACR is a useful extension to the field's pool of methods, enabling researchers to collect online misinformation more comprehensively. Text similarity measures clearly indicated correspondence between the claims of false stories and the ACR tweets, even though ACR performance was heterogeneously distributed across the stories. A baseline comparison to the fact-checked tweets showed that the ACR method can detect story-related tweets to a comparable degree, while being sensitive to different types of tweets: fact-checked tweets tend to be driven by high outreach (as indicated by a high number of retweets), whereas the sensitivity of the ACR method extends to tweets exhibiting lower outreach. Taken together, ACR's capacity as a valuable methodological contribution to the field rests on (i) the adoption of prior, pioneering research in the field, (ii) a well-formalized mathematical framework and (iii) an empirical foundation via a comprehensive set of indicators.
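
The abstract describes the ACR pipeline only at a high level; the sketch below is a simplified, assumed illustration of its three steps (capturing fact-checked replies, extracting keywords, and re-querying for tweets that were never fact-checked). The field names, fact-checking domains, and keyword scoring are placeholders, not the paper's actual implementation.

```python
# Simplified sketch of the three ACR steps described in the abstract.
# Field names, fact-checking domains, and the keyword threshold are assumptions
# for illustration; they do not reproduce the paper's implementation.
from collections import Counter
import re

FACT_CHECK_DOMAINS = ("snopes.com", "politifact.com", "fullfact.org")  # assumed list

def step1_collect_fact_checked(tweets):
    """Keep tweets whose replies link to a fact-checking site (crowdsourced fact-checking)."""
    return [t for t in tweets
            if any(domain in reply.get("text", "")
                   for reply in t.get("replies", [])
                   for domain in FACT_CHECK_DOMAINS)]

def step2_extract_keywords(fact_checked_tweets, top_k=10, min_len=4):
    """Extract frequent story-specific terms from the fact-checked tweets."""
    counts = Counter()
    for t in fact_checked_tweets:
        counts.update(w for w in re.findall(r"[a-z']+", t["text"].lower()) if len(w) >= min_len)
    return [word for word, _ in counts.most_common(top_k)]

def step3_collect_unchecked(tweets, keywords, min_hits=2):
    """Flag tweets matching enough story keywords, even if nobody fact-checked them."""
    return [t for t in tweets
            if sum(kw in t["text"].lower() for kw in keywords) >= min_hits]

# Usage sketch: stream -> fact-checked seed -> keywords -> broader collection.
# seed = step1_collect_fact_checked(stream)
# keywords = step2_extract_keywords(seed)
# collected = step3_collect_unchecked(stream, keywords)
```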

Flood risk assessment and adaptation under changing climate for the agricultural system in the Ghanaian White Volta Basin

Abstract

In the context of river basins, the threat of climate change has been extensively studied. However, many of these studies centred on hazard analysis while neglecting the need for comprehensive risk assessments that account for exposure and vulnerability. Hazard analysis alone is not adequate for making adaptive decisions. Thus, to effectively manage flood risk, it is essential to understand the elements that contribute to vulnerability and exposure in addition to hazard analysis. This study aims to assess flood risk (in space and time until the year 2100) for the agricultural system in the White Volta Basin in northern Ghana. Employing the impact chain methodology, a mix of quantitative and qualitative data and techniques was used to assess hazard, exposure, and vulnerability. Multi-model climate change data (RCP 8.5) from CORDEX and observation data from the Ghana Meteorological Agency were used for hazard analysis. Data on exposure, vulnerability, and adaptation were collected through structured interviews. Results indicate that flood hazard will increase by 79.1%, with high spatial variability of wet periods, while the flood risk of the catchment will increase by 19.3% by the end of the twenty-first century. The highest flood risk is found in the Upper East region, followed by North East, Northern, Savannah, and Upper West for all four analysed periods. Adaptive capacity, sensitivity, and exposure factors are driven by poverty, ineffective institutional governance, and a lack of livelihood alternatives. We conclude that the region is highly susceptible and vulnerable to floods, and that shifting from isolated hazard analysis to a comprehensive assessment that considers exposure and vulnerability reveals the underlying root causes of the risk. The impact chain is also useful in generating insight into flood risk for policymakers and researchers. We recommend enhancing local capacity and fostering social transformation in the region.
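
Impact-chain assessments of this kind typically combine hazard, exposure, and vulnerability into a composite risk index. The sketch below illustrates one common way such an index can be aggregated; the normalization, equal weights, and example values are assumptions for illustration only, not the study's figures or exact procedure.

```python
# Minimal sketch of a composite flood-risk index built from hazard, exposure,
# and vulnerability components, as in impact-chain assessments. The min-max
# normalization, equal weights, and example values are assumptions.
def min_max_normalize(value, lo, hi):
    """Scale an indicator to [0, 1]."""
    return (value - lo) / (hi - lo)

def risk_index(hazard, exposure, vulnerability, weights=(1/3, 1/3, 1/3)):
    """Weighted arithmetic aggregation of normalized components (all in [0, 1])."""
    w_h, w_e, w_v = weights
    return w_h * hazard + w_e * exposure + w_v * vulnerability

# Example for one hypothetical district, with illustrative indicator values.
hazard = min_max_normalize(420, lo=100, hi=600)        # e.g. an extreme-rainfall index
exposure = min_max_normalize(0.7, lo=0.0, hi=1.0)      # e.g. share of cropland in flood zones
vulnerability = 0.5 * 0.8 + 0.5 * (1 - 0.3)            # e.g. sensitivity and (1 - adaptive capacity)

print(f"flood risk index: {risk_index(hazard, exposure, vulnerability):.2f}")
```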

Epistemic Insights as Design Principles for a Teaching-Learning Module on Artificial Intelligence

Abstract

In a historical moment in which Artificial Intelligence and machine learning have come within everyone’s reach, science education needs to find new ways to foster “AI literacy.” Since the AI revolution is not only a matter of having introduced extremely performant tools but is driving a radical change in how we conceive and produce knowledge, what is needed is not only technical skill but also instruments for engaging, cognitively and culturally, with the epistemological challenges that this revolution poses. In this paper, we argue that epistemic insights can be introduced in AI teaching to highlight the differences between three paradigms: the imperative-procedural, the declarative-logic, and machine learning based on neural networks (in particular, deep learning). To do this, we analyze a teaching-learning activity designed and implemented within a module on AI for upper secondary school students in which the game of tic-tac-toe is addressed from these three alternative perspectives. We show how the epistemic issues of opacity, uncertainty, and emergence, which the philosophical literature highlights as characterizing the novelty of deep learning with respect to other approaches, allow us to build the scaffolding for establishing a dialogue between the three different paradigms.
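
As a rough illustration of how the three paradigms differ on tic-tac-toe, the sketch below contrasts an imperative-procedural move rule with a declarative specification of a win; the deep-learning paradigm is indicated only by a comment, since it would require a trained model. This is an assumed reconstruction for illustration, not the module's actual teaching materials.

```python
# Sketch contrasting paradigms on tic-tac-toe (illustrative reconstruction).
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

# Imperative-procedural: an explicit, step-by-step rule for choosing a move.
def imperative_move(board, player):
    """Take a winning cell if one exists, otherwise the first empty cell."""
    for line in WIN_LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 2 and cells.count(" ") == 1:
            return line[cells.index(" ")]
    return board.index(" ")

# Declarative-logic: state *what* a win is; an engine checks the condition.
def is_win(board, player):
    return any(all(board[i] == player for i in line) for line in WIN_LINES)

# Machine-learning paradigm (deep learning): a policy would instead be a trained
# network mapping board states to move probabilities, learned from data or
# self-play rather than written as rules -- which is where the issues of
# opacity, uncertainty, and emergence enter.

board = list("XX OO    ")
print(imperative_move(board, "X"))     # -> 2 (completes the top row)
print(is_win(list("XXXOO    "), "X"))  # -> True
```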

Assessing the variability of satellite and reanalysis rainfall products over a semiarid catchment in Tunisia

Abstract

Precipitation is a key component of hydrologic processes and plays an important role in hydrological modeling and water resource management. However, many regions suffer from data scarcity due to the lack of ground-based rain gauge networks. The main objective of this study is to evaluate alternative sources of rainfall data (three satellite-based precipitation products, CHIRPS, PERSIANN, and GPM, and one reanalysis product, ERA5) against ground-based data, which could provide complementary rainfall information for a semiarid catchment of Tunisia (the Haffouz catchment) over the period September 2000 to August 2018. These remotely sensed data are compared with ground observations for the first time in a semiarid catchment in Tunisia.

Twelve rain gauges and two different interpolation methods (inverse distance weighting and ordinary kriging) were used to compute a set of interpolated precipitation reference fields. The evaluation was performed at daily, monthly, and yearly time scales and at different spatial scales, using several statistical metrics. The results showed that the two interpolation methods give similar precipitation estimates at the catchment scale. According to the different statistical metrics, CHIRPS showed the most satisfactory results, followed by PERSIANN, which performed well in terms of correlation but spatially overestimated precipitation over the catchment. GPM considerably underestimates precipitation but performs satisfactorily in the temporal domain. ERA5 shows very good performance at the daily, monthly, and yearly timescales, but it is unable to represent the spatial variability of precipitation over this catchment. This study concluded that satellite-based precipitation products and reanalysis data can be useful in semiarid regions and data-scarce catchments and may provide less costly alternatives for data-poor regions.
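
To make the evaluation workflow concrete, the sketch below shows an inverse-distance-weighted reference field and a few commonly used evaluation metrics (bias, RMSE, correlation). The gauge locations, values, and product estimates are placeholders and the exact metric set is an assumption; this does not reproduce the study's data or configuration.

```python
# Sketch of an inverse-distance-weighted (IDW) reference field and common
# evaluation metrics. Gauge locations, values, and the choice of metrics are
# illustrative assumptions, not the study's data or exact setup.
import numpy as np

def idw(x, y, gauges, power=2.0):
    """Interpolate rainfall at (x, y) from gauge tuples (xi, yi, value)."""
    xi, yi, vi = (np.array(c, dtype=float) for c in zip(*gauges))
    d = np.hypot(xi - x, yi - y)
    if np.any(d == 0):                 # point falls exactly on a gauge
        return float(vi[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * vi) / np.sum(w))

def bias(product, reference):
    return float(np.mean(product - reference))

def rmse(product, reference):
    return float(np.sqrt(np.mean((product - reference) ** 2)))

def correlation(product, reference):
    return float(np.corrcoef(product, reference)[0, 1])

# Usage sketch with made-up gauges (x_km, y_km, mm/day) and product estimates.
gauges = [(0, 0, 12.0), (10, 5, 8.0), (4, 9, 15.0)]
reference = np.array([idw(2, 3, gauges), idw(7, 7, gauges)])
product = np.array([10.5, 9.0])        # e.g. satellite pixel values (assumed)
print(bias(product, reference), rmse(product, reference), correlation(product, reference))
```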

How denialist amplification spread COVID misinformation and undermined the credibility of public health science

Abstract

Denialist scientists played an outsized role in shaping public opinion and determining public health policy during the recent COVID pandemic. From early on, amplification of researchers who denied the threat of COVID shaped public opinion and undermined public health policy. The forces that amplify denialists include (1) Motivated amplifiers seeking to protect their own interests by supporting denialist scientists, (2) Conventional media outlets giving disproportionate time to denialist opinions, (3) Promoters of controversy seeking to gain traction in an ‘attention economy,’ and (4) Social media creating information silos in which denialists can become the dominant voice. Denialist amplification poses an existential threat to science relevant to public policy. It is incumbent on the scientific community to create a forum to accurately capture the collective perspective of the scientific community related to public health policy that is open to dissenting voices but prevents artificial amplification of denialists.

GenAI against humanity: nefarious applications of generative artificial intelligence and large language models

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we’ll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities, or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of potential GenAI’s nefarious applications impact us all. This article serves both as a synthesis of rigorous research presented on the risks of GenAI and misuse of LLMs and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, and some ways we can prepare for them.