From unseen to seen in post-mining polluted territories: (in)visibilisation processes at work in soil contamination management

Abstract

In line with EU recommendations, the potential ‘mining revival’ in France focuses on (re)opening mines. In this context, political discussion of post-mining areas has increased, driven by past mismanagement. Scientists play a key role in these regions, studying contamination, advising policy, and seeking solutions. Based on a case study of phytoremediation research in Saint-Laurent-Le Minier, we explore how lay and expert knowledge intersect. By examining what is hidden and by whom, we unveil research practices and stakeholder dynamics, sparking reflection on the research process while promoting a reflexive approach for researchers. We show that research and its application spotlight specific topics (such as soil contamination), select and make visible certain lay knowledge and local stakeholders, and visibilise certain technological choices.

Predictability of the 7·20 extreme rainstorm in Zhengzhou in stochastic kinetic-energy backscatter ensembles

Abstract

The scale-dependent predictability of the devastating 7·20 extreme rainstorm in Zhengzhou, China in 2021 was investigated via ensemble experiments, which were perturbed on different scales using the stochastic kinetic-energy backscatter (SKEB) scheme in the WRF model, with the innermost domain having a 3-km grid spacing. The daily rainfall (RAIN24h) and the cloudburst during 1600–1700 LST (RAIN1h) were considered. Results demonstrated that with larger perturbation scales, the ensemble spread for the rainfall maximum widens and rainfall forecasts become closer to the observations. In ensembles with mesoscale or convective-scale perturbations, RAIN1h loses predictability at scales smaller than 20 km, whereas RAIN24h is predictable at all scales. In ensembles with synoptic-scale perturbations, by contrast, the largest scale of predictability loss extends to 60 km for both RAIN1h and RAIN24h. Moreover, the average positional error in forecasting the heaviest rainfall for RAIN24h (RAIN1h) was 400 km (50–60 km). The southerly low-level jet near Zhengzhou was assumed to be directly responsible for the forecast uncertainty of RAIN1h. The rapid intensification in low-level cyclonic vorticity, mid-level divergence, and upward motion concomitant with the jet dynamically facilitated the cloudburst. Further analysis of the divergent, rotational, and vertical kinetic energy spectra and the corresponding error spectra showed that the error kinetic energy at smaller scales grows faster and saturates more quickly than that at larger scales in all experiments. Larger-scale perturbations not only boost larger-scale error growth but are also conducive to error growth at all scales through a downscale cascade, which indicates that improving the accuracy of larger-scale flow forecasts may contribute discernibly to forecasting the intensity and position of the cloudburst.
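The error-growth analysis above rests on computing kinetic energy spectra of the wind fields. As a rough illustration only — not the authors' actual decomposition into divergent, rotational, and vertical components — a 1-D kinetic energy spectrum of a doubly periodic 2-D wind field can be obtained by binning spectral energy over radial wavenumber shells:

```python
import numpy as np

def kinetic_energy_spectrum(u, v, dx=3.0):
    """1-D kinetic energy spectrum of a doubly periodic 2-D wind field.

    u, v : square 2-D arrays of horizontal wind components (m/s)
    dx   : grid spacing in km (3 km, as in the inner WRF domain)
    Returns wavenumber bins (cycles/km) and the binned KE density.
    """
    n = u.shape[0]
    # Spectral KE per mode: 0.5 * (|u_hat|^2 + |v_hat|^2), FFT-normalised
    u_hat = np.fft.fft2(u) / n**2
    v_hat = np.fft.fft2(v) / n**2
    ke = 0.5 * (np.abs(u_hat) ** 2 + np.abs(v_hat) ** 2)

    # Radial wavenumber magnitude of each Fourier mode
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    k_mag = np.sqrt(kx**2 + ky**2)

    # Sum KE over shells centred on integer multiples of the fundamental
    k_fund = 1.0 / (n * dx)
    bins = np.arange(1, n // 2) * k_fund
    spectrum = np.array(
        [ke[(k_mag >= b - k_fund / 2) & (k_mag < b + k_fund / 2)].sum()
         for b in bins]
    )
    return bins, spectrum
```

Comparing such spectra between a control run and a perturbed member, scale by scale, is the standard way to diagnose where error energy grows fastest and where it saturates.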

Playing an Augmented Reality Escape Game Promotes Learning About Fake News

Abstract

The spread of fake news poses a global challenge to society, as this deliberately false information reduces trust in democracy, manipulates opinions, and negatively affects people’s health. Educational research and practice must address this issue by developing and evaluating solutions to counter fake news. A promising approach in this regard is the use of game-based learning environments. In this study, we focus on Escape Fake, an augmented reality (AR) escape game developed for use in media literacy education. To date, there is limited research on the effectiveness of the game for learning about fake news. To address this gap, we conducted a field study using a pretest-posttest research design. A total of 28 students (14 girls, mean age = 14.71 years) participated. The results show that Escape Fake can address four learning objectives relevant to fake news detection with educationally desirable effect sizes: knowledge acquisition (d = 1.34), ability to discern information (d = 0.39), critical attitude toward the trustworthiness of online information (d = 0.53), and confidence in recognizing fake news in the future (d = 0.41). Based on these results, the game can be recommended as an educational resource for media literacy education. Future research directions are also discussed.
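For context, the effect sizes reported above are Cohen's d values. In a pretest-posttest design, one common convention divides the mean gain by the standard deviation of the pretest scores; other variants exist (pooled SD, SD of the difference scores), and the abstract does not state which one was used, so the following is a generic sketch rather than the study's exact computation:

```python
import numpy as np

def cohens_d_pre_post(pre, post):
    """Cohen's d for a one-group pretest-posttest design.

    Convention used here (one of several): mean change divided by the
    sample standard deviation of the pretest scores.
    """
    pre = np.asarray(pre, float)
    post = np.asarray(post, float)
    return (post.mean() - pre.mean()) / pre.std(ddof=1)
```

By the usual rules of thumb, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why d = 1.34 for knowledge acquisition stands out.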

Biden’s Executive Order on AI and the E.U.’s AI Act: A Comparative Computer-Ethical Analysis

Abstract

AI (ethics) initiatives are essential in bringing about fairer, safer, and more trustworthy AI systems. Yet, they also come with various drawbacks, including a lack of effective governance mechanisms, window-dressing, and ‘ethics shopping.’ To address those concerns, hard laws are necessary, and more and more countries are moving in this direction. Two of the most notable recent legislative initiatives are the Biden Administration’s Executive Order (EO) on AI and the E.U.’s AI Act (AIA). While several scholarly articles have evaluated the strengths and weaknesses of the AIA and proposed reform measures that could help strengthen the Act, only a couple of papers do the same for the EO or compare the two regulatory initiatives. The following sections try to close this research gap by providing an in-depth comparative analysis of the EO and AIA. In particular, they offer a critical computer-ethical evaluation of the strengths, weaknesses, similarities, and differences of the EO and the AIA, and discuss possible ways to improve both pieces of legislation.

Neighborhood relation-based incremental label propagation algorithm for partially labeled hybrid data

Abstract

Label propagation can rapidly predict the labels of unlabeled objects from a small amount of given label information, which can enhance the performance of subsequent machine learning tasks. Most existing label propagation methods are designed for static data. In many applications, however, real datasets containing multiple feature value types and massive numbers of unlabeled objects vary dynamically over time, and applying these methods to dynamic partially labeled hybrid data incurs a huge computational cost because they must recalculate from scratch each time the data changes. To improve efficiency, a novel incremental label propagation algorithm based on neighborhood relations (ILPN) is developed in this paper. Specifically, we first construct graph structures by utilizing neighborhood relations to eliminate unnecessary label information. Then, a new label propagation strategy is designed that assigns weights to each class, so that propagation does not rely on a probabilistic transition matrix that fixes the propagation structure. On this basis, a new label propagation algorithm called neighborhood relation-based label propagation (LPN) is developed. For dynamic partially labeled hybrid data, we integrate incremental learning into LPN and develop an updating mechanism that performs incremental label propagation over previous propagation results and graph structures rather than recalculating from scratch. Finally, extensive experiments on UCI datasets validate that the proposed LPN outperforms other label propagation algorithms in speed while maintaining accuracy. For simulated dynamic data in particular, the incremental algorithm ILPN is more efficient than non-incremental methods as the partially labeled hybrid data vary.
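For orientation, the classical transition-matrix-based baseline that the paper improves on can be sketched as follows. This is the generic clamp-and-iterate scheme on a weighted graph (in the style of Zhu and Ghahramani), not the paper's LPN/ILPN, which avoids a fixed transition matrix and adds incremental updates:

```python
import numpy as np

def label_propagation(W, y, n_iter=100):
    """Classical iterative label propagation on a weighted graph.

    W : (n, n) symmetric non-negative edge-weight matrix
        (assumes every node has at least one neighbour)
    y : length-n integer labels, with -1 marking unlabeled objects
    Returns predicted labels for all n objects.
    """
    n = len(y)
    classes = np.unique(y[y >= 0])
    # One-hot label matrix; rows for unlabeled objects start at zero
    F0 = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        F0[y == c, j] = 1.0
    labeled = y >= 0
    # Row-normalised transition matrix: this fixed matrix is exactly
    # what forces a full recomputation when the data changes
    P = W / W.sum(axis=1, keepdims=True)
    F = F0.copy()
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = F0[labeled]  # clamp the known labels each sweep
    return classes[F.argmax(axis=1)]
```

Because `P` is built once from the whole dataset, adding or removing objects invalidates it entirely; the paper's incremental mechanism is designed to sidestep precisely this recomputation.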

Who shares about AI? Media exposure, psychological proximity, performance expectancy, and information sharing about artificial intelligence online

Abstract

Media exposure can shape audience perceptions surrounding novel innovations, such as artificial intelligence (AI), and could influence whether they share information about AI with others online. This study examines the indirect association between exposure to AI in the media and information sharing about AI online. We surveyed 567 US citizens aged 18 and older in November 2020, several months after the release of OpenAI’s transformative GPT-3 model. Results suggest that AI media exposure was related to online information sharing through psychological proximity to the impacts of AI and positive AI performance expectancy in serial mediation. This positive indirect association became stronger the more an individual perceived society to be changing due to new technology. Results imply that public exposure to AI in the media could significantly impact public understanding of AI, and prompt further information sharing online.
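The serial mediation reported above means the association runs exposure → psychological proximity → performance expectancy → sharing, with the indirect effect given by the product of the three path coefficients. A generic sketch of that computation (not the authors' exact model, software, or variable coding, all of which are unstated here) might be:

```python
import numpy as np

def _coefs(y, *xs):
    """OLS coefficients of y on the given regressors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + [np.asarray(x, float) for x in xs])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta[1:]

def serial_indirect_effect(x, m1, m2, y):
    """Indirect effect X -> M1 -> M2 -> Y in a two-serial-mediator model.

    a : effect of X on M1
    b : effect of M1 on M2, controlling for X
    c : effect of M2 on Y, controlling for X and M1
    The serial indirect effect is the product a * b * c.
    """
    a = _coefs(m1, x)[0]
    b = _coefs(m2, x, m1)[1]
    c = _coefs(y, x, m1, m2)[2]
    return a * b * c
```

In practice such models are usually estimated with dedicated mediation software and bootstrapped confidence intervals; the sketch only shows where the product-of-paths quantity comes from.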

A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness

Abstract

This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is the large language model (LLM), a generative artificial intelligence (AI) system trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. For most users, current LLMs are black boxes, i.e., for the most part, they lack data transparency and algorithmic transparency. They can, however, be phenomenologically and informationally transparent, in which case there is an interactional flow. Anthropomorphising and interactional flow can, in some users, create an attitude of (unwarranted) trust towards the output LLMs generate. We conclude this paper by drawing on the epistemology of trust and testimony to examine the epistemic implications of these dimensions. Whilst LLMs generally generate accurate responses, we observe two epistemic pitfalls. Ideally, users should be able to match the level of trust that they place in LLMs to the degree that LLMs are trustworthy. However, both their data and algorithmic opacity and their phenomenological and informational transparency can make it difficult for users to calibrate their trust correctly. The effects of these limitations are twofold: users may adopt unwarranted attitudes of trust towards the outputs of LLMs (which is particularly problematic when LLMs hallucinate), and the trustworthiness of LLMs may be undermined.

Examining the nexus of blockchain technology and digital twins: Bibliometric evidence and research trends

Abstract

The integration of Blockchain Technology (BT) with Digital Twins (DTs) is becoming increasingly recognized as an effective strategy to enhance trust, interoperability, and data privacy in virtual spaces such as the metaverse. Although there is a significant body of research at the intersection of BT and DTs, a thorough review of the field has not yet been conducted. This study performs a systematic literature review on BT and DTs, using the CiteSpace analytic tool to evaluate the content and bibliometric information. The review covers 976 publications, identifying the significant effects of BT on DTs and the integration challenges. Key themes emerging from keyword analysis include augmented reality, smart cities, smart manufacturing, cybersecurity, lifecycle management, Ethereum, smart grids, additive manufacturing, blockchain technology, and digitalization. Based on this analysis, the study proposes a development framework for BT-enhanced DTs that includes supporting technologies and applications, main applications, advantages and functionalities, primary contexts of application, and overarching goals and principles. Additionally, an examination of bibliometric data reveals three developmental phases in cross-sectional research on BT and DTs: technology development, technology use, and technology deployment. These phases highlight the research field’s evolution and provide valuable direction for future studies on BT-enhanced DTs.