…AI and rapidly increasing misinformation problems. Misinformation is by no means a new phenomenon, yet its trend is highlighted by the emergence of AI. It might be useful to see misinformation in the context of a new and rapidly evolving AI landscape, which has facilitated the spread of unparalleled …
…n fact and fiction. Deepfakes, which use deep learning involving AI to generate videos or fake events, are highly realistic synthetic media that can be abused to threaten an organization’s brand; to impersonate leaders and financial officers; and to enable access to networks, communications, and sen…
Donghee Shin
Explores the dynamics of artificial intelligence (AI) in human contexts. Advocates for interdisciplinary solutions for combating online misinformation. Essential reading for students and scholars in div…
Book 2024
…m meaningful control over AI and the online sphere. Calling for an interdisciplinary approach toward human-misinformation algorithmic interaction that focuses on building methods and tools that robustly deal with complex psychological/social phenomena, the book offers a compelling insight into the future of AI-based society.
Introduction: The Epistemology of Misinformation—How Do We Know What We Know
…it seems that artificial intelligence (AI) is the primary conduit for it. … for misinformation that can do good—but also worsen misinformation. Advances in machine learning and algorithms have brought to life a highly effective method for conveying misinformation. In the midst of a “misinfodemic,” …
…ns work, and what we have to do to combat misinformation. Misinformation concerns and potential methods for mitigating those threats can be discussed in terms of cognitive processes connected to perception, understanding, heuristics, sensemaking, cognitive processing, and decision-making.
Misinformation and Algorithmic Bias
…ssed as humans are. Bias can originate from various venues, including but not limited to the design and unintended or unanticipated use of the algorithm or algorithmic decisions about the way data are coded, framed, filtered, or analyzed to train machine learning. Algorithmic bias has been widely see…
Misinformation Processing Model: How Users Process Misinformation When Using Recommender Algorithms
…the spread and amplification of misinformation, it is useful to understand the effects of these algorithms on misinformation sharing and the manner in which they spread it. This chapter examines the psychological, cognitive, and social factors involved in the processing of misinformation people rec…
Misinformation and Diversity: Nudging Away from Misinformation, Nudging Toward Diversity
…ss to mitigate misinformation. Free and plural ideas are key to addressing misinformation and informing users. A key indicator of a healthy online ecosystem is the existence of a diversity of ideas and others’ perspectives. Exposure to diverse sources of news promotes tolerance, social cohesion, and…
Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging
…computer interaction has contributed mechanisms by which such biases can be addressed through the design of system interventions. For example, the principle of nudging refers to sophisticated modifications in the choice architecture that can change user behaviors in desired or directed ways. This chapter…
…was a greater drop in algorithmic news when nudging was employed. Moderation by algorithmic trust was found: users’ trust in algorithmic media amplified the nudge effect only for news from algorithmic media, not for nonalgorithmic online media sources. The results of our study confirm previous…
ISBN: 978-3-031-52571-1, 978-3-031-52569-8
Misinformation and Algorithmic Bias
…om unintended privacy infringements to solidifying societal biases of gender, race, ethnicity, and culture. The significance of the data used in training algorithms should not be underestimated. Humans should play a part in the datafication of algorithms, as preventing the spread of misinformation i…
Misinformation, Extremism, and Conspiracies: Amplification and Polarization by Algorithms
…the pathways by which users access far-right content are manifold and that a large part of this can be ascribed to platform recommendations through a positive feedback loop. The results are consistent with the proposition that the generation and adoption of extreme content on TikTok largely reflect…
Misinformation Processing Model: How Users Process Misinformation When Using Recommender Algorithms
…n and whether explainability further moderates this relationship. The findings showed that users with a high heuristic processing of normative values and positive diagnostic perception were more likely to proactively discern misinformation. Users with a high cognitive ability to understand informati…
Misinformation and Diversity: Nudging Away from Misinformation, Nudging Toward Diversity
…users make sense of algorithmic nudges and how nudges influence users’ views on personalization and attitudes toward news diversity. The findings show that algorithmic nudges play a key role in understanding normative values in news recommender systems (NRS), which then influence the user’s intention to consume diverse news.
Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging
…of nudging in decreasing the chances of consuming and spreading misinformation. The key is how to ensure that algorithmic nudges are used in an effective way and whether the nudge could also help to achieve a sustainable way of life. This chapter discusses the principles and dimensions of the nudgin…
Misinformation and Generative AI: How Users Construe Their Sense of Diagnostic Misinformation
…high heuristic processing of ethical values and positive diagnostic perception were more likely to proactively discern misinformation than users with low heuristic processing and low diagnostic perception. When exposed to misinformation from GenAI, users’ construed diagnosticity of misinformation ca…