Title: Artificial Intelligence in HCI; Second International Conference; Helmut Degen, Stavroula Ntoa (Eds.); Conference proceedings 2021; Springer Nature Switzerland AG
Book metrics for Artificial Intelligence in HCI: Impact Factor; Impact Factor (subject ranking); Online visibility; Online visibility (subject ranking); Citation count; Citation count (subject ranking); Annual citations; Annual citations (subject ranking); Reader feedback; Reader feedback (subject ranking).
Human-Centered Artificial Intelligence Considerations and Implementations: A Case Study from Software Product Development
This paper provides an overview of artificial intelligence (AI) and human-centered artificial intelligence (HCAI). It presents a case study of applying AI and HCAI to software product development. Considerations such as use case development, user involvement, and the creation of a smart assistant are reviewed.
… argumentation-based recommender system. Those applications need a distance function that gives intuitive results. In this paper, we present seven functions that calculate how similar the attitudes of two agents are in an argumentation. We evaluate how well those functions match the results …
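The seven functions themselves are not reproduced in this excerpt. Purely to illustrate what an attitude-distance function between two agents can look like, here is a hypothetical sketch (not one of the paper's functions), with stances coded as reject (-1), undecided (0), and accept (+1):

```python
# Hypothetical attitude-distance sketch: each agent maps argument ids to a stance
# in {-1, 0, +1}; the distance is the mean absolute stance difference, scaled to [0, 1].
def attitude_distance(agent_a, agent_b):
    arguments = set(agent_a) | set(agent_b)
    if not arguments:
        return 0.0
    total = sum(abs(agent_a.get(a, 0) - agent_b.get(a, 0)) for a in arguments)
    return total / (2 * len(arguments))  # per-argument difference is at most 2

alice = {"arg1": 1, "arg2": -1, "arg3": 0}
bob = {"arg1": 1, "arg2": 1, "arg3": -1}
print(attitude_distance(alice, bob))  # 0.5
```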
… human operators and advanced tools. Advances in artificial intelligence (AI) have seen an increase in interest and use in the mission planning environment. Yet traditional approaches typically focus on optimizing the performance of the individual operator or the mission planning tool, not the …
Can You Trust the Black Box? The Effect of Personality Traits on Trust in AI-Enabled User Interfaces
… communities. One key focus of this stream is the enablement of trust between end users and the intelligent solution. Although the current body of literature discusses and proposes a range of best practices for the design of user interfaces for intelligent solutions, there is a dearth of research on how …
Towards Design Principles for User-Centric Explainable AI in Fraud Detection
… of Artificial Intelligence (AI) for decision support, those experts face the black-box problem and lack trust in AI predictions for fraud. Such an issue has been tackled by employing Explainable AI (XAI) to provide experts with explained AI predictions through various explanation methods. However, … a design science research methodology and IS theoretical lens to develop and evaluate design principles, which align fraud experts' tasks with explanation methods for Explainable AI decision support. We evaluate the utility of these principles using an information quality framework to interview experts …
Disentangling Trust and Anthropomorphism Toward the Design of Human-Centered AI Systems
… that will guide human interactions with AI and anticipation of AI behavior. Similarly, anthropomorphizing, or attributing humanlike qualities to an entity, allows humans to predict and prepare for the complex and motivated behavior of humanlike entities in our environment. The shared role of these two … anthropomorphism as predictive processes, how they are similar, and how they differ. Then, how each may be considered in the design of human-centered AI systems is discussed, including some future research directions.
Designing a Gender-Inclusive Conversational Agent for Pair Programming: An Empirical Investigation
… productivity, self-efficacy, and knowledge transfer as it does with a human. However, to create a gender-inclusive agent, we need to understand the communication styles between human-human and human-agent pairs. To investigate these communication styles, we conducted gender-balanced studies with …
Morality Beyond the Lines: Detecting Moral Sentiment Using AI-Generated Synthetic Context
… concerns. The Moral Foundations Theory (MFT) can be used to evaluate expressions of moral sentiment. MFT proposes that there are five innate, universal moral foundations that exist across cultures and societies: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. … Generative Pre-trained Transformer (GPT-2), which uses deep learning to produce human-like text, to generate a new story. A human writer would usually provide several prompting sentences, and the GPT model would produce the rest of the story. To customize the GPT-2 model towards a specific domain, for this paper …
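The paper's domain fine-tuning is not reproduced in this excerpt. As a minimal sketch of the prompting step it describes, the snippet below uses the Hugging Face transformers library; the prompt text and sampling settings are illustrative assumptions, not taken from the study:

```python
# Minimal sketch: prompt GPT-2 to continue a story (prompt and settings are illustrative).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The neighbours shared their food with the stranded travellers because"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; a domain-adapted checkpoint would be loaded the same way
# after fine-tuning on in-domain stories (the fine-tuning itself is not shown here).
output_ids = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.92,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```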
Whoops! Something Went Wrong: Errors, Trust, and Trust Repair Strategies in Human Agent Teaming
… Modern technological advances will inevitably lead to the integration of autonomous systems and will consequently increase the need for effective human agent teaming (HAT). One of the paramount ideals human operators must discern is their perception of autonomous agents as equal team …
What Does It Mean to Explain? A User-Centered Study on AI Explainability
… been active in exploring mathematical approaches that can increase the explainability of models. However, ensuring explainability thoroughly in the real world remains an open question. Indeed, besides data scientists, a variety of users is involved in the model lifecycle, with varying motivations and …
From a Workshop to a Framework for Human-Centered Artificial Intelligence
… understand the current and potentially future research activities, a workshop “AI in HCI” took place as part of the first AI in HCI conference. The intent of the workshop was twofold: to explore the research landscape of AI in HCI, with current and potentially future research topics, as well as to build the … votes. Afterwards, the participants undertook a deep dive into the three top-scored categories, elaborating per category on research questions, research approaches, research challenges, and beneficiaries. During the workshop on July 24, 2020, only the topic “Trust” was elaborated. Two follow-up sessions …
Collaborative Human-AI Sensemaking for Intelligence Analysis
… are often data- or algorithm-centric and opaque, and do not support the complexities of analyst sensemaking. An exploratory sensitivity analysis was conducted with a simple Authorship Attribution (AA) task to identify the degree to which an analyst can apply their sensemaking outputs as inputs to a …
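The study's authorship-attribution setup is not detailed in this excerpt. As a hedged illustration of what a simple AA baseline can look like, the sketch below uses scikit-learn with toy texts and character n-gram features; all of these choices are assumptions for the example, not the paper's:

```python
# Toy authorship-attribution baseline: character n-gram TF-IDF plus a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the ship sailed at dawn and the crew watched the coast fade",
    "results indicate a statistically significant effect across conditions",
]
train_authors = ["author_a", "author_b"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_authors)

# An unseen sample in a similar style should lean towards author_a.
print(model.predict(["the ship sailed home at dusk while the crew slept"]))
```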
Putting a Face on Algorithms: Personas for Modeling Artificial Intelligence
Personas are commonly used in design practices for modelling users. We argue that the personification of AI agents can help multidisciplinary teams in understanding and designing systems that include AI agents. We propose a process for creating AI personas and the properties they should include, and report on … tool for air traffic control. Our first results indicate that AI personas helped designers to empathise with algorithms and enabled better communication within a team of designers, AI experts, and domain experts. We call for a research agenda on AI personas and discussion of the potential benefits and pitfalls of this approach.
… discrepancies should also be taken into consideration. This paper begins by discussing the challenges of education in the digital era in Taiwan, and how to “bridge” the digital divide to achieve digital equality in our society.
A Contextual Bayesian User Experience Model for Scholarly Recommender Systems
… better UX with SRSs. The aim of this paper is to operationalize relevant contexts and to design a Bayesian user model (UM) for assisting the diagnosis of scholars’ information needs in terms of accurate, novel, diverse, and popular research papers. The proposed user model can be embedded in the process of recommendation …
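The variables and structure of the proposed Bayesian user model are not given in this excerpt. As a minimal sketch of the underlying idea, the snippet below infers an information need from observed context with Bayes' rule; the variables and probabilities are made up for illustration:

```python
# Made-up contextual evidence and likelihoods; infer which kind of papers a scholar needs.
priors = {"novel": 0.5, "popular": 0.5}
likelihoods = {
    "novel": {"career_stage=early": 0.7, "task=exploration": 0.8},
    "popular": {"career_stage=early": 0.4, "task=exploration": 0.3},
}

def posterior(evidence):
    scores = {need: prior for need, prior in priors.items()}
    for need in scores:
        for e in evidence:
            scores[need] *= likelihoods[need][e]
    z = sum(scores.values())
    return {need: p / z for need, p in scores.items()}

print(posterior(["career_stage=early", "task=exploration"]))  # novel ~0.82, popular ~0.18
```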
Design Intelligence - Taking Further Steps Towards New Methods and Tools for Designing in the Age of AI
… in a list of considerations to be taken into account when designing for AI. As a second step, the paper analyses Human-Centered AI principles, which are proposed as a solution to the design challenges imposed by AI. By mapping the list of challenges to the solutions, this work seeks to initiate a next …
Tool or Partner: The Designer’s Perception of an AI-Style Generating Service
… opinions on the interpretation criteria for instrumental recognition of artists and the evaluation of AI artworks by experts in related fields to date. Based on this, we hope to spark interdisciplinary discussions about the responsibilities of AI’s current instrumental use; further, we explore new partnerships …