派博傳思國(guó)際中心

標(biāo)題: Titlebook: Human-Computer Interaction: Interaction Modalities and Techniques; 15th International C Masaaki Kurosu Conference proceedings 2013 Springer [打印本頁(yè)]

Author: Fuctionary    Time: 2025-3-21 19:45
Book title: Human-Computer Interaction: Interaction Modalities and Techniques
Impact factor (influence)
Impact factor subject ranking
Online visibility
Online visibility subject ranking
Citation count
Citation count subject ranking
Annual citations
Annual citations subject ranking
Reader feedback
Reader feedback subject ranking

Author: Demonstrate    Time: 2025-3-21 21:37
The Ecological AUI (Auditory User Interface) Design and Evaluation of User Acceptance for Various Tar…
…scores on the functional satisfaction and the fun scales in the sonically-enhanced smartphones than in the no-sound condition. The balanced sound design may free users from auditory pollution and allow them to use their devices more pleasantly.
作者: 機(jī)構(gòu)    時(shí)間: 2025-3-22 15:05

作者: Negligible    時(shí)間: 2025-3-22 18:13

作者: 一起平行    時(shí)間: 2025-3-22 21:14

Author: 有惡意    Time: 2025-3-23 04:44
Robust Multi-Modal Speech Recognition in Two Languages Utilizing Video and Distance Information from…
…ov models. Our experiments were conducted on our recently collected database under a multi-speaker configuration and resulted in higher performance and robustness compared to an audio-only recognizer.
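The abstract above describes augmenting an audio recognizer with video and distance streams. One common way to combine such streams is late fusion, where each modality's per-hypothesis log-likelihoods are mixed with stream weights before picking the best hypothesis. The sketch below is illustrative only: the function name, the weights, and the toy scores are invented, not taken from the paper.

```python
# Hypothetical late-fusion decoder: per-word log-likelihoods from two
# recognizers are combined with complementary stream weights.

def fuse_scores(audio_scores, visual_scores, audio_weight=0.7):
    """audio_scores / visual_scores: dict word -> log-likelihood.

    Returns the word with the highest weighted sum of scores.
    """
    visual_weight = 1.0 - audio_weight
    fused = {
        word: audio_weight * audio_scores[word]
              + visual_weight * visual_scores[word]
        for word in audio_scores
    }
    return max(fused, key=fused.get)

# Example: audio alone slightly prefers "bat", but the visual stream
# (e.g. lip shape) disambiguates toward "pat".
audio = {"bat": -10.0, "pat": -10.5}
visual = {"bat": -12.0, "pat": -8.0}
print(fuse_scores(audio, visual, audio_weight=0.7))  # prints "pat"
```

With `audio_weight=1.0` the visual stream is ignored and the decision falls back to the audio-only answer, which is the baseline such systems compare against.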
Author: troponins    Time: 2025-3-23 05:46
Multimodal Smart Interactive Presentation System…
…and small-scale hand gesture recognition with haptic data from smartphone sensors. Each module runs in real time with accuracies of 95.0%, 91.2%, and 90.1%, respectively. The system uses events generated from the three modules to trigger pre-defined scenarios in a presentation, making the experience more exciting for audiences.
Author: 群居男女    Time: 2025-3-23 11:36
Situated Multiparty Interaction between Humans and Agents…
…arrange a common activity. From the user study we conducted, we found that the agent can discriminate well between active and inactive interlocutors by using the skeletal and azimuth information. Participants found the addressee much clearer when an animated talking head was used.
作者: 凝結(jié)劑    時(shí)間: 2025-3-23 17:49

作者: daredevil    時(shí)間: 2025-3-23 20:48

作者: Expostulate    時(shí)間: 2025-3-23 22:23

作者: Increment    時(shí)間: 2025-3-24 06:11

作者: inhumane    時(shí)間: 2025-3-24 07:03
Visual Analytics for Understanding Textsend not to use gaze shifts to cue nodding for grounding and that the presence of an artifact reduced listeners’ gaze at the speaker. These observations remained relatively consistent across the two languages.
作者: nauseate    時(shí)間: 2025-3-24 11:19
Damon Allison,Andy Olsen,James Speer Through genetic algorithm, the system generates and proposes onomatopoeias appropriate for impressions inputted by users. From the evaluation of our system, impressions of onomatopoeias generated by our method were similar to inputted impressions to generate onomatopoeias.
作者: 確認(rèn)    時(shí)間: 2025-3-24 17:29

Author: Soliloquy    Time: 2025-3-24 21:05
‘Realness’ in Chatbots: Establishing Quantifiable Criteria
…ng in four subscales with strong reliability, which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots.
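The abstract reports subscales "with strong reliability" but does not name the statistic used. Cronbach's alpha is the conventional reliability measure for Likert-style subscales; as a hedged illustration (the data and variable names below are invented), a from-scratch version looks like this:

```python
def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item (equal lengths)."""
    k = len(item_scores)                      # number of items in the subscale
    n = len(item_scores[0])                   # number of respondents

    def var(xs):                              # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items.
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    item_var_sum = sum(var(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three perfectly consistent items: alpha is (up to rounding) 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

Values around 0.8 and above are usually read as "strong reliability", which is presumably the kind of result the subscales above achieved.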
Author: 啜泣    Time: 2025-3-25 00:28
Grounding and Turn-Taking in Multimodal Multiparty Conversation
…end not to use gaze shifts to cue nodding for grounding, and that the presence of an artifact reduced listeners’ gaze at the speaker. These observations remained relatively consistent across the two languages.
Author: CANON    Time: 2025-3-25 04:54
System of Generating Japanese Sound Symbolic Expressions Using Genetic Algorithm
Through a genetic algorithm, the system generates and proposes onomatopoeias appropriate for the impressions input by users. In the evaluation of our system, the impressions of the generated onomatopoeias were similar to the impressions input to generate them.
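The paper's phoneme-impression quantification is not reproduced here; as an illustration of the GA loop itself, the toy below evolves syllable strings toward a made-up "impression" fitness. The syllable set, the fitness function, and all parameters are invented for the example.

```python
import random

# All syllables are two characters so generated words join cleanly.
SYLLABLES = ["ka", "ki", "sa", "su", "to", "do", "po", "pu", "ra", "ri"]

def fitness(word, target):
    # Toy stand-in for a phoneme-impression model: the fraction of
    # syllables drawn from the target impression's preferred set.
    return sum(s in target for s in word) / len(word)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(word):
    w = list(word)
    w[random.randrange(len(w))] = random.choice(SYLLABLES)
    return w

def evolve(target, pop_size=30, length=3, generations=40):
    pop = [[random.choice(SYLLABLES) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, target), reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=lambda w: fitness(w, target))
    return "".join(best)

random.seed(0)
print(evolve(target={"do", "ki"}))
```

In the paper's setting, the fitness would instead score how closely a candidate word's predicted impression matches the user's input impression, but the select/crossover/mutate loop is the same shape.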
Author: Fibrin    Time: 2025-3-25 11:42
Controlling Interaction in Multilingual Conversation
…the topics covered during the interaction and the expression of opinion. In the proposed template-based approach, the system acts as a mediator controlling the dialog flow, within the modeled framework of the sublanguage-specific and pragmatically related design.
作者: 預(yù)測(cè)    時(shí)間: 2025-3-25 16:18
Linguistic Processing of Implied Information and Connotative Features in Multilingual HCI Applicatio connotative features in HCI applications processing online texts and may be compatible with Interlinguas including the signalization of connotative features, if necessary. The proposed approach combines features detected in the lexical and morpho-syntactic level, and in the prosodic and paralinguistic levels.
Author: 熄滅    Time: 2025-3-25 22:30
ISSN 0302-9743
This book constitutes the refereed proceedings of the 15th International Conference on Human-Computer Interaction, HCII 2013, held in Las Vegas, NV, USA, in July 2013. The total of 1666 papers and 303 posters presented at the HCII 2013 conferences was carefully reviewed and selected from 5210 submissions. These…
Author: 金盤是高原    Time: 2025-3-26 16:15
Masaaki Kurosu (ed.): Proceedings of the 15th International Conference on Human-Computer Interaction, HCI International 2013
Author: NORM    Time: 2025-3-28 00:56
Investigating the Impact of Combining Speech and Earcons to Communicate Information in E-government…
…nterface versions in each experimental e-government tool. The obtained results demonstrated the usefulness of the tested metaphors in enhancing e-government usability and enabling users to attain better communication performance. In addition, empirically derived guidelines showed that the use of multi…
Author: dapper    Time: 2025-3-28 04:07
https://doi.org/10.1007/978-3-642-72286-8
…und that the users were impressed by the lively appearance and natural gesturing of the robot, although in many respects they had higher expectations regarding the robot’s presentation capabilities. However, the results are positive enough to encourage research along these lines.
Author: isotope    Time: 2025-3-30 07:06
https://doi.org/10.1007/978-3-322-88395-7
…e issues, there has been research on the use of non-speech sounds on mobile devices [e.g., 2, 3-7]. However, most of it has focused on a single specific task of the device. Given the varying functions of the smartphone, the present study designed plausibly integrated auditory cues for diverse…
作者: 戰(zhàn)勝    時(shí)間: 2025-3-30 20:13
Action Recognition Using Topic Modelsstems which process each modality, we explore various fusion methods to resolve ambiguities which naturally occur independently. The results that are reported on the HAMEX bimodal database show an improvement with respect to a mono-modal based system.
Author: Perineum    Time: 2025-3-31 09:01
Visual Analytics Focusing on Space…
…o react to distracting acoustic and visual events, and it needs to appropriately handle situations that include multiple humans, being able to focus on active interlocutors and appropriately shift attention based on the context. We describe a multiparty interaction agent that helps multiple users…
作者: 現(xiàn)任者    時(shí)間: 2025-3-31 11:09

Author: 微生物    Time: 2025-3-31 17:05
Damon Allison, Andy Olsen, James Speer…
…ies, a quantification of the relationship between phonemes and images makes it possible to predict the images evoked by onomatopoeia and to estimate the meanings of onomatopoeia. In this study, we applied the quantification method and developed a system for generating Japanese onomatopoeias using a genetic algorithm (G…
Author: aquatic    Time: 2025-3-31 22:31
Language Integrated Query (LINQ)…
…d in some use cases. Novel goal-based interaction systems, e.g. based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpretation of the user’s intention. In this work we present a method for supporting goal-based interaction using multimodal inte…
Author: 吞吞吐吐    Time: 2025-4-1 03:29
LINQ to XML and XML Processing…
…one of the most challenging tasks in gesture recognition applications, is still an open issue. Gesture segmentation has two main objectives: first, detecting when a gesture begins and ends; second, recognizing whether a gesture is meant to be meaningful for the machine or is a non-command gesture (…
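One simple baseline for the two segmentation objectives named above, finding where a gesture begins and ends and rejecting short non-command movements, is thresholding a motion-energy signal with a minimum-duration filter. The threshold, minimum length, and signal values below are invented for illustration and are not from the paper.

```python
# Illustrative gesture spotter over a 1-D motion-energy signal.

def segment_gestures(energy, threshold=0.5, min_len=3):
    """Return (start, end) index pairs for segments that stay above
    `threshold` for at least `min_len` samples; shorter bursts are
    discarded as non-command movement."""
    segments = []
    start = None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i                        # gesture candidate begins
        elif e < threshold and start is not None:
            if i - start >= min_len:         # long enough to be a command
                segments.append((start, i))
            start = None                     # short burst: treat as noise
    if start is not None and len(energy) - start >= min_len:
        segments.append((start, len(energy)))  # gesture runs to the end
    return segments

energy = [0.1, 0.2, 0.8, 0.9, 0.7, 0.6, 0.1, 0.9, 0.1, 0.8, 0.8, 0.9]
print(segment_gestures(energy))  # prints [(2, 6), (9, 12)]
```

The single-sample spike at index 7 is rejected by the minimum-duration filter, which is the toy analogue of filtering out non-command gestures.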
Author: GORGE    Time: 2025-4-1 10:11
Linguistic Processing of Implied Information and Connotative Features in Multilingual HCI Applications…
…International Public, especially in applications related to the Service Sector. The proposed filter concerns the detection of implied information and connotative features in HCI applications processing online texts, and may be compatible with interlinguas that include the signalization of connotative f…
Author: 陰險    Time: 2025-4-1 16:07
Investigating the Impact of Combining Speech and Earcons to Communicate Information in E-government…
…ce complexity in the visual communication by incorporating auditory stimuli. These issues are often neglected in the interfaces of e-government applications. This paper investigates the possibility of using multimodal metaphors to enhance usability and increase trust between the user and the…




歡迎光臨 派博傳思國(guó)際中心 (http://pjsxioz.cn/) Powered by Discuz! X3.5