Title: Image and Graphics, 10th International Conference, ICIG 2019. Edited by Yao Zhao, Nick Barnes, Chunyu Lin. Conference proceedings, Springer Nature Switzerland AG, 2019.
Bibliometric indicators for Image and Graphics:
Impact factor, and impact-factor subject ranking
Online visibility, and online-visibility subject ranking
Citation count, and citation-count subject ranking
Annual citations, and annual-citations subject ranking
Reader feedback, and reader-feedback subject ranking
ISBN 978-3-030-34119-0, Springer Nature Switzerland AG, 2019
Conference proceedings 2019: This book constitutes the refereed proceedings of the 10th International Conference on Image and Graphics, ICIG 2019, held in Beijing, China, in August 2019. The 183 full papers presented were selected from 384 submissions and focus on advances in theory, techniques, and algorithms, as well as innovative technologies of image, video, and graphics processing, fostering innovation, entrepreneurship, and networking.
Chen Wang, Linyuan He, Shiping Ma, Shan Gao
Zelu Deng, Jianbin Gao, Tao Huang, James C. Gee
Xiaoyi Yin, Xinqian Gu, Hong Chang, Bingpeng Ma, Xilin Chen
Rui Cao, Jiasong Zhu, Qing Li, Qian Zhang, Qingquan Li, Bozhi Liu, Guoping Qiu
Lijun Zhong, Qifeng Yu, Jiexin Zhou, Xiaohu Zhang, Yani Lu
Linjiang Huang, Yan Huang, Wanli Ouyang, Liang Wang
Shaodi Ge, Hongjun Li, Liuhong Luo
Shengkui Dai, Xiangcheng Chen, Ziyu Wang
Heng Fu, Lifang Wu, Meng Jian, Yuchen Yang, Xiangdong Wang
Yixuan Wang, Shangdong Zheng, Wei Xu, Yang Xu, Tianming Zhan, Peng Zheng, Zhihui Wei, Zebin Wu
Chuansheng Xu, Gaoyun An, Qiuqi Ruan
Object Detection for Chinese Traditional Costume Images Based on GRP-DSOD++ Network
Because the images in the Chinese Traditional Costume Images (CTCI-4) data set are smaller than natural images and there are not enough training samples, previous state-of-the-art object detection methods cannot achieve good detection results on it. To tackle this issue, inspired mainly by GRP-DSOD, we propose an effective network, namely GRP-DSOD++.
Combining Cross Entropy Loss with Manually Defined Hard Example for Semantic Image Segmentation
In recent years, approaches based on fully convolutional networks (FCNs) have shown state-of-the-art performance in this task. However, most of them adopt cross entropy as the loss function, which leads to poor performance in regions near object boundaries. In this paper, we introduce two region-based metrics to quantify performance in such hard regions.
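As a rough illustration of the general idea behind this abstract, up-weighting manually defined hard examples inside a cross-entropy loss, the following NumPy sketch shows the generic technique; the function name, the flat per-pixel layout, and the single `hard_weight` scalar are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, hard_mask, hard_weight=2.0):
    """Pixel-wise cross entropy where pixels flagged as hard examples
    (e.g. near object boundaries) receive a larger loss weight.

    probs:     (N, C) softmax probabilities per pixel
    labels:    (N,)   integer class labels
    hard_mask: (N,)   boolean, True for manually defined hard examples
    """
    eps = 1e-12
    # Standard per-pixel cross entropy on the true-class probability
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    # Up-weight hard examples, leave the rest at weight 1
    weights = np.where(hard_mask, hard_weight, 1.0)
    return float(np.mean(weights * ce))
```

Marking boundary pixels as hard raises their contribution to the mean loss, which is the usual motivation for this kind of weighting when plain cross entropy underfits thin boundary regions.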
Attribute-Aware Pedestrian Image Editing
Existing pedestrian image editing methods often produce large vague areas. In this paper, we propose Attribute-aware Pedestrian Image Editing (APIE) to address these problems based on given visual attributes. Our model, denominated APIE-Net, has three mechanisms, including an attribute-aware segmentation network and a multi-scale discriminator.
Learning Spatial-Aware Cross-View Embeddings for Ground-to-Aerial Geolocalization
Ground-to-aerial geolocalization is particularly promising but also difficult due to drastic viewpoint and appearance differences between ground and aerial images. In this paper, we propose a novel spatial-aware Siamese-like network that exploits a spatial transformer layer to effectively alleviate these large viewpoint differences.
Hierarchical Graph Convolutional Network for Skeleton-Based Action Recognition
Most previous methods simply ignore the topological structure of the skeleton, which is very important for action recognition. Recently, Graph Convolutional Networks (GCNs) have achieved remarkable performance in modeling non-Euclidean structures. However, current graph convolutional networks lack the capacity to model hierarchical structure.
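The abstract above builds on graph convolution over the skeleton graph. A minimal, generic single-layer graph convolution in NumPy, using the common symmetric normalization, might look like the sketch below; it is not the paper's hierarchical architecture, just the standard building block it extends.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer: H = ReLU(D^{-1/2} (A+I) D^{-1/2} X W).

    X: (N, F)  node features (e.g. joint coordinates)
    A: (N, N)  adjacency matrix of the skeleton graph
    W: (F, F') weight matrix
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)  # aggregate neighbors, then ReLU
```

Each node's output mixes features from its skeletal neighbors, which is what lets such layers exploit the topology that coordinate-only models ignore.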
Semantic Segmentation of Street Scenes Using Disparity Information
Recent methods have achieved excellent results on several semantic segmentation benchmarks. Most of them, however, exploit only RGB information. Thanks to the development of stereo matching algorithms, disparity maps can now be acquired more easily. Structural information encoded in disparity can be treated as information supplementary to RGB.
Residual Joint Attention Network with Graph Structure Inference for Object Detection
Focusing on improving feature extraction, we propose the Residual Joint Attention Network, a convolutional neural network using a residual joint attention module composed of a spatial attention branch, a channel attention branch, and a residual learning branch within an advanced object detection framework.
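To illustrate the general shape of a module with spatial, channel, and residual branches, here is a hypothetical toy version in NumPy; the specific gating choices (global-average pooling for the channel branch, channel-mean for the spatial branch, sigmoid gates) are assumptions for illustration, not the authors' actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_joint_attention(F):
    """F: (C, H, W) feature map.

    Channel branch:  global-average-pool over space -> per-channel gate.
    Spatial branch:  mean over channels -> per-location gate.
    Residual branch: identity, so attention can only refine features,
    never erase them entirely.
    """
    chan_gate = sigmoid(F.mean(axis=(1, 2)))[:, None, None]  # (C, 1, 1)
    spat_gate = sigmoid(F.mean(axis=0))[None, :, :]          # (1, H, W)
    return F + F * chan_gate * spat_gate                     # residual combine
```

The residual branch is the key design point: if both gates saturate toward zero, the module degrades gracefully to the identity instead of suppressing the features.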
Saliency Detection Based on Foreground and Background Propagation
Although widely studied, several existing saliency detection methods still cannot highlight the complete salient object under a complex background. To improve the robustness of saliency detection, we propose a novel salient object detection method via foreground and background propagation.
Learning Spatial-Aware Cross-View Embeddings for Ground-to-Aerial Geolocalization
The network is trained with a simple and effective location identity loss to further enhance performance. We test our method on a publicly available dataset, and the results show that the proposed method outperforms the state of the art by a large margin.
A Spiking Neural Network Architecture for Object Tracking
Accuracy decreases by only 0.007 on MNIST compared with the original method. Our approach applies SNNs to object tracking with promising results, and can serve as a reference for future SNN applications in other computer vision areas.
Superpixel-Based Saliency Guided Intersecting Cortical Model for Unsupervised Object Segmentation
The main features are: (1) the proposed saliency detection is region-based rather than pixel-based; (2) a dynamic guided filter is designed to accelerate the filtering; (3) to improve SG-ICM for object segmentation, at each iteration we use an adaptive, simple threshold, which raises the speed.
Real-Time Interpretation Method for Shooting-Range Image Based on Position Prediction
When no predicted area is available in the image, the detector detects the target in the full image. Simulation experiments demonstrate that the position prediction algorithm based on the optimal linear filter can accurately predict the target position and provide KCF with a more accurate search position.
Parallel-Structure-based Transfer Learning for Deep NIR-to-VIS Face Recognition
Both modalities are mapped into a unique feature space that is agnostic to the input image modality. The proposed PST method achieves state-of-the-art performance on the CASIA NIR-VIS 2.0 database, the largest near-infrared face database.
Xiaoyi Yin, Xinqian Gu, Hong Chang, Bingpeng Ma, Xilin Chen