Titlebook: Machine Learning and Knowledge Discovery in Databases; European Conference; Walter Daelemans, Bart Goethals, Katharina Morik; Conference proceedings

Original poster: risky-drinking
41#
Posted on 2025-3-28 16:06:18
Metric Learning: A Support Vector Approach
…semi-definite programming problem (QSDP) with local neighborhood constraints, which is based on the Support Vector Machine (SVM) framework. The local neighborhood constraints ensure that examples of the same class are separated from examples of different classes by a margin. In addition to providing an…
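To make the constraint structure in this excerpt concrete, here is a minimal sketch of SVM-style margin constraints for metric learning, using a simple diagonal Mahalanobis metric and a hinge loss in place of the paper's QSDP formulation; all names, sizes, and parameters are illustrative, not the authors' method.

```python
# Illustrative sketch only: a diagonal Mahalanobis-style metric learned with
# SVM-like margin constraints (same-class pairs must be closer than
# different-class pairs by a unit margin). Not the paper's QSDP solver.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1, 2], 20)
X[y == 1] += 2.0
X[y == 2] -= 2.0

w = np.ones(X.shape[1])          # diagonal metric: d(a, b) = sum_k w_k (a_k - b_k)^2
lr, margin = 0.01, 1.0

def dist(a, b, w):
    return np.sum(w * (a - b) ** 2)

for _ in range(200):
    i, j, k = rng.integers(len(X), size=3)
    if y[i] != y[j] or y[i] == y[k]:
        continue                  # need (i, j) same class and (i, k) different classes
    # hinge constraint: want d(i, j) + margin <= d(i, k)
    if dist(X[i], X[j], w) + margin > dist(X[i], X[k], w):
        grad = (X[i] - X[j]) ** 2 - (X[i] - X[k]) ** 2
        w = np.maximum(w - lr * grad, 0.0)   # project back to a valid diagonal metric

print("learned diagonal metric:", np.round(w, 2))
```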
42#
Posted on 2025-3-28 19:20:57
Support Vector Machines, Data Reduction, and Approximate Kernel Matrices
…such as distributed networking systems are often prohibitively high, resulting in practitioners of SVM learning algorithms having to apply the algorithm on approximate versions of the kernel matrix induced by a certain degree of data reduction. In this paper, we study the tradeoffs between data reduction…
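The excerpt does not name a specific reduction scheme, so the sketch below only illustrates the general tradeoff with a Nyström-style low-rank kernel approximation in scikit-learn: fewer landmark points give a cheaper but coarser approximate kernel matrix.

```python
# Generic illustration (not the paper's method): approximate an RBF kernel
# with a Nystroem subsample and compare against the exact kernel SVM.
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

exact = SVC(kernel="rbf", gamma=0.1).fit(Xtr, ytr)

# "Data reduction": only 100 landmark points induce the approximate kernel.
feats = Nystroem(gamma=0.1, n_components=100, random_state=0)
approx = LinearSVC().fit(feats.fit_transform(Xtr), ytr)

print("exact kernel SVM  :", exact.score(Xte, yte))
print("approx. (Nystroem):", approx.score(feats.transform(Xte), yte))
```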
43#
Posted on 2025-3-28 23:49:57
Hierarchical, Parameter-Free Community Discovery
…se to look for community hierarchies, with communities-within-communities. Our proposed method, the ., finds such communities at multiple levels, with no user intervention, based on information-theoretic principles (MDL). More specifically, it partitions the graph into progressively more refined sub…
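As a rough illustration of the MDL principle mentioned here (not the paper's algorithm), the sketch below scores a candidate partition by the bits needed to encode each within- and between-community block of the adjacency matrix; a good community split compresses the graph better than treating it as a single block.

```python
# Illustrative MDL scoring of a graph partition: total description length is
# the sum of bits needed to encode each adjacency block under the partition.
import numpy as np

def bits(block):
    """Bits to encode a 0/1 block: count of ones plus the block's entropy."""
    n = block.size
    e = block.sum()
    if n == 0 or e in (0, n):
        return np.log2(n + 1)
    p = e / n
    return np.log2(n + 1) + n * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def description_length(A, labels):
    groups = [np.where(labels == g)[0] for g in np.unique(labels)]
    return sum(bits(A[np.ix_(gi, gj)]) for gi in groups for gj in groups)

# Two clear communities: splitting them should cost fewer bits than one block.
A = np.zeros((20, 20), dtype=int)
A[:10, :10] = A[10:, 10:] = 1
np.fill_diagonal(A, 0)
one_block = np.zeros(20, dtype=int)
two_blocks = np.repeat([0, 1], 10)
print(description_length(A, one_block), ">", description_length(A, two_blocks))
```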
44#
Posted on 2025-3-29 06:43:29
45#
Posted on 2025-3-29 07:26:33
46#
Posted on 2025-3-29 12:54:51
Kernel-Based Inductive Transfer
…learning, the task is to find a suitable bias for a new dataset, given a set of known datasets. In this paper, we take a kernel-based approach to inductive transfer, that is, we aim at finding a suitable kernel for the new data. In our setup, the kernel is taken from the linear span of a set of predefined…
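A minimal sketch of the "linear span of predefined kernels" idea, assuming the combination weights are simply chosen by cross-validation on the new dataset (the paper's actual selection criterion may differ); the base kernels and the weight grid are made up for the example.

```python
# Minimal sketch (not the paper's procedure): build the kernel for a new
# dataset as a weighted sum of predefined base kernels and pick the weights
# by cross-validation on the new data.
import numpy as np
from itertools import product
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
base = [rbf_kernel(X, X, gamma=0.1), polynomial_kernel(X, X, degree=2), linear_kernel(X, X)]

best = (None, -np.inf)
for alphas in product([0.0, 0.5, 1.0], repeat=3):   # coarse grid over the linear span
    if sum(alphas) == 0:
        continue
    K = sum(a * Kb for a, Kb in zip(alphas, base))
    score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
    if score > best[1]:
        best = (alphas, score)

print("chosen kernel weights:", best[0], "cv accuracy:", round(best[1], 3))
```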
47#
Posted on 2025-3-29 19:03:09
48#
Posted on 2025-3-29 23:15:02
Client-Friendly Classification over Random Hyperplane Hashes
…we are addressing the problem of centrally learning (linear) classification models from data that is distributed on a number of clients, and subsequently deploying these models on the same clients. Our main goal is to balance the accuracy of individual classifiers and different kinds of costs related…
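A hedged sketch of the general setup described here: clients share a common set of random hyperplanes, transmit only the sign pattern of their points, and a linear model is trained centrally on those hashes. The hash length and the choice of classifier are arbitrary for illustration, not taken from the paper.

```python
# Illustrative sketch: random hyperplane hashing (signs of random projections)
# followed by a centrally trained linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
H = rng.normal(size=(X.shape[1], 128))      # 128 random hyperplanes shared by all clients

def hash_bits(X):
    # Each client only ships the +/-1 signature of its points, not raw features.
    return np.sign(X @ H)

clf = LogisticRegression(max_iter=1000).fit(hash_bits(Xtr), ytr)
print("accuracy on hashed features:", clf.score(hash_bits(Xte), yte))
```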
49#
Posted on 2025-3-30 03:35:19
Large-Scale Clustering through Functional Embedding
…mize over discrete labels using stochastic gradient descent. Compared to methods like spectral clustering, our approach solves a single optimization problem rather than an ad-hoc two-stage optimization approach, does not require a matrix inversion, and can easily encode prior knowledge in the set of imp…
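As a crude stand-in for the optimization style this excerpt describes (a single SGD loop over discrete cluster assignments, with no eigendecomposition or matrix inversion), the sketch below runs online k-means-style updates; it is not the paper's objective or embedding.

```python
# Crude stand-in (not the paper's method): clustering as one SGD loop in
# which each point updates its currently nearest centroid. The hard argmin
# plays the role of the discrete labels mentioned in the excerpt.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(200, 2)) for c in (-3, 0, 3)])
rng.shuffle(X)

k, lr = 3, 0.05
centroids = X[rng.choice(len(X), size=k, replace=False)].copy()

for _ in range(5):                       # a few passes over the stream
    for x in X:
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))   # discrete label
        centroids[j] += lr * (x - centroids[j])              # SGD step for that label

print(np.round(np.sort(centroids[:, 0]), 1))   # roughly -3, 0, 3
```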
50#
Posted on 2025-3-30 04:37:48
Clustering Distributed Sensor Data Streams
…maintain a cluster structure over the data points generated by the entire network. Usual techniques operate by forwarding and concentrating the entire data in a central server, processing it as a multivariate stream. In this paper, we propose ., a new distributed algorithm which reduces both the dimensi…
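A generic sketch of the communication-reduction idea in this excerpt (not the proposed algorithm): each sensor keeps a small local summary of its stream, and only those summaries travel to the central site for merging.

```python
# Generic sketch: per-sensor stream summaries instead of forwarding raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_summary(stream, k=2, lr=0.1):
    """Maintain k running centroids per sensor instead of shipping the stream."""
    centroids = stream[:k].copy()
    for x in stream[k:]:
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))
        centroids[j] += lr * (x - centroids[j])
    return centroids

# Three sensors, each observing a noisy stream around its own operating point.
streams = [rng.normal(loc=c, size=(500, 3)) for c in (0.0, 5.0, 10.0)]
summaries = np.vstack([local_summary(s) for s in streams])   # 6 points instead of 1500

print("transmitted points:", len(summaries), "of", sum(len(s) for s in streams))
print("sensor summaries (rounded):\n", np.round(summaries, 1))
```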