
dc.contributor.author: Gao, Yunlong
dc.contributor.author: Pan, Jinyan
dc.contributor.author: Ji, Guoli
dc.contributor.author: Yang, Zijiang
dc.contributor.author: 吉国力
dc.date.accessioned: 2013-12-12T02:49:26Z
dc.date.available: 2013-12-12T02:49:26Z
dc.date.issued: 2012-02
dc.identifier.citation: Knowledge-Based Systems, 2012, 26: 103-110
dc.identifier.issn: 0950-7051
dc.identifier.other: WOS:000299979400012
dc.identifier.uri: https://dspace.xmu.edu.cn/handle/2288/70701
dc.description: National Natural Science Foundation of China [61174161]; Specialized Research Fund for the Doctoral Program of Higher Education of China [20090121110022]; Xiamen University [2011121047, 201112G018, CXB2011035, 0630-E72000]; Key Research Project of Fujian Province of China [2009H0044]
dc.description.abstract: When the training set contains an infinite number of samples, the outcome of nearest neighbor classification (kNN) is independent of the distance metric it adopts. In practice, however, the number of training samples is always finite, so the choice of distance metric becomes crucial to the performance of kNN. We propose a novel two-level nearest neighbor algorithm (TLNN) that minimizes the mean absolute difference between the misclassification rates of kNN with finite and with infinite training samples. At the low level, we use Euclidean distance to determine a local subspace centered at an unlabeled test sample. At the high level, AdaBoost guides the extraction of local information. TLNN maintains data invariance and produces neighborhoods that are highly stretched or elongated along different directions. The TLNN algorithm reduces excessive dependence on statistical methods that learn prior knowledge from the training data. Even a linear combination of a few base classifiers produced by the weak learner in AdaBoost can yield much better kNN classifiers. Experiments on both synthetic and real-world data sets justify the proposed method. (C) 2011 Elsevier B.V. All rights reserved.
dc.language.iso: en_US
dc.source.uri: http://dx.doi.org/10.1016/j.knosys.2011.07.010
dc.subject: MARGIN
dc.title: A novel two-level nearest neighbor classification algorithm using an adaptive distance metric
dc.type: Article
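
The abstract above specifies TLNN only at a high level, so the following is a minimal illustrative sketch of the two-level idea, not the authors' exact method: the function name tlnn_predict, the parameters k_low and k_high, and the use of scikit-learn's AdaBoostClassifier (whose feature importances stand in here for the paper's AdaBoost-guided local information extraction) are all assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def tlnn_predict(X_train, y_train, x_test, k_low=50, k_high=5):
    # Low level: carve out a local subspace around the test sample
    # using plain Euclidean distance, as the abstract describes.
    d = np.linalg.norm(X_train - x_test, axis=1)
    local = np.argsort(d)[:k_low]
    X_loc, y_loc = X_train[local], y_train[local]

    # Degenerate neighborhood: only one class present, nothing to boost.
    if len(np.unique(y_loc)) == 1:
        return y_loc[0]

    # High level: a few boosted stumps (AdaBoost's default weak learner)
    # supply local feature weights, stretching the neighborhood along
    # discriminative directions -- an assumed stand-in for the paper's
    # local information extraction step.
    ada = AdaBoostClassifier(n_estimators=10).fit(X_loc, y_loc)
    w = ada.feature_importances_ + 1e-6  # keep the metric non-degenerate

    # Final kNN vote under the adapted, feature-weighted metric.
    d_w = np.sqrt((w * (X_loc - x_test) ** 2).sum(axis=1))
    votes = y_loc[np.argsort(d_w)[:k_high]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: a 2-class problem where only two of four features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(tlnn_predict(X, y, rng.normal(size=4)))
```

Fitting a small boosted ensemble per neighborhood mirrors the abstract's claim that even a linear combination of a few base classifiers can markedly improve kNN; the default depth-1 stumps keep each local fit cheap.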

