Can Automatic Classification Help to Increase Accuracy in Data Collection?
Frederique Lang1; Diego Chavarro1; Yuxian Liu2 (E-mail: yxliu@tongji.edu.cn)
2016-09-18
Journal: Journal of Data and Information Science
Volume: 1, Issue: 3, Pages: 42-58
Abstract
Purpose: The authors aim to test the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets.
Design/methodology/approach: The paper centers on cleaning datasets gathered from publishers and online resources through the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested against manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms (an illustrative sketch of this setup follows the abstract).
Findings: We found that the performance of the algorithms varies with the size of the training sample. However, for the classification exercise in this paper, the best-performing algorithms were SVM and Boosting. The combination of these two algorithms achieved a high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks.
Research limitations: The dataset gathered has significantly more records related to the topic of interest than to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers.
Practical implications: Although the classification achieved by this means is not completely accurate, the amount of manual coding needed can be greatly reduced by using classification algorithms. This can be of great help when the dataset is large. With the help of accuracy, recall, and coverage measures, it is possible to estimate the error involved in this classification, which could open the possibility of incorporating these algorithms into software specifically designed for data cleaning and classification.
Originality/value: We analyzed the performance of seven algorithms and whether combinations of these algorithms improve accuracy in data collection. Use of these algorithms could reduce the time needed for manual data cleaning.
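The two-algorithm voting scheme described in the abstract lends itself to a brief illustration. The following Python sketch is not the authors' implementation: it assumes scikit-learn stand-ins for two of the seven algorithms (LinearSVC for SVM, GradientBoostingClassifier for Boosting), hypothetical input lists `texts` and `labels` holding record texts and manual relevant/unrelated codes, and a TF-IDF text representation. It trains on a 10% split and reports accuracy, recall, and a coverage indicator defined here as the share of records on which the two classifiers agree.

```python
# Illustrative sketch only (not the code used in the paper): combine an SVM
# and a Boosting classifier by agreement and report accuracy, recall, and
# coverage. `texts` is a list of record texts (e.g. titles plus abstracts) and
# `labels` a list of manual codes (1 = related to the topic, 0 = unrelated);
# both are hypothetical inputs assumed for this example.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, recall_score


def classify_with_agreement(texts, labels, train_size=0.10, seed=42):
    """Train two classifiers on a small training split and evaluate the
    records on which they agree (a simple two-algorithm voting scheme)."""
    X_tr_txt, X_te_txt, y_tr, y_te = train_test_split(
        texts, labels, train_size=train_size, stratify=labels, random_state=seed)

    # TF-IDF bag-of-words representation of the records.
    vec = TfidfVectorizer(stop_words="english")
    X_tr = vec.fit_transform(X_tr_txt)
    X_te = vec.transform(X_te_txt)

    svm_pred = LinearSVC().fit(X_tr, y_tr).predict(X_te)
    # Dense input used here for broad scikit-learn compatibility.
    boost_pred = GradientBoostingClassifier().fit(
        X_tr.toarray(), y_tr).predict(X_te.toarray())

    agree = svm_pred == boost_pred      # records both algorithms classify the same way
    y_te = np.asarray(y_te)
    return {
        "coverage": float(agree.mean()),  # share of records the combination covers
        "accuracy": accuracy_score(y_te[agree], svm_pred[agree]),
        "recall": recall_score(y_te[agree], svm_pred[agree], pos_label=1),
    }
```

In such a scheme, the records on which the two classifiers disagree are the natural candidates for manual coding, which is where the reduction in manual work reported in the Findings would come from.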
Article Type: Research Papers
Keywords: Disambiguation; Machine Learning; Data Cleaning; Classification; Accuracy; Recall; Coverage
Subject Areas: Journalism & Communication; Library, Information & Documentation Science
DOI: 10.20309/jdis.201619
Indexed In: Other
Project Numbers: No. 71173154; No. 08BZX076
Language: English
Funding and Acknowledgements: The authors are grateful to Peter Bone for his help with the proofreading of this paper. We also wish to thank Jose Christian for helpful discussion and support. Thanks go also to the colleagues who gave us comments during the 4th Global TechMining Conference and in the Science Policy Research Unit (SPRU) Wednesday Seminar. Yuxian Liu's work was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 71173154), the National Social Science Fund of China (NSSFC) (Grant No. 08BZX076), and the Fundamental Research Funds for the Central Universities.
Document Type: Journal Article
Identifier: http://ir.las.ac.cn/handle/12502/8731
Collection: Journal of Data and Information Science-2016
Corresponding Author: Yuxian Liu (E-mail: yxliu@tongji.edu.cn)
Affiliations:
1. Science Policy Research Unit (SPRU), School of Business, Management and Economics, University of Sussex, Falmer, Brighton, BN1 9SL, United Kingdom
2. Tongji University Library, Tongji University, Shanghai 200092, China
First Author's Affiliation: National Science Library, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Frederique Lang, Diego Chavarro, Yuxian Liu. Can Automatic Classification Help to Increase Accuracy in Data Collection?[J]. Journal of Data and Information Science, 2016, 1(3): 42-58.
APA Frederique Lang, Diego Chavarro, & Yuxian Liu. (2016). Can Automatic Classification Help to Increase Accuracy in Data Collection? Journal of Data and Information Science, 1(3), 42-58.
MLA Frederique Lang, et al. "Can Automatic Classification Help to Increase Accuracy in Data Collection?" Journal of Data and Information Science 1.3 (2016): 42-58.
Files in This Item:
File Name/Size: 20160304.pdf (945KB) | Document Type: Journal Article | Version: Author's Accepted Manuscript | Access: Open Access | License: CC BY-NC-SA