YANG Guoliang,WANG Zhiyuan,ZHANG Yu.An Improved Depth Convolutional Neural Network for Fine Image Classification[J].Journal of Jiangxi Normal University:Natural Science Edition,2017,(05):476-483.

An Improved Depth Convolutional Neural Network for Fine Image Classification

Journal of Jiangxi Normal University (Natural Science Edition) [ISSN:1006-6977/CN:61-1281/TN]

Volume:
Issue: 2017(05)
Pages: 476-483
Column:
Publication Date: 2017-11-01

Article Info

Title:
An Improved Depth Convolutional Neural Network for Fine Image Classification
Author(s):
YANG Guoliang,WANG Zhiyuan,ZHANG Yu
Affiliation:
School of Electrical Engineering and Automation,Jiangxi University of Science and Technology,Ganzhou 341000,Jiangxi,China
Keywords:
fine-grained image classification; deep convolutional neural network; activation function; feature extraction
CLC number:
TP 391.41
Document code:
A
Abstract:
Fine-grained image classification differs from traditional image classification: because fine-grained images show high inter-class similarity and large intra-class variation, traditional methods based on hand-crafted features and combinations of local features can hardly represent them completely. An improved deep convolutional neural network model is therefore proposed. Because a deep convolutional neural network has a very large number of structural parameters and neurons and is difficult to train, the parameters of the first six layers are initialized from a Gaussian distribution, and the corrected ReLU-Softplus function is used as the activation function. On the flower image dataset Oxford-102 Flowers the model reaches a Top-1 accuracy of 85.75% and a Top-3 accuracy of 94.50%. The experimental results show that, on a medium-scale dataset, the model has clear advantages over traditional methods and achieves a higher recognition rate than the unmodified CNN model.
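The abstract does not give the exact form of the corrected ReLU-Softplus activation or of the Gaussian initialization of the first six layers, so the sketch below is only an illustration of how such a modification is commonly implemented, assuming a piecewise activation (ReLU for positive inputs, a Softplus branch shifted down by ln 2 for non-positive inputs so the two pieces join at zero) and N(0, 0.01^2) weight initialization; the names ReLUSoftplus and init_gaussian and the AlexNet-style first block are illustrative assumptions, not the authors' released code.

# Hypothetical sketch (PyTorch) of a combined ReLU/Softplus activation
# with Gaussian-initialized convolutional layers.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class ReLUSoftplus(nn.Module):
    """Assumed piecewise activation: ReLU for x > 0, shifted Softplus otherwise."""

    def forward(self, x):
        # softplus(x) - ln(2) equals 0 at x = 0, so the two branches meet continuously.
        return torch.where(x > 0, x, F.softplus(x) - math.log(2.0))


def init_gaussian(module, std=0.01):
    """Initialize conv/linear weights from N(0, std^2) and biases to zero."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=std)
        if module.bias is not None:
            nn.init.zeros_(module.bias)


# Minimal example: an AlexNet-style first convolutional block (assumed layout),
# Gaussian-initialized and using the activation above.
block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),
    ReLUSoftplus(),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
block.apply(init_gaussian)

x = torch.randn(1, 3, 227, 227)   # dummy 227x227 RGB input batch
print(block(x).shape)             # torch.Size([1, 96, 27, 27])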

References:

[1] Wang Xiaoyu,Yang Tianbao,Lin Yuanqing.Object-centric fine-grained image classification [EB/OL].[2016-10-11].http://www.google.com/patents/US20160140424.
[2] Akata Z,Reed S,Walter D,et al.Evaluation of output embeddings for fine-grained image classification [EB/OL].[2016-10-11].http://www-personal.umich.edu/~reedscot/CVPR15.pdf.
[3] Zheng L,Zhao Y,Wang S,et al.Good practice in CNN feature transfer [EB/OL].[2016-10-11].http://128.84.21.199/pdf/1604.00133.pdf.
[4] Grohs P,Wiatowski T,Bölcskei H.Deep convolutional neural networks on cartoon functions [EB/OL].[2016-10-11].DOI:10.1109/ISIT.2016.7541482.
[5] Kim C,Stern R M.Power-normalized cepstral coefficients(PNCC)for robust speech recognition [J].ACM Transactions on Audio,Speech and Language Processing(TASLP),2016,24(7):1315-1329.
[6] Pentland A,Moghaddam B,Starner T.View-based and modular eigenspaces for face recognition [EB/OL].[2016-10-11].http://www.ijsr.net/archive/v5i9/9091603.pdf.
[7] Sun Y,Wang X,Tang X.Deep learning face representation from predicting 10000 classes [EB/OL].[2016-10-11].http://www.ee.cuhk.edu.hk/~xgwang/papers/sunWTcvpr14.pdf.
[8] Rastegari M,Ordonez V,Redmon J,et al.XNOR-Net:ImageNet classification using binary convolutional neural networks [EB/OL].[2016-10-11].https://pjreddie.com/media/files/papers/xnor_arxiv.pdf.
[9] Schmitz A,Bansho Y,Noda K,et al.Tactile object recognition using deep learning and dropout [J].IEEE-RAS International Conference on Humanoid Robots,2015,11(3):1044-1050.
[10] Ren Shaoqing,He Kaiming,Girshick R,et al.Faster R-CNN:towards real-time object detection with region proposal networks [J].IEEE Transactions on Pattern Analysis & Machine Intelligence,2016,39(6):1137.
[11] Nilsback M E,Zisserman A.Delving into the whorl of flower segmentation [EB/OL].[2016-10-11].http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback07.pdf.
[12] Xie Xiaodong.Research on fine-grained image classification for flower images [D].Xiamen:Xiamen University,2014.
[13] Angelova A,Zhu S.Efficient object detection and segmentation for fine-grained recognition [J].Computer Vision & Pattern Recognition,2013,9(4):811-818.
[14] Zou J,Nagy G.Evaluation of model-based interactive flower recognition [J].International Conference on Pattern Recognition,2004,2(2):311-314.
[15] Nilsback M E.An automatic visual flora-segmentation and classification of flower images [D].Oxford:Oxford University,2009.
[16] Chai Y,Lempitsky V,Zisserman A.BiCoS:a bi-level co-segmentation method for image classification [J].IEEE International Conference on Computer Vision,2011,58(11):2579-2586.
[17] Yang Shulin,Bo Liefeng,Wang Jue,et al.Unsupervised template learning for fine-grained object recognition [C].Advances in Neural Information Processing Systems,2012:3122-3130.
[18] Hong Anxiang,Chen Gang,Li Junli,et al.A flower image retrieval method based on ROI feature [J].Journal of Zhejiang University:Science A,2004,5(7):764-772.

Similar Documents:

[1] ZHU Tao,DU Zhiguo,HONG Weijun.The Camera Coverage Quality Evaluation Algorithm Based on Deep Convolution Neural Network[J].Journal of Jiangxi Normal University:Natural Science Edition,2015,(03):309.

Memo:
Received: 2017-03-27. Foundation item: supported by the National Natural Science Foundation of China (51365017, 61305019). About the first author: YANG Guoliang (1973- ), male, born in Fengcheng, Jiangxi, professor, Ph.D.; his research interests include pattern recognition, image processing and intelligent control. E-mail: ygliang30@126.com