LIU He,ZHOU Yong*,PAN Yi,et al.The Face Style Conversion Combining Multiscale Feature Fusion and Multi-Dimensional Attention[J].Journal of Jiangxi Normal University (Natural Science Edition),2023,(01):69-76.

The Face Style Conversion Combining Multiscale Feature Fusion and Multi-Dimensional Attention

Journal of Jiangxi Normal University (Natural Science Edition) [ISSN:1006-6977/CN:61-1281/TN]

Volume:
Issue:
2023, No.01
Pages:
69-76
Section:
Information Science and Technology
Publication Date:
2023-01-25

Article Info

Title:
The Face Style Conversion Combining Multiscale Feature Fusion and Multi-Dimensional Attention
Author(s):
LIU He, ZHOU Yong*, PAN Yi, ZHANG Jintao
(School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China)
Keywords:
face style conversion; face attribute synthesis; multiscale feature fusion; multi-dimensional attention
CLC Number:
TP 391.4
Document Code:
A
Abstract:
To address the poor style reconstruction and unnatural facial textures in face images generated by the StarGANv2 model, this paper proposes a face style conversion model that combines multiscale features and multi-dimensional attention. 1) The multiscale feature fusion module PSConv is embedded into the StarGANv2 generator, strengthening the model's ability to extract image features. 2) A multi-dimensional attention module, MDConv, is proposed and embedded into the StarGANv2 discriminator, improving the model's ability to distinguish real face images from fake ones. Comparative experiments against StarGANv2 on the CelebA-HQ dataset show that the proposed method generates face images with more appealing styles and more natural texture details, and also improves the Learned Perceptual Image Patch Similarity (LPIPS) score.
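
As background for the abstract, the sketch below illustrates the general idea of a poly-scale convolution such as PSConv [25]: a single layer that mixes several dilation rates so the generator can extract features at multiple scales at once. This is a simplified grouped-branch approximation written for illustration only, not the paper's implementation; the class name, shapes, and dilation rates are all hypothetical.

# Illustrative approximation of a poly-scale convolution (cf. PSConv, ref. [25]).
# The real PSConv interleaves dilation rates inside a single kernel; this
# grouped-branch variant only mimics the multi-receptive-field effect.
import torch
import torch.nn as nn

class PolyScaleConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        assert in_ch % len(dilations) == 0 and out_ch % len(dilations) == 0
        in_g, out_g = in_ch // len(dilations), out_ch // len(dilations)
        # One branch per dilation rate; padding keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_g, out_g, kernel_size,
                      padding=d * (kernel_size - 1) // 2, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # Split channels, run each chunk at its own scale, then concatenate
        # so downstream layers see the fused feature pyramid.
        chunks = torch.chunk(x, len(self.branches), dim=1)
        return torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)

# Example: a drop-in replacement for a 3x3 conv in a generator residual block.
layer = PolyScaleConv(64, 64)
print(layer(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])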

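The abstract evaluates image quality with LPIPS [30]. A minimal, hedged example of computing the metric with the publicly available lpips Python package (pip install lpips) follows; this is not necessarily the authors' evaluation code, and the random tensors stand in for generated and reference images.

# Computing LPIPS with the public `lpips` package (Zhang et al., ref. [30]).
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, the paper's default

# The package expects NCHW tensors scaled to [-1, 1].
img0 = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for a generated face
img1 = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for the reference face

with torch.no_grad():
    dist = loss_fn(img0, img1)
print(f"LPIPS distance: {dist.item():.4f}")  # lower = perceptually closer
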
References:

[1] FEI Jianwei,XIA Zhihua,YU Pengpeng,et al.A survey of face synthesis technology [J].Journal of Frontiers of Computer Science and Technology,2021,15(11):2025-2047.
[2] FU Yun,GUO Guodong,HUANG T S.Age synthesis and estimation via faces:a survey [J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2010,32(11):1955-1976.
[3] GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets [EB/OL].[2022-06-16].https://arxiv.org/pdf/1406.2661.pdf.
[4] LI Mu,ZUO Wangmeng,ZHANG D.Deep identity-aware transfer of facial attributes [EB/OL].[2022-09-06].https://arxiv.org/pdf/1610.05586.pdf.
[5] LIU Rujie,SHEN Wei.Learning residual images for face attribute manipulation [EB/OL].[2022-09-08].https://doc.taixueshu.com/foreign/arXiv161205363.html.
[6] HE Kaiming,ZHANG Xiangyu,REN Shaoqing,et al.Deep residual learning for image recognition [EB/OL].[2022-09-02].https://zhuanlan.zhihu.com/p/370863670.
[7] CHOI Y,CHOI M,KIM M,et al.Stargan:unified generative adversarial networks for multi-domain image-to-image translation [EB/OL].[2022-09-02].https://arxiv.org/pdf/1711.09020.pdf.
[8] LIU Ming,DING Yukang,XIA Ming,et al.Stgan:a unified selective transfer network for arbitrary image attribute editing [EB/OL].[2022-09-08].https://blog.csdn.net/WhaleAndAnt/article/details/104677489.
[9] SHEN Yujun,GU Jinjin,TANG Xiaoou,et al.Interpreting the latent space of gans for semantic face editing [EB/OL].[2022-09-08].https://arxiv.org/abs/1907.10786v3.
[10] YANG Guoxing,FEI Nanyi,DING Mingyu,et al.L2m-gan:learning to manipulate latent space semantics for facial attribute editing [EB/OL].[2022-09-08].https://www.xueshufan.com/publication/3182270175.
[11] WANG Huipo,YU Ning,FRITZ M.Hijack-gan:unintended use of pretrained,black-box gans [EB/OL].[2022-09-08].https://arxiv.org/abs/2011.14107v1.
[12] KHODADADEH S,GHADAR S,MOTIIAN S,et al.Latent to latent:a learned mapper for identity preserving editing of multiple face attributes in StyleGAN-generated images [EB/OL].[2022-09-08].https://blog.csdn.net/xjm850552586/article/details/123656232.
[13] HUANG Xun,BELONGIE S.Arbitrary style transfer in real-time with adaptive instance normalization [EB/OL].[2022-09-08].https://blog.csdn.net/a19990412/article/details/84729453/.
[14] KARRAS T,LAINE S,AILA T.A style-based generator architecture for generative adversarial networks [EB/OL].[2022-09-08].https://blog.csdn.net/NGUever15/article/details/122299290.
[15] KARRAS T,LAINE S,AITTALA M,et al.Analyzing and improving the image quality of stylegan [EB/OL].[2022-09-08].https://blog.csdn.net/lynlindasy/article/details/104495583.
[16] KARRAS T,AITTALA M,LAINE S,et al.Alias-free generative adversarial networks [EB/OL].[2022-09-06].https://arxiv.org/pdf/2106.12423.pdf.
[17] CHOI Y,UH Y,YOO J,et al.Stargan v2:diverse image synthesis for multiple domains [EB/OL].[2022-09-08].https://blog.csdn.net/weixin_43135178/article/details/126828444.
[18] KARRAS T,AILA T,LAINE S,et al.Progressive growing of gans for improved quality,stability,and variation [EB/OL].[2022-09-07].https://arxiv.org/pdf/1710.10196.pdf.
[19] KINGMA D P,WELLING M.Auto-encoding variational Bayes [EB/OL].[2022-09-07].https://arxiv.org/pdf/1312.6114.pdf.
[20] KINGMA D P,DHARIWAL P.Glow:generative flow with invertible 1x1 convolutions [EB/OL].[2022-09-09].https://arxiv.org/pdf/1807.03039.pdf.
[21] LONG J,SHELHAMER E,DARRELL T.Fully convolutional networks for semantic segmentation [EB/OL].[2022-09-08].https://ieeexplore.ieee.org/document/7478072.
[22] RONNEBERGER O,FISCHER P,BROX T.U-NET:convolutional networks for biomedical image segmentation [EB/OL].[2022-09-08].https://blog.csdn.net/weixin_36670529/article/details/102809431.
[23] SZEGEDY C,LIU Wei,JIA Yangqing,et al.Going deeper with convolutions [EB/OL].[2022-09-08].https://zhuanlan.zhihu.com/p/158914902.
[24] TAN Mingxing,LE Q V.Mixconv:mixed depthwise convolutional kernels [EB/OL].[2022-09-09].https://arxiv.org/pdf/1907.09595.pdf.
[25] LI Duo,YAO Anbang,CHEN Qifeng.Psconv:squeezing feature pyramid into one compact poly-scale convolutional layer [EB/OL].[2022-09-08].https://arxiv.org/abs/2007.06191.
[26] YANG B,BENDER G,LE Q V,et al.CondConv:conditionally parameterized convolutions for efficient inference [EB/OL].[2022-09-10].https://arxiv.org/pdf/1904.04971.pdf.
[27] CHEN Yinpeng,DAI Xiyang,LIU Mengchen,et al.Dynamic convolution:attention over convolution kernels [EB/OL].[2022-09-08].https://blog.csdn.net/m0_47180208/article/details/118570067.
[28] LI Chao,ZHOU Aojun,YAO Anbang.Omni-dimensional dynamic convolution [EB/OL].[2022-09-11].https://arxiv.org/pdf/2209.07947.pdf.
[29] YU Fisher,KOLTUN V.Multi-scale context aggregation by dilated convolutions [EB/OL].[2022-09-11].https://arxiv.org/pdf/1511.07122.pdf.
[30] ZHANG R,ISOLA P,EFROS A A,et al.The unreasonable effectiveness of deep features as a perceptual metric [EB/OL].[2022-09-08].https://arxiv.org/pdf/1801.03924.pdf.
[31] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.Imagenet classification with deep convolutional neural networks [J].Advances in Neural Information Processing Systems,2017,60(6):84-90.

Memo:
Received: 2022-11-23
Foundation: Supported by the Science and Technology Research Project of the Education Department of Jiangxi Province (KJLD14021) and the Key Teaching Reform Project of the Education Department of Jiangxi Province (JXJG1821).
Corresponding author: ZHOU Yong (born 1971), male, a native of Nanchang, Jiangxi; associate researcher; his research interests include databases, data mining, and artificial intelligence. E-mail: zhou_yong@126.com
Last Update: 2023-01-25