Representation Safety

10. Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Qilong Zhang, Chaoning Zhang, Chaoqun Li, Jingkuan Song, Lianli Gao
9. Frequency Domain Model Augmentation for Adversarial Attack
Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, Jingkuan Song
8. Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack
Ye Liu, Yaya Cheng, Lianli Gao, Xianglong Liu, Qilong Zhang, Jingkuan Song
7. Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks
Shengming Yuan, Qilong Zhang, Lianli Gao, Yaya Cheng, Jingkuan Song
6. Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
5. Staircase Sign Method for Boosting Adversarial Attacks
Qilong Zhang, Xiaosu Zhu, Jingkuan Song, Lianli Gao, Heng Tao Shen
4. Push & Pull: Transferable Adversarial Examples With Attentive Attack
Lianli Gao, Zijie Huang, Jingkuan Song, Yang Yang, Heng Tao Shen
3. Feature Space Targeted Attacks by Statistic Alignment
Lianli Gao, Yaya Cheng, Qilong Zhang, Xing Xu, Jingkuan Song
2. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments
Xiao Bai, Xiang Wang, Xianglong Liu, Qiang Liu, Jingkuan Song, Nicu Sebe, Been Kim
1. Patch-wise Attack for Fooling Deep Neural Network
Lianli Gao, Qilong Zhang, Jingkuan Song, Xianglong Liu, Heng Tao Shen