About

I’m a first-year Ph.D. student in Electrical and Computer Engineering at Carnegie Mellon University (CMU). I am co-advised by Prof. Anupam Datta and Prof. Matt Fredrikson.

I am a member of the Accountable System Lab, and my current research focuses on developing interpretations for deep neural networks.

I will receive my Master's degree in Electrical and Computer Engineering in May 2020. During my Master's program, I studied at the Silicon Valley campus of CMU, a small and warm community located in Mountain View, CA. Before joining CMU, I received my Bachelor's degree in Electronic Science and Technology from Beijing Institute of Technology in Beijing, China.

Outside of my professional life, I am an outgoing video game player and a hiker who also loves camping and road trips, and I am currently learning to skateboard. I also have a cat named Pikachu, who is handsome and active.

Publications

  • Interpreting Interpretations: Organizing Attribution Methods by Criteria. Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson [CVPR 2020 Workshop]
    [ arXiv | bib | slides | video ]

    Motivated by distinct, though related, criteria, a growing number of attribution methods have been developed to interpret deep learning. While each relies on the interpretability of the concept of "importance" and our ability to visualize patterns, explanations produced by the methods often differ. As a result, input attributions for vision models fail to provide any level of human understanding of model behaviour. In this work we expand the foundations of human-understandable concepts with which attributions can be interpreted beyond "importance" and its visualization; we incorporate the logical concepts of necessity and sufficiency, and the concept of proportionality. We define metrics to represent these concepts as quantitative aspects of an attribution. This allows us to compare attributions produced by different methods and interpret them in novel ways: to what extent does this attribution (or this method) represent the necessity or sufficiency of the highlighted inputs, and to what extent is it proportional? We evaluate our measures on a collection of methods explaining convolutional neural networks (CNNs) for image classification. We conclude that some attribution methods are more appropriate for interpretation in terms of necessity, while others are more appropriate in terms of sufficiency, and no method is always the most appropriate in terms of both.

  • Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu [CVPR 2020 Workshop]
    [ arXiv | bib | slides ]

    Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks and the reasons why a network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class-activation-mapping-based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward-pass score on the target class; the final result is obtained by a linear combination of the weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision-making process. Our approach outperforms previous methods on both recognition and localization tasks, and it also passes the sanity check. We also demonstrate its application as a debugging tool. Official code has been released.
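
For illustration, below is a minimal PyTorch sketch of the score-based weighting described in the Score-CAM abstract above. The model (a torchvision ResNet-18), the choice of layer4 as the target layer, and the min-max normalization of the upsampled maps are assumptions made for this example, not details taken from the official implementation.

    # A minimal sketch of Score-CAM-style weighting (not the official code).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def score_cam(model, x, target_class, activations):
        """x: (1, 3, H, W) input; activations: (1, K, h, w) feature maps for x."""
        _, K, _, _ = activations.shape
        H, W = x.shape[-2:]
        weights = []
        with torch.no_grad():
            for k in range(K):
                # Upsample the k-th activation map to the input size and rescale to [0, 1].
                mask = F.interpolate(activations[:, k:k + 1], size=(H, W),
                                     mode="bilinear", align_corners=False)
                mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
                # The forward-pass score of the masked input on the target class is the weight.
                score = F.softmax(model(x * mask), dim=1)[0, target_class]
                weights.append(score)
        weights = torch.stack(weights)                            # (K,)
        cam = torch.einsum("k,khw->hw", weights, activations[0])  # linear combination
        return F.relu(cam)                                        # keep positive evidence

    model = models.resnet18(pretrained=True).eval()
    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

    # Capture the activation maps of the last convolutional block with a forward hook.
    feats = {}
    hook = model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))
    logits = model(x)
    hook.remove()
    cam = score_cam(model, x, logits.argmax(dim=1).item(), feats["out"])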

Pre-prints

  • Towards Frequency-Based Explanation for Robust CNN. Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal, Zihao Ding
    [ pdf | bib ]

    Current explanation techniques towards a transparent Convolutional Neural Network (CNN) mainly focus on building connections between human-understandable input features and the model's prediction, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further provide a quantitative analysis of the contribution of different frequency components toward the model's prediction (a small illustrative sketch of this kind of frequency probing appears after this list). We show that the model's vulnerability against tiny distortions is a result of its reliance on high-frequency features, the target features of adversarial (black-box and white-box) attackers, to make its prediction. We further show that if the model develops a stronger association between the low-frequency components and the true labels, it becomes more robust, which explains why adversarially trained models are more robust against tiny distortions.

  • Smoothed Geometry for Robust Attribution.  Zifan Wang, Haofan Wang, Shakul Ramkumar, Matt Fredrikson, Piotr Mardziel, Anupam Datta
    [ pdf | bib ]

    Feature attributions are a popular tool for explaining the behavior of Deep Neural Networks (DNNs), but have recently been shown to be vulnerable to attacks that produce divergent explanations for nearby inputs. This lack of robustness is especially problematic in high-stakes applications where adversarially-manipulated explanations could impair safety and trustworthiness. Building on a geometric understanding of these attacks presented in recent work, we identify Lipschitz continuity conditions on models' gradient that lead to robust gradient-based attributions, and observe that smoothness may also be related to the ability of an attack to transfer across multiple attribution methods. To mitigate these attacks in practice, we propose an inexpensive regularization method that promotes these conditions in DNNs, as well as a stochastic smoothing technique that does not require re-training. Our experiments on a range of image models demonstrate that both of these mitigations consistently improve attribution robustness, and confirm the role that smooth geometry plays in these attacks on real, large-scale models.
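
As an illustration of the stochastic smoothing idea mentioned in the Smoothed Geometry abstract above, here is a minimal PyTorch sketch that averages input gradients over Gaussian-perturbed copies of the input (in the spirit of SmoothGrad); the noise scale and sample count are illustrative assumptions, not the paper's settings.

    # A minimal sketch of stochastic smoothing for gradient attributions:
    # average saliency gradients over Gaussian-perturbed copies of the input.
    # sigma and n_samples below are illustrative assumptions.
    import torch

    def smoothed_gradient(model, x, target_class, sigma=0.1, n_samples=32):
        """x: (1, C, H, W) input; returns an averaged input-gradient attribution."""
        x = x.detach()
        grads = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(noisy)[0, target_class]
            grads += torch.autograd.grad(score, noisy)[0]
        return grads / n_samples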
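
Similarly, for the frequency-based explanation pre-print above, here is a minimal PyTorch sketch of one way to probe a model's reliance on frequency components: low-pass filter an image in the Fourier domain and compare the model's prediction on the filtered input with its prediction on the original. The cutoff radius and the comparison procedure are assumptions for illustration, not the paper's exact quantification.

    # A minimal sketch of frequency-component probing: keep only low frequencies
    # of the input and see how much the model's prediction changes.
    import torch

    def low_pass(x, radius=16):
        """Keep frequencies within `radius` of the spectrum center; x: (1, C, H, W)."""
        spectrum = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        H, W = x.shape[-2:]
        ys = torch.arange(H).view(-1, 1) - H // 2
        xs = torch.arange(W).view(1, -1) - W // 2
        mask = ((ys ** 2 + xs ** 2) <= radius ** 2).to(x.dtype)
        filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum * mask, dim=(-2, -1)))
        return filtered.real

    # If the prediction on low_pass(x) stays close to the prediction on x, the model
    # leans mostly on low-frequency content; a large drop suggests high-frequency reliance.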

Teaching

  • [Teaching Assistant] 2019 Fall 18-661: Introduction to Machine Learning for Engineers course homepage
  • [Teaching Assistant] 2020 Spring 18-739: Security and Fairness of Deep Learning course homepage

News

  • [2020-05-16] 言聚 Talk: Should we use AI that is powerful but impossible to understand?

Gateway

    My Latest Travel Picture
