===== Paper Discussion =====

(Under construction)

Students will choose one of the following discussion topics. Students with the same interest form a group. Each group will be given a number of consecutive discussion slots and will prepare a summary presentation about its topic, to be given at the beginning of the group's discussion slots. The group decides the set of papers to discuss, the time allocated to each paper, and which group members discuss the individual papers. I expect the groups to meet beforehand (on their own) to figure these out.

In the summary presentation, you are supposed to (1) define the area; (2) summarize the status of the research area (e.g., discussing the state of the art, open problems, the important papers, and the leading research groups); and (3) quickly introduce the papers that your group will cover, with any hints for how the rest of the class should read them.

Individual group members will cover different papers. For each paper, a group member shall present the paper and lead the class discussion. In the paper presentation, you are supposed to: (1) define the problem; (2) introduce any background necessary to understand the paper; (3) discuss the technical solution; (4) comment on the pros and cons of the paper; (5) discuss unsolved open problems; (6) advocate any ideas you may have; and (7) go over topics that you want the class to discuss. Be prepared to draw examples on the board to explain the technique.

Each student needs to present for at least one hour and can choose to present one or two papers within that duration. The entire class is supposed to read the paper(s) being discussed. A quiz will be given (by the presenter) before each paper presentation starts. The paper presentation grade is divided into two parts: a group grade shared by all group members, and individual grades.

[[https://docs.google.com/spreadsheets/d/1V5lC4UQHOhNRoC3cKyY14wE0-k4Jw2r-zcvDJqjXIcI/edit?usp=sharing|Paper Discussion Schedule]]

== Topics of Interest ==

* AI security
* Adversarial sample attack and defense
* [[https://arxiv.org/pdf/1608.04644.pdf|Towards Evaluating the Robustness of Neural Networks]]
* [[https://arxiv.org/pdf/1706.06083.pdf|Towards Deep Learning Models Resistant to Adversarial Attacks]]
* [[https://www.cs.purdue.edu/homes/ma229/papers/NDSS19.pdf|NIC: Detecting Adversarial Samples with Neural Network Invariant Checking]]
* [[https://www.cs.purdue.edu/homes/taog/docs/NeurIPS18.pdf|Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples]]
* [[https://arxiv.org/pdf/1901.08573.pdf|Theoretically Principled Trade-off between Robustness and Accuracy]]
* [[https://arxiv.org/pdf/1712.04248.pdf|Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models]]
* [[https://arxiv.org/pdf/1804.08598.pdf|Black-box Adversarial Attacks with Limited Queries and Information]]
* [[https://arxiv.org/pdf/1902.04818.pdf|The Odds are Odd: A Statistical Test for Detecting Adversarial Examples]]
* [[https://arxiv.org/pdf/1805.06605.pdf|Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models]]
* Constraint-solving-based attacks
* [[https://arxiv.org/pdf/1903.10346.pdf|Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition]]
* [[https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_08-2_Schonherr_paper.pdf|Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding]]
* [[https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_08-1_Abdullah_paper.pdf|Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems]]
* [[https://openreview.net/pdf?id=r1g4E3C9t7|Characterizing Audio Adversarial Examples Using Temporal Dependency]]
* [[https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_03A-5_Li_paper.pdf|TextBugger: Generating Adversarial Text Against Real-world Applications]]
* [[https://arxiv.org/pdf/1704.08006.pdf|Interpretable Adversarial Perturbation in Input Embedding Space for Text]]
* [[https://arxiv.org/pdf/1704.08006.pdf|Deep Text Classification Can be Fooled]]
* [[https://aclweb.org/anthology/D18-1316|Generating Natural Language Adversarial Examples]]
* [[https://aclweb.org/anthology/P18-2006|HotFlip: White-Box Adversarial Examples for Text Classification]]
* [[https://arxiv.org/pdf/1801.02610.pdf|Generating Adversarial Examples with Adversarial Networks]]
* [[https://arxiv.org/pdf/1707.08945.pdf|Robust Physical-World Attacks on Deep Learning Models]]
* Backdoor attack and defense
* [[https://machine-learning-and-security.github.io/papers/mlsec17_paper_51.pdf|BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain]]
* [[https://www.cs.purdue.edu/homes/ma229/papers/NDSS18.TNN.pdf|Trojaning Attack on Neural Networks]]
* [[https://sites.cs.ucsb.edu/~bolunwang/assets/docs/backdoor-sp19.pdf|Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks]]
* [[https://arxiv.org/pdf/1811.03728.pdf|Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering]]
* [[https://arxiv.org/pdf/1805.12185.pdf|Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks]]
* [[https://arxiv.org/pdf/1812.00483.pdf|Model-Reuse Attacks on Deep Learning Systems]]
* [[https://www.cs.purdue.edu/homes/taog/docs/CCS19.pdf|ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation]]
* Model privacy
* [[https://arxiv.org/pdf/1709.01604.pdf|Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting]]
* [[https://www.usenix.org/system/files/sec19-carlini.pdf|The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks]]
* [[https://arxiv.org/pdf/1812.00910.pdf|Comprehensive Privacy Analysis of Deep Learning]]
* [[https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_03A-1_Salem_paper.pdf|ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models]]
* [[https://arxiv.org/pdf/1807.05852.pdf|Machine Learning with Membership Privacy using Adversarial Regularization]]
* [[https://arxiv.org/pdf/1802.04889.pdf|Understanding Membership Inferences on Well-Generalized Learning Models]]
* [[https://arxiv.org/pdf/1712.09136.pdf|Towards Measuring Membership Privacy]]
* [[https://www.cs.cornell.edu/~shmat/shmat_ccs15.pdf|Privacy-Preserving Deep Learning]]
* [[https://www.cs.cornell.edu/~shmat/shmat_oak17.pdf|Membership Inference Attacks Against Machine Learning Models]]
* [[https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf|Stealing Machine Learning Models via Prediction APIs]]
* [[https://arxiv.org/pdf/1802.05351.pdf|Stealing Hyperparameters in Machine Learning]]
* [[https://arxiv.org/pdf/1805.02628.pdf|PRADA: Protecting Against DNN Model Stealing Attacks]]
* [[http://www.princeton.edu/~liweis/Publications/privacy_vs_robustness_dls19.pdf|Membership Inference Attacks against Adversarially Robust Deep Learning Models]]
* [[http://proceedings.mlr.press/v97/sablayrolles19a/sablayrolles19a.pdf|White-box vs Black-box: Bayes Optimal Strategies for Membership Inference]]
* [[https://openreview.net/pdf?id=S1zk9iRqF7|PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees]]
* [[https://arxiv.org/pdf/1602.05629.pdf|Communication-Efficient Learning of Deep Networks from Decentralized Data]]
* [[https://arxiv.org/pdf/1712.07557.pdf|Differentially Private Federated Learning: A Client Level Perspective]]
* [[http://youngwei.com/pdf/PermuteInvariance.pdf|Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations]]
* [[https://arxiv.org/pdf/1306.4447.pdf|Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers]]
* AI debugging
* [[https://homes.cs.washington.edu/~marcotcr/acl18.pdf|Semantically Equivalent Adversarial Rules for Debugging NLP Models]]
* [[https://www.cs.purdue.edu/homes/ma229/papers/FSE18.pdf|MODE: Automated Neural Network Model Debugging via State Differential Analysis and Input Selection]]
* [[https://www.cs.purdue.edu/homes/ma229/papers/FSE17.pdf|LAMP: Data Provenance for Graph Based Machine Learning Algorithms Through Derivative Computation]]
* [[https://arxiv.org/pdf/1812.08999.pdf|Feature-Wise Bias Amplification]]
* Overfitting, underfitting, stereotyping
* AI testing
* [[http://www.cs.columbia.edu/~junfeng/papers/deepxplore-sosp17.pdf|DeepXplore: Automated Whitebox Testing of Deep Learning Systems]]
* [[https://arxiv.org/pdf/1708.08559.pdf|DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars]]
* [[https://www.ntu.edu.sg/home/yi_li/files/fse19.pdf|DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems]]
* [[https://arxiv.org/pdf/1809.01266.pdf|DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks]]
* [[https://arxiv.org/pdf/1803.07519.pdf|DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems]]
* [[https://arxiv.org/pdf/1805.05206.pdf|DeepMutation: Mutation Testing of Deep Learning Systems]]
* AI verification
* [[https://arxiv.org/pdf/1702.01135.pdf|Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks]]
* [[https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf|AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation]]
* [[https://files.sri.inf.ethz.ch/website/papers/DeepPoly.pdf|An Abstract Domain for Certifying Neural Networks]]
* [[https://www.cs.purdue.edu/homes/suresh/papers/pldi19.pdf|An Inductive Synthesis Framework for Verifiable Reinforcement Learning]]
* [[https://www.cs.columbia.edu/~tcwangshiqi/docs/reluval.pdf|Formal Security Analysis of Neural Networks using Symbolic Intervals]]
* [[https://arxiv.org/pdf/1809.08098.pdf|Efficient Formal Safety Analysis of Neural Networks]]
* [[https://arxiv.org/pdf/1902.02918.pdf|Certified Adversarial Robustness via Randomized Smoothing]]
* AI interpretation
* [[http://www.personal.psu.edu/wzg13/publications/ccs18.pdf|LEMNA: Explaining Deep Learning based Security Applications]]
* [[http://www.personal.psu.edu/wzg13/publications/neurips18.pdf|Explaining Deep Learning Models -- A Bayesian Non-parametric Approach]]
* [[https://homes.cs.washington.edu/~marcotcr/aaai18.pdf|Anchors: High-Precision Model-Agnostic Explanations]]
* [[https://arxiv.org/pdf/1602.04938.pdf|"Why Should I Trust You?": Explaining the Predictions of Any Classifier]]
* AI and program analysis
* [[https://www.cs.purdue.edu/homes/ma229/papers/PLDI19.pdf|Programming Support for Autonomizing Software]]
* Symbolic execution of AI models (by NUS)
* [[https://arxiv.org/pdf/1510.00149v5.pdf|Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding]]
* [[https://arxiv.org/pdf/1602.01528v2.pdf|EIE: Efficient Inference Engine on Compressed Deep Neural Network]]
* [[https://arxiv.org/pdf/1612.00694.pdf|ESE: Efficient Speech Recognition Engine for Sparse LSTM on FPGA]]
* [[https://arxiv.org/pdf/1712.01887.pdf|Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training]]
* [[http://homes.sice.indiana.edu/lukefahr/papers/jiecaoyu_isca17.pdf|Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism]]
* [[https://users.cs.northwestern.edu/~stamourv/papers/unconventional-parallelization.pdf|Unconventional Parallelization of Nondeterministic Applications]]