Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Mrigank Raman1
Aaron Chan* 2
Siddhant Agarwal* 3
Peifeng Wang2
Hansen Wang4
Sungchul Kim5
Ryan Rossi5
Handong Zhao5
Nedim Lipka5
Xiang Ren2
*: Equal Contribution
1 IIT Delhi
2 University of Southern California
3 IIT Kharagpur
4 Tsinghua University
5 Adobe Research
Ninth International Conference on Learning Representations (ICLR) 2021

Abstract

Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
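To make the idea of a "deceptive perturbation" concrete, here is a minimal sketch of one simple heuristic: swapping relation labels between randomly chosen triple pairs. This preserves the KG's size and every entity's degree (so surface structure looks intact) while corrupting its semantics. The function name and representation (a list of `(head, relation, tail)` triples) are illustrative assumptions; the paper's actual heuristics and RL policy are more involved.

```python
import random

def perturb_kg(triples, num_swaps, seed=0):
    """Heuristically perturb a KG by swapping the relation labels of
    randomly chosen pairs of triples.

    Keeps the number of triples and each entity's degree unchanged,
    but alters which relation connects which entity pair, i.e. the
    KG's semantics. Illustrative sketch only.
    """
    rng = random.Random(seed)
    perturbed = [list(t) for t in triples]
    for _ in range(num_swaps):
        i, j = rng.sample(range(len(perturbed)), 2)
        # Exchange only the relation labels; heads and tails stay put.
        perturbed[i][1], perturbed[j][1] = perturbed[j][1], perturbed[i][1]
    return [tuple(t) for t in perturbed]

kg = [
    ("cat", "is_a", "animal"),
    ("dog", "is_a", "animal"),
    ("cat", "chases", "mouse"),
    ("dog", "chases", "cat"),
]
perturbed = perturb_kg(kg, num_swaps=3, seed=1)
```

A KG-augmented model that truly reasons over relation semantics should degrade on such a perturbed KG; the paper's finding is that downstream performance often does not.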

Paper & Code

Mrigank Raman, Aaron Chan, Siddhant Agarwal, Peifeng Wang, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Ninth International Conference on Learning Representations, 2021
[PDF] [Code]