TY - GEN
T1 - A Tunable Measure for Information Leakage
AU - Liao, Jiachun
AU - Kosut, Oliver
AU - Sankar, Lalitha
AU - Calmon, Flavio P.
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grant No. CCF-1350914.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/15
Y1 - 2018/8/15
N2 - A tunable measure for information leakage called maximal \alpha-leakage is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset, conditioned on a disclosed dataset. The choice of \alpha determines the specific adversarial action, ranging from refining a belief for \alpha=1 to guessing the best posterior for \alpha=\infty; for these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other \alpha, this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven, including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property. A full version of this paper is in [1].
AB - A tunable measure for information leakage called maximal \alpha-leakage is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset, conditioned on a disclosed dataset. The choice of \alpha determines the specific adversarial action, ranging from refining a belief for \alpha=1 to guessing the best posterior for \alpha=\infty; for these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other \alpha, this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven, including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property. A full version of this paper is in [1].
UR - http://www.scopus.com/inward/record.url?scp=85052481906&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052481906&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2018.8437307
DO - 10.1109/ISIT.2018.8437307
M3 - Conference contribution
AN - SCOPUS:85052481906
SN - 9781538647806
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 701
EP - 705
BT - 2018 IEEE International Symposium on Information Theory, ISIT 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Symposium on Information Theory, ISIT 2018
Y2 - 17 June 2018 through 22 June 2018
ER -