A tunable measure for information leakage, called maximal $\alpha$-leakage, is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset, conditioned on a disclosed dataset. The choice of $\alpha$ determines the specific adversarial action, ranging from refining a belief for $\alpha=1$ to guessing the most likely outcome under the posterior for $\alpha=\infty$; for these extremal values, the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other values of $\alpha$, the measure is shown to equal the Arimoto channel capacity of order $\alpha$. Several properties of this measure are proven, including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property. A full version of this paper is in .
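For concreteness, a minimal sketch of the quantities behind these claims, using standard $\alpha$-leakage notation assumed here rather than taken from this abstract: $\hat{X}$ denotes the adversary's (possibly randomized) estimate with strategy $P_{\hat{X}|Y}$, and $U - X - Y$ denotes a Markov chain through the disclosed data $Y$.
% Sketch of the underlying definitions (notation assumed, not from the abstract).
% alpha-leakage of X through the disclosed Y, for alpha in (1, infinity):
\[
  \mathcal{L}_\alpha(X \to Y)
    = \frac{\alpha}{\alpha-1}
      \log \frac{\max_{P_{\hat{X} \mid Y}}
                   \mathbb{E}\big[ P_{\hat{X} \mid Y}(X \mid Y)^{\frac{\alpha-1}{\alpha}} \big]}
                {\max_{P_{\hat{X}}}
                   \mathbb{E}\big[ P_{\hat{X}}(X)^{\frac{\alpha-1}{\alpha}} \big]}.
\]
% Maximal alpha-leakage: worst case over all (potentially random)
% functions U of the dataset X:
\[
  \mathcal{L}^{\max}_\alpha(X \to Y)
    = \sup_{U :\, U - X - Y} \mathcal{L}_\alpha(U \to Y).
\]
% Extremal values stated in the abstract: mutual information at
% alpha = 1 and maximal leakage at alpha = infinity:
\[
  \mathcal{L}^{\max}_1(X \to Y) = I(X;Y), \qquad
  \mathcal{L}^{\max}_\infty(X \to Y)
    = \log \sum_{y} \max_{x :\, P_X(x) > 0} P_{Y \mid X}(y \mid x).
\]
For intermediate $\alpha$, taking the supremum over $U$ is what turns the per-source leakage into a channel-capacity-type quantity, which is why the Arimoto channel capacity appears.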