Differential privacy/Docs/DP error metrics

Applied differential privacy is a new field. Within that field, there are many different (and at times discordant) ways of measuring the success or failure of a given data release. This page is intended to provide a general-purpose list of common differential privacy error metrics, along with how they are calculated.

These ideas and mathematical derivations are not set in stone; they are just examples of how we at WMF have calculated error in some of our ongoing DP projects.

Mathematical definitions

• ${\displaystyle X}$ is the input dataset.
• ${\displaystyle M_{t}}$ is a mechanism that takes in a dataset and does some non-differentially private transformation on it.
• ${\displaystyle M_{dp}}$ is a mechanism that takes in a dataset and does some differentially private transformation on it.
• ${\displaystyle U_{t}}$ is the set of values making up the true output of an aggregation (e.g. a groupby-count or a groupby-sum). Each element of the set is represented by ${\displaystyle u_{t}\in U_{t}}$.
• ${\displaystyle U_{dp}}$ is the set of values making up the differentially private output of an aggregation (the true output with some random noise added). Each element of the set is represented by ${\displaystyle u_{dp}\in U_{dp}}$, and values less than 0 are cast to 0.
• ${\displaystyle t}$ is the publication threshold. An element ${\displaystyle u_{dp}}$ is publicly released only if ${\displaystyle u_{dp}>t}$.

Therefore, ${\displaystyle M_{t}(X)=U_{t}}$ and ${\displaystyle M_{dp}(X)=U_{dp}}$.

Note: some error formulas on this page use the "|" notation to denote set cardinality, and others use it to denote absolute value.

Counts

It is useful to ground discussions of error metrics in an understanding of the underlying number of elements that would be released given a specific value of ${\displaystyle t}$.

True count

We can calculate the true count for a dataset as follows:

${\displaystyle {\text{TrueCount}}(U_{t},t)=|\{u_{t}\in U_{t}|u_{t}>t\}|}$

This is the number of data points that would be published at a threshold ${\displaystyle t}$ if no differential privacy were used, and serves as a kind of ground truth.

Published count

We can calculate the published count for a dataset as follows:

${\displaystyle {\text{PublishedCount}}(U_{dp},t)=|\{u_{dp}\in U_{dp}|u_{dp}>t\}|}$

This is the number of data points that would be published at a threshold ${\displaystyle t}$ while using differential privacy.
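The two counts above can be sketched in a few lines of Python. The value sets and threshold below are hypothetical, and the true and noisy values are assumed to be aligned element by element:

```python
# Hypothetical true and differentially private outputs of a groupby-count,
# aligned element by element (same group keys, same order).
u_true = [0, 3, 8, 12, 40, 2]
u_dp = [1.2, 2.1, 9.4, 11.0, 38.7, 0.0]  # noisy values; negatives cast to 0

t = 5  # publication threshold

# TrueCount: elements whose true value exceeds the threshold.
true_count = sum(1 for u in u_true if u > t)

# PublishedCount: elements whose noisy value exceeds the threshold.
published_count = sum(1 for u in u_dp if u > t)

print(true_count, published_count)  # 3 3
```

With these particular noisy values the two counts happen to agree, but in general noise moves individual elements across the threshold in both directions, so the two counts can differ.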

Errors

We can also calculate different sorts of error on an element-by-element basis by comparing differentially private values to true values. These error metrics can then be looked at in the aggregate.

Some useful questions to ask if you've calculated these metrics for all elements of a DP data release:

• What is the median error of published elements?
• What is the mean error of published elements?
• What fraction of published elements are below some error threshold?

Absolute error

We can calculate the absolute error for an element ${\displaystyle u_{dp}}$, corresponding to element ${\displaystyle u_{t}}$, as:

${\displaystyle {\text{AbsoluteError}}(u_{dp},u_{t})=|u_{t}-u_{dp}|}$

The absolute error is calculated in the units of ${\displaystyle u}$, and is the actual difference between a DP value and its corresponding true value.

Relative error

We can calculate the relative error for an element ${\displaystyle u_{dp}}$, corresponding to element ${\displaystyle u_{t}}$, as:

${\displaystyle {\text{RelativeError}}(u_{dp},u_{t})={\frac {|u_{t}-u_{dp}|}{u_{t}}}}$

The relative error is a dimensionless ratio (often reported as a percentage), and expresses the difference between a DP value and its corresponding true value relative to the true value. Note that it is undefined when ${\displaystyle u_{t}=0}$.
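Both per-element error metrics, and the aggregate questions listed earlier (median, mean, fraction below a threshold), can be sketched as follows. The value pairs and the 5% error threshold are hypothetical, and elements with a true value of 0 are skipped when computing relative error:

```python
from statistics import mean, median

# Hypothetical (true value, DP value) pairs for published elements.
pairs = [(10, 11.5), (100, 97.0), (40, 40.2)]

# AbsoluteError(u_dp, u_t) = |u_t - u_dp|, in the units of u.
abs_errors = [abs(u_t - u_dp) for u_t, u_dp in pairs]

# RelativeError(u_dp, u_t) = |u_t - u_dp| / u_t; undefined when u_t == 0.
rel_errors = [abs(u_t - u_dp) / u_t for u_t, u_dp in pairs if u_t != 0]

print(median(abs_errors))  # 1.5
print(mean(abs_errors))

# Fraction of published elements below a 5% relative-error threshold.
frac_ok = sum(1 for e in rel_errors if e < 0.05) / len(rel_errors)
print(frac_ok)  # 2/3 of elements are within 5% of their true value
```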

Other metrics

Besides counts and errors, there are other, more specific metrics of error that apply to differential privacy.

Bias

Differential privacy often involves transforming input data through normalization and the removal of outliers. These transformations can have effects on the published data as a whole. Bias seeks to quantify those effects, and can be calculated as:

${\displaystyle {\text{Bias}}(U_{dp},U_{t})={\frac {\sum U_{dp}-\sum U_{t}}{\sum U_{t}}}}$

Bias adds up the elements of the DP release and of the true release, subtracts the true sum from the DP sum, and divides by the true sum, yielding a signed ratio. If the ratio is negative, the sum of the DP release is less than that of the true data; if it is positive, the sum of the DP release is greater than that of the true data. The closer it is to 0, the closer the two sums are to each other.
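A minimal sketch of the bias calculation, using hypothetical value sets:

```python
# Hypothetical true and DP value sets for the same aggregation.
u_true = [10, 0, 40, 100]
u_dp = [11.0, 1.5, 38.0, 97.0]

# Bias = (sum of DP release - sum of true release) / sum of true release.
bias = (sum(u_dp) - sum(u_true)) / sum(u_true)
print(bias)  # negative: the DP sums are slightly below the true sum
```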

Drop rate

The drop rate is roughly analogous to the idea of a false negative rate in machine learning/statistics. In the context of DP, a data point that is dropped is one where the true count exceeds the release threshold, but the noisy count does not (i.e. ${\displaystyle u_{t}>t\land u_{dp}\leq t}$). The drop rate is the extension of this element-by-element principle to the entire dataset, and can be calculated as

${\displaystyle {\text{DropRateCount}}(U_{dp},U_{t})={\frac {|\{u_{t}\in U_{t},u_{dp}\in U_{dp}|u_{t}>t\land u_{dp}\leq t\}|}{|U_{t}|}}}$

${\displaystyle {\text{DropRateSum}}(U_{dp},U_{t})={\frac {\sum \{u_{t}\in U_{t},u_{dp}\in U_{dp}|u_{t}>t\land u_{dp}\leq t\}}{\sum U_{t}}}}$

The drop rate seeks to quantify the data that would be released without noise, but is instead not included in a differentially private data release, and can either look at the count or the sum of the elements that are dropped.
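Both forms of the drop rate can be sketched as follows, again assuming hypothetical, element-aligned true and noisy value sets:

```python
t = 5  # publication threshold
u_true = [0, 3, 8, 12, 40]
u_dp = [1.2, 6.1, 4.9, 11.0, 38.7]

# Dropped: true value above the threshold, noisy value at or below it.
dropped = [ut for ut, udp in zip(u_true, u_dp) if ut > t and udp <= t]

# DropRateCount: fraction of all elements that are dropped.
drop_rate_count = len(dropped) / len(u_true)

# DropRateSum: fraction of the true total carried by dropped elements.
drop_rate_sum = sum(dropped) / sum(u_true)

print(drop_rate_count, drop_rate_sum)  # 0.2 of elements are dropped
```

Here only the element with true value 8 is dropped: it would clear the threshold without noise, but its noisy value (4.9) falls below it.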

Spurious rate

The spurious rate is roughly analogous to the idea of a false positive rate in machine learning/statistics. In the context of DP, a data point that is spurious is one where the true count is 0, but the noisy count is greater than the release threshold (i.e. ${\displaystyle u_{t}=0\land u_{dp}>t}$). The spurious rate is the extension of this element-by-element principle to the entire dataset, and can be calculated as

${\displaystyle {\text{SpuriousRateCount}}(U_{dp},U_{t})={\frac {|\{u_{t}\in U_{t},u_{dp}\in U_{dp}|u_{t}=0\land u_{dp}>t\}|}{|U_{t}|}}}$

${\displaystyle {\text{SpuriousRateSum}}(U_{dp},U_{t})={\frac {\sum \{u_{t}\in U_{t},u_{dp}\in U_{dp}|u_{t}=0\land u_{dp}>t\}}{\sum U_{t}}}}$

The spurious rate seeks to quantify the data that wouldn't be released without noise, but is instead included in a differentially private data release, and can either look at the count or the sum of the spurious elements. It is the data that is, in effect, "hallucinated" in the data release because of the random noise added.
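Both forms of the spurious rate can be sketched in the same style, with hypothetical, element-aligned value sets:

```python
t = 5  # publication threshold
u_true = [0, 0, 8, 12, 40]
u_dp = [6.3, 1.2, 9.4, 11.0, 38.7]

# Spurious: true value is 0, but the noisy value clears the threshold.
spurious = [udp for ut, udp in zip(u_true, u_dp) if ut == 0 and udp > t]

# SpuriousRateCount: fraction of all elements that are spurious.
spurious_rate_count = len(spurious) / len(u_true)

# SpuriousRateSum: spurious noisy mass relative to the true total.
spurious_rate_sum = sum(spurious) / sum(u_true)

print(spurious_rate_count, spurious_rate_sum)  # 0.2 of elements are spurious
```

Here the first element is spurious: its true value is 0, but its noisy value (6.3) exceeds the threshold and would be "hallucinated" into the release.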

To come