Privacy Impact Assessments are intended to identify privacy risks and propose ways to mitigate them. While these assessments are a very useful tool for helping organizations start thinking about their privacy practices, they often conceptualize risk and solutions more in terms of security than privacy. A more comprehensive understanding of privacy risk, one that considers the inherent risks of using and sharing information, makes it possible to envision a broader range of privacy solutions.
Privacy Impact Assessments (PIAs), required for most new or renewed government programs that handle citizens’ personal information, are intended to identify privacy risks and create plans to mitigate them. More broadly, PIAs are a tool for assessing the impact on citizens of the way that government organizations manage their personal information. PIAs examine the types of personal information an organization holds and how it is collected, stored, used, shared, and destroyed. They establish whether these information management practices are aligned with laws, government standards, and organizational policy. Based on this assessment, PIAs outline risks and gaps in privacy practices. PIAs are generally very helpful not only for ensuring legal compliance, but also for getting organizations to start thinking about information management from a privacy perspective and planning ways to improve privacy.
Moving beyond a security paradigm
Privacy risk assessment is the central aspect of PIAs, but it is often poorly understood. PIAs often base their risk assessments on principles borrowed from Threat & Risk Assessments (TRAs), which examine physical and virtual security, when there is in fact a fundamental difference between security risk and privacy risk.
Security risk is based on two main factors: the likelihood of negative events (such as data breaches), and the impact of these potential events. Metrics for likelihood usually consist of probability estimates for various potential risk scenarios. Impact, on the other hand, is a subjective assessment which often focuses more on the amount of data that could be released inappropriately and the severity of security flaws than on the real-world impact on people whose personal information could be breached. For instance, TRAs often classify all potential breaches of health data as high impact. In reality, the breach of blood sugar test results, for example, would have a very different impact on patients than the breach of HIV test results.
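The limitation described above can be made concrete with a small sketch. The function and the scenario scores below are purely illustrative, not drawn from any real TRA methodology:

```python
# Hypothetical sketch of a TRA-style risk score: risk = likelihood x impact.
# All likelihood and impact values below are invented for illustration.

def security_risk(likelihood: float, impact: float) -> float:
    """Classic security risk score: probability of the event times its impact."""
    return likelihood * impact

# A TRA that rates every health-data breach as high impact (impact = 5)
# scores these two very different scenarios identically...
tra_blood_sugar = security_risk(0.10, 5)   # breach of blood sugar results
tra_hiv_status = security_risk(0.10, 5)    # breach of HIV test results
assert tra_blood_sugar == tra_hiv_status

# ...whereas impact graded by real-world harm to patients separates them.
graded_blood_sugar = security_risk(0.10, 2)   # lower harm if disclosed
graded_hiv_status = security_risk(0.10, 5)    # severe harm if disclosed
assert graded_hiv_status > graded_blood_sugar
```

The point of the sketch is not the particular numbers, but that a uniform impact rating hides exactly the distinction between the two health-data breaches that matters to the people affected.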
Privacy risk, as distinct from security risk, has three main elements. The first is the likelihood of privacy violations (inappropriate use or disclosure of information). This is similar in principle to the likelihood factor assessed in TRAs, but covers a broader range of scenarios, including privacy violations by people with authorized access to data. The second element is structural risk, which reflects how data is managed and shared by the organization; for instance, organizations that allow many users to access databases containing personal information have a higher structural risk than those that more carefully limit access. The final key element is the inherent privacy risk of the data itself: this is based on the identifiability of the data (how easily individuals can be identified based on the data) and the sensitivity of the data (the degree to which inappropriate use or disclosure of the data could harm individuals).
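One way to see how these elements interact is to sketch them as a simple data structure. The field names, the 0-to-1 scores, and the multiplicative combination are all assumptions made for illustration; real privacy risk models weight and combine these factors in many different ways:

```python
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    """The three elements of privacy risk; all scores are illustrative (0 to 1)."""
    likelihood: float        # chance of inappropriate use or disclosure
    structural_risk: float   # how broadly the data is accessed and shared
    identifiability: float   # how easily individuals can be identified from the data
    sensitivity: float       # how much harm misuse or disclosure could cause

    @property
    def data_risk(self) -> float:
        # Inherent risk of the data itself: identifiability x sensitivity.
        return self.identifiability * self.sensitivity

    def overall(self) -> float:
        # One simple (hypothetical) way to combine the three elements.
        return self.likelihood * self.structural_risk * self.data_risk

# Same data, same likelihood of violations -- but broader database access
# raises structural risk, and with it the overall privacy risk.
open_access = PrivacyRisk(likelihood=0.3, structural_risk=0.9,
                          identifiability=0.8, sensitivity=0.9)
limited_access = PrivacyRisk(likelihood=0.3, structural_risk=0.2,
                             identifiability=0.8, sensitivity=0.9)
assert open_access.overall() > limited_access.overall()
```

Treating data risk as a separate term also shows why de-identification helps: lowering identifiability reduces overall risk even when nothing about access or likelihood changes.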
Diversifying access: A broader range of privacy solutions
Perhaps the most important difference between privacy and security risk pertains to the question of access. Security is basically a matter of access control, and security risks are the possibility of unauthorized access to data. Privacy risk is broader, and includes potential violations of privacy by users with authorized access to information. PIAs are designed to uncover privacy risks that are not related to security flaws, but rather are inherent to the way that the organization handles information. A privacy risk assessment will include questions such as these:
- Do the organization’s policies for data sharing respect privacy laws and standards?
- Does the organization use or disclose personal information for purposes other than those to which individuals consented?
- Do system users have access to more data than they need to do their work?
- Could data released to third parties (for example, health or social service data disclosed to researchers) be used to identify individuals, either on its own or by linking it to other available data?
Most PIAs do ask these questions, but approach them from a security perspective, emphasizing restricted access to information and strong security safeguards as the way to contain privacy risk. Safeguards that address structural risk and data risk are often ignored: these include customizing access so that users can be granted access only to the data they need, and de-identifying data so that it can be shared without endangering privacy. De-identification is particularly well-suited to mitigating the privacy risk of using personal records for research and analytics, as it reduces to a very low level the risk that individuals could be identified from the data. When users have access to datasets that serve their needs without providing unnecessary identifying information, privacy risk is greatly reduced and data can be better utilized for a variety of purposes.
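A minimal sketch of what de-identification before sharing can look like in practice. The field names and generalization rules below are invented for illustration and do not follow any particular de-identification standard; real de-identification also has to assess re-identification risk across the whole dataset, not record by record:

```python
# Hypothetical de-identification sketch: drop direct identifiers, then
# generalize quasi-identifiers that could identify someone via linkage.

def deidentify(record: dict) -> dict:
    direct_identifiers = {"name", "health_number", "email"}  # assumed fields
    out = {k: v for k, v in record.items() if k not in direct_identifiers}
    # Generalize quasi-identifiers rather than removing them outright,
    # so the data stays useful for research and analytics.
    if "age" in out:
        out["age_band"] = f"{(out.pop('age') // 10) * 10}s"   # 47 -> "40s"
    if "postal_code" in out:
        out["region"] = out.pop("postal_code")[:3]            # keep prefix only
    return out

patient = {"name": "J. Doe", "health_number": "123-456",
           "age": 47, "postal_code": "M5V 2T6", "test": "blood sugar"}
shared = deidentify(patient)
assert "name" not in shared and "health_number" not in shared
assert shared["age_band"] == "40s" and shared["region"] == "M5V"
```

The design choice here mirrors the point in the text: the researcher still gets a dataset that serves the analytical need, while the fields that make individuals identifiable never leave the organization.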
PIAs are an important part of the process of identifying and mitigating risks, but too often understand risk in terms of security rather than broader privacy issues. A multidimensional understanding of privacy, one that considers structural and data risk rather than only the risk of data breaches, reveals a variety of risk mitigation options that are often more efficient than security solutions. Utilizing a variety of privacy solutions, including customized access and de-identification, makes it possible for data to be more widely used and shared with very little risk to privacy.