As a conceptual framework, contextual integrity has been used to analyze and understand the privacy implications of socio-technical systems across a wide array of platforms (e.g. the Web, smartphones, IoT systems), and has informed many tools, frameworks, and system designs that help study and address these privacy issues.
===Social media: privacy in public===
In her book ''Privacy in Context: Technology, Policy, and the Integrity of Social Life'', Nissenbaum discussed the privacy issues related to public data, citing examples such as the privacy concerns raised by Google Street View and the problems caused by converting previously paper-based public records into digital form and publishing them online. In recent years, similar issues arising on social media have revived the discussion. Shi et al. used the contextual integrity framework to examine how people manage their interpersonal information boundaries. They found that information access norms depended on who was expected to view the information. Researchers have also applied contextual integrity to more controversial social events, e.g. the
Facebook–Cambridge Analytica data scandal. The concept of contextual integrity has also influenced the ethical norms for research using social media data. Fiesler et al. studied Twitter users' awareness and perceptions of research that analyzed Twitter data, reported results in papers, or even quoted actual tweets. Users' concerns turned out to depend largely on contextual factors, i.e. who is conducting the research, what the study is for, etc., which is in line with contextual integrity theory.
===Mobile privacy: using contextual integrity to judge the appropriateness of information flows===
The privacy concerns raised by the collection, dissemination, and use of personal data via smartphones have received a large amount of attention from different stakeholders. A large body of computer science research aims to analyze, efficiently and accurately, how sensitive personal data (e.g. geolocation, user accounts) flows within an app and when it leaves the phone. Contextual integrity has been widely invoked when interpreting the privacy implications of these observed data flows. For example, Wijesekera et al. argued that smartphone permission systems would be more effective if they prompted the user only "when an application's access to sensitive data is likely to defy expectations", and they examined how applications access personal data and the gap between current practice and users' expectations. Lin et al. demonstrated multiple problematic uses of personal data that violated users' expectations. Among them, using personal data for
mobile advertising purposes was the most problematic. Most users were unaware of this implicit data collection and found it unpleasantly surprising when researchers informed them of it. Contextual integrity has also influenced the design of mobile operating systems. Both iOS and Android use a permission system to manage developers' access to sensitive resources (e.g. geolocation, contact list, user data) and to give users control over which apps can access which data. In their official guidelines for developers, both iOS and Android recommend that developers limit their use of permission-protected data to situations where it is necessary, and that they provide a short description of why each permission is requested. Since Android 6.0, users are prompted at runtime, in the context of the app, which Android's documentation refers to as "Increased situational context".
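The expectation-based judgment described above can be sketched as a simple norm check: an observed information flow is appropriate if it matches a norm users expect, and "defies expectations" otherwise. This is only an illustrative sketch; the app names, data types, and norms below are hypothetical and are not the actual models or datasets used in the cited studies.

```python
# Illustrative sketch: judging the appropriateness of a mobile data flow
# against user-expectation norms, in the spirit of contextual integrity.
# All app names, data types, and norms here are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An observed information flow out of an app."""
    app: str        # sender (the app accessing the data)
    data: str       # information type, e.g. "geolocation"
    recipient: str  # where the data goes, e.g. "ad_network"
    purpose: str    # declared purpose, e.g. "advertising"

# Norms encode which (data, recipient, purpose) combinations users expect.
# A flow matching no norm "defies expectations" and would warrant a prompt.
EXPECTED_NORMS = {
    ("geolocation", "app_backend", "navigation"),
    ("contacts", "app_backend", "messaging"),
}

def is_appropriate(flow: Flow) -> bool:
    """Return True if the flow matches an expected norm."""
    return (flow.data, flow.recipient, flow.purpose) in EXPECTED_NORMS

# A maps app sending location to its own backend for navigation: expected.
ok = is_appropriate(Flow("maps_app", "geolocation", "app_backend", "navigation"))
# The same data sent to an ad network for advertising defies expectations,
# so a runtime permission prompt would be warranted.
flagged = is_appropriate(Flow("maps_app", "geolocation", "ad_network", "advertising"))
```

In this framing, a runtime permission system like Android's can be seen as deferring the norm check to the user at the moment the flow occurs, rather than encoding all norms in advance.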
===Other applications===
In 2006, Barth, Datta, Mitchell, and Nissenbaum presented a formal language that can be used to reason about the privacy rules in privacy law. They analyzed the privacy provisions of the Gramm–Leach–Bliley Act and showed how to translate some of its principles into the formal language.

==References==