OpenAI API and Privacy
Your questions answered
Luke Bowler – Legal consultant
Last updated on 29 July 2024
As we tell prospective clients, and as most of our clients already know, our legal assistance focuses on three areas: Privacy, Information Security, and Quality. Within the legal consulting industry, these three areas remain at the center of any sound compliance strategy, and they are recurring themes relevant both to business output and to internal usage policies for new software and hardware.
Within these three areas, the topic generating the most questions is the use of OpenAI’s ChatGPT and the ChatGPT API. We’ll spare you the repetitive marketing talk about the functionality and potential benefits of adopting the technology. Instead, we begin with a short overview of the OpenAI API and share the answers to the two central questions we have spent the past year and a half answering.
Understanding the GPT API
OpenAI’s Generative Pre-trained Transformer (GPT) API provides developers with access to various versions of its GPT models (e.g. GPT-4). The API supports a wide variety of uses, including content generation, creative ideation, chatbot development, and summarizing large volumes of text.
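As a concrete illustration, the sketch below shows the shape of a Chat Completions request body: the JSON payload a developer’s integration sends to the API. The model name and prompts here are placeholders of our own, not recommendations:

```python
import json

def chat_request_body(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build the JSON body for a POST to the Chat Completions endpoint
    (https://api.openai.com/v1/chat/completions)."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    })

# Example: a summarization request (placeholder text only — no personal data).
body = chat_request_body(
    "gpt-4",
    "You summarize documents concisely.",
    "Summarize the following clause: ...",
)
```

In practice this payload is usually assembled by OpenAI’s official client libraries; the point is simply that every prompt you send is data leaving your organization, which is why the agreements and security measures discussed below matter.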
At the same time, the integration of the API has raised privacy and information security questions that were not central before the mass-scale adoption of the technology. This is exemplified by European data protection authorities that have recently opened inquiries into OpenAI’s compliance with the GDPR. For instance, at the beginning of 2024 the Italian data protection authority (Garante per la protezione dei dati personali) accused OpenAI of violating GDPR provisions (Garante per la protezione dei dati personali, 2024). Similarly, the European Data Protection Board (EDPB) has established a task force to promote cooperation and the exchange of information regarding OpenAI’s compliance with data protection regulations (European Data Protection Board, 2024).
Adopting the ChatGPT API into business operations requires careful consideration of the relevant privacy regulations. Companies must understand that they become data controllers when using the API. An important compliance step is concluding a data processing agreement (DPA) with OpenAI, which sets out how customer data is processed and helps ensure that users are adequately informed about how their data is being used.
Practical Steps:
- Execute a Data Processing Agreement: Enter into a legally binding DPA with OpenAI that outlines the responsibilities and obligations regarding data processing. OpenAI offers a standard DPA, which can be executed via a form on Ironclad.
- External Data Sharing: The DPA explicitly addresses external data sharing. OpenAI may engage Subprocessors to process Customer Data, but it must notify customers of any changes to the Subprocessor list and obtain consent. Customers have the right to object to new Subprocessors on reasonable grounds related to data protection. OpenAI mandates that all Subprocessors comply with data protection obligations comparable to those in the DPA.
- Processing Purposes: OpenAI processes Customer Data primarily to provide and support its services. This includes insights, reporting, analytics, platform abuse monitoring, trust and safety monitoring, and other purposes as specified in the agreement.
- Data Use for Improvements: The DPA allows OpenAI to process Customer Data for the primary purpose of providing and maintaining the Services. However, it also specifies that OpenAI may use de-identified, anonymized, or aggregated data to improve its systems and services.
- Update Privacy Policies: Clearly articulate how personal data is processed, the purpose of processing, retention periods, and users’ rights. In 2023, we wrote a blog on how to draft a privacy statement (NAALA, 2023).
- Conduct a Data Protection Impact Assessment: For high-risk processing activities, a DPIA is required to identify and mitigate potential risks. For a more detailed description of how to conduct a DPIA, see our January 2024 blog (NAALA, 2024).
Information security is another important topic to consider when integrating AI technologies like the ChatGPT API. Protecting the confidentiality, integrity, and availability of data processed by the API requires internal security measures.
Recommended Actions:
- Access Controls: Implement strict access controls to ensure that only authorized personnel can access sensitive data.
- Regular Security Audits: Conduct regular security audits and vulnerability assessments to identify and address potential security weaknesses.
- API Monitoring: Implement continuous monitoring to detect and respond to unusual activity or potential security threats. This includes logging all API access, monitoring for unusual patterns or spikes in traffic, and setting up alerts for suspicious activities.
- Penetration Testing: While you may not be able to perform penetration testing directly on OpenAI’s ChatGPT API, you can conduct penetration tests on your own implementation and integration of the API. This helps ensure that your application, and its interaction with the API, remains secure.
- Consider using the API through Azure: Extended terms of use apply to the Azure OpenAI Service and its preview features. Microsoft’s data protection terms restrict data use, mandating that customer data is not accessible to other customers and is not used to improve OpenAI models (Microsoft Azure, 2024). Data processed by Microsoft includes prompts and generated content for content generation, model creation, and abuse monitoring. Data, including prompts, is stored on dedicated servers for 30 days for misuse prevention; objecting to this storage requires an approved request (Microsoft Azure, 2024).
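To make the API-monitoring recommendation above concrete, here is a minimal sketch of sliding-window access logging and spike alerting in Python. The function name, window length, and alert threshold are our own illustrative choices, not part of any OpenAI or Azure tooling; a production setup would feed the same signals into your existing logging and alerting stack.

```python
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("openai-audit")

# Sliding-window request counter: alert when traffic spikes past a threshold.
WINDOW_SECONDS = 60
ALERT_THRESHOLD = 100  # illustrative value; tune to your normal traffic
_timestamps: deque = deque()

def record_api_call(user: str, endpoint: str) -> bool:
    """Log one API access; return False when the traffic spike threshold is hit."""
    now = time.time()
    _timestamps.append(now)
    # Drop timestamps that have fallen out of the monitoring window.
    while _timestamps and now - _timestamps[0] > WINDOW_SECONDS:
        _timestamps.popleft()
    log.info("API access: user=%s endpoint=%s", user, endpoint)
    if len(_timestamps) > ALERT_THRESHOLD:
        log.warning("Traffic spike: %d calls in %ds window",
                    len(_timestamps), WINDOW_SECONDS)
        return False
    return True
```

A wrapper like this, called before each outgoing request in your integration layer, gives you the audit trail of who sent what to the API and an early signal of unusual usage patterns.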
Questions? We are happy to discuss your specific case.