Incorporating ChatGPT into your work routine can be immensely beneficial, but protecting your work privacy is paramount. As an advanced language model, ChatGPT processes and generates text based on the input it receives. To maintain a secure work environment, follow these essential practices. Be careful about what you share, and refrain from disclosing sensitive data such as financial records or confidential work details.

Regularly clear your chat history and delete conversations to minimize data exposure. Familiarize yourself with ChatGPT’s privacy controls, including disabling chat history and training when not needed. By embracing these practices, you can confidently leverage the capabilities of ChatGPT while ensuring the security of your work-related information.

1. Avoid Storing Chat History:

Protecting your privacy can begin with a straightforward yet impactful step: refraining from saving your chat history. By default, ChatGPT retains all interactions, serving as valuable training data but also subjecting conversations to moderation and potential security risks.

While conversation review for moderation is permitted under OpenAI’s terms of service, it introduces security risks. Major companies such as Apple, J.P. Morgan, Verizon, and Amazon have restricted employee use of AI tools over concerns about leaked or collected confidential information. To disable chat history:

  • Click the ellipsis or three dots next to your ChatGPT account name.
  • Navigate to Settings.
  • Access Data controls.
  • Turn off Chat history and training.

Note that even after you disable chat history, new conversations are retained for 30 days, during which moderators may review them before permanent deletion. Disabling chat history is nonetheless a proactive step toward safeguarding your privacy.

Tip: If you need to keep a record of your ChatGPT data, export it or save it through alternative means such as screenshots, manual notes, copy-pasting into a secure application, or encrypted cloud storage.

2. Erase Conversations:

A notable concern with OpenAI’s ChatGPT is the potential for data breaches. The March 2023 ChatGPT outage, which contributed to a Federal Trade Commission investigation, illustrates the risks of using the app.

According to OpenAI’s update on the March 20, 2023 outage, a bug in an open-source library allowed some users to see the titles of other users’ chat histories. The incident also exposed payment-related details of 1.2% of ChatGPT Plus subscribers, including names, email addresses, and partial credit card information.

Deleting your conversations is a proactive measure to shield your data. Follow these steps to clear your chats:

  • Click the ellipsis or three dots beside your ChatGPT account name.
  • Access Settings.
  • Under General, click Clear to remove all chats.

Alternatively, selectively delete specific conversations by clicking the chat you wish to remove and selecting the trash icon.

3. Refrain from Sharing Sensitive Work Information with ChatGPT:

Exercise caution and avoid disclosing sensitive work-related details to ChatGPT. Assuming that a company will protect your data simply because its terms of service say so can be misleading.

Avoid sharing financial records, intellectual property, customer information, and protected health data; doing so reduces the risk of exposing confidential information to cyber threats and helps prevent legal complications for both you and your company.

The significant ChatGPT credential leak between June 2022 and May 2023, in which over 100,000 ChatGPT account credentials were compromised and sold on dark web marketplaces, underscores the importance of this practice.

4. Implement Data Anonymization Techniques:

Employ data anonymization techniques to safeguard individual privacy while extracting insights from datasets. When using ChatGPT for work, apply these techniques to prevent direct or indirect identification of individuals in the data.

Consider these basic data anonymization techniques recommended by the Personal Data Protection Commission of Singapore:

  • Attribute Suppression: Remove data fields that are not needed for your query.
  • Pseudonymization: Replace identifiable information with pseudonyms.
  • Data Perturbation: Slightly modify data values within a defined range.
  • Generalization: Reduce data precision by grouping values into broader categories.
  • Character Masking: Display only a portion of sensitive data.
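Two of these techniques, pseudonymization and character masking, are straightforward to apply before pasting text into ChatGPT. The sketch below is a minimal illustration, not a complete PII detector: the names, pseudonyms, and digit pattern are hypothetical placeholders you would adapt to your own data.

```python
import re

# Hypothetical mapping of known identifiable names to pseudonyms.
PSEUDONYMS = {"Alice Smith": "Employee-1", "Bob Jones": "Employee-2"}

def pseudonymize(text: str) -> str:
    """Pseudonymization: replace known identifiable names with aliases."""
    for name, alias in PSEUDONYMS.items():
        text = text.replace(name, alias)
    return text

def mask_account_numbers(text: str) -> str:
    """Character masking: keep only the last 4 digits of long digit runs
    (12-19 digits, a rough stand-in for card or account numbers)."""
    return re.sub(
        r"\b\d{12,19}\b",
        lambda m: "*" * (len(m.group()) - 4) + m.group()[-4:],
        text,
    )

prompt = "Alice Smith paid invoice 4111111111111111 last week."
print(mask_account_numbers(pseudonymize(prompt)))
# → Employee-1 paid invoice ************1111 last week.
```

Running the sanitized text through such a filter before submission means that even if a conversation is retained or reviewed, the identifying details never leave your machine.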

5. Restrict Access to Sensitive Data:

Limiting access to sensitive work data is crucial when employees use ChatGPT. If you are in a leadership role, control access to sensitive information by authorizing only the personnel who need it for their specific roles.

Implement access controls, such as role-based access control (RBAC), granting authorized employees access only to the data necessary for their job functions. Conduct regular access reviews to keep these controls effective, and promptly revoke access for employees who change roles or leave the company.
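The RBAC idea can be sketched in a few lines: a user maps to a role, and a role maps to a set of permissions. The roles, users, and permission names below are hypothetical; a real deployment would delegate this to an identity provider rather than hard-coded dictionaries.

```python
# Minimal role-based access control (RBAC) sketch with illustrative data.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "finance": {"read:reports", "read:financials"},
}

USER_ROLES = {"dana": "analyst", "sam": "finance"}

def can_access(user: str, permission: str) -> bool:
    """Return True only if the user's assigned role grants the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("dana", "read:financials"))  # False: analysts lack this
print(can_access("sam", "read:financials"))   # True: finance role grants it
```

Checking a permission like this before an employee can feed a dataset to ChatGPT ensures that only people whose role requires the data can ever expose it.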

6. Exercise Caution with Third-Party Apps:

Before using third-party ChatGPT apps and browser extensions for work, vet them carefully. Verify that they adhere to privacy standards and do not collect or retain information for questionable purposes.

Avoid installing apps that request unnecessary permissions on your device, and ensure their data handling practices align with your organization’s privacy standards.

Exercise Responsible Usage of ChatGPT in a Professional Environment

Ensuring privacy when incorporating ChatGPT into your work demands thoughtful deliberation. No approach can completely safeguard your data, but proactive measures can markedly reduce the likelihood of a breach.

ChatGPT offers real practical value for professional use, but protecting your company’s data must come first. Familiarizing yourself with ChatGPT’s privacy policy will help you make well-informed choices about how you use the tool.