Govt Cautions On ‘Information-Stealing Malware’ Targeting ChatGPT Users in Pakistan

The federal government has issued an advisory regarding a cyber security threat targeting ChatGPT users, saying that a breach of around 100,000 ChatGPT user accounts, harvested by information-stealing malware (Raccoon, Vidar, RedLine) and offered for sale on the dark web, has been reported.

The advisory further stated that the report about the breach also highlights one of the major challenges facing AI-driven projects (including ChatGPT): the growing sophistication of cyber-attacks.

The government has suggested precautionary measures and cautious use of ChatGPT (at organizational and individual levels).

Globally, many organizations are integrating ChatGPT and other AI-powered APIs into their operational workflows and information systems. Because ChatGPT stores user conversations, a compromised account underscores both the importance of AI-powered tools and the cyber risks that come with them. In case of a breach, access to a user account may provide insight into proprietary information, areas of interest/research, internal operational/business strategies, personal communications, software code, etc.

The precautionary measures for users include:

  1. Do not enter sensitive data into ChatGPT. If its use is essential, disable the chat-saving feature from the platform’s settings menu or manually delete those conversations as soon as possible.
  2. Use a malware-free/screened system for ChatGPT. A system infected with information-stealer malware may capture screenshots or perform keylogging, leading to a data leak.
  3. ChatGPT and other AI-powered tools and APIs must not be used by users handling extremely sensitive data. Masking of critical information or the use of dummy data may be employed where use is absolutely essential.
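The masking suggested in point 3 can be sketched as a simple pre-processing pass over the prompt text. The snippet below is a minimal illustration only, with hypothetical regex patterns for an email address and a Pakistani CNIC number; a real deployment would rely on a vetted PII-redaction library with far more exhaustive coverage:

```python
import re

# Hypothetical redaction patterns -- illustrative only, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CNIC": re.compile(r"\b\d{5}-\d{7}-\d\b"),  # Pakistani national ID format
}

def mask_sensitive(text: str) -> str:
    """Replace matches of each pattern with a dummy placeholder
    before the text is sent to ChatGPT or any external API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

prompt = "Contact ali@example.com, CNIC 12345-1234567-1, about the audit."
print(mask_sensitive(prompt))
# → Contact [EMAIL-REDACTED], CNIC [CNIC-REDACTED], about the audit.
```

Redacting before transmission means that even if the account or its stored conversations are later breached, the leaked text contains only placeholders.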

For organizations, the advisory notes that by following best practices they can ensure that ChatGPT is used securely and their data is protected. It is also important to note that AI technology is constantly evolving.

The key to protection may be that organizations stay up to date with the latest security trends. A few best practices (among others) are as follows:

  1. Conduct a Risk Assessment. Before using ChatGPT, conduct a comprehensive risk assessment to identify potential or exploitable vulnerabilities. This will help organizations develop a plan to mitigate risks and ensure that their data is protected.
  2. Use Secure Channels. To prevent unauthorized access to ChatGPT, use secure channels to communicate with the chatbot. This includes using encrypted communication channels and secure APIs.
  3. Mechanism to Monitor Access. It is important to monitor who has access to ChatGPT. A mechanism should be in place to ensure that access is granted only to authorized individuals. This can be achieved by implementing strong access controls and monitoring access logs.
  4. Implement Zero-Trust Security. Zero-trust security (an approach that assumes every user and device on a network is a potential threat) should be adopted. This means access to resources should be granted only on a need-to-know basis, backed by a strong authentication mechanism.
  5. Train the Employees. Employees should be trained on the use of ChatGPT and the potential risks associated with it. They must not share sensitive data with the chatbot and should be aware of the potential threat of social engineering attacks.
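Points 2 to 4 can be combined into a small gatekeeper sketch. This is a hedged illustration only, using a hypothetical allow-list and logger names; a real zero-trust deployment would layer authentication, device posture checks, and centralized log shipping on top of it:

```python
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("chatgpt-access")

# Hypothetical allow-list -- in practice this would come from an IdP/RBAC system.
AUTHORIZED_USERS = {"analyst01", "analyst02"}

def request_allowed(user: str, endpoint: str) -> bool:
    """Gatekeep a ChatGPT API call: HTTPS-only channels (point 2),
    an explicit allow-list (points 3-4), and a log line for every
    decision so that access logs can be monitored (point 3)."""
    if urlparse(endpoint).scheme != "https":
        log.warning("blocked %s: insecure channel %s", user, endpoint)
        return False
    if user not in AUTHORIZED_USERS:
        log.warning("blocked %s: not on allow-list", user)
        return False
    log.info("granted %s -> %s", user, endpoint)
    return True

print(request_allowed("analyst01", "https://api.openai.com/v1/chat/completions"))  # True
print(request_allowed("analyst01", "http://api.openai.com/v1/chat/completions"))   # False: plain HTTP
```

Denying anything that is not explicitly permitted, rather than permitting anything not explicitly denied, is the design choice that makes this a zero-trust default.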


