National CERT Issues Cyber Alert Against Use of AI Chatbots Like ChatGPT

The National Computer Emergency Response Team (CERT) has issued a cybersecurity advisory addressing the potential risks associated with AI-driven chatbots like OpenAI’s ChatGPT.

While these tools offer innovative solutions for productivity and engagement, CERT warns that they also present significant cybersecurity and privacy challenges. The advisory highlights the risks and recommends best practices, urging both individuals and organizations to use these tools with heightened caution.

According to the National CERT, AI chatbots have rapidly integrated into both professional and personal workflows, leading to increased adoption across various digital platforms. However, this surge in usage has also introduced vulnerabilities, particularly in the realm of data exposure. Interactions with chatbots often involve sensitive information, such as business strategies or personal communications. If data breaches occur, threat actors could exploit these insights, risking intellectual property theft, reputational harm, and potential regulatory consequences.

The advisory further emphasizes the threat posed by social engineering attacks. Cybercriminals increasingly use sophisticated techniques, including phishing disguised as chatbot interactions, to deceive users into disclosing confidential information. Additionally, interactions with AI chatbots on compromised systems can result in malware infections, threatening data integrity and privacy. CERT underscores the need for robust cybersecurity frameworks to address these entry points and prevent cyberattacks.

To mitigate these risks, CERT recommends specific precautionary measures for users, such as avoiding entering sensitive data into chat interfaces and conducting regular system security scans. Users are advised to manage chatbot interactions by disabling chat-saving features and deleting conversations that contain sensitive information. Accessing chatbots only from secure, malware-free environments is essential to reducing exposure to potential threats.
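The advisory's caution against entering sensitive data into chat interfaces can be partially automated. The sketch below is a minimal, hypothetical example (not from the advisory) of redacting obvious identifiers, here email addresses and phone numbers, from text before it is pasted into a chatbot; a real deployment would rely on a vetted data-loss-prevention tool rather than these illustrative regular expressions.

```python
import re

# Illustrative patterns only; real sensitive-data detection is far broader
# (names, account numbers, internal project codes, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or +92 300 1234567."))
```

Such a pre-filter reduces accidental disclosure but does not replace the advisory's broader guidance on secure environments and access controls.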

For organizations, CERT proposes the use of secure, dedicated workstations exclusively for chatbot interactions. They recommend implementing strict access controls, comprehensive risk assessments, and a zero-trust security model. Encryption of all chatbot communication and regular employee training on cybersecurity awareness are also deemed critical to safeguarding sensitive information. Organizations are encouraged to adopt monitoring tools to flag suspicious chatbot activity and establish robust incident response protocols for potential breaches.

With the evolving digital landscape, CERT advises a proactive approach to AI chatbot security. Regular updates, application whitelisting, and crisis communication plans should be part of every organization’s long-term strategy. CERT urges entities, particularly in government and public sectors, to adhere to these guidelines to protect sensitive data and minimize risks associated with AI-driven technologies.
