The National Information Technology Development Agency (NITDA) has issued a cybersecurity alert, warning that recently discovered vulnerabilities in ChatGPT’s GPT‑4.0 and GPT‑5 models could lead to serious data‑leakage risks.
NITDA explained that attackers could exploit seven different security flaws by embedding hidden instructions in webpages, comments, or manipulated URLs. When ChatGPT browses, summarises, or searches through such content, these “indirect prompt injections” may trigger unintended commands.
The agency highlighted that some of these flaws bypass safety filters — using trusted domains, exploiting markdown‑rendering bugs, or even “poisoning” ChatGPT’s memory so that malicious instructions persist across future sessions.
Such vulnerabilities could result in unauthorized actions, leakage of personal or sensitive information, manipulated outputs, and long‑term behavioural manipulation. Some attacks may occur without any direct user interaction, simply when ChatGPT processes tainted web content.
To reduce risk, NITDA recommends limiting or disabling ChatGPT’s browsing and summarisation functions for untrusted websites, using memory features only when necessary, and ensuring that GPT‑4.0 and GPT‑5 installations are regularly updated.
Leave a comment