NITDA warns Nigerians of new GPT security flaws that enable data leakage, hidden prompt injections, and long-term manipulation in OpenAI’s latest models
Nigeria’s National Information Technology Development Agency has issued a fresh cybersecurity warning on new vulnerabilities in OpenAI’s latest language models.
Also read: ChatGPT’s privacy legal disclosure warning sparks global user concern
NITDA’s Computer Emergency Readiness and Response Team said seven flaws were found in GPT-4.0 and GPT-5 models.
The agency said attackers could plant hidden commands in webpages, comments, or crafted URLs to manipulate ChatGPT.
It warned that the models may execute harmful actions during normal browsing, summarisation, or search tasks.
Some flaws let attackers bypass safety systems through trusted domains or exploit markdown weaknesses to hide malicious input.
CERRT added that attackers could poison ChatGPT’s memory so injected instructions persist across future sessions.
The agency said the flaws raise risks of data leakage, manipulated outputs, unauthorised actions, and long-term behavioural influence.
It warned that attacks may trigger without user clicks when ChatGPT processes booby-trapped webpages or search results.
Preventive Measures
NITDA advised organisations to restrict or disable browsing and summarisation on untrusted sites.
It urged users to enable features like browsing or memory only when needed.
It recommended regular updates for GPT-4.0 and GPT-5 models to apply available patches.
Additional Advisory on Cisco Devices
NITDA also warned of new security issues affecting Cisco Secure Firewall ASA and FTD devices.
CERRT said criminals are exploiting a fresh attack method that uses older flaws to force devices to reboot.
Also read: Lawyer accused of citing fake cases using generative AI
The agency noted that sudden restarts can cause network outages and denial of service across critical systems.























