How AI Fakes Are Framing Bandits as Soldiers and Undermining Trust in Nigeria’s Military
A wave of AI-powered misinformation is targeting Nigeria's security forces. Manipulated images and careless chatbot outputs are being used to paint bandits as soldiers and to erode public trust in troops fighting terrorism in the North-East and North-West.

The latest case began with a video of a kidnap victim guarded by armed bandits. One bandit wore camouflage, a common tactic among criminals who imitate military uniforms. A social media user then reportedly used an AI image tool to alter a screenshot from the video, reshaping a face to resemble a Nigerian soldier, and posted it with an accusatory caption.

Another user asked the Grok chatbot to verify the screenshot. The AI replied that the image "appears real," described the scene as a "security operation," and misread the armed bandits as "troops with suspects," claiming no edits were detected.

The episode exposes a core weakness of automated verification tools: they analyse pixels, not context. When criminals wear camouflage, these systems may wrongly label them as soldiers.

Security officials call this information warfare and warn that fabricating images to portray bandits as soldiers is malicious disinformation that threatens national security. They urge media platforms, law enforcement, and cybercrime units to act, and advise the public to treat viral claims, especially screenshots and AI-generated images, with extreme caution. Nigeria's fight now extends from the battlefield to the digital space.
Stories are shared by community members. This article does not represent the official view of NaijaWorld — the author is solely responsible for its content.

