APT36 Emerges as a Vibeware Cyber Threat, Flooding Victims With Disposable AI-Generated Malware
A Pakistan-based advanced persistent threat group known as APT36, or Transparent Tribe, has launched a sprawling cyber espionage campaign against the Indian government and its diplomatic missions abroad, and this time, they have brought artificial intelligence to the battlefield.
Researchers have uncovered a novel malware development model dubbed “vibeware”: a high-volume approach to producing AI-generated, disposable malware implants that prioritizes quantity over technical brilliance.
Unlike traditional malware campaigns defined by technical innovation, the APT36 vibeware model is characterized by sheer industrialization.
Rather than crafting sophisticated custom tools, the group is using large language models (LLMs) to rapidly generate functional malware in niche programming languages, including Nim, Zig, and Crystal, for which most endpoint detection and response (EDR) platforms have limited behavioral signatures.
The strategy, which researchers are calling a “Distributed Denial of Detection” (DDoD), does not aim to outwit defenders through clever coding.
Instead, it aims to overwhelm them. Identified victims were simultaneously infected with multiple implants, each written in a different language and using a different communication protocol. Neutralize one, and the attacker retains access through another.
Investigators found conclusive evidence of AI assistance embedded within the malware fleet itself. Metadata in project files pointed to the use of AI-integrated code editors.
Multiple binaries contained Unicode emojis scattered through code strings, a common hallmark of LLM-generated output, including strings such as ‘Browser simulation enabled’ and ‘Sending folders to Firebase…’ found inside Rust-based components.
The group maintains what researchers describe as a “malware-a-day” cadence, producing new variants daily. But the quality reflects its origins.
In one notable failure, a Go-based credential-stealing binary was deployed with a template placeholder for the command-and-control (C2) URL, rendering the tool unable to exfiltrate stolen data. These are the footprints of code that is syntactically correct but logically unfinished.
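The failure mode described above, syntactically valid code shipped with an unfilled template token in its configuration, is simple to illustrate. The following sketch is purely illustrative: the `{{C2_URL}}` token and the function names are assumptions standing in for whatever the actual Go binary contained, not strings recovered from it.

```python
import re

# Unfilled template tokens of the form {{NAME}}, e.g. {{C2_URL}}
PLACEHOLDER = re.compile(r"\{\{[A-Z0-9_]+\}\}")

def exfiltrate(payload: bytes, c2_url: str) -> str:
    """Pretend-exfiltration routine: the code compiles and runs,
    but fails the moment it tries to use its own configuration."""
    if PLACEHOLDER.search(c2_url):
        return "error: C2 URL is an unfilled template placeholder"
    return f"POST {len(payload)} bytes to {c2_url}"

print(exfiltrate(b"creds", "{{C2_URL}}"))
# → error: C2 URL is an unfilled template placeholder
```

The same pattern-match also works defensively: scanning recovered binaries or configs for unfilled `{{...}}` tokens is a cheap triage signal for hastily generated tooling.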
Perhaps the most operationally significant aspect of the campaign is the group’s embrace of Living Off Trusted Services (LOTS) for command and control. Rather than standing up suspicious custom infrastructure, APT36 is routing communications through platforms that organizations routinely allow:
- Google Sheets – used as a bidirectional C2 hub, with commands and responses exchanged as encrypted spreadsheet cell data
- Discord and Slack – leveraged for real-time command issuance and data retrieval by implants CrystalShell and ZigShell
- Supabase and Firebase – cloud databases used to store stolen credentials, metadata, and malware configuration tokens
- Google Drive – used for actual file exfiltration of harvested documents
LLMs are particularly well-suited to generate reliable integration code for these platforms, given the massive volume of public SDKs and documentation in their training data. For attackers, this is a strategic win: no need to build suspicious infrastructure or understand complex protocol internals.
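To make the spreadsheet-as-C2 idea concrete, here is a minimal sketch of how commands and responses could be round-tripped as encrypted cell values. Everything here is an assumption for illustration: the cell layout, the repeating-key XOR cipher, the key, and the function names were not recovered from the actual implants, and a dict stands in for the live Google Sheets API.

```python
import base64

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: a trivial stand-in for whatever cipher the implants use
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encode_cell(text: str, key: bytes) -> str:
    # Obfuscate, then base64-encode so the payload survives as plain cell text
    return base64.b64encode(xor_obfuscate(text.encode(), key)).decode()

def decode_cell(cell_value: str, key: bytes) -> str:
    return xor_obfuscate(base64.b64decode(cell_value), key).decode()

# Simulated exchange: the operator writes a command cell, the implant polls it,
# executes, and writes its response into an adjacent cell.
KEY = b"not-the-real-key"  # assumption: the real key is unknown
sheet = {"A1": encode_cell("list C:\\Users\\*\\Documents", KEY)}  # operator side
command = decode_cell(sheet["A1"], KEY)                           # implant side
sheet["B1"] = encode_cell(f"ack: {command}", KEY)                 # implant reply
print(decode_cell(sheet["B1"], KEY))  # → ack: list C:\Users\*\Documents
```

The point of the sketch is the defender's problem: on the wire, both directions are ordinary TLS traffic to `sheets.googleapis.com`, indistinguishable by destination alone from legitimate spreadsheet use.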
Researchers assess with medium confidence that APT36 is responsible for the campaign, based on the reappearance of a known loader binary, warcode.exe, previously documented as a loader for the Havoc post-exploitation framework.
The actor’s targeting profile is consistent with historical activity: Indian government ministries, diplomatic embassies abroad, and entities connected to the country’s military and foreign affairs apparatus. Secondary targets include Afghan government entities and select private organizations.
The data of interest is telling: army personnel records, strategic policy documents, defense materials, and foreign affairs correspondence. A recovered LinkedIn screenshot showed a curated list of Indian government employees with military ties, suggesting the attackers are actively profiling targets through open-source intelligence.
Crucially, the industrialization of malware production does not mean the attacks are automated end-to-end. While the implants are AI-generated and rapidly produced, the post-exploitation phase remains a distinctly manual operation.
Recovered command logs reveal operators making frequent typos, issuing malformed commands, and cycling through standard tradecraft: network enumeration with ipconfig and arp, PowerShell execution for privilege escalation, and systematic staging of sensitive files for exfiltration, according to Bitdefender.
APT36 maintains a hybrid posture: experimental vibeware on the front line, backed up by proven commercial frameworks like Cobalt Strike and Havoc. If the AI-generated tools fail, and they often do, the established tools hold the line.
Security researchers urge organizations, particularly government and diplomatic entities in the region, to shift detection strategies away from binary signatures toward behavioral analysis. Key recommendations include:
- Monitor for process injection, unusual API calls, and unsigned binaries executing from user-writable directories, regardless of programming language
- Implement granular monitoring of outbound connections to Discord, Slack, Google Sheets, and cloud database endpoints from non-sanctioned applications
- Deploy EDR/XDR platforms with mature behavioral detection and ensure SOC coverage to catch lateral movement during the manual hacking phase
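The second recommendation above can be reduced to a simple allow-list check: flag any process that is not a sanctioned application yet connects to one of the trusted-service endpoints this campaign abuses. The domain list and process names below are illustrative assumptions, not a vendor rule set, and a real deployment would feed this from EDR telemetry rather than hard-coded strings.

```python
# Trusted services reported as abused for C2 in this campaign
ABUSED_SERVICES = (
    "discord.com",
    "slack.com",
    "sheets.googleapis.com",
    "supabase.co",
    "drive.google.com",
)

# Applications legitimately expected to reach those services (illustrative)
SANCTIONED_PROCESSES = {"chrome.exe", "slack.exe", "outlook.exe"}

def flag_connection(process_name: str, dest_host: str) -> bool:
    """Return True when a non-sanctioned process contacts an abused service."""
    talks_to_abused = any(
        dest_host == d or dest_host.endswith("." + d) for d in ABUSED_SERVICES
    )
    return talks_to_abused and process_name.lower() not in SANCTIONED_PROCESSES

# An unknown binary beaconing to Discord from a user-writable directory:
print(flag_connection("updater.exe", "discord.com"))  # → True
# Normal browser traffic to the same endpoint:
print(flag_connection("chrome.exe", "discord.com"))   # → False
```

The design choice matters: blocking these domains outright is rarely feasible, so pairing destination with the originating process is what turns "traffic to Google Sheets" from noise into a usable signal.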
The broader lesson from this campaign is clear: AI has not created a new generation of elite hackers. It has, however, dramatically lowered the barrier to entry for mediocre ones, and in cybersecurity, mediocre-but-relentless can be enough to cause serious harm.