
Prompt Injection as a Framework: ATT&CK-Inspired Adversarial Testing for LLMs

  • Writer: Mark Miller
  • Jun 27
  • 3 min read


Austin Howard

Large Language Models (LLMs) are embedded into everything—from chatbots and productivity tools to security platforms—yet our understanding of how to model, simulate, and defend against their adversarial weaknesses remains dangerously underdeveloped.


InjectLab is a first-of-its-kind, open-source, ATT&CK-style matrix for LLM-specific threats: a structured, visual framework that maps real-world prompt injection, role hijacking, memory abuse, system prompt leakage, and more into a unified language for defenders, red teamers, and educators alike.


Austin Howard's session, "Prompt Injection as a Framework: ATT&CK-Inspired Adversarial Testing for LLMs," will be streamed globally on September 16-17, 2025, as part of The Artificial (Un)Intelligence Conference.




About the Session: Adversarial Testing for LLMs


This session is a tactical exploration of real-world adversarial techniques against large language models, mapped to a custom ATT&CK-style framework and demonstrated through a deliberately vulnerable AI honeypot.




This talk introduces both InjectLab and its vulnerable AI sandbox companion, Injectable, built to emulate—and deliberately mishandle—model behavior in the wild. Together, they form a testbed for emulating threat activity, building detections, and training humans on the misunderstood risks of LLM misuse.


Whether you’re securing model-integrated products or preparing for adversarial AI threats in enterprise environments, this talk presents a clear path forward: open standards, tactical emulation, and a community-driven framework designed to evolve alongside the threat landscape. InjectLab isn't just a project; it's a starting point for how we structure, share, and ultimately neutralize the new language of LLM attacks.


Tools and Technologies Featured


Frontend and Visualization


InjectLab is built as a lightweight, browser-based application using HTML5, CSS3, and JavaScript. The interface renders an interactive MITRE-style matrix of LLM-specific adversarial tactics and techniques. All technique data is structured and rendered dynamically using modular JavaScript functions.
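
As a rough illustration, a matrix like this can be rendered with a few small functions. The tactic names, technique IDs, and data shape below are hypothetical stand-ins rather than InjectLab's actual schema:

```javascript
// Hypothetical technique data; InjectLab's real schema may differ.
const tactics = [
  {
    name: "Prompt Injection",
    techniques: [
      { id: "PI-T001", name: "Direct Instruction Override" },
      { id: "PI-T002", name: "Indirect Injection via Retrieved Content" },
    ],
  },
  {
    name: "System Prompt Leakage",
    techniques: [{ id: "SL-T001", name: "Verbatim Prompt Extraction" }],
  },
];

// Render one column per tactic, one linked cell per technique,
// mirroring the familiar MITRE ATT&CK layout.
function renderMatrix(container, tactics) {
  for (const tactic of tactics) {
    const column = document.createElement("div");
    column.className = "tactic-column";

    const header = document.createElement("h3");
    header.textContent = tactic.name;
    column.appendChild(header);

    for (const technique of tactic.techniques) {
      const cell = document.createElement("a");
      cell.className = "technique-cell";
      cell.href = `techniques/${technique.id}.html`;
      cell.textContent = `${technique.id}: ${technique.name}`;
      column.appendChild(cell);
    }
    container.appendChild(column);
  }
}

renderMatrix(document.getElementById("matrix"), tactics);
```

Because the cells are generated from data rather than hand-written markup, adding a technique to the matrix is a data change, not a layout change.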


Data Structure and TTP Modeling


Each technique in the matrix is defined in a human-readable, version-controlled YAML file that includes a unique identifier, tactic classification, description, detection guidance, and mitigation strategies. These files are parsed by the front end and linked across the matrix for seamless navigation and cross-referencing.
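
As a minimal sketch of what one such file and its parsing might look like, here is a hypothetical technique definition loaded with the js-yaml library; both the field names and the parser choice are assumptions, not taken from the actual InjectLab repository:

```javascript
// Assumes the js-yaml package; InjectLab may parse its files differently.
import yaml from "js-yaml";

// Hypothetical YAML technique definition, inlined here for illustration.
const techniqueFile = `
id: PI-T001
name: Direct Instruction Override
tactic: prompt-injection
description: >
  The attacker embeds instructions in user input that countermand the
  system prompt, causing the model to ignore its original constraints.
detection: >
  Flag inputs containing imperative phrases aimed at the model's
  configuration, such as "ignore previous instructions".
mitigation: >
  Keep system and user content in separate channels, and validate
  model output before acting on it.
`;

const technique = yaml.load(techniqueFile);
console.log(technique.id, technique.tactic); // "PI-T001" "prompt-injection"
```

Keeping each technique in its own YAML file means the matrix content can be reviewed, diffed, and contributed to like any other code.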


Content Framework


Technique pages are hyperlinked directly from the matrix and written in clean, semantic HTML with support for dynamic content insertion. Each TTP includes multiple paragraphs explaining the threat, detection heuristics, and practical mitigation strategies, along with active hyperlinks to academic research, real-world attack writeups, and security advisories.
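
A sketch of that dynamic insertion step, continuing the hypothetical data shape from above: parsed technique fields become semantic HTML elements appended to the page:

```javascript
// Build a technique page from parsed data (hypothetical field names).
function renderTechniquePage(technique) {
  const article = document.createElement("article");

  const title = document.createElement("h1");
  title.textContent = `${technique.id}: ${technique.name}`;
  article.appendChild(title);

  // One section per field, in the order a reader expects.
  for (const field of ["description", "detection", "mitigation"]) {
    const section = document.createElement("section");
    const heading = document.createElement("h2");
    heading.textContent = field.charAt(0).toUpperCase() + field.slice(1);
    const body = document.createElement("p");
    body.textContent = technique[field]; // textContent avoids injecting HTML
    section.append(heading, body);
    article.appendChild(section);
  }

  document.body.appendChild(article);
}
```

Using textContent rather than innerHTML is a small but fitting detail for a project about injection attacks: technique descriptions are rendered as text, never interpreted as markup.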


Deployment and Hosting


InjectLab is deployed through GitHub Pages and versioned in a public GitHub repository. It uses a custom domain (injectlab.org) for professional branding and to ensure maximum visibility. The project is fully static, requiring no backend or runtime dependencies, making it portable and easy to mirror or fork.


About Austin Howard


Austin Howard is the IT Coordinator for Destination El Paso, where he manages infrastructure, systems support, and technical operations across city-run event venues. A U.S. Air Force veteran with a background in avionics, he is an emerging voice in AI security and LLM threat research. 


Austin is the creator of InjectLab, an open-source prompt injection testing framework featured by the AVID research initiative. His work focuses on practical AI red teaming, adversarial input analysis, and building tools that make AI behavior more transparent, auditable, and secure.




Save Your Seat

The Artificial (Un)Intelligence Conference is a global, 24-hour live online conference. Registration is free and includes access to all sessions, including on-demand viewing at the conclusion of the event. No sales pitches, no marketing. Just the good stuff.




CFP is Open

We're trying to find the unheralded people around the world who are doing cool things with AI. Is that you?

Check out our speaker gallery and then let us know what you're working on. We look forward to hearing your story.






Meet the Speakers

More speakers are added daily. Register to get the latest updates.




