Phil's Journey
AI Red Teamer & Technologist
Summary
Security Researcher and Engineer specializing in adversarial AI testing, prompt injection, and LLM safety. Transitioned from a background in critical infrastructure security (KRITIS) and DevSecOps to offensive research. Expertise in bypassing model safety guardrails, detecting deepfake audio artifacts, and hardening RAG pipelines against poisoning attacks.
Station Details
Independent AI Security Researcher (Stealth)
Jan 2024 - Present
- Adversarial Tokenization: Discovered and exploited novel tokenizer vulnerabilities, using zero-width Unicode characters and metadata injection to bypass commercial model safety filters (mechanism sketched after this list).
- RAG Pipeline Security: Researched 'Hallucination Chaining' vectors, demonstrating how forced model hallucinations can permanently corrupt Retrieval-Augmented Generation knowledge bases (see the feedback-loop sketch below).
- Deepfake Forensics: Developed automated audio-analysis pipelines using Whisper and pyannote.audio to detect synthetic artifacts in high-fidelity voice clones (e.g., ElevenLabs); a triage sketch follows the list.
- ML Infrastructure: Engineered reproducible NVIDIA NeMo and Triton Inference Server containers for local model fine-tuning and alignment testing (client probe sketched below).
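
The zero-width trick in the first bullet generalizes well enough to sketch. A minimal illustration, assuming a safety filter that matches on surface keywords; the function name, stride, and payload are illustrative stand-ins, not the exploited vulnerabilities:

```python
# Minimal sketch of zero-width Unicode interleaving. Illustrative only:
# the real vulnerabilities involved specific tokenizers and are not
# reproduced here.
ZERO_WIDTH = "\u200b"  # ZERO WIDTH SPACE; U+200C and U+200D behave similarly

def interleave_zero_width(text: str, stride: int = 2) -> str:
    """Insert a zero-width character every `stride` visible characters.

    The result renders identically to a human reader, but a tokenizer sees
    different byte sequences, so a keyword a filter matches on can be split
    into unrelated tokens.
    """
    out = []
    for i, ch in enumerate(text):
        out.append(ch)
        if (i + 1) % stride == 0:
            out.append(ZERO_WIDTH)
    return "".join(out)

payload = interleave_zero_width("describe the restricted procedure")
print(repr(payload))  # invisible characters now break up every keyword
```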
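
The 'Hallucination Chaining' bullet describes a feedback loop that is easiest to see in miniature. A conceptual sketch using toy stand-ins (a list for the vector store, a fake generate function); it shows only the corruption mechanism, not the systems actually tested:

```python
# Conceptual sketch of hallucination chaining: a RAG pipeline that writes
# model answers back into its own knowledge base will retrieve and
# reinforce any fabrication. All components here are toy stand-ins.
knowledge_base: list[str] = ["authentic source document"]

def retrieve(query: str) -> list[str]:
    return knowledge_base[-3:]  # stand-in for vector similarity search

def generate(query: str, context: list[str]) -> str:
    # Stand-in for an LLM call; assume it sometimes hallucinates.
    return f"answer derived from {context!r}"

def answer_and_memoize(query: str) -> str:
    answer = generate(query, retrieve(query))
    knowledge_base.append(answer)  # the poisoning step: a feedback loop
    return answer

for _ in range(3):
    answer_and_memoize("what does the policy say?")

# After a few rounds, retrieval context is dominated by prior answers,
# so a single forced hallucination propagates into every later response.
print(*knowledge_base, sep="\n")
```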
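
The forensics pipeline in the third bullet combines off-the-shelf components. A minimal triage sketch, assuming openai-whisper and pyannote.audio are installed; the segment-confidence heuristic and the token placeholder are assumptions, and the actual artifact detectors are not reproduced:

```python
# Minimal triage sketch, not the production detector. Assumes:
#   pip install openai-whisper pyannote.audio
# and a Hugging Face token for the gated pyannote pipeline.
import statistics
import whisper
from pyannote.audio import Pipeline

asr = whisper.load_model("base")
vad = Pipeline.from_pretrained(
    "pyannote/voice-activity-detection",
    use_auth_token="hf_...",  # placeholder; supply a real token
)

def triage(path: str) -> dict:
    """Collect per-segment ASR confidence and speech regions for review."""
    result = asr.transcribe(path)
    logprobs = [seg["avg_logprob"] for seg in result["segments"]]
    speech = vad(path).get_timeline().support()
    return {
        # Assumed heuristic: unusually uniform confidence across segments
        # is one of several flags that queues a clip for deeper analysis.
        "logprob_stddev": statistics.pstdev(logprobs) if logprobs else 0.0,
        "speech_regions": [(s.start, s.end) for s in speech],
        "transcript": result["text"],
    }

print(triage("suspect_clip.wav"))
```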
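
The Triton side of the infrastructure bullet can be probed from Python. A minimal client sketch, assuming a container listening on the default HTTP port and a hypothetical string-in/string-out model named "llm" with tensors TEXT and OUTPUT (all placeholders):

```python
# Minimal liveness and inference probe against a local Triton container.
# Requires: pip install "tritonclient[http]" numpy
# The model name and tensor names below are placeholders for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_live() and client.is_server_ready()

inp = httpclient.InferInput("TEXT", [1], "BYTES")
inp.set_data_from_numpy(np.array(["probe prompt"], dtype=object))

result = client.infer(model_name="llm", inputs=[inp])
print(result.as_numpy("OUTPUT"))
```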
Research · Adversarial AI · LLM Security · Adversarial Prompting · Red Teaming · Tokenizer Manipulation · RAG Poisoning