Amr Gomaa
Senior AI Security Engineer & ML PhD | 7+ years building AI for BMW, Carl Zeiss & Microsoft Research — now red-teaming and securing LLMs in production
Über mich
I've spent 7 years building real AI systems for BMW, Carl Zeiss, and Microsoft Research, and I now spend my time figuring out how to break them securely. Today, I work as a Senior AI Security Engineer at MATVIS, where we build AI Firewalls and automated red-teaming tools for companies deploying generative AI in real products. Separately, I continue as a consultant at DFKI on a project I conceived and secured BMFTR funding for, specifically to secure language models for knowledge management. I also still collaborate with Microsoft Research UK on LLM agent robustness and evaluation. The foundation for this came from my industrial PhD at DFKI — Germany's largest AI research center — where I worked across a wide range of areas: gesture recognition for surgical robots with Carl Zeiss, personalized driver-assistance systems with BMW, multimodal interaction for autonomous vehicles, and eventually LLM agents with Microsoft Research in Cambridge. That breadth across computer vision, NLP, reinforcement learning, and personalization means I understand the full stack of how production AI is built, not just the security layer on top.
Meine Stärken
I've worked across the full AI stack, from low-level computer vision models and sensor fusion to large language models and agentic systems, which means I can usually spot where a problem actually lives rather than just the layer where it shows up. That breadth is genuinely useful in security work, where attacks rarely respect clean architectural boundaries. I'm equally comfortable in a technical deep-dive and in a conversation with a non-technical founder trying to understand what they should actually be worried about. After years of presenting to government stakeholders, engineering teams at BMW and Carl Zeiss, and academic reviewers simultaneously, I've learned to adjust to the room without losing the substance of what matters. I tend to be direct. If something isn't a real risk, I'll say so. If the fix you're considering won't actually help, I'll tell you that too.
Das nimmst du mit
A clear and honest picture of where your AI system is exposed and what is worth doing about it. Not a generic checklist, but something specific to what you are building. If you are deploying an LLM product, I can help you understand the real attack surface: prompt injection, jailbreaks, data leakage through RAG pipelines, and trust boundary issues in multi-agent setups. If you are earlier in the process, I can help with broader questions around ML architecture, personalization, model robustness, or how to evaluate whether what you have is actually ready to ship. You'll leave with a clear picture of where your system is exposed, ranked by what actually matters, and the two or three things worth fixing first, not a generic checklist.
Meine Themen
LLM Alignment & Safety, Agentic AI & Multi-agent Systems, Reinforcement & Imitation Learning, Incremental & Continual Learning, and Multimodal Interaction & Foundation Models specifically: Prompt injection and jailbreaking — how they work, real-world examples, practical mitigations. AI Firewalls and guardrails for generative AI applications. Red-teaming your LLM product, manual and automated approaches. Agentic AI and multi-agent system security. LLM robustness and evaluation. Safe deployment of generative AI in real products. Computer vision and gesture recognition for industrial applications. ML in automotive — personalization, adaptive interfaces, sensor fusion. Personalization and adaptive AI systems in production.
Was dich erwartet
A conversation that gets to the point quickly. I'll ask what you are actually working on before offering anything — the problem you think you have and the problem you actually have are often different things, and it's worth spending a few minutes on that first. If the problem is technical, we can go into as much depth as useful: architecture, system prompts, deployment setup, whatever is relevant. If you're still figuring out what the problem is, that's a fine place to start. I don't work from templates, and I won't pretend everything has a clean solution. Some things don't, and I'd rather say that upfront.
Fragen & Antworten
Anyone building or deploying something with AI — a startup adding an LLM to their product, an engineering team trying to understand what "secure AI" actually means in practice, a founder exploring computer vision or automation for their workflow, or someone who just wants an honest second opinion on what they're shipping. You don't need to be an AI expert, but having a concrete problem or product in mind helps.
Quite a lot. AI security is my current focus, but my background covers the wider ML stack — computer vision and gesture recognition for industrial applications, NLP and language model development, personalization and adaptive systems in production, and AI automation for real workflows. If it involves building or deploying AI in some form, it's likely something we can dig into together.
I ask more questions than I answer at the start. I want to understand your actual situation before offering anything. After that, I can be pretty direct about where I think the gaps are and what I'd focus on first. If you want to share a rough topic or problem before our session, I'll make an effort to come prepared with something more specific — just drop it in when you book.
No. Some of the most useful conversations I've had were with non-technical founders who just needed to understand what risk they were actually carrying. I'll adjust to wherever you are.
Most AI consulting on the market right now is about strategy and adoption — should we use AI, where, and how do we roll it out? That's useful work, but it's not what I do. I work on the system itself: what's actually running, where it can be attacked, how it behaves under stress, and whether it's robust enough to ship. If you already know you want to use AI and need someone to look at the actual architecture and tell you what's broken or risky, that's where I come in.
Vereinbare dein persönliches Beratungsgespräch
Preise
Sprachen
Gespräch buchen
Vereinbare dein persönliches Beratungsgespräch