AI Security Solutions Engineer | Architecting Trust for the Generative AI Era | Cloud & AppSec + LLM Orchestration | APAC & ME | Tier-1 VC Backed
- Bengaluru
Full Job Description
The Role
We are seeking an AI Security Solutions Engineer – a rare “Purple Person” fluent in both Cybersecurity and Machine Learning.
You will:
Lead high-stakes enterprise PoCs
Architect AI guardrails across production deployments
Conduct adversarial red-teaming on LLM systems
Secure RAG architectures and LLM orchestration platforms
Harden AI inference endpoints and data pipelines
This is hands-on, production-grade AI security.
The Bar
Deep Cloud Security and Application Security foundations
Strong experience in API security and distributed systems
Hands-on LLM orchestration exposure (LiteLLM / LangChain)
Comfort operating in high-velocity startup environments
We are looking for builders who understand both attack surfaces and system architecture.
About You
You have architected production systems where security and AI intersect.
You have:
Built, deployed, and hardened AI implementations
Reviewed OpenAPI specifications and API flows
Analyzed model behavior logs
Debugged OAuth and identity flows
Optimized latency across guardrail chains
You translate across domains:
Explain embedding distance thresholds to security teams
Map OWASP Top 10 risks to LLM vulnerabilities
Design mTLS and secure endpoints for model APIs
You move fast, document clearly, and build systems that engineering teams rely on.
You are comfortable with:
Infrastructure-as-code reviews
GitOps workflows
Async collaboration across global pods
Operating inside distributed system architectures
We Will Go Deep
Expect technical conversations such as:
Prompt Injection
In a RAG architecture, how would you execute an indirect prompt injection?
Which Cloud or AppSec control would you attempt to bypass?
Orchestration Security
If LiteLLM or Portkey is used as a model gateway, what risks exist in API key management and raw prompt logging?
Guardrail Design
What is the difference between a System Prompt and an External Guardrail such as NVIDIA NeMo?
Which is easier to jailbreak using an adversarial suffix, and why?
If these discussions energize you, you will thrive here.
Why This Role Matters
AI security is not an extension of traditional cybersecurity.
It is a new attack surface.
This role sits at the frontier of defending enterprise AI at scale.
High Impact Jobs: CareerXperts Jobs
Follow CareerXperts on LinkedIn: CareerXperts Consulting