AI Security Researcher – LLM Red Teaming & Jailbreaking Specialist

  • Bengaluru, India
  • Full Time

About the job

Our client is “on a mission to help enterprises accelerate AI adoption with confidence.”

Backed by a distinguished founding team, board, and investors.

Shape the Future of AI Security from Day One.

Join an elite founding team of cybersecurity veterans to pioneer the next generation of AI threat defense.

We’re building the definitive platform for AI security and need a world-class AI Security Researcher with 1-6 years of cutting-edge experience in LLM jailbreaking and AI agent red teaming to architect our core research initiatives.

Revolutionary Impact: Own critical research domains, publish industry-defining papers, develop proprietary attack frameworks, and establish the gold standard for AI security practices that will protect billions of AI interactions globally.

What You’ll Pioneer:

  • Develop advanced threat models for AI systems
  • Design red-team scenarios: prompt injection, jailbreaking, model manipulation
  • Build proof-of-concept exploits demonstrating AI vulnerabilities
  • Shape industry-wide security standards

Your Background:

  • 0-10+ years of security research experience
  • Bug bounty hunter with demonstrated exploits
  • Deep application security knowledge
  • Passion for AI safety & governance

Ready to define the future of AI security? Let’s build something extraordinary together!

Write to AI.Security.Research@Careerxperts.com to get connected!
