AI Security Researcher | LLM Red Teaming & Jailbreaking | Serial-Entrepreneur-Founded | Ground-Floor Role

  • 40K–45K
  • Bengaluru, India
Job Details
Full Time · 1–7 years

Full Job Description

About the Mission

Our client is on a mission to help enterprises accelerate AI adoption with confidence. They are building the definitive AI security platform, designed to protect the next generation of intelligent systems at scale.

Backed by a distinguished group of founders, board members, and investors, this is a ground-floor opportunity to shape how AI security is researched, practiced, and trusted globally.


The Opportunity

As an AI Security Researcher – LLM Red Teaming & Jailbreaking, you will join an elite founding team of cybersecurity veterans to define how AI systems are tested, attacked, and defended.

This is not a support role. You will own core research initiatives, influence product direction, and help establish the security standards that enterprises will rely on as AI becomes mission-critical.


What You’ll Build & Pioneer

  • Design advanced threat models for LLMs and AI agent-based systems

  • Execute real-world red team scenarios, including prompt injection, jailbreaking, and model manipulation

  • Build proof-of-concept exploits that demonstrate critical AI vulnerabilities

  • Develop proprietary attack frameworks and testing methodologies

  • Contribute to and shape industry-wide AI security standards

  • Publish high-impact research that influences enterprise AI security practices


What Sets This Role Apart

  • Ground-floor ownership of AI security research domains

  • Direct influence on platform architecture and long-term research strategy

  • Opportunity to publish industry-defining work

  • Work alongside seasoned cybersecurity leaders building for global scale

  • Protect billions of AI interactions across enterprise environments


Your Background

  • 1–7 years of hands-on security research experience

  • Proven track record in bug bounty programs or exploit development

  • Strong foundation in application security and adversarial thinking

  • Practical experience with LLM jailbreaking, AI agent testing, or red teaming

  • Deep interest in AI safety, security, and governance


Who Thrives Here

You’re curious, relentless, and comfortable operating in ambiguity. You enjoy breaking systems to make them stronger and want your research to shape how the industry thinks about AI security.

If you’re looking to define the future of AI defense, not just participate in it, this role puts you at the center of that mission.
