AI Security Researcher – AI-Native Security Platform | Red Teaming, Ethical Hacking, Jailbreaking, Bug Bounty, Reasoning Models

  • 45k-55k
  • Bengaluru, Hybrid
Job Details
Full Time 3–6 years
Skills

Full Job Description

About the Company

An AI-native security company built from the ground up to protect agentic AI applications across every layer of the stack-not just the prompt layer. The platform uses fine-tuned models to secure LLMs, agents, orchestration frameworks, and AI workflows end-to-end.
Serious about AI and cybersecurity, with a culture shaped by experienced AI leaders, cybersecurity experts, and serial entrepreneurs who value innovation, experimentation, and real-world impact-without taking themselves too seriously.

Job Description

The AI Security Researcher role operates at the intersection of AI systems, application security, and offensive security research. The focus is on identifying, simulating, and mitigating risks across LLMs, agentic systems, and AI orchestration frameworks.

This role involves hands-on red teaming, advanced ethical hacking, and deep research into emerging AI threat vectors, helping shape both internal security posture and broader industry standards.

Key Responsibilities

  • Conduct threat modeling for AI applications, including LLMs, agentic systems, and plugin/tool-based architectures

  • Design and execute red teaming scenarios targeting AI misuse, prompt injection, jailbreaking, supply chain abuse, and model manipulation

  • Research and build proof-of-concept exploits to validate and demonstrate real-world AI risks

  • Simulate attacks against reasoning models, autonomous agents, and AI workflows

  • Track evolving risks across AI safety, data governance, and compliance

  • Contribute to secure-by-design practices for AI-native and enterprise AI products

Required Experience

  • 3–6 years of experience in AI security research or application security

  • Hands-on exposure to enterprise AI products such as Microsoft 365, Copilot Studio, or Microsoft Copilot

  • Strong background in ethical hacking, red teaming, bug bounty programs, and AI jailbreaking

  • Deep understanding of LLMs, agentic systems, AI orchestration frameworks, and reasoning models

  • Ability to research, model, and communicate emerging AI security threats clearly

High Impact Jobs: CareerXperts Jobs 

Follow CareerXperts on LinkedIn: CareerXperts Consulting