Opening. This role is a key part of Synopsys' efforts to protect its cutting-edge AI technologies. The successful candidate will be responsible for designing and implementing advanced security controls for AI/ML systems, focusing on threats unique to generative AI such as adversarial examples, prompt injections, and jailbreaks.
What you'll do
- Design and implement advanced security controls for AI/ML systems, focusing on threats unique to generative AI such as adversarial examples, prompt injections, and jailbreaks.
- Conduct thorough threat modeling, vulnerability assessments, and red teaming exercises tailored to AI models, data pipelines, and supporting infrastructure.
- Integrate security into every stage of the GenAI lifecycle, from data ingestion and model training to deployment and inference.
- Monitor, detect, and respond to AI-specific security incidents including model inversion, membership inference, and supply chain vulnerabilities.
- Collaborate closely with AI architecture, research, and engineering teams to evaluate new features and mitigate security risks in real-time.
- Research and track emerging AI threats, contributing to the development of internal security tools, policies, and governance for responsible AI use.
- Assist in shaping the enterprise AI strategy, ensuring robust security alignment with business objectives.
- Create and document reusable AI security patterns, and develop AI-driven use cases to strengthen cybersecurity operations.
- Evaluate, recommend, and implement best-in-class AI security tools and frameworks for Synopsys' AI infrastructure.
- Drive comprehensive threat modeling for AI/ML systems, addressing adversarial risks and emerging attack vectors.
What you need
- Advanced degree in Computer Science, Cybersecurity, Artificial Intelligence, or a related field.
- Relevant industry certifications such as CISSP, CCSP, CEH, or specialized AI/ML security credentials.
- Strong knowledge of product security concepts—data security and privacy, security engineering, open-source software security, and security assurance.
- Deep understanding of security architecture, threat modeling, secure coding practices, and incident response for AI/ML environments.
- Hands-on experience with machine learning algorithms, model training, data preprocessing, and end-to-end AI/ML pipelines.
- Expertise in AI-specific threats: adversarial machine learning, model inversion, data poisoning, and evasion attacks.
- Proficiency in programming languages such as Python, with experience in scripting for vulnerability scanning and security automation.
- Strong familiarity with cloud security (AWS, Azure, GCP) and containerized environments (Kubernetes, Docker).
- Experience with security frameworks and standards relevant to AI (e.g., OWASP Top 10 for LLMs, NIST AI Risk Management Framework).
- Exceptional verbal and written communication skills to convey technical concepts to diverse audiences.