Job Description
Hybrid: This role is categorized as hybrid. The successful candidate is expected to report to the GM Global Technical Center - Cole Engineering Center Podium or the Mountain View Technical Center, CA at least three times per week, or at another frequency dictated by the business. This job is eligible for relocation assistance.
About the Team:
The ML Orchestration team at GM builds and maintains the foundational infrastructure that powers ML workflows across the company. Our core responsibility is the development and evolution of Roboflow, GM's in-house semantic orchestration platform designed to streamline and scale complex ML pipelines from experimentation to production. A key pillar of our work is AI Lineage, our capability to track, visualize, and understand the entire lifecycle of ML artifacts. This includes tracing the origin of data, model training runs, hyperparameters, code versions, and evaluation metrics. AI Lineage provides transparency, auditability, and reproducibility across our ML systems, which is essential for debugging, model governance, regulatory compliance, and improving long-term model quality. Together, Roboflow and AI Lineage help our engineers move faster with higher confidence, enabling GM to iterate quickly while maintaining the safety and performance standards required for autonomous vehicle development.
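For illustration only, a lineage record of this kind might capture, for a single training run, the dataset versions, code version, hyperparameters, and resulting metrics, plus links to upstream runs. The names and fields below are a hypothetical sketch, not GM's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageRecord:
    """One node in a hypothetical lineage graph, describing a single training run."""
    run_id: str                       # unique identifier of the training run
    dataset_versions: list[str]       # versions or content hashes of input datasets
    code_version: str                 # e.g. a git commit SHA of the training code
    hyperparameters: dict[str, float]
    metrics: dict[str, float]         # evaluation metrics produced by the run
    parent_runs: list[str] = field(default_factory=list)  # upstream runs this one depends on
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example: record one run so it can later be traced back to its inputs.
record = LineageRecord(
    run_id="run-042",
    dataset_versions=["camera-frames@v7"],
    code_version="git:9f3c1e2",
    hyperparameters={"learning_rate": 3e-4, "batch_size": 256},
    metrics={"val_map": 0.71},
)
```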
Position Overview:
We are seeking an experienced Staff Machine Learning Engineer to drive key initiatives within our ML Orchestration team. In this role, you will be instrumental in scaling our internal ML platform, building automation and self-service tools, and ensuring the reliability and efficiency of large-scale ML pipelines across GM. A major focus area for this role is the development and evolution of AI Lineage, our system for capturing, querying, and visualizing the full lifecycle of machine learning artifacts. You will help design lineage tracking for data transformations, model training, evaluation runs, and pipeline dependencies. This functionality is critical for enabling transparency, reproducibility, debugging, and regulatory compliance across our ML ecosystem.
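As a minimal sketch of the kind of query such a system enables (the artifact names and graph representation here are illustrative assumptions, not Roboflow's actual API), lineage can be modeled as a directed graph over artifacts, so that a downstream-impact question such as "which models and evaluations depend on a given dataset version" becomes a graph traversal:

```python
from collections import defaultdict, deque

# Hypothetical edge list: (upstream_artifact, downstream_artifact).
# Artifacts could be dataset versions, training runs, models, or evaluation runs.
edges = [
    ("dataset:lidar-sweeps@v12", "run:train-0831"),
    ("run:train-0831", "model:planner@1.4.0"),
    ("model:planner@1.4.0", "run:eval-0902"),
]

downstream = defaultdict(list)
for src, dst in edges:
    downstream[src].append(dst)


def impacted_artifacts(start: str) -> list[str]:
    """Breadth-first walk of the lineage graph: everything downstream of `start`."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for nxt in downstream[node]:
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order


# impacted_artifacts("dataset:lidar-sweeps@v12")
# -> ["run:train-0831", "model:planner@1.4.0", "run:eval-0902"]
```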
Please note: This is an ML infrastructure engineering role. It does not involve training or applying machine learning models to specific business problems. Instead, your impact will come from building core infrastructure products that empower hundreds of ML and data science practitioners at GM to experiment, deploy, and manage ML workflows at scale.
What You’ll Be Doing
- Design & Implementation: Architect, implement, and test scalable, cloud-native distributed systems using modern cloud platforms such as Google Cloud Platform (GCP) or Microsoft Azure. Build robust infrastructure to support large-scale ML workflows and data processing at GM.
- Project Ownership: Lead technical projects end-to-end—from early design through production deployment. Shape the product roadmap and drive key architectural decisions, balancing performance, reliability, and long-term maintainability.
- Cross-Team Collaboration: Actively participate in design reviews, team planning, and code reviews. Collaborate across multiple engineering teams to deliver cohesive platform solutions. Anticipate integration points and proactively manage dependencies and trade-offs.
- Mentorship & Recruiting: Foster a culture of technical excellence and growth. Interview candidates using calibrated evaluation criteria, onboard new hires, and mentor engineers and interns to help them grow technically and professionally.