Tasks
Literature research on multimodal LLMs and code generation
Data preparation (text and model artifacts from vehicle development)
Fine-tuning or adapter-based training of a suitable LLM (e.g., with PEFT; see the sketch after this list)
Modeling and preprocessing of graphical models (e.g., SysML, Simulink) for multimodal input
Evaluation of the generated code artifacts using syntactic and semantic metrics
Comparison with existing code generation approaches
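As a rough illustration of the adapter-based training mentioned in the tasks above, the sketch below shows how a LoRA adapter could be attached to a base model with the Hugging Face PEFT library. The model name, target modules, and hyperparameters are placeholder assumptions, not specifics of this position.

```python
# Minimal sketch of adapter-based fine-tuning with Hugging Face PEFT (LoRA).
# The model name, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "your-org/code-llm-base"  # placeholder for a suitable code LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains only small low-rank adapter matrices while the
# base model weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # depends on the chosen architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# The training loop itself (e.g., transformers.Trainer on prepared
# requirement/model-to-code pairs) is omitted here.
```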
Research tasks:
Analysis of the current state of research on multimodal Large Language Models (LLMs) and their application in automated code generation
Investigation of existing methods for processing and integrating different input modalities (e.g., text, graphics, models)
Analysis of typical systems engineering artifacts (requirements, function models, e.g., SysML or Simulink) in the automotive development process
Identification of suitable training methods (e.g., fine-tuning, PEFT, RAG) for LLMs with a focus on technical application domains
Research on metrics and methods for evaluating the quality of generated code (e.g., syntactic correctness, functional consistency; see the sketch after this list)
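As a rough illustration of the evaluation step named above, the sketch below computes a simple syntactic-correctness score: the fraction of generated snippets that parse without errors, shown here for Python. The example snippets are illustrative assumptions; semantic checks (e.g., unit tests against requirements) would go beyond this.

```python
# Minimal sketch of a syntactic-correctness metric for generated Python code.
import ast

def syntactic_correctness(snippets: list[str]) -> float:
    """Return the fraction of snippets that are syntactically valid Python."""
    valid = 0
    for code in snippets:
        try:
            ast.parse(code)  # raises SyntaxError if the snippet does not parse
            valid += 1
        except SyntaxError:
            pass
    return valid / len(snippets) if snippets else 0.0

# Illustrative generated outputs, not real model results.
generated = [
    "def add(a, b):\n    return a + b",   # parses
    "def broken(:\n    return",           # syntax error
]
print(f"Syntactic correctness: {syntactic_correctness(generated):.2f}")  # 0.50
```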
Requirements
Degree programs:
Computer Science
Automotive Engineering
Mechanical Engineering
Electrical Engineering
Data Science or comparable degree program
Areas of study:
Software Development and Programming
Artificial Intelligence and Machine Learning
Systems Engineering
Data Science
Expert knowledge:
Fundamentals of machine learning
Understanding of the principles of systems engineering
Experience in data preparation
Initial experience in the automotive industry (e.g., through internships) desirable
IT skills:
Confident use of MS Office
Ideally, solid knowledge of Python
Machine learning and AI frameworks (PyTorch, TensorFlow)
Soft skills:
High level of initiative
Strong analytical skills
Structured working style
Ability to work in a team
Goal orientation
For more details, salary, and company information, use the apply link