Work alongside the Foundation Model Research team to optimize inference for cutting-edge model architectures. Work closely with product teams to build production-grade solutions that launch models serving millions of customers in real time. Build tools to understand inference bottlenecks across different hardware and use cases. Mentor and guide engineers across the organization.
Minimum Qualifications
- Demonstrated experience in leading and driving complex, ambiguous projects.
- Experience with high-throughput services, particularly at supercomputing scale.
- Proficient in running applications in the cloud (AWS, Azure, or equivalent) using Kubernetes, Docker, etc.
- Familiar with GPU programming concepts using CUDA and with one of the popular ML frameworks such as PyTorch or TensorFlow.
Preferred Qualifications
- Proficient in building and maintaining systems written in modern languages (e.g., Go, Python).
- Familiar with fundamental deep learning architectures such as Transformers and encoder/decoder models.
- Familiarity with NVIDIA TensorRT-LLM, vLLM, DeepSpeed, NVIDIA Triton Inference Server, etc.
- Experience writing custom GPU kernels using CUDA or OpenAI Triton.