META

Research Scientist, Systems ML - SW/HW Co-Design - Inference

META, Menlo Park, California, United States, 94029


Meta is seeking a Research Scientist to join our Research & Development teams. The ideal candidate will have industry experience working on AI infrastructure topics and will apply those skills to some of the most crucial and exciting problems on the web. We are hiring in multiple locations.

The Kernel team focuses on maximizing inference performance for Generative AI and Recommendation models by developing high-performance kernels. Our expertise lies in creating specialized kernels that significantly improve the efficiency of these models; we have developed and deployed the first FP8 kernel in Meta's production, as well as FBGEMM TBE. By continuously advancing our kernel optimization capabilities, we enable better user experiences and drive innovation in Generative AI and Recommendation systems.

The E2E Performance team is dedicated to optimizing the end-to-end performance of Generative AI and Recommendation models. We employ a variety of parallelism strategies and distributed inference techniques to improve TTIT and TTFT for LLMs and LDMs. By relentlessly pursuing performance improvements, we have achieved notable successes such as enabling AMD GPUs for GenAI production applications and subsequently optimizing their performance. Our ongoing efforts ensure the continuous improvement of these models' performance, ultimately providing more responsive and seamless experiences for users interacting with Generative AI.

Research Scientist, Systems ML - SW/HW Co-Design - Inference Responsibilities

Apply relevant AI infrastructure and hardware acceleration techniques to build and optimize intelligent ML systems that improve Meta's products and experiences
Develop high-performance kernels and different parallelism techniques to improve E2E performance
Set goals related to project impact, AI system design, and infrastructure/developer efficiency
Deliver impact directly, or by influencing partners, through deep, thorough, data-driven analysis
Drive large efforts across multiple teams
Define use cases, and develop methodology and benchmarks to evaluate different approaches
Apply in-depth knowledge of how the ML infrastructure interacts with the other systems around it

Minimum Qualifications

Currently has, or is in the process of obtaining, a Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta.
Currently has, or is in the process of obtaining, a PhD in Computer Science, Computer Vision, Generative AI, NLP, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta.
Specialized experience in one or more of the following machine learning/deep learning domains: model compression, hardware accelerator architecture, GPU architecture, machine learning compilers, ML systems, AI infrastructure, high-performance computing, performance optimization, machine learning frameworks (e.g., PyTorch), numerics, or SW/HW co-design
Experience developing AI-system infrastructure or AI algorithms in C/C++ or Python
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.

Preferred Qualifications

Experience or knowledge of training/inference of large-scale AI models
Experience or knowledge of distributed systems or on-device algorithm development
Experience or knowledge of recommendation and ranking models