Meta is hiring: Research Scientist, GenAI - Multimodal Audio (Speech, Sound and Music)
Meta, Bellevue, WA, United States
Research Scientist, GenAI - Multimodal Audio (Speech, Sound and Music)
Bellevue, WA • Menlo Park, CA • Seattle, WA • New York, NY • San Francisco, CA + 4 more
The GenAI org at Meta builds industry-leading LLM and multimodal generative foundation models that set the benchmark for open-source foundation models and power many Meta products. The team conducts industry-leading research on multimodal generative foundation models with a focus on the audio modality (speech, sound, and music). It works closely with the language and vision research teams and collaborates with product teams to bring research results to billions of Meta users around the world.
Research Scientist, GenAI - Multimodal Audio (Speech, Sound and Music) Responsibilities
- Full life-cycle research on multimodal generative foundation models with a focus on the audio modality, from conceiving ideas through delivery
- Designing and implementing models and algorithms
- Collecting and curating training data; training, tuning, and scaling models; evaluating performance; and open-sourcing and publishing results
- Working with collaborating teams (e.g., language and vision) to leverage shared expertise and deliver on high-level goals
Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
- Solid track record of research in the audio (speech, sound, or music) or vision (image or video) domains, demonstrated through publications or unpublished industrial experience.
- PhD degree in a related field with 3+ years of experience, or BS degree with 5+ years of industrial research experience in a related field.
- Related research fields: audio (speech, sound, or music) generation, text-to-speech (TTS) synthesis, text-to-music generation, text-to-sound generation, speech recognition, speech / audio representation learning, vision perception, image / video generation, video-to-audio generation, audio-visual learning, audio language models, lip sync, lip movement generation / correction, lip reading, etc.
- Proven knowledge of neural networks.
- Experienced in at least one of the following popular ML frameworks: PyTorch, TensorFlow, or JAX.
- Experienced in the Python programming language.
- Solid communication skills.
Preferred Qualifications
- Solid publication track record in related fields.
- Solid experience in any of the following: audio dataset curation, model scaling, or audio generation model evaluation.
- Experienced in large-scale data processing.
- Experienced in solving complex problems involving trade-offs, alternative solutions, and cross-functional collaboration, taking diverse points of view into account.
For those who live in or expect to work from California if hired for this position, please click here for additional information.
About Meta
Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today-beyond the constraints of screens, the limits of distance, and even the rules of physics.
$177,000/year to $251,000/year + bonus + equity + benefits
Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.
Equal Employment Opportunity and Affirmative Action
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here.
Meta is committed to providing reasonable support (called accommodations) in our recruiting processes for candidates with disabilities, long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support. If you need support, please reach out to accommodations-ext@fb.com.