Apple is hiring: Audio Signal Processing and Machine Learning Research Engineer
Apple, Cupertino, CA, United States
Audio Signal Processing and Machine Learning Research Engineer
Cupertino, California, United States
Software and Services

Imagine what you could do at Apple! Every day, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Do you bring passion and dedication to your job? If so, we are looking for individuals like you. The Audio and Media Technologies (AMT) team is at the center of audio and video processing in Apple’s innovative products, including iPhone, iPad, Mac, Apple Watch, AirPods, HomePod, Apple TV, and Vision Pro. AMT’s Audio team provides the audio foundation for high-profile features such as phone calls, FaceTime, Siri, spatial audio, and media capture and playback. As part of the Audio Algorithms group, you will research and develop world-class communication processing systems across Apple’s product ecosystem.

Description
The Audio Algorithms team is seeking a highly skilled and creative engineer interested in advancing speech and audio technologies at Apple. As a member of the team, you will work with other researchers to develop novel signal processing and machine learning technologies for audio processing. You will research, implement, and optimize these features on Apple’s products as we push the state of the art in communication technology. You will collaborate with teams across product development, Acoustics, Audio Software, and others to integrate your ideas into products and build future technologies that will impact billions of Apple customers.

Minimum Qualifications
- Research experience in audio algorithm development using signal processing and/or machine learning (released product features, patents, papers, etc.)
- Hands-on experience developing audio algorithms from initial concept to shipping solution
- Fluency in C++, Python, common software engineering practices, and version control
- Experience developing real-time audio processing software

Key Qualifications
- MS/PhD in Computer Science or Electrical Engineering
- Experience developing echo cancellation, denoising, source separation, and beamforming algorithms
- Experience developing machine learning pipelines
- Familiarity with psychoacoustics, speech, and language modeling

Pay & Benefits
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $175,800 and $312,200, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You will also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments, as well as relocation.

Apple is an equal opportunity employer that is committed to inclusion and diversity.
We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics.