Neuroscience + AI Intern (PhD, Fall 2025)

San Francisco, US

Dolby Laboratories

Dolby develops audio, imaging, and voice technologies for film, TV, music, and games. Experience everything with immersive sound and stunning picture.



 

Join the leader in entertainment innovation and help us design the future. At Dolby, science meets art, and high tech means more than computer code. As a member of the Dolby team, you’ll see and hear the results of your work everywhere, from movie theaters to smartphones. We continue to revolutionize how people create, deliver, and enjoy entertainment worldwide. To do that, we need the absolute best talent. We’re big enough to give you all the resources you need, and small enough so you can make a real difference and earn recognition for your work. We offer a collegial culture, challenging projects, and excellent compensation and benefits, not to mention a Flex Work approach that is truly flexible to support where, when, and how you do your best work.

 

The Advanced Technology Group (ATG) is the research division of the company. ATG’s mission is to look ahead, deliver insights, and innovate technological solutions that will fuel Dolby’s continued growth. Our researchers have a broad range of expertise related to computer science and electrical engineering, such as AI/ML, algorithms, digital signal processing, audio engineering, image processing, computer vision, data science & analytics, distributed systems, cloud, edge & mobile computing, computer networking, and IoT.

 

Neuroscience+AI Internship: User Engagement Measurement for Next-Generation Media Creation

 

Multimodal Experiences Lab - Advanced Technology Group

We are seeking exceptional interns to join our cutting-edge research at the intersection of physiological measurement, computational neuroscience, and next-generation media experiences. You will have the opportunity to develop novel approaches to measuring user engagement through analysis of cardiovascular dynamics, neural activity, and other biosignals to enable personalized, adaptive media content. As an intern, you will work closely with our team of researchers and scientists to advance the frontier of engagement-aware media systems that leverage AI and foundation models to adapt in real time to user state and preferences derived from physiological data.

 

What are we looking for in candidates?

Along with solid technical skills, candidates should demonstrate problem-solving and analytical abilities, good communication and collaboration skills, curiosity about how and why things work, and a passion for understanding human perception and engagement with media. You have a desire to bring in new ideas and are open to learning from others and working in a team environment focused on transforming the future of entertainment experiences through AI-driven physiological understanding.

 

You may succeed in this role if you are a PhD candidate in neuroscience, biomedical engineering, computer science, or related fields, and you are excited about bridging physiological measurement with AI and media technology to create more engaging and personalized experiences.

 

Example Responsibilities

  • Work collaboratively with our team to design and implement experiments measuring cardiovascular dynamics (heart rate variability, PPG) and autonomic physiology (EDA) during media consumption across different content types and viewing contexts.
  • Develop EEG-based neural signature models for media components and events combining naturalistic media stimuli with AI-based content analysis.
  • Create biosignal transfer learning approaches that establish robust mappings between high-fidelity neural signatures and accessible physiological measures from consumer wearable devices.
  • Build foundation models for physiological data representation that can generalize across individuals, devices, and measurement contexts to enable scalable engagement prediction systems.
  • Implement temporal engagement models to predict user state trajectories and optimize content adaptation timing for sustained engagement across diverse media experiences.
  • Develop multimodal AI systems that integrate physiological signals, content features, and contextual information to predict and enhance user engagement in real-time media applications.
  • Leverage large-scale physiological datasets to train foundation models that capture universal patterns in human engagement responses while preserving individual personalization capabilities.
  • Contribute to the development of research papers, patents, and technical presentations advancing the field of AI-driven and engagement-aware media systems.
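As one illustration of the time-domain cardiovascular analysis the responsibilities above refer to (not part of the posting itself), here is a minimal, hypothetical sketch of computing two standard heart rate variability metrics, SDNN and RMSSD, from a series of inter-beat (RR) intervals:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Compute two standard time-domain HRV metrics.

    rr_ms: successive inter-beat (RR) intervals in milliseconds,
           typically derived from ECG R-peaks or PPG pulse peaks.
    Returns (sdnn, rmssd):
      sdnn  - standard deviation of the intervals (overall variability)
      rmssd - root mean square of successive differences (beat-to-beat
              variability, often linked to parasympathetic activity)
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Example: RR intervals around an 800 ms mean (~75 bpm)
sdnn, rmssd = hrv_metrics([812, 790, 805, 779, 821, 798])
```

In practice these metrics would be computed over sliding windows of a peak-detected signal; the function names and input format here are illustrative assumptions, not Dolby tooling.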

 

Requirements

  • Currently pursuing a PhD degree in neuroscience, computational neuroscience, biomedical engineering, computer science, electrical engineering, cognitive science, or a related field.
  • Strong programming and prototyping skills in Python, Matlab, or similar languages with experience in signal processing, time-series prediction and analysis, and AI/ML frameworks (PyTorch, TensorFlow).
  • Familiarity with physiological signal acquisition and analysis, particularly cardiac signals (ECG, PPG, HRV), electrodermal activity (EDA), and EEG measurements.
  • Experience with machine learning techniques and algorithms, particularly deep learning, transfer learning, and foundation models applicable to physiological data and temporal modeling.
  • Understanding of AI model development including data preprocessing, feature engineering, model training, and evaluation for biosignal applications.
  • Understanding of experimental design, hypothesis testing, and collection of perceptual and physiological data in controlled settings.
  • Analytical skills and the ability to manipulate, visualize, and extract meaning from complex physiological and behavioral datasets.
  • Excellent communication and teamwork skills.
  • Ability to work independently and take initiative on complex, interdisciplinary problems involving AI and human physiology.

 

Highly Desirable

  • Experience developing foundation models or large-scale representation learning for physiological or biomedical data.
  • Prior work with transformer architectures, state space models, self-supervised learning, or contrastive learning methods applied to time-series physiological data.
  • Experience with real-time gaming/simulation engines such as Unity or Unreal for creating virtual experimental environments.
  • Knowledge of multimodal AI systems that combine physiological, behavioral, and content-based signals.
  • Prior work with EEG analysis, event-related potentials, or other neuroimaging techniques combined with AI interpretation methods.
  • Understanding of wearable sensor technologies, their data characteristics, and approaches for handling sensor heterogeneity.
  • Experience with edge measurement and ML deployment techniques for real-time physiological monitoring and engagement prediction.
  • Background in human-computer interaction, user experience research, or media psychology with AI integration.

 

Benefits

  • Gain hands-on experience in cutting-edge research combining AI, physiological measurement, and next-generation media technologies.
  • Work alongside experienced engineers, researchers and scientists specializing in AI, neuroscience, perception science, network delivery and media systems.
  • Develop skills in experimental design, multimodal data collection, advanced signal processing, and AI/ML applications to human physiology.
  • Contribute to research that will shape the future of AI-powered personalized, adaptive media experiences.
  • Collaborate with teams developing recomposable media platforms, content intelligence systems, and AI-driven rendering technologies.

 

If you are passionate about understanding human engagement through physiological measurement and applying these insights to create more compelling media experiences, we encourage you to apply for this internship. We welcome applicants from diverse backgrounds and are committed to creating an inclusive and supportive work environment where innovation in human-centered technology thrives.

 

Eligibility

Working towards a PhD degree in neuroscience, biomedical engineering, computer science, or a related field; recent graduates within six months of graduation are also eligible to apply. Must be available to work full-time, Monday through Friday, for three months, from September 2025 to December 2025.

 

The start date for this internship is as follows (please note this date is not flexible):

  • Monday, September 22, 2025 

 

 

 

 

The San Francisco/Bay Area base pay for this full-time position is $57/hr, which can vary for locations outside this area, plus bonus and benefits; some roles may also include equity. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, competencies, experience, market demands, internal parity, and relevant education or training. Your recruiter can share more about the specific salary range, perks, and benefits for your location during the hiring process.

 

Dolby will consider qualified applicants with criminal histories in a manner consistent with the requirements of San Francisco Police Code, Article 49, and Administrative Code, Article 12.

 

Equal Employment Opportunity:
Dolby is proud to be an equal opportunity employer. Our success depends on the combined skills and talents of all our employees. We are committed to making employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, gender identity, national origin, religion, marital status, family status, medical condition, disability, military service, pregnancy, childbirth and related medical conditions or any other classification protected by federal, state, and local laws and ordinances.
