Research Engineer - Multimodal Model
Singapore
ByteDance
ByteDance is a technology company operating a range of content platforms that inform, educate, entertain and inspire people across languages, cultures and geographies.
Established in 2023, the ByteDance Doubao (Seed) Team is dedicated to building industry-leading AI foundation models. We aim to do world-leading research and foster both technological and social progress.
With a long-term vision and a strong commitment to the AI field, the Team conducts research in a range of areas including natural language processing (NLP), computer vision (CV), and speech recognition and generation. It has labs and researcher roles in China, Singapore, and the US.
Leveraging substantial data and computing resources, and through continued investment in these domains, our team has built a proprietary general-purpose model with multimodal capabilities. In the market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and have been launched to external enterprise clients through Volcano Engine. The Doubao app is the most used AIGC app in China.
Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.
Together, we inspire creativity and enrich life - a mission we aim towards achieving every day.
To us, every challenge, no matter how ambiguous, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve.
Join us.
About the team
Welcome to the GAI-Vision team, where we lead the way in developing foundational models for multi-modal visual understanding and generation. Our mission is to solve the challenge of visual intelligence in AI. We conduct cutting-edge research on areas such as vision and language, large-scale vision models, and generative foundation models. Comprising experienced research scientists and engineers, our team is dedicated to pushing the boundaries of foundation model research and implementing our innovations across diverse application scenarios. We foster a feedback-driven environment to continuously enhance our foundation technologies. Come join us in shaping the future of AI and transforming the product experience for users worldwide.
Responsibilities
- Explore large-scale and ultra-large-scale visual models and perform system optimization, including data construction, instruction fine-tuning, preference alignment, and model optimization.
- Conduct cutting-edge research and development in computer vision, natural language processing, machine learning, and general artificial intelligence, especially in areas such as multi-modality and vision and language.
- Publish our latest research results, and help to build our brand in the research community.
- Explore vision/multi-modality application models, and contribute to the development of new technologies and products leveraging artificial intelligence.
Qualifications
Minimum Qualifications
- Possess research and practical experience in one or more areas of computer vision, encompassing multi-modal understanding, vision-language models (e.g., video captioning, VQA, text-to-video retrieval, and other related topics), large-scale training, RLHF, multimodal generation (e.g., text-to-image, image, video, and 3D generation and editing), diffusion models, GANs, and transformers for generation tasks.
- At least 3 years of research experience or other relevant working experience.
- Experience with vision-language models and applying them to various downstream tasks.
- Possess coding skills in C/C++ and Python.
- Ability to collaborate effectively with team members.
- Ability to work independently.
Preferred Qualifications
- Experience working with large-scale datasets and building large-scale datasets to scale up foundation models.
- Demonstrate impactful publications in leading AI conferences (e.g., CVPR, ECCV, ICCV, NeurIPS, ICLR, SIGGRAPH, SIGGRAPH Asia) and journals (e.g., TPAMI, JMLR).
- A track record of winning international academic competitions.
- Proficiency in one of the differentiable programming frameworks such as PyTorch, TensorFlow, JAX, etc.
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.