Generative AI Associate – Red Teaming Specialist, Japanese
Remote job
Full Time · Entry-level / Junior · USD 38K – 45K
Innodata
Job Title: Generative AI Associate – Red Teaming Specialist
Location: US – Remote (excluding AK, CA, CO, NY, WA, and Puerto Rico)
Language: Japanese-English
Who we are:
Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are an AI technology solutions provider of choice for 4 out of 5 of the world’s biggest technology companies, as well as leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML/AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we’re helping usher in the promise of AI. Innodata offers a powerful combination of both digital data solutions and easy-to-use, high-quality platforms.
Our global workforce includes over 5,000 employees in the United States, Canada, United Kingdom, the Philippines, India, Sri Lanka, Israel and Germany. We’re poised for a period of explosive growth over the next few years.
About the role:
At Innodata, we’re working with the world’s largest technology companies on the next generation of generative AI and large language models (LLMs). We’re looking for smart, savvy, and curious Red Teaming Specialists to join our team.
This is the role that writers and hackers dream about: you’ll be challenging the next generation of LLMs to ensure their robustness and reliability. We’re testing generative AI not just for the content it generates, but for its ability to think critically and act safely.
This isn’t just a job: it’s a once-in-a-lifetime opportunity to work on the frontlines of AI safety and security. There’s nothing more cutting-edge than this. Joining us means becoming an integral member of a global team dedicated to identifying vulnerabilities and improving the resilience of AI systems. You’ll be creatively crafting scenarios and prompts to test the limits of AI behavior, uncovering potential weaknesses and ensuring robust safeguards. You’ll be shaping the future of secure AI-powered platforms, pushing the boundaries of what’s possible. Keen to learn more?
What you’ll be doing:
As a Red Teaming Specialist on our AI Large Language Models (LLMs) team, you will be joining a truly global team of subject matter experts across a wide variety of disciplines and will be entrusted with a range of responsibilities. We’re seeking self-motivated, clever, and creative specialists who can handle the speed required to be on the frontlines of AI security. In return, we’ll be training you in cutting-edge methods of identifying and addressing vulnerabilities in generative AI. Below are some responsibilities and tasks of our Red Teaming Specialist role:
- Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as specific project guidelines and requirements
- Craft clever and sneaky prompts to attempt to bypass the filters and guardrails on LLMs, targeting specific vulnerabilities defined by our clients
- Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
- Assist our data scientists in conducting automated model attacks
- Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
- Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
- Hit productivity targets, including for number of prompts written and average handling time per prompt
Requirements
What we need you to bring:
- Bachelor’s degree or higher preferred; alternatively, an associate degree plus 1 year of relevant industry experience, or a high school diploma plus 2 years of relevant industry experience
- Excellent writing skills
- Strong understanding of grammar, syntax, and semantics – knowing what “proper” English rules are, as well as when to violate them to better test AI responses
- Ability to adopt different voices and points of view
- Creative thinking
- Strong attention to detail
- Well-honed internet research skills
- Ability to embrace diverse teams
- Ability to navigate ambiguity with grace
- Adaptability to thrive in a dynamic environment, with the agility to adjust to evolving guidelines and requirements
What we offer:
- Fully remote work environment
- Collaborative culture – and key tools enabling it
- Competitive compensation package
- Health, dental & vision benefits
- Employee Assistance Program (EAP)
- Career development & progression opportunities
- Paid vacation & personal days as well as sick days
Please note: As a Red Teaming Specialist, you’ll push the boundaries of large language models and seek to expose their vulnerabilities. In this work, you may be dealing with material that is toxic or NSFW. Innodata is committed to the health of its workforce and so provides wellness resources and mental health support.
The salary range for this position is between USD $38,000 to $45,000 annually. Actual compensation will be determined based on various factors including qualifications, experience, and internal equity.