Technical Governance and Responsible AI Researcher, Cohere For AI
San Francisco
Applications have closed
Cohere
Deploy multilingual models, advanced retrieval, and intelligent agents securely and privately — without the risks of ordinary AI.
Why this role? Cohere For AI is a state-of-the-art research lab driving progress at the frontier of machine learning. We have a proven track record of top-tier publications and a commitment to high-impact, cross-institutional collaboration. Learn more about our work here.
In addition to fundamental research, we contribute technical perspectives on questions at the core of AI development. This involves combining tools across fields, such as computer science, economics, and deep learning, to provide rigorous landscape analyses of open questions about responsible AI development. The goal of this role is to help cultivate a better understanding of AI futures and to support effective AI policy.
Our team works to provide a technically grounded perspective that shapes recommendations for the governance of artificial intelligence, informing how we can improve risk identification and mitigation throughout the model development and deployment lifecycle.
Please Note: We have offices in Toronto, San Francisco, New York, and London but embrace being remote-first! There are no restrictions on where you can be located for this role.
As a technical governance researcher, some of your core responsibilities will include:
- Executing technical governance research and contributing to reports that provide technical perspectives on the state of AI.
- Running and analyzing scientific experiments to advance our understanding of large language models and the ecosystem in which they exist.
- Building models, collecting data, and interpreting results on specific aspects of the state of AI, spanning varied questions such as inference-time compute, traceability and watermarking of open-weights models, and parsing the evidence to date to motivate the prioritization of risks.
- Launching initiatives to steward public awareness of AI capabilities and their limitations through whitepapers, guides, workshops, gatherings, and courses.
- Communicating scientific findings to a wide audience that includes policymakers.
Some examples of past technical governance projects that have come out of Cohere For AI:
- Understanding the viability of compute thresholds
- Assessing AI Biorisk: an evidence-based assessment of the prioritization of biorisk
- Policy Primer - The AI Language Gap
- Open Problems in Technical AI Governance
- The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI
You may be a good fit if you have:
- Technical contributions to governance topics that evidence rigour and thorough evaluation.
- Experience executing experiments with rigour and a strong scientific background. This role requires comfort in translating technical concepts into policy recommendations and in discussing technical concepts with a general audience.
- An understanding of global public policy issues, particularly with respect to AI policy.
- The ability to identify and analyze public policy developments, put them into context, and identify the scientific problems underlying policy discourse.
- A degree in computer science or economics, or equivalent technical experience.