AI Safety Analyst, Google Photos

Bengaluru, Karnataka, India

Google

Google’s mission is to organize the world's information and make it universally accessible and useful.



Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 4 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.

Preferred qualifications:

  • Master's degree in a relevant technical field.
  • Experience with Machine Learning (ML).
  • Experience in SQL, data collection/transformation, and building dashboards and visualizations.
  • Experience in a scripting/programming language (e.g. Python).
  • Ability to influence cross-functionally at various levels.
  • Excellent written and verbal communication and presentation skills.

About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

In this role, you will work directly with Product Managers, Engineers, and the Policy and Legal teams to build and execute a comprehensive approach to pushing Artificial Intelligence (AI) models to their limits and building resilience against malicious or unexpected inputs. Your creative thinking, deep expertise, problem-solving, and collaboration skills will be invaluable in ensuring the safe and responsible deployment of AI in Google Photos. This role also requires evaluation of model output, which may include sensitive, graphic, controversial, or upsetting content.

Google Photos is a photo sharing and storage service developed by Google. Photos is one of the most sought-after products at Google and is looking for client-side (web and mobile), server-side (search, storage, serving), and machine intelligence (learning, computer vision) Software Engineers. We are dedicated to making Google experiences centered around the user.

Responsibilities

  • Develop and implement adversarial test strategy for AI features in Google Photos (Ask Photos, Magic Editor) to proactively identify and mitigate potential risks and avoid unintended outputs.
  • Conduct in-depth research to identify emerging risk areas, abuse vectors, and edge cases, craft quality adversarial test datasets, and build internal and external partnerships to stay on top of AI safety.
  • Partner with product, engineering, policy, research, and central Trust and Safety teams to develop tailored testing approaches, tools, and solutions (e.g., test accounts), execute tests, and analyze model outputs to inform improvement areas and safety mechanisms.
  • Monitor and improve the quality, precision, and recall of safety classifiers and manual rating processes.
  • Establish health metrics and feedback loops with stakeholders to evolve the program and report on key insights.




