Senior Applied Researcher - NLP
Berlin Office
Protect AI
Protect AI is the broadest and most comprehensive platform to secure your AI, enabling you to see, know, and manage AI security end to end. We are shaping, defining, and innovating a new category within cybersecurity around the risk and security of AI/ML. Our ML Security Platform enables customers to see, know, and manage security risks, defend against unique AI security threats, and embrace MLSecOps for a safer AI-powered world. This includes a broad set of capabilities: AI supply chain security, an auditable bill of materials for AI, ML model scanning, signing, and attestation, and LLM security.
Join our team to help us solve this critical need of protecting AI!
Role
At Protect AI, we're creating the most comprehensive AI security platform in the world. From safeguarding the AI supply chain to scanning ML models and securing Large Language Models (LLMs), we use advanced deep learning to protect against the latest threats. Now we're looking for a talented Senior Applied Researcher in NLP to help us reach our ambitious goals.
This is a unique opportunity to be at the forefront of the AI Security domain, influencing both our cutting-edge initiatives and the broader field with your innovative research and developments. You’ll help build resilient AI technologies that offer robust protection against emerging threats, safeguarding global organizations.
As part of our team, you'll collaborate closely with our product engineers, architects, and CTO. You'll also play a crucial role in improving our open-source models, helping organizations secure their AI applications.
Responsibilities:
Conduct in-depth research, analyze AI systems, and develop novel methodologies and techniques to proactively detect and mitigate security risks, including adversarial attacks, data poisoning, model evasion, harmful behavior, and others.
Develop robust classification models and frameworks using state-of-the-art deep learning techniques for various applications, focusing on security and integrity.
Evaluate and improve the performance of various AI models, including NLP, generative, and classification types, aiming for greater accuracy, efficiency, and scalability.
Contribute to the open-source community by sharing models and algorithms, especially through initiatives like LLM Guard.
Collaborate with cross-functional teams and effectively communicate technical findings and insights to stakeholders.
Stay abreast of AI security and safety research advancements, attend conferences, and actively contribute to the security community through publications and presentations.
Qualifications:
Significant practical experience in building and deploying machine learning, deep learning, and neural networks, from ideation to production, in academia or industry settings.
Advanced knowledge in Deep Learning as applied to Natural Language Processing (NLP) tasks, such as text classification, feature extraction, sentiment analysis, topic modeling, and named entity recognition.
Demonstrated ability to transform cutting-edge research into viable prototypes, with experience applying novel NLP models to real-world problems.
Strong Python programming skills and familiarity with deep learning frameworks like PyTorch or TensorFlow, including experience with fine-tuning LLMs or other transformer-based models like BERT.
Excellent problem-solving skills, analytical thinking, and meticulous attention to detail, with a passion for working in a dynamic and fast-paced environment as part of a distributed team.
Experience in fast-paced, agile environments, capable of managing uncertainty and ambiguity.
Effective communication skills with the ability to collaborate well in a team-oriented environment.
Preferred qualifications include:
Experience with large datasets and processing frameworks (e.g., Azure Data Lake, HDFS/Hadoop, Spark), or public cloud infrastructures (Azure, AWS, Google Cloud) for NLP model tasks.
Experience in cybersecurity or Trustworthy AI, such as in toxicity detection or algorithmic methods for adversarial attacks and their defense.
Proven track record of conducting research demonstrated through publications, including at top-tier conferences or journals.
Contribution to open-source software projects.
What We Offer:
An exciting, collaborative work environment in a fast-growing startup.
Competitive salary and benefits package.
Excellent medical, dental and vision insurance.
Opportunities for professional growth and development including attending and presenting technical talks at meetups and conferences.
A culture that values innovation, accountability, and teamwork.
Opportunities to contribute to our open-source projects with thousands of GitHub stars and millions of Hugging Face downloads.
Work with a team of talented, accomplished peers from AWS, Microsoft, and Oracle Cloud.
Work with best-in-class tools: M4 MacBook Pro, 34" monitor, a modern tech stack, and high-quality collaboration tools.
No bureaucracy or legacy systems. You are empowered to innovate and do your best work.
Weekly lunch at the office and weekly delivery credits for food delivery services.
Protect AI is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.