QA Automation Engineer (ML/AI)

Warsaw, Poland

Kyriba




It's fun to work in a company where people truly BELIEVE in what they're doing!

We're committed to bringing passion and customer focus to the business.

About Us

Kyriba is a global leader in liquidity performance that empowers CFOs, Treasurers and IT leaders to connect, protect, forecast and optimize their liquidity. As a secure and scalable SaaS solution, Kyriba brings intelligence and financial automation that enables companies and banks of all sizes to improve their financial performance and increase operational efficiency. Kyriba’s real-time data and AI-empowered tools empower its 3,000 customers worldwide to quantify exposures, project cash and liquidity, and take action to protect balance sheets, income statements and cash flows. Kyriba manages more than 3.5 billion bank transactions and $15 trillion in payments annually and gives customers complete visibility and actionability, so they can optimize and fully harness liquidity across the enterprise and outperform their business strategy. For more information, visit www.kyriba.com.

About the role:

We are seeking a Mid-Level QA Automation Engineer with a strong interest in AI/ML systems to join our growing Data Platform development team and help us enhance product reliability and performance.

As a Mid-Level QA Engineer, you’ll play a critical role in the software development lifecycle by designing, implementing, and executing test plans that identify potential issues and maintain product quality. Working closely with developers, product managers, and other QA team members, you’ll help automate, optimize, and troubleshoot test cases for various Data Platform projects. In addition, you can work on testing machine learning pipelines, model outputs, and data quality checks, helping ensure our AI-powered features are reliable, accurate, and scalable.

The perfect candidate doesn’t need to fulfill every requirement listed below; we are looking for talented colleagues who are passionate about ML/AI-driven systems and have a strong desire to learn and grow within a collaborative environment.

Keywords: QA, Automation testing, Test design, ML, AI, AI agents, GenAI, MLOps, REST API, Newman, Postman, Java, Python, Data, CI/CD, Jenkins, Git, Databricks

Essential duties and responsibilities:

  • Design and implement detailed test plans, test cases, and scripts to validate software functionality and performance, with a focus on ML data pipelines and AI agents (including data ingestion, transformation, and model inference stages).

  • Conduct functional, regression, integration, and user acceptance testing.

  • Validate outputs from machine learning models and AI agents, including edge cases, model thresholds, and data drift.

  • Build synthetic datasets or validation scripts to test model behavior under various data distributions.

  • Develop and automate API and data validation tests for AI services and endpoints to ensure data accuracy, reliability, and security across services.

  • Work closely with developers, data scientists, MLOps, and product teams to clarify testing requirements, acceptance criteria, and product specifications.

  • Document and log test results, defects, and issues in detail, collaborating with developers to resolve issues.

  • Participate in team meetings and agile ceremonies, such as sprint planning, retrospectives, and daily stand-ups, to stay aligned with project goals.

  • Stay updated on emerging trends and technologies in QA and testing practices to enhance QA processes.

Education, Experience & Skills:

  • 2+ years of experience in software QA or test automation

  • Strong understanding of software testing types: functional, regression, integration, and performance. Knowledge of test design techniques (boundary value analysis, equivalence partitioning, etc.).

  • Fundamental understanding of either Java or Python, and experience with automation testing frameworks such as TestNG, JUnit, or PyTest.

  • Experience with Python for writing test scripts and interacting with ML services or data pipelines.

  • Proficiency in testing RESTful APIs using tools like Newman, Postman, REST Assured, or similar.

  • Experience working with data-centric applications or testing structured/unstructured datasets.

  • Knowledge of CI/CD practices and tools, such as Jenkins, GitLab CI/CD, or others. Ability to set up and integrate automated tests in CI/CD pipelines.

  • Experience with version control systems, particularly Git.

  • Eagerness to learn and adapt to new tools and technologies.

  • Collaborative team player.

  • At least intermediate English, with good verbal and written communication skills.

Nice to have:

  • Understanding of ML/AI fundamentals (model lifecycle, overfitting, model evaluation metrics)

  • Familiarity with ML pipelines in tools like MLflow or similar

  • Exposure to Docker and Kubernetes for test environment management

  • Understanding of model monitoring concepts (e.g., drift, performance decay)





