Data Engineer

London, United Kingdom


About Winton

Winton is a research-based investment management company with a specialist focus on statistical and mathematical inference in financial markets. The firm researches and trades quantitative investment strategies, which are implemented systematically via thousands of securities, spanning the world's major liquid asset classes. Founded in 1997 by David Harding, Winton today manages assets for some of the world’s largest institutional investors.

We employ ambitious professionals who want to work collaboratively at the leading edge of investment management.

Winton leverages quantitative analysis and cutting-edge technology to identify and capitalize on opportunities across global financial markets. We foster a collaborative and intellectually stimulating environment, bringing together individuals with Mathematics, Physics and Computer Science backgrounds who are passionate about applying rigorous scientific methods to financial challenges. We are a fundamentally data-driven business, and our success depends heavily on the acquisition, processing, and analysis of vast datasets. High-quality, well-managed data forms the critical foundation for our quantitative research, strategy development, and automated trading systems.

As a Data Engineer within our Quantitative Platform team, you will play a pivotal role in building and maintaining the data infrastructure that fuels our research and trading strategies. You will be responsible for the end-to-end lifecycle of diverse datasets – including market, fundamental, and alternative sources – ensuring their timely acquisition, rigorous cleaning and validation, efficient storage, and reliable delivery through robust data pipelines. Working closely with quantitative researchers and technologists, you will tackle complex challenges in data quality, normalization, and accessibility, ultimately providing the high-fidelity, readily available data essential for developing and executing sophisticated investment models in a fast-paced environment. 

Your responsibilities will include:

  • Evaluating, onboarding, and integrating complex data products from diverse vendors, serving as a key technical liaison to ensure data feeds meet our stringent requirements for research and live trading.
  • Designing, implementing, and optimizing robust, production-grade data pipelines to transform raw vendor data into analysis-ready datasets, adhering to software engineering best practices and ensuring seamless consumption by our automated trading systems.
  • Engineering and maintaining sophisticated automated validation frameworks to guarantee the accuracy, timeliness, and integrity of all datasets, directly upholding the quality standards essential for the efficacy of our quantitative strategies (an illustrative sketch of such a validation check follows this list).
  • Providing expert operational support for our data pipelines, rapidly diagnosing and resolving critical issues to ensure the uninterrupted flow of high-availability data powering our daily trading activities.
  • Participating actively in team rotations, including on-call schedules, to provide essential coverage and maintain the resilience of our data systems outside of standard business hours.
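
The sketch below is provided purely to illustrate the kind of validation work described in this list; it is not Winton code. The column names, the 1% error tolerance, and the sample data are all hypothetical, and a production framework would be far more extensive.

    import pandas as pd

    REQUIRED_COLUMNS = ["instrument_id", "trade_date", "close_price", "volume"]

    def validate_vendor_frame(df: pd.DataFrame) -> pd.DataFrame:
        """Apply basic completeness and sanity checks before a dataset is published."""
        missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
        if missing:
            raise ValueError(f"Vendor feed is missing columns: {missing}")

        df = df.copy()
        df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")

        # Flag rows with unparseable dates or non-positive prices.
        bad_rows = df["trade_date"].isna() | (df["close_price"] <= 0)
        if bad_rows.mean() > 0.01:  # hypothetical tolerance: reject the feed if >1% of rows are bad
            raise ValueError(f"{int(bad_rows.sum())} rows failed validation; feed rejected")

        # Drop flagged rows and duplicate (instrument, date) observations.
        return df.loc[~bad_rows].drop_duplicates(subset=["instrument_id", "trade_date"])

    if __name__ == "__main__":
        raw = pd.DataFrame({
            "instrument_id": ["ABC", "ABC", "XYZ"],
            "trade_date": ["2024-01-02", "2024-01-02", "2024-01-02"],
            "close_price": [101.5, 101.5, 55.2],
            "volume": [12000, 12000, 8000],
        })
        clean = validate_vendor_frame(raw)
        print(f"Published {len(clean)} of {len(raw)} rows")  # the duplicate row is dropped

Failing loudly when a tolerance is breached, rather than silently dropping data, is what keeps downstream research and trading consumers confident in feed quality.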

What we are looking for:

  • 5+ years’ experience building ETL/ELT pipelines using Python and pandas within a financial environment.
  • Strong knowledge of relational databases and SQL.
  • Familiarity with technologies such as S3, Kafka, Airflow, and Iceberg (a minimal orchestration sketch follows this list).
  • Proficiency working with large financial datasets from various vendors.
  • A commitment to engineering excellence and pragmatic technology solutions.
  • A desire to work in an operational role at the heart of a dynamic data-centric enterprise.
  • Excellent communication skills and the ability to collaborate effectively within a team.
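
As a minimal sketch of the kind of orchestration referred to above, the snippet below shows how a daily acquire-validate-publish pipeline might be expressed as an Airflow DAG. It is not Winton's pipeline: the DAG name, schedule, retry settings, and task callables are hypothetical placeholders, and the syntax assumes Airflow 2.4 or later.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def acquire():
        """Placeholder: fetch the raw vendor file, e.g. from an S3 landing area."""

    def validate():
        """Placeholder: run completeness and sanity checks on the raw data."""

    def publish():
        """Placeholder: write the analysis-ready dataset to the research data store."""

    with DAG(
        dag_id="vendor_prices_daily",      # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="0 6 * * *",              # hypothetical daily pre-market run
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        acquire_task = PythonOperator(task_id="acquire", python_callable=acquire)
        validate_task = PythonOperator(task_id="validate", python_callable=validate)
        publish_task = PythonOperator(task_id="publish", python_callable=publish)

        # Enforce ordering: nothing is published until validation has passed.
        acquire_task >> validate_task >> publish_task

Explicit task ordering and automatic retries are examples of the operational safeguards that the support and on-call responsibilities described above depend on.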

What would be advantageous:

  • Strong understanding of financial markets.
  • Experience working with hierarchical reference data models.
  • Proven expertise in handling high-throughput, real-time market data streams.
  • Familiarity with distributed computing frameworks such as Apache Spark.
  • Operational experience supporting real-time systems.

Equal Opportunity Workplace

We are proud to be an equal opportunity workplace. We do not discriminate based upon race, religion, color, national origin, sex, sexual orientation, gender identity/expression, age, status as a protected veteran, status as an individual with a disability, or any other applicable legally protected characteristics.
