Data Engineer

Toronto, Canada

73 Strings

Empowering financial asset managers: valuations and portfolio monitoring with AI and advanced data intelligence.

Overview of 73 Strings

73 Strings is an innovative platform providing comprehensive data extraction, monitoring, and valuation solutions for the private capital industry. The company's AI-powered platform streamlines middle-office processes for alternative investments, enabling seamless data structuring and standardization, monitoring, and fair value estimation at the click of a button. 73 Strings serves clients globally across various strategies, including Private Equity, Growth Equity, Venture Capital, Infrastructure, and Private Credit.

Our 2025 $55M Series B, the largest in the industry, was led by Goldman Sachs with participation from Golub Capital and Hamilton Lane, and continued support from Blackstone, Fidelity International Strategic Ventures, and Broadhaven Ventures.

About the Role

We are seeking a Data Engineer with hands-on experience in Azure, Databricks, and API integration. You will design, build, and maintain robust data pipelines and solutions that power analytics, AI, and business intelligence across the organization.

Key Responsibilities

- Develop, optimize, and maintain ETL/ELT pipelines using Azure Data Factory, Databricks, and related Azure services.

- Build scalable data architectures, including data lakes and data warehouses.

- Integrate and process data from diverse sources via REST and SOAP APIs.

- Design and implement Spark-based data transformations in Databricks using Python, Scala, or SQL (a minimal sketch follows this list).

- Ensure data quality, security, and compliance across all pipelines and storage solutions.

- Collaborate with cross-functional teams to understand data requirements and deliver actionable datasets.

- Monitor, troubleshoot, and optimize Databricks clusters and data workflows for performance and reliability.

- Document data processes, pipelines, and best practices.
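
By way of illustration, the minimal PySpark sketch below shows the flavor of this work: pulling records from a REST API and landing them in a Delta table with a light transformation. The endpoint, field names, and table name are all hypothetical, not part of any 73 Strings system.

```python
import requests
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical REST endpoint; any JSON API returning a list of records works the same way.
API_URL = "https://api.example.com/v1/positions"

spark = SparkSession.builder.appName("positions-ingest").getOrCreate()

# Fetch one page of records from the API (assumed to be a JSON list of objects).
records = requests.get(API_URL, timeout=30).json()

# An explicit schema for the (hypothetical) payload keeps ingestion predictable.
schema = StructType([
    StructField("position_id", StringType()),
    StructField("fund", StringType()),
    StructField("fair_value", DoubleType()),
])

df = spark.createDataFrame(records, schema=schema)

# Light transformation: de-duplicate and stamp the ingestion time.
clean = (
    df.dropDuplicates(["position_id"])
      .withColumn("ingested_at", F.current_timestamp())
)

# Delta is the default table format on Databricks.
clean.write.format("delta").mode("append").saveAsTable("bronze.positions")
```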

Required Skills & Qualifications

- Proven experience with Azure cloud services, especially Databricks, Data Lake, and Data Factory.

- Strong programming skills in Python, SQL, and/or Scala.

- Experience building and consuming APIs for data ingestion and integration.

- Solid understanding of Spark architecture and distributed data processing.

- Familiarity with data modeling, data warehousing, and big data best practices.

- Knowledge of data security, governance, and compliance within cloud environments.

- Excellent communication and teamwork skills.

Preferred

- Experience with DevOps tools, CI/CD pipelines, and automation in Azure/Databricks environments.

- Exposure to real-time data streaming (e.g., Kafka) and advanced analytics solutions (a minimal streaming sketch follows this list).
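
For the streaming item above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic into a Delta table. The broker address, topic, checkpoint path, and table name are hypothetical; outside Databricks this also requires the spark-sql-kafka package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("valuation-events-stream").getOrCreate()

# Subscribe to a Kafka topic as a streaming source (broker and topic are hypothetical).
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "valuation-events")
         .load()
)

# Kafka delivers key/value as bytes; decode the value and keep the event timestamp.
events = stream.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

# Append to a Delta table; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/valuation-events")
          .toTable("bronze.valuation_events")
)
query.awaitTermination()
```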

Education

- Master’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
