Data Engineer

Toronto, Canada


73 Strings

Empowering financial asset managers: valuations and portfolio monitoring with AI and advanced data intelligence.



Overview of 73 Strings:

73 Strings is an innovative platform providing comprehensive data extraction, monitoring, and valuation solutions for the private capital industry. The company's AI-powered platform streamlines middle-office processes for alternative investments, enabling seamless data structuring and standardization, monitoring, and fair value estimation at the click of a button. 73 Strings serves clients globally across various strategies, including Private Equity, Growth Equity, Venture Capital, Infrastructure, and Private Credit.

Our 2025 $55M Series B, the largest in the industry, was led by Goldman Sachs, with participation from Golub Capital and Hamilton Lane, with continued support from Blackstone, Fidelity International Strategic Ventures and Broadhaven Ventures.

About the Role:

We are seeking a client-facing Data Engineer with 3-4 years of hands-on experience in Azure, Databricks, and API integration. In this role, you will develop and maintain data pipelines and solutions that support analytics, AI, and business intelligence, and you will also manage client integration projects and provide technical consulting.

Key Responsibilities:

  • Develop, optimize, and maintain scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Azure data services, ensuring alignment with client requirements.
  • Design and implement data architectures, including data lakes and data warehouses, to support both internal and client-facing analytics.
  • Integrate and process data from multiple sources via REST and SOAP APIs, assisting clients in understanding integration workflows.
  • Build and optimize Spark-based data transformations in Databricks using Python, Scala, or SQL.
  • Ensure high standards of data quality, security, and compliance across all pipelines and storage solutions, advising clients on best practices.
  • Collaborate with cross-functional teams and client stakeholders to gather requirements and deliver tailored, actionable datasets.
  • Monitor, troubleshoot, and optimize Databricks clusters and workflows for reliability and performance, providing technical support and expertise directly to clients when needed.
  • Document data processes, pipelines, and best practices, creating clear communication materials for both technical and non-technical audiences.
  • Work with clients on their integration projects with the 73 Strings platform, sharing best practices and enabling data connectivity.

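As a rough illustration of the standardization work described above (the schema, field names, and data here are hypothetical examples, not taken from the 73 Strings platform), a minimal pipeline step in Python might look like:

```python
from datetime import datetime

def standardize_record(raw: dict) -> dict:
    """Normalize one raw portfolio record into a common schema.
    All field names are illustrative assumptions only."""
    return {
        # Trim whitespace and normalize capitalization of the fund name
        "fund_name": raw.get("fund", "").strip().title(),
        # Coerce missing or null valuations to 0.0
        "valuation_usd": float(raw.get("valuation") or 0),
        # Convert DD/MM/YYYY source dates to ISO 8601
        "as_of": datetime.strptime(raw["date"], "%d/%m/%Y").date().isoformat(),
    }

# Example raw rows as they might arrive from an upstream API or file drop
raw_rows = [
    {"fund": "  growth fund i ", "valuation": "1250000.50", "date": "31/03/2025"},
    {"fund": "venture fund ii", "valuation": None, "date": "31/03/2025"},
]
clean = [standardize_record(r) for r in raw_rows]
```

In practice a step like this would typically run as a Spark transformation in Databricks (for example via DataFrame operations or a UDF) rather than plain Python, but the standardization logic is the same.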

Requirements:

  • 3-4 years of professional experience with Azure cloud services, especially Databricks, Data Lake, and Data Factory.
  • Strong programming skills in Python and SQL.
  • Hands-on experience building and consuming APIs for data ingestion and integration.
  • Good understanding of Spark architecture and distributed data processing concepts.
  • Familiarity with data modeling, data warehousing, and big data solutions aligned with client needs.
  • Knowledge of data security, governance, and compliance within cloud environments.
  • Excellent communication skills, with experience explaining technical concepts to clients and collaborating effectively in a team environment.

Preferred:

  • Experience with DevOps practices, CI/CD pipelines, and automation in Azure/Databricks environments.
  • Exposure to real-time data streaming technologies (e.g., Kafka) and advanced analytics solutions.

Education:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
