Software Data Engineer (Pune)

Pune, India

Addepar

A platform built to simplify complexity. Addepar empowers investment professionals across the globe with data, insights and cutting-edge technology.

Who We Are

Addepar is a global technology and data company that helps investment professionals provide the most informed, precise guidance for their clients. Hundreds of thousands of users have entrusted Addepar to empower smarter investment decisions and better advice over the last decade. With client presence in more than 45 countries, Addepar’s platform aggregates portfolio, market and client data for over $6 trillion in assets. Addepar’s open platform integrates with more than 100 software, data and services partners to deliver a complete solution for a wide range of firms and use cases. Addepar embraces a global flexible workforce model with offices in Silicon Valley, New York City, Salt Lake City, Chicago, London, Edinburgh and Pune.

The Role

Portfolio Data Integration is part of the broader Addepar Platform team. The overall Addepar Platform provides a single source of truth “data fabric” used by the Addepar product set, including a centralized and self-describing repository (a.k.a. Data Lake), a set of API-driven data services, an integration pipeline, analytics infrastructure, warehousing solutions, and operating tools. The team is responsible for all data acquisition, conversion, cleansing, disambiguation, modeling, tooling and infrastructure related to the integration of client portfolio data.

Addepar’s core business relies on the ability to quickly and accurately ingest data from a variety of sources, including 3rd party data providers, custodial banks, data APIs, and even direct user input. Portfolio data integrations and feeds are a highly critical cross-section of this set, allowing our users to get automatically updated and reconciled information about their latest holdings on the platform.

As a software data engineer on this team, you will develop new data integrations and maintain existing processes to expand and improve our data platform. You’ll add automation and functionality to our distributed data pipelines by writing PySpark code and integrating it with our Databricks Data Lake. As you gain experience, you’ll contribute to increasingly challenging engineering projects within our platform, with the ultimate goal of dramatically improving the efficiency of data ingestion for Addepar. This is a crucial, highly visible role within the company. Your team plays a key part in growing and serving Addepar’s client base with minimal manual effort required from our clients or from our internal data operations team.
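For a purely illustrative flavor of this kind of work, the sketch below shows a minimal PySpark ingestion step that reads a hypothetical custodial positions feed, applies basic cleansing, and writes the result to a Delta table in a lake. The feed path, column names and target table are assumptions for illustration only, not Addepar’s actual pipeline.

# Illustrative sketch only; paths, schema and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("portfolio-feed-ingest").getOrCreate()

# Read a hypothetical custodial positions feed delivered as CSV with a header row.
positions = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/feeds/custodian_x/positions/latest.csv")
)

# Basic cleansing: normalize column names, drop rows missing an account id,
# and deduplicate on (account_id, security_id).
cleaned = (
    positions
    .toDF(*[c.strip().lower().replace(" ", "_") for c in positions.columns])
    .filter(F.col("account_id").isNotNull())
    .dropDuplicates(["account_id", "security_id"])
)

# Append the cleansed feed to a Delta table so downstream jobs can consume it.
cleaned.write.format("delta").mode("append").saveAsTable("portfolio.positions_raw")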

What You’ll Do

  • Own individual project priorities, deadlines, and deliverables
  • Build pipelines that support the ingestion, analysis, and enrichment of financial data in partnership with business data analysts
  • Improve the existing pipeline to increase the throughput and accuracy of data
  • Develop and maintain efficient process controls and accurate metrics to ensure quality standards and organizational expectations are met
  • Partner with members of Product and Engineering to design, test, and implement new processes and tooling features that improve data quality as well as increase operational efficiency
  • Identify opportunities for automation and implement improvements
  • Understand data models and schemas, and work with other engineering teams to recommend extensions and changes

Who You Are

  • A computer science degree or equivalent experience
  • 2+ years of professional software engineering experience
  • Competency with relevant programming languages (Java, Python)
  • Familiarity with relational databases and data pipelines
  • Experience and interest in data modeling
  • Knowledge of financial concepts (e.g., stocks, bonds) is encouraged but not necessary
  • Passion for the finance and technology space and solving previously intractable problems at the heart of investment management
  • Experience with data lakes or data platforms such as Databricks or Snowflake is highly preferred
  • Experience with any public cloud (AWS preferred)

Our Values 

  • Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes.
  • Build Together - Collaborate to unlock the best solutions. Deliver lasting value. 
  • Champion Our Clients - Exceed client expectations. Our clients’ success is our success. 
  • Drive Innovation - Be bold and unconstrained in problem solving. Transform the industry. 
  • Embrace Learning - Engage our community to broaden our perspective. Bring a growth mindset. 

In addition to our core values, Addepar is proud to be an equal opportunity employer. We seek to bring together diverse ideas, experiences, skill sets, perspectives, backgrounds and identities to drive innovative solutions. We commit to promoting a welcoming environment where inclusion and belonging are held as a shared responsibility.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

PHISHING SCAM WARNING: Addepar is among several companies recently made aware of a phishing scam involving con artists posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote “interviews,” and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from Addepar without a formal interview process. Additionally, Addepar will not ask you to purchase equipment or supplies as part of your onboarding process. If you have any questions, please reach out to TAinfo@addepar.com.
