Research Data Engineer
Boston, MA
GMO
GMO partners with sophisticated institutions, financial intermediaries, and families to provide innovative solutions to meet their long-term investment needs. Founded in 1977, GMO is a global investment manager committed to delivering superior long-term investment performance and advice to our clients. We offer strategies and solutions where we believe we are positioned to add the greatest value for our investors. These include multi-asset class, equity, fixed income, and alternative offerings. We manage approximately $65bn for a client base that includes many of the world's most sophisticated institutions, financial intermediaries, and private clients.

Industry-wide, we are well known for our focus on valuation-based investing, willingness to take bold positions when conditions warrant, and candid and academically rigorous thought leadership. Jeremy Grantham, GMO's Co-Founder and Long-Term Investment Strategist, is renowned as an expert in identifying speculative investment bubbles and also as a leading climate investor and advocate.

GMO is privately owned and employs over 430 people worldwide. We are headquartered in Boston, with additional offices in Europe, Asia, and Australia. Our company-wide culture emphasizes commitment to clients, intellectual curiosity, and open debate. We celebrate and respect our differences, while embracing and valuing what each of us brings to work, as we know that diverse teams in an inclusive, caring environment achieve higher engagement and better client results.
Please follow the prompts included in this job posting to apply. The application window for this role is anticipated to remain open until the job is filled, or as otherwise determined by GMO.
Position Overview

GMO has undertaken a strategic, firm-wide initiative to build a next-generation research and investment platform. Leveraging a blend of traditional RDBMS and Big Data technologies, this initiative will provide a collaboration platform to help our investment teams continue to set the standard for investment performance for our clients. We are seeking an experienced Engineer to join the Research Data Engineering team to maintain the existing on-premises platform and help build out the cloud-based future platform and its leading-edge capabilities.
Required Skills and Experience:
- 5+ years of data warehouse engineering experience (SQL Server, T-SQL)
- 3–5 years of working experience with Apache Spark/Databricks
- Expertise in data warehousing concepts such as persistent staging, slowly changing dimensions (SCD), dimensional modeling, and time series analysis
- Strong experience with relevant programming languages and tools such as Scala, Python, C#, .NET, unit test frameworks, CI/CD
- Production experience implementing efficient ETL and ELT processes for large data sets, preferably with market and trading data providers
- Experience with workflow orchestration tools, particularly Apache Airflow, including designing, implementing, and maintaining DAGs to automate complex data pipelines
- Direct experience developing and optimizing containerized applications using Docker, with knowledge in designing and deploying Kubernetes resources (e.g., deployments, services) to support scalable and efficient application workflows
- Proven ability to build Big Data solutions using Python and Spark or equivalent scale-out solutions
- Production experience with public clouds such as Azure (preferred) or AWS
- Familiarity with at least one NoSQL database (e.g., MongoDB, Cassandra, HBase, CouchDB, BigTable, DynamoDB, CosmosDB)
- Must be meticulous and able to prioritize work and effectively manage multiple tasks within given deadlines and parameters
Preferred Experience:
- Prior employment at an asset management company
- Complex time series and multi-dimensional data
- On-prem to cloud migration projects
- Data Science (machine learning / deep learning / artificial intelligence)
- Advanced degree in computer science/engineering, math, science, or related field