Data Engineer
Singapore
Terrascope
The Easiest Platform to Measure and Decarbonize Land, Nature, and the Net-Zero Economy
Terrascope is a leading decarbonisation software platform designed for the Land and Nature (LAN) and Net-Zero Economy (NZE) sectors. As the easiest-to-use platform for these sectors, our comprehensive solution blends deep industry expertise with advanced climate science, data science, and machine learning, enabling companies to effectively manage emissions across their supply chains.
Our integrated platform offers solutions for Product and Corporate Carbon Footprinting, addressing Scope 3 and land-based emissions, SBTi FLAG & GHG Protocol LSR reporting, and supporting enterprise decarbonisation goals.
Publicly launched in June 2022, Terrascope works with customers across sectors, from agriculture, food & beverages, manufacturing, retail and luxury, to transportation, real estate, and TMT.
Terrascope is globally headquartered in Singapore and operates in major markets across APAC, North America, and EMEA. Terrascope is a partner of the Monetary Authority of Singapore's ESG Impact Hub, a CDP Gold Accredited software provider, and a signatory of The Climate Pledge to achieve Net Zero by 2040, and has been independently assured by Ernst & Young.
We are seeking a Senior Data Engineer to design, build, and optimize our data infrastructure, ensuring scalable and reliable data pipelines for ingestion, transformation, and analytics.
This role is a hybrid of Data Engineering (building robust data pipelines, optimizing data models, and managing infrastructure) and Analytics Engineering (shaping business-ready datasets, supporting BI tools, and guiding the organization's data modeling strategy).
This role is ideal for candidates who thrive in a startup environment, are passionate about data architecture and analytics, and are eager to solve real-world sustainability challenges.
Key Responsibilities
Data Engineering
- Build and optimize scalable data pipelines for ingestion, transformation, and storage.
- Work with structured and unstructured data, handling diverse file formats such as CSV, Excel, JSON, and PDFs.
- Extract, parse, and integrate emission factor databases (EFDBs) from external sources, structuring them for ingestion, efficient retrieval, and analysis.
- Use orchestration tools such as Apache Airflow, Dagster, or equivalent to manage and automate data workflows.
- Ensure data integrity, consistency, and governance while working with high-volume datasets.
- Develop and optimize queries in MongoDB and SQL for high-performance querying.
- Manage and deploy data infrastructure on cloud providers such as AWS.
- Consolidate data models written in Node.js and Python across different systems, ensuring a single source of truth for internal applications.
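The EFDB ingestion work above can be sketched in miniature. The sketch below is illustrative only: the column names, units, and internal schema are hypothetical stand-ins for whatever a real external EFDB export contains, and it shows the normalization step that makes heterogeneous source files queryable under one schema.

```python
import csv
import io

# Hypothetical raw EFDB export: column names and units vary by source,
# so a normalization pass maps each file onto one internal schema
# before it is loaded into the warehouse.
RAW_EFDB_CSV = """Activity,EF Value,EF Unit
Diesel combustion,2.68,kgCO2e/litre
Grid electricity,0.41,kgCO2e/kWh
"""

def normalize_efdb(text):
    """Parse a raw EFDB CSV and emit records in a single internal schema."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            "activity": row["Activity"].strip().lower(),  # canonical key
            "factor": float(row["EF Value"]),             # numeric, not string
            "unit": row["EF Unit"].strip(),
        })
    return records

records = normalize_efdb(RAW_EFDB_CSV)
print(records[0])
```

In practice each step here would be a task in an Airflow or Dagster pipeline, with per-source parsers feeding the same normalization layer.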
Analytics Engineering
- Design and implement robust data models to support analytics, business intelligence (BI), and data science.
- Advise on data modeling strategies to optimize performance, maintainability, and scalability.
- Enable BI reporting and self-service analytics by preparing analytics-ready datasets.
- Work with BI tools such as Tableau, GoodData, Power BI, Looker, Metabase, or equivalent to build visualizations and dashboards.
- Optimize query performance, materialized views, and aggregation strategies for efficient reporting.
- Collaborate closely with data scientists and the Product, Implementations, and Sales teams to provide actionable insights.
- Ensure that datasets are properly indexed and structured for fast and efficient access.
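The aggregation-strategy bullet above can be illustrated with a minimal sketch. This uses an in-memory SQLite database purely for portability; the table names and columns are hypothetical, and the pre-aggregated table stands in for a materialized view that a BI dashboard would query instead of scanning raw rows.

```python
import sqlite3

# Illustrative only: pre-aggregate raw emission rows into a reporting
# table so dashboards read a small summary instead of the full fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emissions (supplier TEXT, scope INTEGER, kg_co2e REAL);
INSERT INTO emissions VALUES
  ('acme', 3, 120.0),
  ('acme', 3, 80.0),
  ('globex', 1, 50.0);

-- Stand-in for a materialized view, refreshed on a schedule:
CREATE TABLE emissions_by_supplier AS
  SELECT supplier, scope, SUM(kg_co2e) AS total_kg_co2e
  FROM emissions
  GROUP BY supplier, scope;
""")

rows = conn.execute(
    "SELECT supplier, total_kg_co2e FROM emissions_by_supplier ORDER BY supplier"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

On PostgreSQL the same idea would use `CREATE MATERIALIZED VIEW` with a scheduled `REFRESH`, plus indexes on the grouping columns for fast dashboard filters.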
What We Are Looking For
- 5 to 8 years of experience in Data Engineering and/or Analytics Engineering roles.
- Strong knowledge of Python (including pandas library) and/or Node.js, SQL.
- Proven experience with SQL (PostgreSQL) and/or NoSQL databases (MongoDB or AWS DocumentDB), including indexing, partitioning, and query optimization.
- Experience with data modeling (designing database schemas, tables, entity-relationship diagrams, etc.).
- Strong data structures and algorithms knowledge and understanding of time and space complexity.
- Experience working with scheduler tools like Apache Airflow, Dagster, or similar frameworks.
- Ability to parse and process unstructured data, including PDFs, Excel, and other file formats.
- Experience working with large-scale data systems and optimizing query performance.
- Hands-on experience with AWS, GCP, or Azure for data storage, compute, databases, and security.
- Knowledge of DevOps and Infrastructure-as-Code (Terraform, GitOps, Kubernetes) and CI/CD tools (GitHub Actions, Argo CD).
- Passion or willingness to learn about sustainability and carbon emissions data.
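The data-structures and time-complexity expectation above is concrete even at small scale. The toy example below (hypothetical data, not from the posting) contrasts an O(n)-per-query linear scan with an O(1) average-case dict lookup built once up front, the same trade-off that database indexing makes at scale.

```python
# Hypothetical emission-factor records used to illustrate lookup complexity.
factors = [
    {"activity": "diesel combustion", "factor": 2.68},
    {"activity": "grid electricity", "factor": 0.41},
]

# O(n) per query: scan every record until the key matches.
def scan(activity):
    return next(f["factor"] for f in factors if f["activity"] == activity)

# One-off O(n) build, then O(1) average-case lookups, like a DB index.
index = {f["activity"]: f["factor"] for f in factors}

print(scan("grid electricity"), index["grid electricity"])  # 0.41 0.41
```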
Nice to Have
- Experience with other NoSQL and SQL databases beyond MongoDB.
- Hands-on experience with streaming data architectures such as Kafka.
- Exposure to distributed computing frameworks such as Spark.
- Knowledge and practical experience in AWS EC2, S3, Glue, IAM and CloudWatch.
- Experience in data security, governance, and compliance best practices.
- Background in carbon accounting methodologies or sustainability-related data processing.
- Experience working in a SaaS-product company and/or startups, and comfortable with change and ambiguity.
We're committed to creating an inclusive environment for our strong and diverse team. We value diversity and foster a community where everyone can be their authentic self.