Lead Data Engineer - Remote US

Work From Home - USA

Convera

Convera helps over 26,000 businesses manage FX risk and streamline cross-border payments—securing more value in every transaction.



Lead Data Engineer – Remote

We are seeking an experienced Lead Data Engineer to oversee the development and use of our data systems. This role reports to the Sr. Manager – Data Engineering and works with our dynamic team in the foreign exchange payments processing industry.

You will define and implement the enterprise data architecture strategy and ensure robust data governance across the organization. This requires a deep understanding of business processes, technology, data management, and regulatory compliance. You will work closely with business and IT leaders to ensure that the enterprise data architecture supports business goals and that data governance policies and standards are followed across the organization.

Your responsibilities will include partnering with data engineers, analysts, cross-functional teams, and other stakeholders to ensure that our data platform meets the needs of the organization and supports our data-driven initiatives. You will also build a new data platform, integrate data from various sources, and ensure data availability for application and reporting needs. In addition, you should have experience working with AI/ML technologies and collaborating with data scientists to meet their data requirements.

In your role as a Lead Data Engineer, you will:

  • Architect and Develop Data Solutions: Lead the end-to-end design and development of robust data pipelines and data architectures using AWS tools and platforms, including AWS Glue, S3, RDS, Lambda, EMR, and Redshift.
  • Optimize ETL Processes: Design and optimize ETL workflows to facilitate the efficient extraction, transformation, and loading of data between diverse source and target systems, including data warehouses, data lakes, and both internal and external platforms.
  • Collaborate and Design Data Models: Partner with stakeholders and business units to develop data models that align with business needs, analytical requirements, and industry standards.
  • Data Integration and Architecture Maintenance: Collaborate with internal and external teams to design, implement, and maintain data integration solutions, ensuring high data integrity, consistency, and accuracy.
  • Implementation and Troubleshooting: Oversee the implementation of data solutions from initial concept through to production. Troubleshoot and resolve complex technical issues to ensure data pipeline stability and high performance.
  • Leadership and Mentorship: Provide guidance and leadership to engineering teams, promoting a culture of continuous improvement, knowledge sharing, and technical excellence. Mentor junior engineers and foster their professional growth.
  • Innovation and Strategy: Drive technical innovation by staying abreast of industry trends and emerging technologies. Influence technical strategies and decisions to align with organizational goals and objectives.
  • Documentation and Best Practices: Develop and maintain comprehensive documentation for data architectures, pipelines, and processes. Establish and enforce best practices for data engineering and quality assurance.

Required Qualifications

  • 10+ years of experience in enterprise data architecture, data management, data governance, or a related field
  • Expert-level Python or PySpark for complex data engineering tasks
  • Advanced SQL: performance tuning, query optimization, window functions, CTEs
  • Bash scripting
  • Deep understanding of data lake, lakehouse, and warehouse architectures
  • Experience with schema evolution, partitioning, and metadata management
  • Building and optimizing large-scale ETL/ELT pipelines with tools such as Apache Airflow, dbt, Spark, and Kafka
  • Expertise in AWS (S3, Glue, EMR, Redshift, DynamoDB, Lambda)
  • Designing and implementing data quality frameworks
  • Strong understanding of data governance, lineage, and compliance
  • CI/CD for data pipelines and version control with Git/GitHub

Preferred Qualifications

  • Snowflake and AWS certifications
  • Data modeling experience
  • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience

About Convera

Convera is a global leader in commercial payments that powers international business by moving money with ease. We provide tech-led payment solutions to help more than 26,000 customers globally grow with confidence, from small businesses to CFOs and treasurers. As experts in foreign exchange, risk and compliance, with an unrivaled regulatory footprint, Convera’s financial network spans more than 140 currencies and 200 countries and territories.

Our teams care deeply about the value we deliver to our customers, which makes Convera a rewarding place to work. This is an exciting time for our organization as we expand our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. 

As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.  

We offer an abundance of competitive perks and benefits including: 

  • Competitive salary
  • Opportunity to earn a bonus (dependent on performance)
  • Great career growth and development opportunities in a global organization 
  • Corporate benefits 

There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform business-to-business payments.

Apply now!


