KR99 Senior Data Engineer

Remote

Cybernetic Controls Limited

Cybernetic Controls is one of the leading onshore and offshore recruitment agencies in the UK.



KR99 Senior Data Engineer

Department: Engineering

Employment Type: Full Time

Location: Remote

Reporting To: Lead Data Engineer


Description

Overview

At Cybernetic Controls Limited (CCL), we are committed to global leadership in providing innovative digital solutions that empower businesses to reach their full potential. As a remote-first company, we believe in empowering our employees to work in a way that best suits their individual needs, fostering a culture of flexibility and trust. Since our founding in 2020, we have successfully delivered high-quality resources to our clients in the FinTech sector across various business areas. Read more on the Cybernetic Controls website.

Our Client

Our client is a multi-award-winning RegTech company on a mission to transform the quality of regulatory reporting in the financial services industry. They combine regulatory expertise with advanced technology to deliver market-leading quality assurance services. Unique in being able to fully assess data quality, their services are used by some of the world's largest investment banks, asset managers, hedge funds and brokers, helping them to reduce costs, improve quality and increase confidence in their regulatory reporting.

Job Summary

Our client is seeking a Senior Data Engineer to join their fast-growing team. The successful candidate will join the testing team to work on ETL and development tasks. This is an exciting and challenging opportunity to build new pipelines that combine and process large amounts of structured data from a variety of sources, with the power of PySpark at your fingertips.

Key Responsibilities

  • Architect and build pipelines using AWS cloud computing solutions that make data available with robustness, maintainability, efficiency, scalability, availability and security. 
  • Develop Python and/or Spark code (preferably PySpark; Spark-Scala is also valuable) that implements complex data transformations. 
  • Design and maintain databases and APIs for storage and transmission of data between applications. 
  • Monitor pipelines in production (and develop tools to facilitate this). 
  • Work collaboratively with other team members (brainstorming, troubleshooting, and code review). 
  • Liaise with other development teams to ensure the integrity of data pipelines.
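To illustrate the kind of transformation work described above, here is a minimal sketch in plain Python (the dataset, column names, and validation rule are invented for the example; in this role the equivalent logic would typically be written against PySpark DataFrames in an AWS Glue job):

```python
import csv
import io

def transform_trades(raw_csv: str) -> list[dict]:
    """Toy ETL step: parse raw trade rows, drop malformed records,
    and derive a notional column (price * quantity)."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in rows:
        try:
            price = float(row["price"])
            qty = int(row["quantity"])
        except (KeyError, ValueError):
            # Quarantine-style handling: skip records that fail parsing,
            # much as a production pipeline would route them to a dead-letter store
            continue
        out.append({"trade_id": row["trade_id"], "notional": price * qty})
    return out

raw = "trade_id,price,quantity\nT1,10.5,100\nT2,bad,5\nT3,2.0,50\n"
print(transform_trades(raw))  # [{'trade_id': 'T1', 'notional': 1050.0}, {'trade_id': 'T3', 'notional': 100.0}]
```

The same filter-and-derive pattern maps directly onto PySpark's `filter`/`withColumn` operations once the data volume calls for distributed execution.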

Skills, Knowledge & Expertise

Skills:
  • Excellent Python and PySpark programming (including Pandas/PySpark dataframes, web and database connections) 
  • Excellent understanding of ETL processes within Amazon Web Services (AWS) 
  • Apache Spark, AWS Glue, Athena, S3, Step Functions, Lake Formation 
  • Software development lifecycle best practices 
  • Test-driven development 
  • Serverless computing (AWS Lambda, API Gateway, SQS, SNS, EventBridge, S3, etc.) 
  • SQL and NoSQL database design and management (DynamoDB, MySQL) 
  • Strong SQL coding skills (Spark-SQL, Presto SQL, MySQL, etc.) 
  • Infrastructure as code (CloudFormation) 
  • Experience in Shell Scripting (preferably Linux) 
  • Version control with Git/GitHub 
  • Agile principles, processes and tools
  • Excellent written and verbal communication skills. 
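As a small, self-contained sketch of the SQL skills listed above, the following uses Python's built-in SQLite module (the table and columns are invented for illustration; the same `GROUP BY` aggregation would look near-identical in Spark-SQL, Presto SQL, or MySQL):

```python
import sqlite3

# In-memory database standing in for a reporting store such as Athena or MySQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("rates", 100.0), ("rates", 50.0), ("fx", 25.0)],
)

# Aggregate notional per desk -- the style of data-quality query
# implied by the role's SQL requirements
totals = dict(conn.execute(
    "SELECT desk, SUM(notional) FROM trades GROUP BY desk ORDER BY desk"
).fetchall())
print(totals)  # {'fx': 25.0, 'rates': 150.0}
```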
Experience: 
  • Designing, deploying and managing complex production data pipelines that interact with a range of data sources (file systems, web, database, users)
  • Strong experience with Amazon Web Services (AWS) 
  • At least 5 years' experience in the data engineering field
  • At least 2 years' experience with PySpark and AWS data tools (particularly, Glue)
Knowledge: 
  • Data modeling, data pipeline architecture, Big Data implementation. 
  • Software development lifecycle best practices. 
  • Financial knowledge would be an asset.

Qualifications/Training: 
  • Bachelor’s degree or equivalent in Computer Science or a related subject. 

What you’ll get in return

  • Competitive salary package 
  • Private healthcare contribution 
  • Annual pay review 
  • Regular team socials 
  • Working within a culture of innovation and collaboration 
  • Opportunity to play a key role in a pioneering growth company
  • Company laptop will be provided
