Senior Data Engineer IND (Remote)



RemoteStar

Hire the best person for the job, no matter where they are. Hire the right way, hire remote.


Our Client: A leading revenue intelligence platform combining automation and human research to deliver 95% data accuracy across its published contact data. With a growing database of 5 million+ human-verified contacts and over 70 million machine-processed contacts, it offers one of the largest collections of direct-dial contacts in the industry. A dedicated research team re-verifies contacts every 90 days, ensuring exceptional data accuracy and quality.


Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management.


Responsibilities:

  • Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies. 
  • Identify and implement internal process improvements like automating manual tasks and optimizing data flows for better performance and scalability. 
  • Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs. 
  • Collaborate with machine learning and analytics experts to support advanced data use cases.

Key Requirements:
  • Bachelor’s degree in Engineering, Computer Science, or a relevant technical field.
  • 10+ years of recent experience in Data Engineering roles.
  • Minimum 5 years of hands-on experience with Apache Spark, with strong understanding of Spark internals.
  • Deep knowledge of Big Data concepts and distributed systems.
  • Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
  • Expertise in SQL, and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
  • Strong cloud experience with Databricks, including Delta Lake. 
  • Experience working with data formats like Delta Tables, Parquet, CSV, JSON.
  • Comfortable working in Linux environments and scripting.
  • Comfortable working in an Agile environment.
  • Machine Learning knowledge is a plus. 
  • Must be capable of working independently and delivering stable, efficient, and reliable software.
  • Experience supporting and working with cross-functional teams in a dynamic environment.





