Data Engineer (FDE)

Pune, Maharashtra, India

Codvo.ai

Codvo AI delivers strategic enterprise solutions that transform your data into measurable value. We help businesses accelerate growth through custom AI implementations that adapt and scale with your needs.


Data Integration Engineer:
We are looking for a motivated and experienced data engineer to join a growing data science team. As a senior data engineer, you will work closely with customers, data scientists, and solution architects to build robust, production-ready data infrastructure and data pipelines that help scale machine learning and analytics solutions. In this role, you will drive the development of data engineering solutions from initial experimentation to production deployment. You will also work with data science leadership to develop internal tools for rapid ingestion and integration of customer data into selected cloud platforms and other SaaS solutions.
Responsibilities:
• Collaborate within a global data science team to develop scalable and robust data integration infrastructure.
• Engage directly with customers and partners to define and develop data requirements based on functional requirements.
• Build custom data integration pipelines from existing source systems into cloud platforms such as AWS and Microsoft Azure.
• Enable data ingestion, pre-processing, and custom data wrangling from filesystems, databases, queues, and streams to support rapid prototyping.
• Work with customers to develop custom data handlers and connectors as needed.
• Perform a variety of data loads and data transformations.
• Improve database and application performance through fine-tuning.
• Work with other project teams on data integration and data lake requirements.
• Automate processes to improve application stability and performance.
Qualifications/Requirements
• BS or MS degree in Computer Science, Engineering, or IT.
• Minimum of 5 years of professional experience in data engineering, databases, and/or business analytics.
• Experience reading and parsing industrial P&ID (piping and instrumentation diagram) and PFD (process flow diagram) documents.
• Solid background and experience in SQL, Python, and/or Java/Scala.
• Minimum of 3 years' experience in ETL and data pipeline design for heterogeneous data such as time series, streams, queues, and text.
• Familiarity with containers and container services such as Docker, Docker registries, and Kubernetes.
• Knowledge of test-driven development and agile software development methodologies and tools.
• Excellent verbal and written communication skills to work in a global team and with customers on a regular basis.
Location: Bangalore (remote during COVID)
Work timings: 2:30 PM to 11:30 PM
Notice period: Immediate to 30 days


