Data Engineer - Python, Spark, SQL
Mumbai, India
NECSWS
NEC Software Solutions builds software and services that deliver better outcomes, keeping people safer, healthier and better connected.
Company Description
Our philosophy is to understand our customers’ business first before we get to the technology.
This approach leads to clever software; streamlining old processes, saving money and delivering positive change.
Our technology has helped the NHS screen millions of babies for hearing loss, ensures hundreds of housing providers are managing their homes efficiently, and helps officers in over a dozen different police forces make better decisions on the frontline.
Based in the UK but working around the world, our 2,000 employees help improve the services that matter most.
We are now part of the NEC Corporation, a leader in the integration of IT and network technologies that benefit businesses and people worldwide. This brings new opportunities without limits for growth and innovation.
Job Description
Role: Data Engineer
Experience: 7-10 years
Location: Mumbai Preferred, Open to PAN India
Job Summary:
Skills:
- Experience programming in Python, Spark and SQL
- Prior experience with AWS services (such as AWS Lambda, Glue, Step Functions, CloudFormation, CDK)
- Knowledge of building bespoke ETL solutions
- Data modelling and T-SQL for managing business data and reporting
- Capable of technical deep-dives into code and architecture
- Ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management
- Experience working with data science teams to refine and optimize data science and machine learning models and algorithms
- Effective communication skills
Responsibilities:
- Build data pipelines: Architecting, creating and maintaining data pipelines and ETL processes in AWS
- Support and transition: Support and optimize our current desktop data tool set and Excel analysis pipeline, and transition it to a highly scalable cloud-based architecture
- Work in an agile environment: Operate within a collaborative, cross-functional product team using Scrum and Kanban
- Collaborate across departments: Work closely with data science teams and with business analysts (economists/data analysts) to refine their data requirements for various initiatives and data consumption needs
- Educate and train: Train colleagues such as data scientists, analysts, and stakeholders in data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases
- Ensure compliance and governance during data use: Ensure that data users and consumers use the data provisioned to them responsibly, supported by data governance and compliance initiatives
- Work within, and encourage, a DevOps culture and Continuous Delivery process
Perks/benefits: Career development