Data Engineer

Gurgaon, India


Responsibilities :

- Design, develop, maintain, and own data engineering projects built from requirements gathered from stakeholders, including but not limited to product teams, data scientists, data analysts, and BI.
- Fulfill requirements related to the data pipelines, data warehouses, and databases that applications depend on.
- Create ingestion pipelines to collect data from different sources and push to targets.
- Design and create SQL tables, stored procedures, functions, etc.
- Operate and maintain the existing Hadoop cluster.
- Write Spark jobs/applications, mainly in Java.
- Evaluate, test and update data applications according to changing requirements.
- Monitor and report data systems health and security.
- Support internal teams in using our systems; troubleshoot, identify, and resolve issues.
- Follow best coding practices and participate in the code review process.
- Use the development environment, build tools, version control, and bug-tracking systems efficiently across all projects.

Technical Competencies :

Must Have :
- 2-4 years of experience in Data Engineering.
- Good knowledge of/experience in Java, C#, and .NET Core.
- Good knowledge of/experience with the Hadoop stack, Oozie, Spark, and Kafka.
- Hands-on experience writing SQL queries, stored procedures, and database code.
- Good knowledge of the concepts, design, and mechanics of traditional databases, data warehouses, and data lakes.
- Good knowledge of ETL, ELT, transformation, processing, computation, etc.
- Experience with relational databases such as MySQL, PostgreSQL, MariaDB.
- Good Knowledge/Experience with Git, CI/CD, and writing unit tests.
- Good Knowledge/Experience working on an agile development project.
- Good Knowledge/Experience with Jira and the Atlassian tool suite.
- Good knowledge of/experience in Linux shell scripting.

Nice to Have :

- Experience with NoSQL databases such as Aerospike, DynamoDB, Redis, Elasticsearch, Cassandra, MongoDB
- Good knowledge of Snowflake databases, schemas, tables, procedures, functions, views, stages, pipes etc.
- Knowledge of at least one ETL tool.
- Experience diagnosing and resolving performance and scalability issues.
- Knowledge of Python is a plus

Non-Technical Competencies :

- Good English and communication skills.
- Positive and 'can do' attitude.
- Able to work independently as well as in a team.
- Able to take up challenges and come up with innovative solutions.
- We highly appreciate self-starters, initiators and contributors.

Category: Engineering Jobs

Tags: Agile Cassandra CI/CD Data pipelines DynamoDB Elasticsearch ELT Engineering ETL Git Hadoop Java Jira Kafka Linux MariaDB MongoDB MySQL NoSQL Oozie Pipelines PostgreSQL Python RDBMS Security Shell scripting Snowflake Spark SQL

Region: Asia/Pacific
Country: India
