Associate Consultant-Data Engineer
Bangalore, Karnataka, India
- Assess, capture, and translate complex business issues and solution requirements into structured technical tasks for the data engineering team, including rapid learning of industry standards and development of effective work stream plans
- Design, build, launch, optimize and extend full-stack data and business intelligence solutions spanning extraction, storage, complex transformation and visualization layers
- Support the build of big data environments that enable analytics solutions on a variety of big data platforms, including assessing the usefulness of new technologies and advocating for their adoption
- Continuously look for opportunities to improve the processes and efficiency of the data pipelines
- Continuously focus on improving the data quality and reliability of existing pipelines (a minimal quality-check sketch follows this list)
- Work with a variety of stakeholders and teams to build and modify data pipelines that meet business needs
- Create data access tools that let the analytics and data science teams leverage the data and develop new solutions
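As a concrete illustration of the data-quality and reliability responsibilities above, here is a minimal sketch of a quality gate a pipeline might run before publishing a dataset downstream. It assumes a PySpark environment and a hypothetical orders dataset; the paths, column names, and checks are illustrative only and not part of the posting.

```python
# Minimal data-quality gate sketch (assumes PySpark and a hypothetical
# "orders" dataset; paths and column names are illustrative).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_gate").getOrCreate()

orders = spark.read.parquet("/data/curated/orders")  # hypothetical input path

# Basic reliability checks before publishing downstream.
total_rows = orders.count()
null_keys = orders.filter(F.col("order_id").isNull()).count()
duplicate_keys = total_rows - orders.dropDuplicates(["order_id"]).count()

if total_rows == 0 or null_keys > 0 or duplicate_keys > 0:
    raise ValueError(
        f"Quality gate failed: rows={total_rows}, "
        f"null order_id={null_keys}, duplicates={duplicate_keys}"
    )

orders.write.mode("overwrite").parquet("/data/published/orders")  # hypothetical output path
```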
Essential skills required
- Education / professional qualifications
- Bachelor’s degree in Computer Science Engineering or an equivalent qualification, or relevant experience
- Certification in cloud technologies, especially Azure, is a plus
- Prior Experience:
- 2-3 years of development experience building and maintaining ETL/ELT pipelines that operate on a variety of sources, such as APIs, FTP sites, cloud-based blob stores, and relational and non-relational databases (see the ETL sketch after this list)
- Experience working with operational programming tasks, such as version control, CI/CD, testing and quality assurance
- Experience with Apache data projects (Hadoop, Spark, Hive, Airflow) or cloud platform equivalents (Databricks, Azure Data Lake Services, Azure Data Factory), and in one or more of the following programming languages: Python, Scala, R, Java, Golang, Kotlin, C, or C++ (see the orchestration sketch after this list)
- Experience with SDLC methodologies, particularly Agile, and with project management tools, preferably Azure DevOps
- Experience in managing small teams and ensuring collective business objectives are met
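To ground the ETL/ELT experience bullet above, the following is a minimal extract-transform-load sketch in Python. The REST endpoint, field names, and the SQLite target are hypothetical stand-ins for the real sources and warehouse such a pipeline would use.

```python
# Minimal ETL sketch: extract from a hypothetical REST API, apply a light
# transform, and load into a relational table. The URL, fields, and table
# name are illustrative assumptions, not part of the original posting.
import sqlite3

import requests

API_URL = "https://api.example.com/v1/customers"  # hypothetical endpoint


def extract() -> list[dict]:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> list[tuple]:
    # Keep only the fields the target table expects, normalising case.
    return [(r["id"], r["email"].lower(), r.get("country", "unknown")) for r in records]


def load(rows: list[tuple]) -> None:
    with sqlite3.connect("warehouse.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customers (id TEXT PRIMARY KEY, email TEXT, country TEXT)"
        )
        conn.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?, ?)", rows)


if __name__ == "__main__":
    load(transform(extract()))
```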
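To illustrate the orchestration tooling named above (Airflow, or a cloud equivalent), here is a minimal Airflow DAG sketch. The DAG id, schedule, and task bodies are placeholders, and it assumes Airflow 2.4+ for the `schedule` argument.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+; the DAG id, schedule,
# and task bodies are illustrative placeholders, not from the posting).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw data from source systems (API, FTP site, blob store)."""


def transform():
    """Clean and reshape the raw data."""


def load():
    """Publish curated data to the warehouse or lake."""


with DAG(
    dag_id="daily_customer_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```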
Behavioral / team skills
- Personal drive and positive work ethic to deliver results within tight deadlines and in demanding situations
- Flexibility to adapt to a variety of projects, working hours and work environments, and to manage multiple projects
- Excellent written and verbal communication skills
- Team player who is self-driven and able to work independently
Perks/benefits: Career development