Data Engineer (Azure, NoSQL and Big Data Technologies)
Bengaluru - GTP, India
Synechron
Synechron is an innovative global consulting firm delivering industry-leading digital solutions to transform and empower businesses.

Purpose: The data engineer’s role is delivery-focused, driving data pipeline and data product delivery through data architecture, modeling, design, and development of professional-grade solutions on-premises and/or on the Microsoft Azure cloud. The data engineer will partner with data scientists and statisticians across Elanco global business functions to prepare and transform their data into data products that further drive scientific and/or business knowledge discovery, insights, and forecasting. The role involves working with teams spread globally across different time zones and requires a sound understanding of data management, data governance, master data management, and knowledge graph representation.
Role & Responsibilities:
Data Engineering Expertise:
- Provide data engineering subject-matter expertise and hands-on data capture, ingestion, curation, and pipeline development on Azure.
- Develop and deliver highly performant, scalable, robust, and secure Azure cloud data pipelines enabling AI/ML/deep learning, advanced analytics, enterprise collaboration, microservices, IoT, and serverless compute.
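As an illustrative sketch of the capture-curate-deliver pattern this role centres on, a minimal pipeline stage might look like the following. All function names, fields, and the CSV-to-JSON transformation are assumptions for illustration, not a prescribed Synechron or Elanco implementation.

```python
import csv
import io
import json

def ingest(raw_csv: str) -> list[dict]:
    """Capture: parse raw CSV text into records (hypothetical source)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def curate(records: list[dict]) -> list[dict]:
    """Curate: drop incomplete rows and normalise field types."""
    curated = []
    for rec in records:
        if not rec.get("species") or not rec.get("weight_kg"):
            continue  # skip incomplete records
        curated.append({
            "species": rec["species"].strip().lower(),
            "weight_kg": float(rec["weight_kg"]),
        })
    return curated

def deliver(records: list[dict]) -> str:
    """Deliver: serialise the curated data product as JSON lines."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

# Example run on toy data (the fields are made up for the sketch)
raw = "species,weight_kg\nDog,12.5\nCat,\n Horse ,450\n"
product = deliver(curate(ingest(raw)))
```

In a production Azure pipeline the same three stages would typically map onto managed services (e.g., ingestion via Data Factory, curation in Databricks, delivery to a serving store), but the stage boundaries stay the same.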
Technical Solutions:
- Provide expert guidance on Azure data PaaS offerings: storage, big data platform services, serverless architectures, Azure SQL Database, NoSQL databases, and secure, automated data pipelines.
- Participate in data/data-pipeline architectural discussions to build cloud-native solutions or migrate existing data applications from on-premises to the Azure platform.
- Perform current-state (“As-Is”) and future-state (“To-Be”) analysis.
Community and Collaboration:
- Participate in and help develop the data engineering community of practice as a global go-to expert panel/resource.
- Work collaboratively and use sound judgment in developing robust solutions while seeking guidance on complex problems.
Continuous Improvement:
- Develop and evolve new or existing data engineering methods and procedures to create alternative, agile solutions to moderately complex problems.
- Stay abreast of new and emerging data engineering technologies, tools, methodologies, and patterns on Azure and other major public clouds.
Technical Skills:
Programming and Frameworks:
- Proficiency in programming languages such as PowerShell, C#, Java, Python, Scala, and SQL, as well as big data technologies (Hadoop, Spark/Spark SQL, Hive).
- Experience with streaming technologies such as Kafka and Azure Event Hubs.
Azure Cloud Services:
- Experience with Azure-native data/big-data tools and services, including Blob Storage, ADLS, Azure SQL Database, Cosmos DB, NoSQL stores, SQL Data Warehouse, Databricks, Stream Analytics, and Power BI.
- Knowledge of distributed systems.
Automation and DevOps:
- Hands-on experience with Azure Data Factory, Windows/Linux virtual machines, Docker containers and Kubernetes cluster management, autoscaling, Azure Functions, serverless architecture, ARM templates, and Logic Apps.
- Familiarity with Azure Network Security Groups, key management services (e.g., Azure Key Vault), etc.
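For context, the ARM templates mentioned above declare Azure resources as JSON. A minimal, illustrative template for a storage account might look like the fragment below; the account name, apiVersion, and SKU are assumptions, not values from this posting.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "examplestorageacct",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```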
Day-to-Day Activities:
- Developing and maintaining data pipelines and data products.
- Collaborating with data scientists and statisticians to transform data into actionable insights.
- Participating in architectural discussions and performing “As-Is” and “To-Be” analysis.
- Staying updated with the latest advancements in data engineering technologies.
- Working independently and collaboratively on new database technologies and integrated services in Azure and GCP.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related discipline.
Experience:
- 4-7 years of IT experience, with at least 2 years of Azure experience.
- Proven experience in data pipeline and data product design, development, and delivery on Azure.
- Sound problem-solving skills in developing data pipelines using Databricks, Stream Analytics, and Power BI.
Soft Skills:
- Strong communication and leadership skills.
- Excellent interpersonal and collaboration skills.
- Ability to work under pressure and meet tight deadlines.
- Positive attitude and strong work ethic.
- A commitment to continuous learning and professional development.
SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Perks/benefits: Career development, flex hours