Lead Data Engineering
Rosario, Argentina
PwC
We are a community of solvers combining human ingenuity, experience and technology innovation to help organisations build trust and deliver sustained outcomes.

Line of Service: Tax
Industry/Sector: Not Applicable
Specialism: Transfer Pricing
Management Level: Manager

Job Description & Summary
A career in our STEM Line of Service, within Technology Strategy services, will provide you with the opportunity to help organisations develop strategies that transform their technology capabilities and solve their most critical challenges. We focus on building technology-enabled and agile operating models, planning their new enterprise architecture into a differentiating capability system that helps them win in the market, leveraging digital analytics to enhance the customer experience and optimise business operations, and using modern management techniques such as robotic process automation and next-generation sourcing strategies to help our clients get fit for growth.
Key Responsibilities:
- Lead the design and development of data architectures using Databricks and Python.
- Implement data solutions leveraging Microsoft technologies such as Azure and SQL Server.
- Guide the team in optimizing data processing with Docker.
- Collaborate with cross-functional teams, including international teams from regions like India, Mexico, and Argentina, to deliver data-driven insights.
- Ensure data quality and security across platforms.
- Stay updated with industry trends and best practices.
- Design, implement, and optimize ETL/ELT workflows using Azure Databricks and Python.
- Handle ingestion and transformation of structured, semi-structured, and unstructured data into Azure Data Lake or Synapse Analytics.
- Develop scalable solutions for batch and real-time data processing.
- Collaborate with data architects, analysts, and stakeholders to define technical requirements and data solutions.
- Contribute to the architecture of modern data platforms leveraging Azure components such as Azure Data Factory, Data Lake, and Synapse Analytics.
- Optimize Databricks notebooks and Spark jobs for cost and performance efficiency.
- Ensure robust monitoring and troubleshooting of production data workflows.
- Implement data quality, lineage, and governance frameworks using Azure Purview or similar tools.
- Ensure compliance with data security and privacy standards.
- Automate workflows and deployments using Azure DevOps or GitHub Actions.
- Support continuous integration and delivery of data solutions.
All qualified applicants will receive consideration for employment at PwC without regard to ethnicity; creed; color; religion; national origin; age; disability; sexual orientation; gender identity or expression; genetic predisposition or carrier status; marital status; or any other status protected by law. PwC is proud to be an inclusive organization and an equal opportunity employer.
Qualifications:
- Proven experience in data engineering with a focus on Databricks and Python.
- 5+ years of experience as a Data Engineer or related role.
- At least 2 years of experience in a lead role.
- Strong proficiency in the Microsoft technology stack.
- Experience with Docker for containerization.
- Excellent leadership and teamwork skills.
- Strong problem-solving abilities and attention to detail.
- Experience with orchestration tools such as Airflow.
- Experience with REST API frameworks such as FastAPI or similar.
- Certifications in Azure or Databricks.
- Experience with Google Cloud Platform and Apache technologies.
- Familiarity with business intelligence tools like Power BI.
- Proven experience with Azure Databricks and Apache Spark.
- Advanced proficiency in Python for data processing and automation.
- Strong experience with Azure Data Factory, Data Lake, and Synapse Analytics.
- Upper-intermediate level of English.
Skills:
- Expertise in SQL for querying and performance tuning.
- Familiarity with CI/CD tools like Azure DevOps or GitHub Actions.
- Understanding of data governance tools such as Azure Purview.
- Knowledge of data visualization tools (e.g., Power BI) is a plus.
- Experience with Big Data tools and streaming technologies (Kafka, Event Hub) is desirable.
Additional:
- Understand the importance of correct information management.
- Knowledge of information security and data protection.
- Sound information security management practices.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required:
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills
Apache Spark, CI/CD, Data Engineering, Docker Container, ETL Tools, Microsoft Azure, Python (Programming Language), RESTful APIs, Structured Query Language (SQL)
Optional Skills
Accepting Feedback, Active Listening, Analytical Thinking, Base Erosion and Profit Shifting (BEPS), Business Tax, Coaching and Feedback, Communication, Consolidated Tax Returns, Corporate Structuring, Creativity, Economic Translation, Embracing Change, Emotional Regulation, Empathy, Financial Modeling, Financial Statement Analysis, Financial Structuring, Inclusion, Intellectual Curiosity, International Taxation, Learning Agility, Legal Document Review, Macroeconomics (Economics), Optimism {+ 16 more}
Desired Languages (If blank, desired languages not specified)
English
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
Perks/benefits: Career development