Semi-Senior Data Engineer (Python, SQL, Snowflake, AWS)
Córdoba, Córdoba Province, Argentina
Darwoft
Darwoft is an industry-leading custom software development company specializing in mobile and web app UX and development.

As a Data Engineer, you will contribute to the development and maintenance of our data infrastructure using technologies such as Snowflake, Amazon S3, and Airflow. You will work within a team of data engineers to help design and implement data solutions that enable scalable data ingestion, storage, and processing. Your work will help maintain the quality, efficiency, and reliability of our data operations.
Key Responsibilities:
Development (70%)
Assist in the development and maintenance of scalable, efficient data pipelines using Python, SQL, and Snowflake, including Snowpark (see the first sketch after this list). Support complex ETL processes and frameworks that manage data transformation, data structures, metadata, dependencies, and workload management.
Contribute to enhancing our data ingestion processes using AWS Lambda functions and Snowpipe for real-time data processing and analytics (see the ingestion sketch after this list).
Collaborate with data scientists and analysts to help build datasets that support business needs.
Create proofs of concept to support innovation for the CareJourney product and to explore new features of the underlying platforms (Snowflake, AWS, Airflow, etc.).
Create views and other layers of abstraction for data consumption both internal and external to CareJourney. This includes logical views, materialized views, and secure views to support reports and data sharing (see the views sketch after this list).
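To make the pipeline work concrete, here is a minimal Snowpark sketch of the kind of transformation step this role involves. All names (RAW_EVENTS, DAILY_EVENT_TOTALS, the connection parameters) are hypothetical placeholders, not part of this posting.

```python
# Minimal Snowpark sketch; table names and connection parameters are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Read a raw table, aggregate it, and persist the result for downstream consumers.
raw = session.table("RAW_EVENTS")
daily = (
    raw.group_by(col("EVENT_DATE"), col("EVENT_TYPE"))
       .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
daily.write.mode("overwrite").save_as_table("DAILY_EVENT_TOTALS")
session.close()
```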
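The ingestion responsibility above typically pairs an S3-triggered Lambda with the Snowflake ingest SDK. The sketch below assumes that pattern; the pipe name, environment variables, and key handling are illustrative simplifications.

```python
# Hypothetical sketch of an S3-triggered Lambda that notifies a Snowpipe pipe.
import os
from snowflake.ingest import SimpleIngestManager, StagedFile

# All names below (env vars, pipe) are placeholders; real key management
# would use a secrets manager rather than a raw environment variable.
ingest_manager = SimpleIngestManager(
    account=os.environ["SF_ACCOUNT"],
    host=os.environ["SF_HOST"],
    user=os.environ["SF_USER"],
    pipe=os.environ["SF_PIPE"],          # e.g. "ANALYTICS.RAW.EVENTS_PIPE"
    private_key=os.environ["SF_PRIVATE_KEY"],
)

def handler(event, context):
    # Each S3 ObjectCreated record becomes a staged file for the pipe to load.
    files = [
        StagedFile(record["s3"]["object"]["key"], None)
        for record in event["Records"]
    ]
    resp = ingest_manager.ingest_files(files)
    return {"status": resp["responseCode"], "files": len(files)}
```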
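The view types named above differ in purpose: logical views for internal reporting, materialized views to precompute hot aggregates (an Enterprise Edition feature), and secure views for external sharing. A sketch via the Python connector, with all schema and view names hypothetical:

```python
# Hypothetical sketch: publishing consumption layers as Snowflake views.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Logical view for internal reporting.
cur.execute("""
    CREATE OR REPLACE VIEW RPT.DAILY_TOTALS_V AS
    SELECT event_date, event_type, SUM(amount) AS total_amount
    FROM RAW.EVENTS
    GROUP BY event_date, event_type
""")

# Materialized view to precompute a hot single-table aggregate.
cur.execute("""
    CREATE OR REPLACE MATERIALIZED VIEW RPT.DAILY_TOTALS_MV AS
    SELECT event_date, SUM(amount) AS total_amount
    FROM RAW.EVENTS
    GROUP BY event_date
""")

# Secure view so external consumers cannot inspect base-table details.
cur.execute("""
    CREATE OR REPLACE SECURE VIEW SHARE.DAILY_TOTALS_SV AS
    SELECT * FROM RPT.DAILY_TOTALS_V
""")

cur.close()
conn.close()
```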
Code Review (15%)
Participate in code review sessions to ensure code quality, adherence to best practices, and maintainability.
Learn from technical guidance and mentorship provided by senior team members and strive for continuous improvement.
Agile Participation (10%)
Support the data engineering team in agile practices such as sprint planning, daily stand-ups, retrospectives, and sprint reviews.
Help maintain clear communication between project stakeholders and the team, ensuring alignment on goals and deliverables.
Release Support (5%)
Assist in coordinating with project managers and IT stakeholders to plan release schedules.
Support the staging and deployment of data pipeline releases to ensure smooth rollouts with minimal downtime.
Required Skills & Qualifications:
At least 5 years of experience in a Data Engineer role or similar.
Proficient with Snowflake (or other cloud databases), Apache Airflow, and event-driven orchestration patterns (a minimal DAG sketch follows this list).
Strong programming skills in Python and advanced SQL as used within data transformation pipelines.
Understanding of stored procedures and functions, and when each is appropriate to use.
Experience with AWS services, especially Lambda functions and data streaming technologies.
Understanding of ELT tools and data orchestration processes.
Familiarity with data modeling, data access, and data storage techniques.
Knowledge of agile methodologies.
Excellent problem-solving abilities and willingness to learn.
Strong communication and organizational skills.
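For the Airflow and orchestration requirement above, a minimal DAG sketch follows. The dag_id, task names, and task logic are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or later.

```python
# Minimal Airflow 2.x DAG sketch; names and task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull a batch for the logical date.
    print("extracting batch for", context["ds"])

def transform(**context):
    # Placeholder: run the Python/SQL transformation step.
    print("transforming batch for", context["ds"])

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```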
Desired Skills & Qualifications:
Working knowledge of American health care data.
Experience with data privacy rules and regulations such as HIPAA and GDPR.
Understanding of Snowflake's Snowpark development framework.
Familiarity with Snowflake's data sharing capabilities.
Education:
Bachelor's degree in Computer Science, Engineering, or a related field.
Apply now and join Darwoft!
Envía tu CV a: talento@darwoft.com
Questions? Follow the recruiter:
https://www.linkedin.com/in/hernanvietto/