Semi Senior Data Engineer

Bogotá, Colombia

📊 Data is the new gold, and we need a skilled Data Engineer to help us mine it! If you love building scalable data solutions, optimizing pipelines, and working with cloud technologies, this role is for you. Join a dynamic team where innovation, automation, and performance are at the heart of everything we do.

Required Qualifications:

🎓 Education: Degree in Systems Engineering, Computer Science, Data Science, Industrial Engineering, or related fields.
📌 Experience: 4+ years in designing, developing, and optimizing data pipelines.

🗣 Language: Advanced English (B2+) required for effective communication in an international environment.

Technical Expertise:

Programming: Strong proficiency in Python (Pandas, NumPy, PySpark) and SQL (Snowflake, PostgreSQL, MySQL, SQL Server).
Data Pipelines & ETL: Hands-on experience in designing, developing, and maintaining scalable ETL processes and data ingestion/transformation workflows.
Databases: Experience with relational databases as well as NoSQL stores (MongoDB, Cassandra).
Cloud & Big Data: Experience with cloud data platforms such as AWS S3, Google BigQuery, and Snowflake; familiarity with big data frameworks (Hadoop, Spark) is a plus.
DevOps & Orchestration: Experience with containerization (Docker), version control (Git), and workflow orchestration tools such as Airflow or cron jobs.
Optimization & Performance: Strong knowledge of query optimization, database performance tuning, and best practices in data modeling.
CI/CD Pipelines: Experience in building and maintaining CI/CD pipelines for data solutions.

Key Responsibilities:

📌 Data Pipeline Development: Design, develop, and optimize scalable and efficient data pipelines.
📌 ETL Optimization: Maintain and improve ETL processes for data ingestion, transformation, and storage.
📌 Data Quality & Validation: Implement data quality checks to ensure accuracy and consistency.
📌 Collaboration: Work closely with data scientists, analysts, and engineers to ensure smooth data flow.
📌 Performance Tuning: Optimize SQL queries for scalability and efficiency.
📌 Cloud Data Solutions: Leverage AWS, GCP, or Azure for data storage and processing.
📌 Automation & Monitoring: Automate workflows using Python scripting and monitor data pipelines for reliability and performance.

Soft Skills:

💡 Teamwork – Ability to collaborate effectively in a dynamic environment.
🎯 Problem-Solving – Proactive approach to identifying and solving data-related challenges.
⏳ Work Under Pressure – Ability to handle deadlines and ensure smooth operations.
📢 Communication – Strong assertive communication skills to interact with cross-functional teams.
🔍 Accountability & Responsibility – Ownership of tasks and commitment to objectives.

