Data Engineer
MEX Work-at-Home
- Remote-first
Concentrix
Concentrix is a global technology and services leader that powers the world’s best brands, today and into the future.

Job Title: Data Engineer

Job Description
Key Responsibilities:
- Data Pipeline Development: Design, build, and manage scalable, reliable, and efficient ETL/ELT data pipelines to ingest, process, and store large datasets from various data sources.
- Data Integration: Integrate data from different sources such as APIs, relational databases, NoSQL databases, flat files, and streaming data into centralized data lakes or data warehouses.
- Database Management & Optimization: Implement and maintain data storage solutions such as relational databases (e.g., PostgreSQL, MySQL), NoSQL databases (e.g., MongoDB, Cassandra), and cloud-based data warehouses (e.g., Amazon Redshift, Google BigQuery).
- Data Quality & Validation: Ensure data quality and integrity through the design and implementation of data validation, cleansing, and enrichment processes. Identify and address data discrepancies and inconsistencies.
- Collaboration with Teams: Collaborate with data scientists, analysts, and software engineers to understand data requirements and deliver data solutions that meet analytical and operational needs.
- Performance Tuning: Optimize data pipelines and data storage systems for maximum efficiency, scalability, and performance. Proactively monitor system performance and troubleshoot issues.
- Data Governance & Security: Work closely with the data governance and security teams to ensure that data solutions comply with organizational standards, privacy laws (e.g., GDPR, CCPA), and security policies.
- Cloud Data Engineering: Design and manage data workflows in cloud environments such as AWS, Azure, or Google Cloud, utilizing cloud-native services for data processing and storage.
- Automation & Scheduling: Automate data workflows and implement scheduling tools (e.g., Airflow, cron) to ensure timely data delivery for reports, dashboards, and analytics.
- Documentation: Create and maintain technical documentation for data pipelines, data models, and data management processes to ensure knowledge sharing and reproducibility.
Requirements:
- Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Experience:
- Minimum of 3–5 years of experience in data engineering, ETL development, or database management.
- Experience with cloud platforms and cloud-native data processing tools.
- Skills & Expertise:
- Proficiency with SQL for querying and manipulating data.
- Experience with data integration tools (e.g., Apache Airflow, Talend, Informatica) and ETL/ELT processes.
- Strong programming skills in Python, Java, or Scala for data processing.
- Knowledge of big data technologies (e.g., Hadoop, Spark, Kafka).
- Experience with data warehousing solutions (e.g., Snowflake, Amazon Redshift).
#ConcentrixCatalyst
Location: MEX Work-at-Home

Language Requirements:

Time Type:
If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents.