Data Engineer - PySpark / Spark

Guadalajara, MX

IBM

For more than a century, IBM has been a global technology innovator, leading advances in AI, automation, and hybrid cloud solutions that help businesses grow.


Introduction
In this role, you’ll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
Troubleshoot forecasting systems day to day, mainly working through data anomalies that cause inaccurate forecasts or prevent forecasts from being generated.
Collaborate with the data science team to enhance existing forecasting systems for the trading floors.
Create dynamic object-oriented methods, full-stack solutions, and integrations with existing code.
Develop individual Python classes, methods, and functions that support the data flow of existing and new projects (a minimal sketch follows this list).
Extend existing code to seamlessly support project data flows, including logging and support, with little to no supervision.
Modify packages, tests, and repositories to support CI/CD.
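As a rough illustration of the kind of self-contained Python component these responsibilities describe, here is a minimal sketch of a class that wraps one step of a data flow and logs what it does. The class name, fields, and anomaly check are invented for this example, not taken from the posting.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class ForecastInputValidator:
    """Hypothetical example: flags data anomalies that could skew or block a forecast run."""

    def __init__(self, required_columns):
        self.required_columns = set(required_columns)

    def validate(self, rows):
        """Return only the rows safe to forecast on, logging whatever gets dropped."""
        clean = []
        for i, row in enumerate(rows):
            missing = self.required_columns - row.keys()
            if missing:
                logger.warning("Row %d skipped, missing fields: %s", i, sorted(missing))
                continue
            clean.append(row)
        logger.info("Validated %d of %d rows", len(clean), len(rows))
        return clean

# Quick usage check: the second row lacks "sku", so it is logged and dropped.
rows = [{"sku": "A1", "demand": 10}, {"demand": 7}]
print(ForecastInputValidator(["sku", "demand"]).validate(rows))
```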

Required Technical and Professional Expertise
1. PySpark and Spark: Proficiency in PySpark, including the Spark DataFrame API and the RDD (Resilient Distributed Dataset) programming model. Knowledge of Spark internals, data partitioning, and optimization techniques is advantageous.
2. Data Manipulation and Analysis: Ability to manipulate and analyze large datasets using PySpark’s DataFrame transformations and actions, including filtering, aggregating, joining, and performing complex data transformations (see the first sketch after this list).
3. Distributed Computing: Understanding of distributed computing concepts such as parallel processing, cluster management, and data partitioning. Experience with Spark cluster deployment, configuration, and optimization is valuable (see the partitioning sketch below).
4. Data Serialization and Formats: Knowledge of data serialization formats such as JSON, Parquet, Avro, and CSV. Familiarity with handling unstructured data and working with NoSQL databases like Apache HBase or Apache Cassandra.
5. Data Pipelines and ETL: Experience building data pipelines and implementing Extract, Transform, Load (ETL) processes with PySpark. Understanding of data integration, data cleansing, and data quality techniques (see the ETL sketch below, which also exercises the formats from item 4).
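A minimal sketch of the DataFrame work items 1 and 2 describe: filtering, joining, and aggregating with the PySpark DataFrame API. The column names and sample rows are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-basics").getOrCreate()

sales = spark.createDataFrame(
    [("MX", "2024-01", 120.0), ("MX", "2024-02", 95.5), ("US", "2024-01", 310.0)],
    ["country", "month", "revenue"],
)
regions = spark.createDataFrame(
    [("MX", "LatAm"), ("US", "NorthAm")],
    ["country", "region"],
)

# Filter, join, then aggregate: total revenue per region, above a threshold.
result = (
    sales.filter(F.col("revenue") > 100)           # transformation (lazy)
    .join(regions, on="country", how="inner")      # join on a shared key
    .groupBy("region")
    .agg(F.sum("revenue").alias("total_revenue"))  # aggregation
)
result.show()  # action: triggers execution of the lazy plan
```

Transformations only build a lazy execution plan; nothing runs until an action such as show() or a write is called, which is what lets Spark optimize the whole plan at once.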
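For item 3, a short sketch of inspecting and reshaping a DataFrame’s partitioning. The partition counts here are arbitrary example values; sensible numbers depend on cluster size and data volume.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning").getOrCreate()

df = spark.range(0, 1_000_000)
print(df.rdd.getNumPartitions())  # how the data is currently split

# repartition() performs a full shuffle to the requested count (optionally
# hashing by a column); coalesce() only merges partitions, avoiding a shuffle.
wider = df.repartition(200, "id")
narrower = wider.coalesce(50)
print(narrower.rdd.getNumPartitions())

# Shuffle-heavy operations (joins, groupBy) default to the configured
# shuffle partition count, a common tuning knob.
spark.conf.set("spark.sql.shuffle.partitions", "200")
```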
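Finally, a compact ETL sketch in the spirit of items 4 and 5: extract from CSV and JSON, apply a cleansing transform, and load to Parquet. The paths, schema, and column names are assumptions for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: a headered CSV and a line-delimited JSON file (assumed paths).
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")
customers = spark.read.json("/data/raw/customers.json")

# Transform: basic cleansing plus an enriching join.
cleaned = (
    orders.dropna(subset=["order_id"])                   # drop unusable rows
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .join(customers, on="customer_id", how="left")
)

# Load: Parquet preserves the schema and compresses well for downstream reads.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
```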

Preferred Technical and Professional Expertise
NA


Key Job Details
Role: Data Engineer – PySpark / Spark
Location: Guadalajara, MX
Category: Software Engineering
Employment Type: Full-Time
Travel Required: No Travel
Contract Type: Regular
Company: (0390) IBM de Mexico Comercializacion y Servicios
Req ID: 740188BR
