aijobs.net

Data Engineer - AWS, PySpark, DevOps

Bengaluru, Maruthi Onyx - TESCO TSA, India

INR 1500K-2000K (estimate) | Entry-level | Full Time | Found 12d ago

Skills/Tech-stack

AWS Cloud | Abinitio ETL | Alation | Apache Airflow | Athena | CSV | Change Management | Cloud Architecture | DBT | Data Analysis | Data Governance | Data Ingestion | Data Lakes | Data Marts | Data Modeling | Data Pipelines | Data Security | Data Transformation | Data Warehousing | Dataframes | ETL Development | Glue | Iceberg | Immuta | JSON | Lake Formation | Lambda | Lineage tools | NoSQL | Orchestration tools | PL/SQL | Parquet | PySpark | RDD | Reusability planning | Risk Management | S3 | SQL | Snowflake | Snowflake Tasks | SparkSQL | Stakeholder Engagement | Technical documentation | Transformation strategies

Education

Bachelor's Degree | Master's Degree

Roles

Data Engineer | Engineer

Regions

Asia/Pacific

Countries

India

States

Karnataka, IN

Cities

Bengaluru, Karnataka, IN

