Data Engineer
Fremont, CA
Meta
Giving people the power to build community and bring the world closer together.
Responsibilities:
- Partner with leadership, engineers, program managers and data scientists to understand data needs
- Apply proven expertise and build high-performance scalable data warehouses
- Design, build and launch efficient & reliable data pipelines to move and transform data (both large and small amounts)
- Securely source external data from numerous partners
- Intelligently design data models for optimal storage and retrieval
- Deploy comprehensive data quality checks to ensure high data quality (see the illustrative pipeline sketch after this list)
- Optimize existing pipelines and maintain all domain-related data pipelines
- Own the end-to-end data engineering component of the solution
- Take on-call shifts as needed to support the team
- Design and develop new systems in partnership with software engineers to enable quick and easy consumption of data
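As a rough illustration of the pipeline and data-quality responsibilities above (not part of the original posting), the sketch below uses Airflow 2.4+; the DAG name, tasks, and check logic are assumptions for this example rather than Meta's actual tooling.

```python
# Illustrative only: a minimal Airflow DAG pairing a load task with a simple
# data quality check. All names and the check itself are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for pulling data from a source system and loading a staging table.
    # The return value (row count) is stored in XCom for the downstream check.
    rows = [{"id": 1}, {"id": 2}]  # stand-in for real extracted records
    return len(rows)


def check_row_count(ti):
    # Simple data quality check: fail the run if the upstream load produced no rows.
    row_count = ti.xcom_pull(task_ids="extract_and_load")
    if not row_count:
        raise ValueError("Data quality check failed: no rows loaded")


with DAG(
    dag_id="example_daily_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality_check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    load >> quality_check
```

In a real pipeline the load step would write to a warehouse table and the quality check would query that table, but the overall DAG shape is the same.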
Minimum Qualifications:
- BS/MS in Computer Science or a related technical field
- 5+ years of development experience in Python or another modern programming language
- 5+ years of experience with SQL and relational databases
- 5+ years of experience in custom ETL design, implementation, and maintenance
- 3+ years of experience with workflow management engines (e.g. Airflow, Luigi, Prefect, Dagster, Digdag, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M)
- 3+ years of experience with data modeling
- Experience working with a cloud or on-premises Big Data/MPP analytics platform (e.g. Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar)
- 2+ years of experience working with enterprise data engineering tools, and experience learning in-house DE tools
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
Preferred Qualifications:
- Experience with more than one programming language
- Experience designing and implementing real-time pipelines
- Experience with data quality and validation
- Experience with SQL performance tuning and end-to-end process optimization
- Experience with anomaly/outlier detection
- Experience with notebook-based Data Science workflow
- Experience with Airflow
- Experience querying massive datasets using Spark, Presto, Hive, Impala, etc. (see the brief sketch after this list)
- Experience building system integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.)
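As a brief illustration of the "querying massive datasets" item above (again, not part of the original posting), here is a minimal PySpark sketch; the table name, columns, and filter are hypothetical.

```python
# Illustrative only: aggregating a (hypothetically) large events table with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_events_rollup").getOrCreate()

daily_counts = (
    spark.table("warehouse.events")               # hypothetical warehouse table
    .where(F.col("event_date") >= "2024-01-01")   # hypothetical partition filter
    .groupBy("event_date", "event_type")
    .agg(F.countDistinct("user_id").alias("unique_users"))
    .orderBy("event_date")
)

daily_counts.show(20)
```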
Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.
Perks/benefits: career development, equity/stock options, health care, salary bonus.