Senior Data Engineer II

Seattle, Washington, United States

Compass

Buy, sell, and rent smarter with Compass. Partner with a local real estate agent to find the home or apartment that’s right for you.


At Compass, our mission is to help everyone find their place in the world. Since 2012, we have been transforming the real estate industry with our end-to-end technology platform, empowering residential real estate agents to deliver exceptional service to their seller and buyer clients. Our culture thrives on interpersonal connectivity, collaborative impact, and bold, innovative solutions.

Data is the foundation of Compass technologies. Our team is responsible for architecting, building, and maintaining a unified, scalable, and cost-effective analytics platform, including a data lake, data warehouse, data pipelines, and operational tools to support data stakeholders across the company.

As a data engineer, you will be responsible for building, optimizing, and maintaining scalable data pipelines using distributed computing on the cloud. You are a data expert who understands and optimizes data systems from the ground up. You will collaborate with data analysts and scientists to support data initiatives and ensure consistent, optimal data delivery architecture for ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
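To give a sense of the day-to-day work, here is a minimal PySpark sketch of the kind of batch pipeline described above; the bucket paths, column names, and schema are illustrative assumptions only, not Compass systems.

    # Minimal batch pipeline sketch: ingest raw listing events, clean and
    # deduplicate them, and write a partitioned, analytics-ready table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("listings_batch_pipeline").getOrCreate()

    # Hypothetical raw source path and schema.
    raw = spark.read.json("s3://example-bucket/raw/listing_events/")

    cleaned = (
        raw
        .filter(F.col("listing_id").isNotNull())        # drop records missing the key
        .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
        .dropDuplicates(["listing_id", "event_ts"])      # basic idempotent dedup
    )

    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/listing_events/")  # hypothetical sink path
    )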

Responsibilities

  • Data Architecture: Develop and maintain scalable, secure, and high-performance data architectures to support business needs and ensure the organization’s data ecosystem operates effectively.
  • Pipeline Development: Design, implement, and optimize complex data pipelines for real-time and batch processing using technologies such as Spark, Kafka, and cloud-based ETL tools.
  • Data Quality: Implement a robust data quality framework to ensure the highest quality of data on the platform.
  • Data Operations: Automate manual processes, monitor data systems, and resolve data quality issues.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in engineering and maintaining large-scale data pipelines and distributed systems.
  • Advanced knowledge of, and 3+ years of hands-on development experience with, big data processing frameworks such as Apache Spark and Kafka.
  • 5+ years of programming experience using languages such as Python, Java, C#, or Scala.
  • 5+ years of experience developing on cloud platforms and tools (e.g., AWS Glue, GCP Dataflow, Azure Data Factory).
  • Strong SQL skills and experience with both relational and non-relational databases.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.

Desirable Skills

  • Proven expertise in Spark and Databricks technologies.
  • Experience with machine learning workflows and LLMs.
  • Strong stakeholder management and communication skills.
  • Strong problem-solving skills and ability to work independently in a fast-paced environment.
  • Knowledge of data governance, security, and compliance best practices.

Compensation: The base pay range for this position is $168,000–$180,000 annually; however, base pay offered may vary depending on job-related knowledge, skills, and experience. Bonuses and restricted stock units may be provided as part of the compensation package, in addition to a full range of benefits. Base pay varies by market location. Minimum wage for the position will always be met.

Perks that You Need to Know About:

Participation in our incentive programs (which may include, where eligible, cash, equity, or commissions). Plus paid vacation, holidays, sick time, parental leave, marriage leave, and recharge leave; medical, telehealth, dental, and vision benefits; a 401(k) plan; flexible spending accounts (FSAs); a commuter program; life and disability insurance; Maven (a support system for new parents); Carrot (fertility benefits); UrbanSitter (a caregiver referral network); an Employee Assistance Program; and pet insurance.

Do your best work, be your authentic self. At Compass, we believe that everyone deserves to find their place in the world — a place where they feel like they belong, where they can be their authentic selves, where they can thrive. Our collaborative, energetic culture is grounded in our Compass Entrepreneurship Principles and our commitment to diversity, equity, inclusion, growth, and mobility. As an equal opportunity employer, we offer competitive compensation packages, robust benefits, and professional growth opportunities aimed at helping to improve our employees' lives and careers.

Notice for California Applicants

Los Angeles County Fair Chance Notice

Category: Engineering Jobs

Tags: Architecture AWS AWS Glue Azure Big Data CI/CD Computer Science Databricks Dataflow Data governance DataOps Data pipelines Data quality Data warehouse Distributed Systems Engineering ETL GCP Git Java Kafka LLMs Machine Learning Maven Pipelines Python RDBMS Scala Security Spark SQL

Perks/benefits: Career development Competitive pay Equity / stock options Fertility benefits Flex vacation Health care Insurance Medical leave Parental leave Salary bonus Startup environment

Region: North America
Country: United States
