DataOps Engineer
Remote, TX, United States
Greystar
ABOUT GREYSTAR
Greystar is a leading, fully integrated global real estate company offering expertise in property management, investment management, development, and construction services in institutional-quality rental housing. Headquartered in Charleston, South Carolina, Greystar manages and operates over $300 billion of real estate in nearly 250 markets globally, with offices throughout North America, Europe, South America, and the Asia-Pacific region. Greystar is the largest operator of apartments in the United States, manages over 1,000,000 units/beds globally, and has a robust institutional investment management platform comprising nearly $78 billion of assets under management, including over $35 billion of development assets. Greystar was founded by Bob Faith in 1993 to become a provider of world-class service in the rental residential real estate business. To learn more, visit www.greystar.com.
JOB DESCRIPTION SUMMARY
Job Summary: The DataOps Engineer is responsible for managing, monitoring, and supporting data pipelines and systems. This role requires deep knowledge of data operations, data engineering, data management, and support methodology. Programming and development experience with PySpark and object-oriented techniques is a key success factor for this role, as is the ability to troubleshoot, prioritize requests, and provide excellent customer support. The DataOps Engineer will monitor, troubleshoot, and resolve operational issues related to data ingestion and integration; perform release management activities; assist with code reviews and process-improvement efforts; and develop, enhance, or refactor code, processes, and integrations as required. The ability to create and update documentation is important in this role. The ideal candidate will be a highly skilled collaborator with a high degree of attention to detail and deep experience investigating and resolving process failures and stakeholder questions. They will likely have a deep understanding of, and experience with, data engineering processes and methodologies.
JOB DESCRIPTION
Essential Responsibilities:
- Manage, monitor, and support data pipelines written in PySpark, ADF, and Boomi that handle data integration, transformation, and storage.
- Ensure data quality, consistency, and accuracy within the design and development of the data pipelines.
- Implement and maintain data security solutions on the data models based on the legal, compliance, and business needs.
- Perform deployments and provide operational support across the development, deployment, operation, and maintenance of data pipelines, ensuring accurate and timely delivery of data to the business.
- Collaborate with cross-functional teams, including BI/Analytics, data scientists, business analysts, product owners, and business stakeholders, on operational processes, driving customer satisfaction.
- Ensure data quality and integrity through implementation and execution of data validation and monitoring processes.
- Troubleshoot and resolve data-related issues by serving as L2 support for tickets, collaborating with the Data Engineering team, product owners, and business stakeholders as needed.
- Document operational processes and keep documentation up to date, ensuring that appropriate documentation exists for all pipelines and integrations.
- Generate a variety of operational reports, such as service ticket metrics, pipeline performance, and data quality reports.
- Stay current with industry trends and advancements in data engineering and data operations, applying best practices to improve data management and operational data processes. Mentor team members to continuously upskill the team.
Qualifications:
- Bachelor's or advanced degree in Computer Science or Computer Engineering.
- 5+ years of experience in data engineering, with a focus on designing, developing, and supporting data pipelines and systems.
- Proficiency in programming and coding with PySpark, Python, and SQL.
- Expertise in Databricks and other data platforms and databases.
- Strong understanding of data modeling, data quality, and data governance practices.
- Strong understanding of troubleshooting and operational support practices.
- Experience with data integration, ETL processes, and data warehousing.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications:
- Experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Knowledge of MLOps and DataOps practices.
- Familiarity with Agile software development methodologies.
#LI-DNI
Additional Compensation:
Many factors go into determining employee pay within the posted range including business requirements, prior experience, current skills and geographical location.
Corporate Positions: In addition to the base salary, this role may be eligible to participate in a quarterly or annual bonus program based on individual and company performance.
Onsite Property Positions: In addition to the base salary, this role may be eligible to participate in weekly, monthly, and/or quarterly bonus programs.
Robust Benefits Offered*:
Competitive Medical, Dental, Vision, and Disability & Life insurance benefits. Low (free basic) employee Medical costs for employee-only coverage; costs discounted after 3 and 5 years of service.
Generous Paid Time off. All new hires start with 15 days of vacation, 4 personal days, 10 sick days, and 11 paid holidays. Plus your birthday off after 1 year of service! Additional vacation accrued with tenure.
For onsite team members, an onsite housing discount at Greystar-managed communities is available, subject to discount and unit availability.
6-Week Paid Sabbatical after 10 years of service (and every 5 years thereafter).
401(k) with Company Match up to 6% of pay after 6 months of service.
Paid Parental Leave and lifetime Fertility Benefit reimbursement up to $10,000 (includes adoption or surrogacy).
Employee Assistance Program.
Critical Illness, Accident, Hospital Indemnity, Pet Insurance and Legal Plans.
Charitable giving program and benefits.
*Benefits offered for full-time employees. For Union and Prevailing Wage roles, compensation and benefits may vary from the listed information above due to Collective Bargaining Agreements and/or local governing authority.
Greystar will consider for employment qualified applicants with arrest and conviction records.