Data Engineer (Remote)
Michigan, Virtual Address
Applications have closed
As a Data Engineer, you will design and support enterprise-wide data engineering architecture for the customer solutions organization, enabling teams to deliver data-driven solutions across Stryker. You'll gather requirements, build ETL pipelines, and create documentation for data assets, while troubleshooting and leveraging industry-leading tools to solve complex problems. This role will be pivotal in transitioning from legacy systems to cloud platforms like Databricks, providing an exciting opportunity to implement cutting-edge technology and best practices.
This is a fully remote role. Candidates located in the Eastern Time Zone or with availability to work Eastern Time Zone hours are preferred.
What you will do
- Understand and capture stakeholder requirements, timing, and scope in Azure DevOps.
- Support collaboration efforts with partners across functions.
- Participate in presentations and communications to the business and stakeholders.
- Support problem solving and root cause analysis, identify potential solutions, and evaluate them against requirements.
- Participate, with guidance, in requirements gathering, needs assessments, and the development and maintenance of technical documentation for key systems and data assets.
- Participate in discussions with key stakeholders to identify opportunities in data architecture and data movement that enable business outcomes.
- Consistently and frequently communicate project status and updates.
- Participate in the building of project roadmaps.
What you need
Required
- Bachelor's Degree or higher in computer science, data analytics, mathematics, statistics, data science, or a related field, and/or equivalent data engineering and architecture work experience.
- Competent in at least one programming language central to data engineering (e.g., SQL, Python, Spark, R, Scala).
- Experience in object-oriented programming, data structures, and workflow optimization, including pipelines and algorithms.
- Experience with cloud-native tools for data storage, distributed computing, BI, and infrastructure as code (e.g., Apache Spark, Azure, Databricks), as well as ETL/ELT and pipeline orchestration.
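To illustrate the kind of ETL pipeline work listed above, here is a minimal sketch assuming a Databricks-style PySpark environment with Delta tables available; the source path, column names, and target table are hypothetical examples, not details from this posting:

```python
# Illustrative only: a minimal extract-transform-load job of the kind this role describes.
# The landing path, column names, and target table name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy_orders_etl").getOrCreate()

# Extract: read a raw CSV export from a legacy system.
raw = spark.read.option("header", True).csv("/mnt/landing/legacy_orders.csv")

# Transform: deduplicate, standardize types, and stamp the load time.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("loaded_at", F.current_timestamp())
)

# Load: write a Delta table for downstream BI consumption.
cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.cleaned_orders")
```

In practice, a job like this would be scheduled through an orchestration tool and tracked in version control, as reflected in the requirements above.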
Preferred
- Master's Degree or PhD in Computer Science or a data-related discipline.
- Knowledge of DataOps, DevOps, SecOps, and Agile methodologies, including version control using GitHub/GitLab and infrastructure as code.