Data Engineer (Remote)

Michigan, Virtual Address

Work Flexibility: Remote

As a Data Engineer, you will design and support enterprise-wide data engineering architecture for the customer solutions organization, enabling teams to deliver data-driven solutions across Stryker. You'll gather requirements, build ETL pipelines, and create documentation for data assets, troubleshooting issues and leveraging industry-leading tools to solve complex problems. This role will be pivotal in transitioning from legacy systems to cloud platforms like Databricks, providing an exciting opportunity to implement cutting-edge technology and best practices.

This is a fully remote role. Candidates located in the Eastern Time Zone or with availability to work Eastern Time Zone hours are preferred.

What you will do

  • Understand and capture stakeholder requirements, timing, and scope in Azure DevOps.
  • Support collaboration efforts with partners across functions.
  • Participate in presentations and communications to the business and stakeholders.
  • Support problem solving and root cause analysis; identify potential solutions and evaluate them against requirements.
  • Participate in requirements gathering, needs assessments, and the development and maintenance of technical documentation for key systems and data assets, with guidance.
  • Participate in discussions with key stakeholders to identify opportunities in data architecture and data movement that enable business opportunities.
  • Consistently and frequently communicate project status and updates.
  • Participate in the building of project roadmaps.

What you need

Required

  • Bachelor's degree or higher in computer science, data analytics, mathematics, statistics, data science, or a related field, and/or equivalent applicable data engineering and architecture work experience.
  • Competent in at least one programming language central to data engineering (e.g., SQL, Python, Spark, R, Scala).
  • Experience in object-oriented programming, data structures, and workflow optimization, including pipelines and algorithms.
  • Experience with cloud-native tools for data storage, distributed computing, BI, and infrastructure as code (e.g., Apache Spark, Azure, Databricks), as well as ETL/ELT and pipeline orchestration.
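
To make the required skill set above concrete, here is a minimal sketch of the kind of PySpark ETL job the posting describes, targeting a Databricks-style environment. The storage path, column names, and target table are hypothetical placeholders, not Stryker systems:

    # Minimal PySpark ETL sketch (illustrative; paths and table names are hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV files landed in cloud storage.
    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

    # Transform: cast types, drop rows missing a key, and stamp the load time.
    clean = (
        raw.withColumn("order_total", F.col("order_total").cast("double"))
           .filter(F.col("order_id").isNotNull())
           .withColumn("loaded_at", F.current_timestamp())
    )

    # Load: append to a Delta table (the Databricks default format) for downstream BI.
    clean.write.format("delta").mode("append").saveAsTable("analytics.orders")

In practice a job like this would be scheduled and parameterized by an orchestration layer rather than run ad hoc, which is where the ETL/ELT and pipeline orchestration experience above comes in.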

Preferred

  • Master's degree or PhD in computer science or a data-related discipline.
  • Knowledge of DataOps, DevOps, SecOps, and Agile methodologies, including version control using GitHub/GitLab and infrastructure as code.

Travel Percentage: 10%

Stryker Corporation is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, gender identity, sexual orientation, national origin, disability, or protected veteran status. Stryker is an EO employer – M/F/Veteran/Disability.

Stryker Corporation will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor’s legal duty to furnish information.
