Engineer 3, Data Engineering - 9190
PA - Philadelphia, 1701 John F Kennedy Blvd, United States
Comcast
Comcast NBCUniversal creates incredible technology and entertainment that connects millions of people to the moments and experiences that matter most.
Job Summary
Job Description
DUTIES:
- Design and develop new software and web applications using Scala.
- Use AWS services, including S3, Athena, Glue, and Identity and Access Management (IAM).
- Work with Apache Spark and PySpark.
- Develop data pipelines within Databricks.
- Use SQL to conduct data analysis and ensure quality.
- Use GitHub.
- Use big data storage systems, including Hadoop Distributed File System (HDFS) and Amazon S3, and apply best practices, including data partitioning and sharding; replication and redundancy; compression and data encoding; data lifecycle management; and security and access control.
- Use Python.
- Support applications under development and customize current applications.
- Assist with the software update process for existing applications and with roll-outs of software releases.
- Analyze, test, and assist with the integration of new applications.
- Document all development activity.
- Research, write, and edit documentation and technical requirements, including software designs, evaluation plans, test results, technical manuals, and formal recommendations and reports.
- Monitor and evaluate competitive applications and products.
- Review literature, patents, and current practices relevant to the solution of assigned projects.
- Collaborate with project stakeholders to identify product and technical requirements.
- Conduct analysis to determine integration needs.
- Work with the Quality Assurance team to determine whether applications meet specifications and technical requirements.

Position is eligible for 100% remote work.
REQUIREMENTS: Bachelor’s degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, and two (2) years of experience developing software using Scala; using AWS services including S3, Athena, Glue, and Identity and Access Management (IAM); working with Apache Spark and PySpark; and developing data pipelines within Databricks; of which one (1) year of experience includes using SQL to conduct data analysis and ensure quality; using GitHub; using big data storage systems, including Hadoop Distributed File System (HDFS); and using Python.
Skills
Amazon Web Services (AWS), Apache Spark, Data Pipelines, Hadoop Distributed File System (HDFS), Scala (Programming Language)

We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance, and always-on tools that are personalized to meet the needs of your reality—to help support you physically, financially, and emotionally through the big milestones and in your everyday life.
Please visit the benefits summary on our careers site for more details.