Data Engineer
Remote
Full Time · Senior-level / Expert · Clearance required · USD 113K - 131K
About Us:
eSimplicity is a modern digital services company that works across government, partnering with our clients to improve the lives and ensure the security of all Americans, from soldiers and veterans to kids and the elderly, and to defend national interests on the battlefield. Our engineers, designers, and strategists cut through complexity to create intuitive products and services that equip Federal agencies with solutions to courageously transform today for a better tomorrow for all Americans.
Role Overview:
We are seeking a highly skilled Data Engineer III to help evaluate and design robust data integration solutions for large-scale, disparate datasets spanning multiple platforms and infrastructure types, including cloud-based and potentially undefined or evolving environments. This role is critical in identifying optimal data ingestion, normalization, and transformation strategies while collaborating with cross-functional teams to ensure data accessibility, reliability, and security across systems.
Responsibilities:
- Assess, design, and implement solutions for integrating large-scale datasets from disparate systems, including unknown or undefined data environments.
- Collaborate with end users and data stakeholders to understand requirements and educate them on Spark-based processing in Databricks, using SQL, Python, and/or R.
- Partner with data engineers and data scientists to extract and transform data from external sources using APIs, Kafka, or Kinesis, and build automated, resilient data pipelines.
- Write comprehensive unit, integration, and functional tests for critical data workflows.
- Create and deliver clear, concise technical presentations and documentation to both technical and non-technical audiences.
- Stay informed on the latest cloud technologies, data integration patterns, and industry best practices to drive innovation and process improvements.
- Manage deliverables and project timelines, ensuring high-quality outcomes.
- Provide timely updates to leadership and contribute to planning and prioritization discussions.
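The extract-normalize-load work described in the responsibilities above can be sketched in miniature. The sketch below uses plain Python for portability; in practice this logic would run as PySpark transformations on Databricks, and every field name here is an illustrative assumption, not a real schema:

```python
# Minimal extract -> normalize -> validate pipeline sketch.
# Field names ("id", "amount", "state") are hypothetical.

def extract(raw_records):
    """Simulate pulling records from an external source (API/Kafka/Kinesis)."""
    return [r for r in raw_records if r is not None]

def normalize(record):
    """Coerce a raw record into a consistent schema."""
    return {
        "id": str(record.get("id", "")).strip(),
        "amount": float(record.get("amount", 0) or 0),
        "state": str(record.get("state", "")).upper(),
    }

def run_pipeline(raw_records):
    """Extract, normalize, and drop records that fail basic validation."""
    normalized = [normalize(r) for r in extract(raw_records)]
    return [r for r in normalized if r["id"]]  # reject records without an id

raw = [
    {"id": 1, "amount": "12.5", "state": "md"},
    None,                            # dropped at extract
    {"amount": 3.0, "state": "va"},  # dropped at validation: no id
]
print(run_pipeline(raw))  # → [{'id': '1', 'amount': 12.5, 'state': 'MD'}]
```

The same three stages map directly onto Spark DataFrame reads, `withColumn` transformations, and filter predicates when the pipeline is lifted into Databricks.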
Required Qualifications:
- All candidates must pass public trust clearance through the U.S. Federal Government. This requires candidates either to be U.S. citizens or to pass clearance through the Foreign National Government System, which requires that candidates have lived in the United States for at least 3 of the previous 5 years, hold a valid, non-expired passport from their country of birth, and hold appropriate visa/work-permit documentation.
- Bachelor’s degree in Computer Science, Software Engineering, Data Science, Statistics, or related technical field.
- 10+ years of experience in software/data engineering, including data pipelines, data modeling, data integration, and data management.
- Expertise in data lakes, data warehouses, data meshes, data modeling, and data schemas (e.g., star, snowflake).
- Strong expertise in SQL, Python, and/or R, with applied experience in Apache Spark and large-scale processing using PySpark or sparklyr.
- Experience with Databricks in a production environment.
- Strong experience with AWS cloud-native data services, including S3, Glue, Athena, and Lambda.
- Strong proficiency with GitHub and GitHub Actions, including test-driven development.
- Proven ability to work with incomplete or ambiguous data infrastructure and design integration strategies.
- Excellent analytical, organizational, and problem-solving skills.
- Strong communication skills, with the ability to translate complex concepts across technical and business teams.
- Proven experience working with petabyte-level data systems.
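Of the modeling topics listed above, a star schema is the one most easily shown in miniature: a central fact table whose rows carry keys into surrounding dimension tables. A hedged stdlib sketch follows; the tables, column names, and values are invented for illustration, and real work would express this join in SQL or Spark:

```python
# Tiny star-schema join: one fact table, two dimension tables.
# All data and column names are hypothetical.

dim_patient = {101: {"name": "A. Smith", "state": "MD"}}
dim_provider = {7: {"name": "General Hospital"}}

fact_claims = [
    {"patient_id": 101, "provider_id": 7, "paid": 250.0},
    {"patient_id": 101, "provider_id": 7, "paid": 75.5},
]

def denormalize(fact_rows, patients, providers):
    """Join each fact row to its dimensions (the star-schema lookup)."""
    out = []
    for row in fact_rows:
        out.append({
            "patient": patients[row["patient_id"]]["name"],
            "provider": providers[row["provider_id"]]["name"],
            "paid": row["paid"],
        })
    return out

rows = denormalize(fact_claims, dim_patient, dim_provider)
total = sum(r["paid"] for r in rows)
print(total)  # → 325.5
```

A snowflake schema differs only in that the dimension tables themselves are further normalized into sub-dimensions, adding one more lookup per level.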
Desired Qualifications:
- Experience working with healthcare data, especially CMS (Centers for Medicare & Medicaid Services) datasets.
- CMS and Healthcare Expertise: In-depth knowledge of CMS regulations and experience with complex healthcare projects, in particular data infrastructure projects or similar initiatives.
- Demonstrated success providing support within the CMS OIT environment, ensuring alignment with organizational goals and technical standards.
- Demonstrated experience and familiarity with CMS OIT data systems (e.g., IDR-C, CCW, EDM).
- Experience with cloud platform services: AWS and Azure.
- Experience with streaming data (Kafka, Kinesis, Pub/Sub).
- Familiarity with data governance, metadata management, and data quality practices.
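The streaming item above boils down to a consume-and-aggregate pattern, regardless of whether Kafka, Kinesis, or Pub/Sub supplies the events. In the sketch below, a plain generator stands in for a real broker client, and the topic names are assumptions:

```python
from collections import Counter

# Stand-in for a Kafka/Kinesis consumer: a generator yielding (key, value)
# events. In a real pipeline these would come from a broker client library.
def event_stream():
    for evt in [("claims", 1), ("claims", 1), ("eligibility", 1)]:
        yield evt

def aggregate(stream):
    """Running per-topic event counts, the core of many streaming jobs."""
    counts = Counter()
    for key, n in stream:
        counts[key] += n
    return dict(counts)

print(aggregate(event_stream()))  # → {'claims': 2, 'eligibility': 1}
```

Because `aggregate` only iterates, the same function works unchanged whether the stream is a test fixture, a Kafka consumer, or a Kinesis shard iterator.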
Working Environment:
eSimplicity supports a remote work environment operating within the Eastern time zone so we can work with and respond to our government clients. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed by your manager.
Occasional travel for training and project meetings is expected, estimated at less than 25% per year.
Benefits:
We offer highly competitive salaries and full healthcare benefits.
Equal Employment Opportunity:
eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.