
Senior Engineer (Data Stack Focused)

at Rialtic, Inc.

Atlanta or Remote

About Rialtic

Rialtic is an enterprise software platform empowering health insurers and healthcare providers to run their most critical business functions. Founded in 2020 and backed by leading investors including Oak HC/FT, F-Prime Capital, Health Velocity Capital and Noro-Moseley Partners, Rialtic's best-in-class payment accuracy product brings programs in-house and helps health insurance companies gain total control over processes that have been managed by disparate and misaligned vendors. Currently working with leading healthcare insurers and providers, we are tackling a $1 trillion problem to reduce costs, increase efficiency and improve quality of care. For more information, please visit www.rialtic.io.

The Role:

We’re looking for a data-stack-focused engineer to join our core platform team. If you’re excited by the chance to work with “big data,” healthcare is the place to be. Rialtic works with the largest healthcare organizations in the United States. Our goal is to improve the healthcare revenue cycle and reduce administrative waste, making the system more efficient for everyone. We’re built on a modern, cloud-first stack, but our clients are often on legacy systems, so the challenges of data extraction, data mapping, and data processing are significant — and so is the opportunity to advance the state of the art.

We tackle challenges that are common to healthcare companies and healthcare data. Interface and interoperability standards exist (e.g., X12 EDI, HL7, FHIR), but nobody really follows them to the letter, and many of our interfaces involve legacy systems that predate those standards. While we create templates and establish best practices, every implementation is unique in some way, because we must adapt to the business processes and underlying assumptions of each client. We often fix inconsistencies in the data we receive, or have to determine which information is the most current or relevant across disparate systems over time. Our data pipelines run 24x7 with a mixture of batch and near-real-time/API-driven endpoints. You can’t work with PHI in lower environments, but properly de-identifying data removes much of the information needed for accurate analysis. (Try writing a measure that is sensitive to the date of service and the age of the patient when everything in the data you’re allowed to use has been stripped except the year of the event, the patient’s age can’t be specified numerically because they are elderly and live in a sparsely populated area, and, oh, by the way, we had to truncate the ZIP code too.) Our ability to parse, validate, process, write code against, and manage enormous volumes of data while performing complex analyses quickly and accurately is critical to our success.
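To make the de-identification trade-off above concrete, here is a minimal, hypothetical Python sketch of HIPAA Safe Harbor-style rules (field names and the sparse-ZIP list are illustrative assumptions, not Rialtic’s actual schema or process):

```python
from datetime import date

# Illustrative Safe Harbor-style rules:
#  - dates of service reduced to year only
#  - ages 90 and over collapsed into a single "90+" bucket
#  - ZIP codes truncated to 3 digits, zeroed out for low-population areas
SPARSE_ZIP3 = {"036", "059", "102"}  # example low-population prefixes (illustrative)

def deidentify(record: dict) -> dict:
    dos = record["date_of_service"]  # a datetime.date
    age = record["age"]
    zip3 = record["zip_code"][:3]
    return {
        "service_year": dos.year,                        # keep year only
        "age": "90+" if age >= 90 else age,              # top-code elderly ages
        "zip3": "000" if zip3 in SPARSE_ZIP3 else zip3,  # blank sparse areas
    }

print(deidentify({
    "date_of_service": date(2023, 5, 17),
    "age": 93,
    "zip_code": "03612",
}))
```

A measure that depends on exact date of service and numeric age simply cannot be computed from the output of a transform like this, which is exactly the tension the paragraph describes.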

If that sounds like a fun challenge, then you should apply for this position!

During any given week in this role, you might:

  • Work with clients and prospective clients to define and implement a data mapping strategy for healthcare claims and related data (both initial/historical data loads and ongoing data flows);
  • Write and test pipeline components, DAGs, and documentation for ETL/ELT, data validation, observability, and error reporting;
  • Partner with our cloud/SRE team to understand the performance characteristics and storage requirements for our data lake, data warehouses, and in/outbound file storage;
  • Assist our infosec team in documenting the provenance and classification of data sets and metadata, including our HIPAA-compliant data de-identification strategy and process;
  • Implement and test improvements to slow-running queries, refactor and propose schema changes, migrations, and entirely new tables/data stores for our transactional, operational, and analytical data; 
  • Participate with internal and external stakeholders to understand the business logic and other requirements (such as refresh latency) for our Web-based payment integrity solution, client data warehouse exports, and one-time/ad-hoc analysis needs;
  • Pilot a new tool (either something you helped build in SQL, Python, Go, or other languages, or a modern data stack tool from an open-source project or a third-party vendor) to help improve the automation and reliability of our data processing infrastructure; 
  • Serve as a peer reviewer for a colleague’s code, participate in an engineering architecture specification review, or work with the product management team to refine a set of requirements or break a story down into concrete tasks for implementation;
  • Monitor and manage ongoing batch and real-time data operations and troubleshoot issues for clients that include some of the largest healthcare organizations in the world.

Our tech stack includes (but is not limited to) languages and technologies like Golang, Python, SQL, shell scripts, AWS EC2, Athena, Aurora / PostgreSQL, Kafka / MSK, Kubernetes, SQLite, Airflow, Spark, and more! Part of what our ideal candidate brings to the table is an opinion about what a modern data stack looks like and what belongs or doesn’t belong in it (along with a willingness to be adaptable, of course).

You have:

  • 5+ years of hands-on experience as a data-focused software developer, including experience modeling, building, and evolving ETL/ELT data pipelines, warehouses/lakes, and other components of the data stack. (You understand core concepts pertaining to the modeling, tuning, and maintenance of the data stack, regardless of which specific databases or tools you’ve worked with.)
  • High proficiency with SQL; you would not be offended if someone described one or more of your past roles as a “data engineer.” (You have an opinion on whether or not NULL was a good idea. You know the difference between a relational database, a document-oriented database, and a pure key-value store, and when it makes sense to use each.)
  • 3+ years of meaningful coding experience with Python. (Our code for analyzing healthcare data and generating actionable insights is primarily written in Python, and it’s one of the most popular “second languages” among the folks on the team, so many internal tools are also written in Python.)
  • 2+ years of meaningful coding experience with a compiled language such as Golang, C#, or C. (Golang is our primary platform language, but we can teach you, and we believe that anyone who has become proficient in at least one programming language can gain proficiency in other languages. We are going to ask you about pointers, though.)
  • Excellent listening and interpersonal skills, and you consider yourself a lifelong learner. You’re motivated to learn new tools, explore unfamiliar data sets, synthesize information to see the big picture while effectively managing the details, and share your insights with others. (You aren’t afraid of writing or reading documentation and specifications – we’re a remote-first team, so the quality of our asynchronous communications makes a big difference in our effectiveness.)
  • If you have experience with any of the following, that would be great, but none of these are expectations or requirements: R, Pandas, NumPy, or other dataframe-based or stats/data-focused languages and tools; AWS (particularly with regard to Athena, Aurora, MSK, or self-managed PostgreSQL or Kafka); Docker and/or Kubernetes; observability tools like Datadog, Prometheus, or other things in the OpenTelemetry pantheon; Spark/PySpark, Airflow, or similar streaming and process-orchestration technologies; “modern data stack” tools like dbt, Databricks, Snowflake, and the like; healthcare experience generally and experience with healthcare EDI in particular, including things like HIPAA, techniques for securely dealing with PHI, data de-identification or statistical data generation tools, specific standards like ANSI X12 and HL7 FHIR, and related topics; experience with testing and test automation; exposure to queueing technologies such as SQS or RabbitMQ; experience living through HITRUST and/or SOC 2 certification, or with data security specifically; and whatever else sounds like it belongs on this list that we either forgot to mention or you’re going to teach us about!

Rialtic Values

  • High Integrity
    • Do the right thing. Provide candid feedback. Be humble and respectful.
  • Customer Value Comes First
    • Delivering value to our customers is our North Star.
  • Work as One Team
    • Collaborative, inclusive environment to advance our mission.
  • Be Bold & Accountable
    • Speak up. Take accountability. Continually improve.
  • Pursuit of Excellence
    • Innovate, iterate and chase the best possible outcomes.
  • Take Care of Yourself & Others
    • Prioritize the health and wellbeing of yourself and your teammates.

Rialtic Benefits

  • Freedom to work from wherever you work best and home office stipend to make it happen
  • Competitive compensation and meaningful equity
  • 401k with company matching
  • Flexible PTO and wellness stipend
  • Comprehensive health plans with generous contribution to premiums
  • Mental and physical wellness support through TalkSpace, Teladoc and One Medical subscriptions

We are headquartered in Atlanta, but we are remote friendly.

*USA Based*

Don’t meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. If you’re excited about this role but your experience doesn’t align perfectly with every qualification, we encourage you to apply anyway.

At Rialtic, we have built a total rewards philosophy that includes fair, equitable, competitive compensation that is performance and skillset based.

Our strategy is based on robust market research, including external advisory and salary sources specializing in national compensation, and thoughtful input from every level of our organization. It is a combination of a cash salary, equity, benefits, wellbeing, and opportunity.

Rialtic is an equal opportunity employer. All applicants will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. We will not have access to your personal Equal Employment Opportunity Commission information during the interview process. Rialtic is committed to providing access, equal opportunity, and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities. To request a reasonable accommodation, please let us know in your application or by email.

Please take the necessary steps to allow-list the Rialtic (@Rialtic) and Greenhouse (@Greenhouse.io) domains so that you receive all emails related to your application process. Also, please check your spam folder, as emails from LifeLabs and/or Greenhouse can be marked as spam.

TO ALL RECRUITMENT AGENCIES:
Rialtic does not accept agency resumes. Please do not forward resumes to Rialtic employees or any other company location. Rialtic is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company for this specific role.
