Senior Data Engineer

Remote U.S.

Orita

Orita combines advanced AI with easy-to-use tools to help brands maximize the value of their current customer lists. Optimize email audiences, improve email deliverability, and rescue lost revenue with Orita's suite of products.


About Orita 

Direct-to-consumer brands pay us to market less. Well, technically, they pay us to market more effectively. And, strangely (!), that often means marketing a lot less.

How? We use a lot of math and a lot of machine learning to decide which people on their subscriber lists actually want to hear from them right now.

And that’s where you come in. We need a talented Data Engineer to help us handle massive amounts of data efficiently and reliably. You'll play a crucial role in unifying our data pipeline with an event taxonomy and normalization layer, ensuring our machine learning models have high-quality data to work with. You’ll have a huge impact on our product, our culture, and building something great from the ground up. It will be a lot of fun.

You will be on a team of engineers, reporting to our Director of Data, Will Goldstein (he’s been doing data science and data engineering for the last decade). We look like a SaaS company, but we’re really building the data science and machine learning company to support the world’s best brands. You will work closely with our talented teams to enhance our data infrastructure and drive innovation in our data processing capabilities.

As a Senior Data Engineer, you will:

  • Design and build scalable and reliable data pipelines to handle large volumes of data from various sources.

  • Unify our data pipeline by developing an event taxonomy and normalization layer for consistent and accurate data (see the first sketch after this list).

  • Develop and maintain workflows using Airflow and dbt (see the DAG sketch after this list).

  • Collaborate with data scientists and machine learning engineers to facilitate seamless data integration for model training and deployment.

  • Optimize data retrieval and develop data models for storage and analysis.

  • Ensure data quality, integrity, and security throughout the data lifecycle.

  • Implement ETL/ELT processes and data integration solutions.

  • Contribute to feature engineering efforts to enhance model performance.

  • Set up and analyze A/B testing in big data environments.

  • Monitor and troubleshoot data pipelines and workflows to maintain optimal performance.
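
To give a flavor of the taxonomy work above: the sketch below shows one way a normalization layer might map raw events from different upstream sources onto a single canonical schema. It is illustrative only; the event names, fields, and mappings are hypothetical, not Orita's actual taxonomy.

    # Illustrative only: a hypothetical canonical event schema plus one
    # per-source parser. Field names and mappings are made up for this sketch.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class CanonicalEvent:
        event_type: str          # e.g. "email_opened", "order_placed"
        subscriber_id: str
        occurred_at: datetime    # always stored in UTC
        source: str              # which upstream system produced the event
        properties: dict = field(default_factory=dict)

    # Each source gets a parser that maps its raw payload onto the taxonomy.
    _ESP_EVENT_NAMES = {"Opened Email": "email_opened",
                        "Clicked Email": "email_clicked"}

    def normalize_esp_event(raw: dict) -> CanonicalEvent:
        return CanonicalEvent(
            event_type=_ESP_EVENT_NAMES[raw["event"]],
            subscriber_id=raw["profile_id"],
            occurred_at=datetime.fromtimestamp(raw["timestamp"], tz=timezone.utc),
            source="esp",
            properties={"campaign_id": raw.get("campaign_id")},
        )

The payoff of a layer like this is that downstream models consume one well-typed event stream instead of a different format per vendor.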
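
And a minimal sketch of the Airflow-plus-dbt orchestration mentioned above, assuming Airflow 2.x; the DAG name, task names, and module paths are hypothetical:

    # Illustrative only: a hypothetical hourly DAG that ingests raw events
    # and then builds the normalized models with dbt.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="events_pipeline",          # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_raw_events",
            bash_command="python -m pipelines.ingest_events",  # hypothetical module
        )
        transform = BashOperator(
            task_id="dbt_build_events",
            bash_command="dbt build --select events",
        )
        ingest >> transform   # normalize only after fresh raw data lands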

Your Ideal Background:

Please apply even if you don’t meet every requirement.

  • Proven experience as a Data Engineer, with a strong track record of delivering successful projects in data-intensive environments.

  • 5+ years of experience in data engineering.

  • Expertise in Python and SQL for data processing and manipulation.

  • Hands-on experience with big data tools.

  • Comfort with async Python.

  • Proficiency with DAG tools such as Airflow, dbt, or Dagster.

  • Willingness to work on reporting one day and deep infrastructure the next.

  • Experience in building and optimizing data pipelines, architectures, and data sets.

  • Strong understanding of data modeling, data warehousing, and ETL/ELT development.

  • Familiarity with cloud platforms like AWS, GCP, or Azure, and experience deploying data solutions in the cloud.

  • Familiarity with DevOps best practices, container technologies, and continuous integration.

  • Knowledge of best practices in data governance, data quality, and data security.

  • Strong analytical and problem-solving skills with keen attention to detail.

  • Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

  • Strong sense of responsibility and ownership; you own projects end-to-end, with a bias for solving problems and shipping well-tested, impactful features into production.

  • Intellectual curiosity. You enjoy iterating and improving systems, always seeking better ways to solve complex problems.

Bonus points for experience in the following:

  • Background in data science or machine learning.

  • Experience with A/B testing frameworks in big data environments.

  • Knowledge of MLOps practices and tools.

  • Experience working in the ecommerce or marketing technology space.

  • Infrastructure experience, particularly with Google Cloud Platform.

  • Building reliable integration platforms with third-party APIs and services.

Where you’ll work:

Remotely, with occasional in-person meetings. Bonus points if you’re based in or around New York City.

Orita is an Equal Opportunity Employer and does not discriminate on the basis of an individual's sex, age, race, color, creed, national origin, alienage, religion, marital status, pregnancy, sexual orientation, or affectional preference, gender identity and expression, disability, genetic trait or predisposition, carrier status, citizenship, veteran or military status and other personal characteristics protected by law. All applications will receive consideration for employment without regard to legally protected characteristics.
