Senior Data Engineer
🌎 Location: Remote within the US/Canada
⬆ Reporting Into: VP, AI Operations
💰 Compensation: $110,000-$130,000 base salary, based on qualifications and experience.
About Ceros
At Ceros, we believe that creating powerful digital experiences is essential for helping organizations tell their stories and differentiate their brands. In a world where buyers are inundated with endless digital noise, it’s critical for brands to deliver experiences that inspire, educate, and make a lasting impact on the audiences that matter most.
Our no-code content creation platform empowers businesses to transform the buying journey with rich, interactive experiences that capture the attention of buyers and drive enduring business outcomes. The platform makes it faster, easier, and more cost-effective to create premium digital experiences, allowing businesses to scale them across their go-to-market programs.
Today, Ceros powers some of the most engaging experiences on the web. In 2023 alone, interactive content created with Ceros — from websites and landing pages to pitch decks and case studies — was viewed more than half a billion times, with over 3 million comments added using MarkUp, our visual collaboration tool.
Working at Ceros means making an impact at scale. Our platform supports leading brands including Workday, Colliers, Getty, McKesson, and McKinsey. Ceros is backed by top-tier investors, including Sumeru Equity Partners, Grotech Ventures, and Greycroft.
The Role
Ceros is a dynamic and innovative organization focused on optimizing internal operations across Sales, Customer Success, and Finance through data-driven solutions. Our Ops Engineering team builds and maintains the data infrastructure and pipelines that power business intelligence, reporting, and automation.
We are a small but highly technical team with a strong foundation of experienced software and data engineers. As we expand our data capabilities, we are looking for a self-sufficient Senior Data Engineer to take ownership of our data warehouse, ETL processes, and data integrations.
This role will be responsible for building and maintaining scalable, efficient data pipelines that integrate data from multiple business systems. While this role will collaborate with our AI Ops software engineers, the primary focus will be on data infrastructure, ETL development, performance optimization, and governance to ensure our data is reliable, well-structured, and accessible for analysis and reporting.
We prioritize scalability, performance, and automation to streamline data operations and support informed decision-making across the company.
Key Responsibilities
- Own and lead the management of AWS Redshift, ensuring optimal performance, disk usage, and cost efficiency.
- Design and maintain scalable ETL pipelines using AWS Glue, Lambda, and Matillion to integrate data from Mixpanel, CRM platforms, and customer engagement tools.
- Optimize SQL-based data transformations and Redshift queries to improve performance and reliability.
- Automate data offloading and partition management, leveraging AWS services like S3 and external schemas.
- Ensure version control and documentation of all Redshift queries, ETL processes, and AWS configurations through a centralized GitHub repository.
- Develop monitoring and alerting for data pipelines using CloudWatch and other observability tools to ensure high availability and early issue detection.
- Implement and maintain data quality checks and governance processes to ensure accuracy and consistency across foundational tables.
- Collaborate with AI engineers and business stakeholders to enhance data accessibility and reporting for internal teams.
- Maintain and optimize BI dashboards in Metabase and HubSpot, ensuring accuracy and efficiency of business reporting.
- Manage key integrations between Redshift and external platforms, including Mixpanel, HubSpot, and Census, optimizing data accessibility and performance.
- Administer AWS infrastructure supporting Redshift, ensuring efficient resource utilization, IAM security, and cost management.
- Automate repetitive data tasks using Python and scripting to enhance data processes and improve team efficiency.
Practical stuff we anticipate you having
Must-Have:
- 5+ years of experience in data engineering, focusing on AWS Redshift and ETL pipeline development.
- Strong expertise in SQL performance tuning, schema management, and query optimization.
- Experience designing and maintaining ETL pipelines using AWS Glue, Matillion, or similar tools.
- Proficiency in JavaScript/TypeScript, with experience building custom ETL workflows and integrations.
- Hands-on experience with Python for data automation and scripting.
- Strong understanding of data warehousing best practices, ensuring high-quality, scalable data models.
- Experience with data monitoring and alerting tools such as AWS CloudWatch and New Relic.
- Ability to work independently in a fast-paced environment, collaborating across teams to support data-driven initiatives.
Nice-to-Have:
- Experience integrating customer data platforms (e.g., Mixpanel, Segment) and CRM tools.
- Familiarity with BI and visualization tools (Looker, Tableau, or similar).
- Knowledge of event-driven architectures and streaming data solutions.
- Background in supporting AI-driven automation through data engineering.
Why Join Us?
- Take ownership of critical data infrastructure in a small, highly technical team.
- Work on cutting-edge AI and automation initiatives that directly impact business operations.
- Autonomy to design and optimize scalable ETL and data warehousing solutions.
- Collaborate with a highly skilled, AI-driven engineering team that values innovation and technical excellence.
- Fully remote role with flexible work hours (significant overlap with US East Coast business hours required).
- Competitive salary and opportunities for career growth as we expand our data capabilities.
Key Things to Know
- We want you to start ASAP
- This is a full-time position that requires EST working hours
- This is a remote-first role
Benefits
📈 Stock options
🏥 Premium health insurance*
🏦 401K matching*
👶 Paid parental leave: 16 weeks for primary caregivers, 4 weeks for secondary caregivers
🌴 Flexible vacation days
🤒 Paid sick days
💵 Stipend for your home office setup
💻 Excellent gear (MacBook Air, external monitor, etc.)
👩‍💻👨‍💻 Stipend towards experiences in which Cerosians can collaborate, educate, and create social connections with one another
🏢 Unlimited access to co-working spaces around the globe
*Varies based on location
At Ceros, we are deeply committed to the recruitment, retention, and growth of diverse talent, uniting people from unique backgrounds in our shared passion for unlocking creativity through technology.
As an equal opportunity employer, we prohibit any unlawful discrimination against a job applicant on the basis of their race, color, religion, veteran status, parental status, gender identity or expression, transgender status, sexual orientation, national origin, age, disability or genetic information. We respect the laws enforced by the EEOC and are dedicated to going above and beyond in fostering diversity across our company.