Senior Software/Data Platform Engineer
Ottawa, Canada
Anaplan
At Anaplan, we are a team of innovators who are focused on optimizing business decision-making through our leading scenario planning and analysis platform so our customers can outpace their competition and the market.
What unites Anaplanners across teams and geographies is our collective commitment to our customers’ success and to our Winning Culture.
Our customers rank among the who’s who in the Fortune 50. Coca-Cola, LinkedIn, Adobe, LVMH and Bayer are just a few of the 2,400+ global companies that rely on our best-in-class platform.
Our Winning Culture is the engine that drives our teams of innovators. We champion diversity of thought and ideas, we behave like leaders regardless of title, we are committed to achieving ambitious goals and we have fun celebrating our wins.
Supported by operating principles of being strategy-led, values-based and disciplined in execution, you’ll be inspired, connected, developed and rewarded here. Everything that makes you unique is welcome; join us and be your best self!
At Anaplan, we are seeking a strong Senior Software/Data Platform Engineer to join our team in Ottawa, Canada, as a member of a global engineering organization with locations across the US, UK, Israel, and India.
Are you a creative problem solver who can both give and receive feedback? Do you lead with inclusion, collaboration, and openness? Do you have strong experience with high-scale B2B and B2C platforms?
As a Senior Software/Data Platform Engineer, you'll craft and help build the next generation of the Anaplan Platform. You're comfortable with both full-stack system design for the hybrid cloud and hands-on distributed systems programming, with a good knowledge of JVM and Linux internals.
This role is a full-time, immediate position. If you're ready to roll up your sleeves and seek unique problems that no one is solving in the tech space yet, keep reading.
Your Impact:
- Develop product capabilities for scalable data pipelines, data integration, and data management workflows using technologies such as Apache Spark, Unity Catalog, Apache Hive, Databricks, Apache Iceberg, Delta Tables, and Python.
- Design and implement highly scalable distributed systems and shared services infrastructure in a hybrid cloud environment using Python, Java 11, Kotlin, Kubernetes, and Docker.
- Provide technical leadership on data architecture, cloud engineering, and DevOps best practices.
- Collaborate with cross-functional teams—including product management, data engineering, UX design, data science, and customer success—to deliver user-centric, impactful solutions.
- Partner with management and engineering infrastructure teams to estimate, monitor, and optimize cloud infrastructure and platform costs.
- Mentor and guide engineers through technical leadership, including reviewing code changes to ensure adherence to best practices, performance standards, and code quality.
- Lead performance optimization, security enforcement, and reliability improvements.
- Stay up-to-date with advancements in cloud, data, and AI technologies to drive innovation and continuous improvement.
- Use data-driven approaches to measure the quality of our platform, and champion security and observability.
- Implement code to specification following standard methodologies, including test-driven development (TDD) and documentation.
- Serve as a core member of the team, ensuring design and implementation are at their best.
- Apply your judgment to determine what to defer and what problem needs to be solved now, adopting a pragmatic, business-oriented approach to evolving solutions over time.
- Influence architecture by sharing your knowledge and expertise.
Your Qualifications:
- 10+ years of software engineering experience, with 5+ years in data-focused roles.
- Proven experience designing and implementing cloud-native data solutions in AWS, Azure, and/or GCP.
- Strong expertise in distributed data processing frameworks like Apache Spark.
- Deep understanding of data lakehouse architectures and tools like Databricks, Delta Lake, Apache Iceberg, and Unity Catalog.
- Experience building data products that support BI tools, dashboards, and analytics, including support for ad hoc querying, semantic layers, and AI-enhanced insights.
- Experience building and/or using ETL/ELT, data quality, and data management platforms and solutions.
- Proficient in Python; experience with additional languages such as Scala or Java is a plus.
- Hands-on experience with data integration, transformation, and cataloging tools.
- Familiarity with modern data governance and security practices for enterprise systems.
- Strong problem-solving and communication skills.
- Experience building and maintaining a SaaS product at scale.
- Experience with containers and container-orchestration tools such as Docker and Kubernetes.
- Experience with monitoring and metrics infrastructure (e.g., Grafana, Prometheus, ELK).
- Experience with public cloud platforms (e.g., AWS, GCP, Azure).
- BS/MS/Ph.D. in Computer Science or related technical field, or equivalent practical experience.
Technologies you'll work with:
- Programming languages and tools: Python, Jupyter Notebook, Java
- Data Tools: Apache Spark, Databricks, Unity Catalog, Delta Lake
- Persistence: MySQL, CockroachDB (CRDB), Redis
- Container and Orchestration: Kubernetes/Docker
- Public cloud: AWS, GCP, Azure
- CI/CD/Build/Deploy: GitHub, Gradle, Maven, Jenkins, Harness, Artifactory
- Telemetry: Grafana, SignalFx, Splunk, Prometheus
Our Commitment to Diversity, Equity, Inclusion and Belonging
Build your career in a place that thrives on diversity, equity, inclusion, and belonging. We believe in a hiring and working environment where all people are respected and valued, regardless of gender identity or expression, sexual orientation, religion, ethnicity, age, neurodiversity, disability status, citizenship, or any other aspect which makes people unique. We hire you for who you are, and we want you to bring your authentic self to work every day!
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, perform essential job functions, and receive equitable benefits and all privileges of employment. Please contact us to request accommodation.
Fraud Recruitment Disclaimer
It has come to our attention that fraudulent and fictitious job opportunities are being circulated on the Internet. Prospective candidates are being contacted by certain individuals, mainly through telephone calls, emails and correspondence, claiming they are representatives of Anaplan. The main purpose of these correspondences and announcements is to obtain privileged information from individuals.
Anaplan does not:
- Extend offers to candidates without an extensive interview process with a member of our recruitment team and a hiring manager via video or in person.
- Send job offers via email. All offers are first extended verbally by a member of our internal recruitment team whenever possible, and then followed up via written communication.
All emails from Anaplan will come from an @anaplan.com email address. Should you have any doubts about the authenticity of an email, letter or telephone communication purportedly from, for, or on behalf of Anaplan, please send an email to people@anaplan.com before taking any further action in relation to the correspondence.