Data Engineer
København, Denmark
Secomea
Access, control, and protect your OT environments remotely with Secomea, the all-in-one Secure Remote Access solution purpose-built for cyber-physical systems.
About Secomea
Secomea is a leader in Operational Technology (OT) secure remote access (SRA) for industrial control and critical infrastructure. We are at the forefront of one of the most exciting business opportunities today, characterized by a dynamic ecosystem of Machine Builders, System Integrators, and OT customers. As cyber security and compliance become top priorities, we are committed to protecting the factory floor.
Join us at an exciting time of growth as we scale our industrial B2B SaaS platform. We’re looking for a passionate Data Engineer who thrives on building scalable, high-impact data systems that power real business outcomes.
In this role, you’ll shape and maintain modern data pipelines using Databricks and Microsoft Azure, while optimizing our Lakehouse architecture for performance, efficiency, and cost.
You'll collaborate with talented teams across Product, Engineering, and Analytics to ensure that clean, reliable data fuels everything from day-to-day operations to strategic AI initiatives.
You’ll play a key role in automating business processes, enabling self-service analytics, and supporting the deployment of machine learning models. If you're excited to make a tangible impact in a data-driven company that’s growing fast, this is your opportunity.
What You’ll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows using Databricks (see the sketch after this list)
- Optimize and maintain our Lakehouse architecture for performance and cost-efficiency in Microsoft Azure and Databricks
- Collaborate closely with Product, Engineering, and Analytics teams to ensure reliable and timely data availability
- Build out data models that support self-service analytics and reporting
- Implement best practices for data governance, quality, and security
- Participate in code reviews, architecture decisions, and continuous improvement initiatives
- Identify and implement improvements to data reliability, efficiency, and quality across our pipelines and data architecture
- Automate key data-driven business processes to improve operational efficiency and support the organization’s ability to scale
- Support our AI and machine learning initiatives by ensuring access to high-quality, well-structured data and helping deploy models into production environments
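To give a concrete flavour of the pipeline work described above, here is a minimal sketch of a bronze-to-silver ETL step in PySpark with Delta Lake, the core of the stack listed below. All paths, table names, and columns are hypothetical illustrations, not Secomea's actual schema.

```python
# Minimal bronze-to-silver ETL sketch for Databricks; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created as `spark` on Databricks

# Read raw device telemetry landed in Azure Data Lake Storage
raw = spark.read.json("abfss://landing@exampleaccount.dfs.core.windows.net/telemetry/")

# Light cleaning: deduplicate, parse timestamps, drop rows missing a key field
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("device_id").isNotNull())
)

# Append to a partitioned Delta table that downstream models and BI can query
(
    clean.write.format("delta")
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("silver.device_telemetry")
)
```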
Tech Stack
- Cloud: Microsoft Azure (Data Lake Storage)
- Processing & Modeling: Databricks (PySpark, Delta Lake)
- Visualization: Microsoft Power BI
- Orchestration: Databricks Workflows
- Languages: Python, SQL
- Version Control & CI/CD: GitHub
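Likewise, the Lakehouse performance and cost optimization mentioned under "What You'll Do" typically includes routine Delta Lake maintenance, often scheduled through Databricks Workflows. A generic sketch, again with a hypothetical table name:

```python
# Routine Delta maintenance job; table name and ZORDER column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created as `spark` on Databricks

# Compact small files and co-locate rows on a frequently filtered column
spark.sql("OPTIMIZE silver.device_telemetry ZORDER BY (device_id)")

# Drop unreferenced data files past the 7-day retention window to control storage cost
spark.sql("VACUUM silver.device_telemetry RETAIN 168 HOURS")
```

Running a job like this on a schedule keeps query latency and storage spend predictable as the tables grow.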
Requirements
- 3+ years of experience as a Data Engineer or in a similar role
- Hands-on experience with Microsoft Azure and Databricks
- Strong proficiency in Python and SQL
- Experience designing and building large-scale, distributed data processing systems
- Familiarity with data warehousing and dimensional modeling
- Knowledge of data governance principles and best practices
- Excellent communication skills and ability to collaborate with both technical and non-technical stakeholders
Nice to Have
- Knowledge of CI/CD and DevOps practices for data
- Experience in B2B SaaS or working with product-led growth organizations
- Familiarity with BI tools (e.g., Power BI, Looker, Tableau)
Perks/Benefits
- Career development
- Startup environment