Staff Data Product Architect

Cambridge, United Kingdom

GE Vernova

The Energy to Change the World. We are GE Vernova. We are helping to accelerate the path to more reliable, affordable, and sustainable energy. With a passion for innovation, we deliver a diverse portfolio of leading technologies.



Job Description Summary

Join us at GE Vernova Grid Software to be part of the team leading the digital transformation of the energy market. As the world’s energy sector moves away from fossil fuels toward renewable energy sources, industrial companies are challenged with addressing this transition in transformative ways. Digitization will be key to making power-generating assets more efficient and the electric grid more secure and resilient. Our Geospatial products play a critical role in this transformation by supporting the design, modelling and maintenance of electric, gas and telecommunication networks. For more information on our strategy, check out the GridOS overview (https://www.gevernova.com/software/products/gridos).

You will be part of our Grid Software Engineering team, an Agile organization with a flexible working environment, where we are always looking to innovate our products and the processes and technologies we use. Our current focus is on leveraging our long history of Geospatial experience and expertise in building client-server products, and on evolving those products and tech stacks into modern cloud-based mapping and analytics microservices. We are seeking people who are passionate about technology, enjoy solving challenging problems, and value the positive impact their work has for our customers. We are looking to grow our team to meet these customer needs and will rely on your technical expertise and problem-solving abilities to innovate complex solutions.

As a Data Architect focused on building a backend data product, you will work closely with your product development peers in fast-paced Agile teams that are responsible for designing, developing, and delivering a data product that integrates into the broader GridOS Data Fabric. You will manage the data ingestion process, ensuring efficient data flow into the data product. Your expertise in schema design and query optimization will ensure data is structured efficiently and queried with optimal performance.

Job Description

Roles and Responsibilities

In this role you will:

  • Architect the data product to be scalable, performant, and well-integrated with the GridOS Data Fabric.
  • Lead the design and implementation of data ingestion pipelines for real-time and batch data.
  • Design and implement data models and schemas that support optimal data organization, consistency, and performance.
  • Ensure that schema design and query performance are optimized to handle increasing data volumes and complexity.
  • Ensure data governance, security, and quality standards are met.
  • Monitor the performance of data pipelines, APIs, and queries, and optimize for scalability and reliability.
  • Collaborate with cross-functional teams to ensure the data product meets business and technical requirements.
  • Design APIs (REST, GraphQL, etc.) for easy, secure access to the data.
  • Participate in technical and business discussions in the data domain that shape future architecture direction.
  • Gather and analyse data and develop architectural requirements at the project level.
  • Research and evaluate emerging data technologies and industry and market trends to assist in project development activities.
  • Coach and mentor team members.

Education Qualifications

Bachelor's Degree in Computer Science or “STEM” Majors (Science, Technology, Engineering and Math) with advanced experience.

Desired Characteristics

  • Proven experience as a Data Product Architect or Data Engineer with a focus on building data products and APIs.
  • Strong experience in designing and implementing data ingestion pipelines using technologies such as Kafka or ETL frameworks.
  • Hands-on experience in designing and exposing APIs (REST, GraphQL, gRPC, etc.) for data access and consumption.
  • Expertise in data modeling, schema design, and data organization to ensure data consistency, integrity, and scalability.
  • Experience with query optimization techniques to ensure fast and efficient data retrieval while balancing performance with data complexity.
  • Strong knowledge of data governance practices, including metadata management, data lineage, and compliance with regulatory standards (e.g. GDPR).
  • Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure) and leveraging cloud-native data services (e.g., S3, Redshift, BigQuery, Azure Data Lake).
  • In-depth knowledge of data security practices (RBAC, ABAC, encryption, authentication) to ensure secure data access and protection.
  • Experience working with data catalogs, data quality practices, and implementing data validation techniques.
  • Familiarity with data orchestration tools (e.g., Apache Airflow, NiFi).
  • Expertise in optimizing and maintaining high-performance APIs and data pipelines at scale.
  • Strong understanding of data federation and data virtualization principles for seamless data integration and querying across multiple systems.
  • Familiarity with microservices architecture and designing APIs that integrate with distributed systems.
  • Excellent communication skills with the ability to work effectively with cross-functional teams, including data engineers, product managers, and business stakeholders.
  • Ability to consult customers on the alignment of outcomes and desired technical solutions at an enterprise level.
  • Ability to analyse, design, and develop a software solution roadmap and implementation plan based on the current vs. future state of the business.

Additional Information

Relocation Assistance Provided: No





Perks/benefits: Flex hours, Team events

Region: Europe
Country: United Kingdom
