AI Data Engineer

Boston, Massachusetts, United States; Knoxville, Tennessee, United States; Remote; Tysons, Virginia, United States

RegScale

Future-proof your Cyber GRC and streamline your governance, risk, and compliance program with RegScale’s Continuous Controls Monitoring platform.


RegScale is a continuous controls monitoring (CCM) platform purpose-built to deliver fast, efficient GRC outcomes. We help organizations break out of the slow, expensive realities that plague legacy GRC tools by bridging security, risk, and compliance through controls lifecycle management. By leveraging CCM, organizations see major process improvements, including 90% faster certification times and 60% less audit prep time. Today’s expansive security and compliance requirements can only be met with a modern, CCM-based approach, and RegScale is the leader in that space.

Position:

We are looking for a forward-thinking AI Data Engineer to join our growing AI team. In this pivotal role, you will design scalable data architectures, ensure data integrity across systems, and curate high-quality datasets that power our generative AI and machine learning initiatives. You will collaborate closely with data scientists, product teams, and engineering stakeholders to enable reliable, secure, and efficient data operations for AI development and deployment.

This role is ideal for someone who thrives at the intersection of data engineering and natural language processing, with a deep understanding of data governance, privacy, and the unique demands of building datasets for AI systems.

Key Responsibilities:
  • Data Architecture & Schema Design: Design, implement, and manage robust data schemas and pipelines tailored for AI workflows across systems and integrations, including the core application, model training, fine-tuning, and evaluation.
  • Database Design & Data Modeling: Design and maintain scalable, efficient, and AI-optimized data models and database architectures (relational and NoSQL) to support data ingestion, transformation, and retrieval for generative AI and application needs.
  • Dataset Curation: Lead the creation, organization, and versioning of datasets used in model development (structured and unstructured), including data labeling and augmentation workflows.
  • Metadata & Lineage: Develop and maintain data and metadata tracking systems for datasets and AI models, enabling traceability, reproducibility, and responsible AI practices.
  • Data Governance & Security: Enforce data privacy, compliance (e.g., GDPR, HIPAA), and security best practices throughout the data lifecycle.
  • Cross-functional Collaboration: Work closely with data scientists to understand data needs for fine-tuning and experimentation; partner with product teams to ensure data alignment with application requirements.
  • Quality & Validation: Implement automated validation, lineage tracking, and quality assurance mechanisms to ensure data reliability at scale.
  • Tooling & Automation: Build or integrate tools to support data versioning, synthetic data generation, and performance monitoring.
  • Documentation & Standards: Define and promote best practices for dataset documentation, data contracts, and data lineage to ensure consistency and usability across teams.
Minimum Qualifications:
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Science, or a related field.
  • Proficiency in Python and SQL, with hands-on experience building ETL pipelines.
  • Deep understanding of structured and unstructured data handling.
  • Strong grasp of data modeling, metadata systems, and schema evolution.
  • Experience implementing data governance, security, and privacy controls in regulated environments.
  • Familiarity with tools like DVC, MLflow, Hugging Face Datasets, or custom dataset/metadata management systems.
Preferred Qualifications:
  • Experience supporting generative AI applications or LLM fine-tuning workflows.
  • Familiarity with synthetic data generation and data augmentation strategies.
  • Working knowledge of cloud platforms (AWS, GCP, Azure) and infrastructure tools like Docker.
  • Exposure to data contracts and API-based data delivery for downstream AI applications.
  • Knowledge of responsible AI, FAIR data principles, or machine learning compliance frameworks.


Perks/benefits: Career development

Regions: Remote/Anywhere; North America
Country: United States
