Data Research - Database Engineer
Mumbai, MH, India
Forbes Advisor
Forbes is a global media company focusing on business, investing, technology, entrepreneurship, leadership, and lifestyle.
Company Description
Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.
The Data Research Engineering Team is a brand-new team whose purpose is managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and maintaining data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.
A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.
Job Description
Key Responsibilities
Design, build, and maintain scalable and secure relational and cloud-based database systems.
Migrate data from spreadsheets or third-party sources into databases (PostgreSQL, MySQL, BigQuery).
Create and maintain automated workflows and scripts for reliable, consistent data ingestion.
Optimize query performance and indexing to improve data retrieval efficiency.
Implement access controls, encryption, and data security best practices to ensure compliance.
Monitor database health and troubleshoot issues proactively using appropriate tools.
Collaborate with full-stack developers and data researchers to align data architecture with application needs.
Uphold data quality through validation rules, constraints, and referential integrity checks.
Keep up-to-date with emerging technologies and propose improvements to data workflows.
Leverage tools like Python (Pandas, SQLAlchemy, PyDrive) and version control (Git).
Support Agile development practices and CI/CD pipelines where applicable.
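To illustrate the kind of automated import workflow the responsibilities above describe, here is a minimal, hypothetical sketch using Pandas and SQLAlchemy: it reads a spreadsheet-style CSV export, applies basic data-quality checks, and loads the result into a relational table. The table and column names are invented for the example, and SQLite stands in for PostgreSQL/MySQL so the snippet runs without a database server.

```python
# Hypothetical import-workflow sketch: CSV -> validation -> relational table.
# Table/column names are illustrative; SQLite stands in for PostgreSQL/MySQL.
import io

import pandas as pd
from sqlalchemy import create_engine

# In practice this would be a spreadsheet export or third-party feed.
csv_data = io.StringIO(
    "product_id,name,price\n"
    "1,Widget,9.99\n"
    "2,Gadget,19.50\n"
)

df = pd.read_csv(csv_data)

# Basic data-quality checks before loading (validation rules / constraints).
assert df["product_id"].is_unique, "duplicate primary keys in source file"
assert df["price"].ge(0).all(), "negative prices in source file"

# Swap the URL for a real PostgreSQL/MySQL connection string in production.
engine = create_engine("sqlite:///:memory:")
df.to_sql("products", engine, index=False, if_exists="replace")

with engine.connect() as conn:
    count = conn.exec_driver_sql("SELECT COUNT(*) FROM products").scalar()
print(count)  # 2
```

In a production pipeline the validation step would typically be richer (schema checks, referential-integrity lookups) and the load would be incremental rather than `if_exists="replace"`.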
Required Skills and Experience
Strong SQL skills and understanding of database design principles (normalization, indexing, relational integrity).
Experience with relational databases such as PostgreSQL or MySQL.
Working knowledge of Python, including data manipulation and scripting (e.g., using Pandas, SQLAlchemy).
Experience with data migration and ETL processes, including integrating data from spreadsheets or external sources.
Understanding of data security best practices, including access control, encryption, and compliance.
Ability to write and maintain import workflows and scripts to automate data ingestion and transformation.
Experience with cloud-based databases, such as Google BigQuery or AWS RDS.
Familiarity with cloud services (e.g., AWS Lambda, GCP Dataflow) and serverless data processing.
Exposure to data warehousing tools like Snowflake or Redshift.
Experience using monitoring tools such as Prometheus, Grafana, or the ELK Stack.
Good analytical and problem-solving skills, with strong attention to detail.
Team collaboration skills, especially with developers and analysts, and ability to work independently.
Proficiency with version control systems (e.g., Git).
Strong communication skills — written and verbal.
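As a small illustration of the indexing and query-performance skills listed above, the following sketch uses Python's built-in sqlite3 module to show how adding an index changes a query plan from a full table scan to an index search. The table and index names are invented for the example, and the exact plan wording varies by SQLite version.

```python
# Illustrative only: how an index changes a query plan (names are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Without an index, lookups by customer_id scan the whole table.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(before)  # plan detail mentions a table scan (wording varies by version)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(after)  # plan detail now references idx_orders_customer
```

The same idea carries over to PostgreSQL and MySQL (`EXPLAIN` / `EXPLAIN ANALYZE`), where reading the plan is the usual first step in query tuning.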
Preferred / Nice-to-Have Skills
Bachelor’s degree in Computer Science, Information Systems, or a related field.
Experience working with APIs for data ingestion and third-party system integration.
Familiarity with CI/CD pipelines (e.g., GitHub Actions, Jenkins).
Python experience using modules such as gspread, PyDrive, PySpark, or object-oriented design patterns.
Experience in Agile/Scrum teams or working with product development cycles.
Experience using Tableau and Tableau Prep for data visualization and transformation.
Why Join Us
● Monthly long weekends — every third Friday off
● Wellness reimbursement to support your health and balance
● Paid parental leave
● Remote-first with flexibility and trust
● Work with a world-class data and marketing team inside a globally recognized brand
Qualifications
5+ years of experience in database engineering.
Additional Information
Perks:
Monthly Office Commutation Reimbursement Program