Senior Data Scientist

Hong Kong - Remote

Intellectsoft

Trusted IT software development company. 17 years of innovation, user-centric designs, agile methods, and support for businesses and startups.


Intellectsoft is a software development company delivering innovative solutions since 2007. We operate across North America, Latin America, the Nordic region, the UK, and Europe. We specialize in industries including Fintech, Healthcare, EdTech, Construction, and Hospitality, partnering with startups, mid-sized businesses, and Fortune 500 companies to drive growth and scalability. Our clients include Jaguar Motors, Universal Pictures, Harley-Davidson, Qualcomm, and the London Stock Exchange. Together, our team delivers solutions that make a difference. Learn more at www.intellectsoft.net

Requirements

  • 7+ years of experience in data science, machine learning, and statistical modeling.
  • Strong programming skills in Python, SQL, and Spark.
  • Hands-on experience with MLflow, PyTorch, Spark MLlib, and FastAPI for model development and deployment.
  • Understanding of distributed computing and big data processing using Apache Spark and ClickHouse.
  • Proficiency in feature engineering, data preprocessing, and model tuning for large-scale datasets.
  • Experience in building and deploying ML models in production environments using TorchServe, FastAPI, or similar frameworks.
  • Knowledge of deep learning architectures (CNNs, RNNs, transformers) and their practical applications.
  • Strong grasp of MLOps best practices, including CI/CD for ML models, model monitoring, and retraining pipelines.
  • Understanding of real-time analytics and event-driven architectures for processing streaming data.
  • Experience working with SQL and NoSQL databases such as PostgreSQL, ClickHouse, and Delta Lake.
  • Strong ability to collaborate with data engineers, architects, and business analysts to ensure ML models align with business objectives.
  • Knowledge of A/B testing methodologies and causal inference techniques for evaluating model effectiveness.
  • Familiarity with cloud services (AWS, GCP, or Azure) for scalable model training and deployment.

Responsibilities

  • Design the architecture for the open-source-based data analytics platform.
  • Develop scalable data models, data pipelines, and data lakes.
  • Ensure integration of various data sources, including Kafka, NiFi, Apache Airflow, and Spark.
  • Implement modern data platform components like Apache Iceberg, Delta Lake, ClickHouse, and PostgreSQL.
  • Define and enforce data governance, security, and compliance best practices.
  • Optimize data storage, access, and retrieval for performance and scalability.
  • Collaborate with data scientists, engineers, and business analysts to ensure platform usability.

Benefits

  • 35 absence days per year for work-life balance
  • Udemy courses of your choice
  • English courses with native speakers
  • Regular soft-skills training
  • Excellence Centers meetups
  • Online/offline team-building events



