Data Scientist

San Diego, CA, US

Wiliot

Wiliot is a Sensing as a Service platform powered by IoT Pixels



Description

Wiliot was founded by the team that invented one of the technologies at the heart of 5G. Their next vision was to develop an IoT sticker, a computing element that can power itself by harvesting radio frequency energy, bringing connectivity and intelligence to everyday products and packaging—things previously disconnected from the IoT. This revolutionary mixture of cloud and semiconductor technology is being used by some of the world’s largest consumer, retail, food, and pharmaceutical companies to change the way we make, distribute, sell, use, and recycle products. 

Our investors include Softbank, Amazon, Alibaba, Verizon, NTT DoCoMo, Qualcomm, and PepsiCo. 

We are growing fast and need people who want to be part of the journey, commercializing Sensing as a Service and enabling “Intelligence for Everyday Things.” 

Wiliot is seeking an experienced Data Scientist to join our team in one of our key locations: San Francisco, New York, or Dallas. This role will focus on developing, deploying, and optimizing machine learning models that power Wiliot’s core intelligence platform. You will work closely with engineering, product, and customer-facing teams to derive insights from IoT data and deliver high-impact ML solutions at scale. 

Responsibilities

  • ML Model Development: Design, build, and validate machine learning models supporting applications such as anomaly detection, inventory state estimation, and supply chain behavior modeling on streaming and batch IoT data. 
  • Data Preparation & Feature Engineering: Collaborate with data engineers to prepare high-quality datasets, develop scalable feature pipelines, and manage training data lifecycle. 
  • Model Deployment: Implement and operationalize models using MLOps best practices. This includes packaging models, tracking experiments, and monitoring performance in production. 
  • Collaboration & Enablement: Work closely with engineering and product teams to align model development with real-world use cases. Enable business and technical stakeholders to leverage insights through accessible tools and visualizations. 
  • Streaming & Real-time Analytics: Contribute to the development of real-time intelligence features using tools such as Spark Structured Streaming, Kafka, and other big data frameworks. 
  • Tooling & Automation: Build internal tools and workflows to improve experimentation speed and reproducibility. Support automation of model training, evaluation, and retraining processes. 
  • Innovation & Research: Stay up to date with developments in the machine learning, AI, and IoT space. Evaluate and apply new techniques to enhance model accuracy and performance. 
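To give candidates a flavor of the anomaly-detection work described above, here is a minimal, self-contained sketch of flagging outliers in a stream of sensor readings with a rolling z-score. This is an illustrative example only, not Wiliot's actual pipeline; in practice this kind of logic would run at scale on Spark Structured Streaming or a similar framework.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the rolling statistics of the previous `window` values."""
    history = deque(maxlen=window)  # sliding window of recent readings
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 2:  # need at least 2 points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Simulated sensor stream: steady readings with one injected spike
stream = [20.0 + 0.1 * (i % 5) for i in range(50)]
stream[30] = 35.0  # injected anomaly
print(detect_anomalies(stream))  # → [30]
```

The same windowed-aggregation idea carries over directly to streaming engines, where the window becomes a time-based aggregation over a keyed event stream.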


Requirements

Education: 

  • Bachelor’s or Master’s degree in Computer Science, Statistics, Machine Learning, or a related field. 

Experience: 

  • 3–5 years of experience in data science roles, preferably in a technology or IoT-focused company. 
  • Proven experience developing and deploying machine learning models in production environments. 
  • Hands-on experience with Apache Spark (PySpark or Scala) for large-scale data processing. 
  • Experience working with time series or sensor data, particularly in a streaming or real-time context. 

Technical Skills: 

  • Proficient in Python and common ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch). 
  • Strong SQL skills and familiarity with data storage formats such as Parquet and Delta. 
  • Experience with cloud platforms such as AWS, GCP, or Azure. 
  • Exposure to ML lifecycle tools like MLflow, SageMaker, or Vertex AI. 
  • Familiarity with version control systems such as Git and containerized development (e.g., Docker). 

Additional Skills (Bonus): 

  • Experience with Java and/or Scala. 
  • Familiarity with streaming data tools such as Kafka, Spark Structured Streaming, or Flink. 
  • DevOps/MLOps experience, including CI/CD, model monitoring, and reproducibility best practices. 
  • Exposure to Databricks or Airflow for workflow orchestration. 
  • Understanding of modern software design patterns (e.g., microservices, functional programming). 
  • Strong communication skills to bridge technical and non-technical domains. 
  • Ability to manage multiple projects and prioritize in a fast-paced environment. 


#LI-Hybrid




Region: North America
Country: United States
