Senior Data Engineer

Hyderabad, India

Ninja Van

Ninja Van is Southeast Asia’s leading logistics provider, with the highest service coverage across six countries in the region. Experience the joy of hassle-free deliveries by shipping with Ninja Van today.



Ninja Van is a late-stage logtech startup that is disrupting a massive industry with innovation and cutting-edge technology. Launched in 2014 in Singapore, we have grown rapidly to become one of Southeast Asia's largest and fastest-growing express logistics companies. Since our inception, we’ve delivered to 100 million different customers across the region with added predictability, flexibility and convenience. Join us in our mission to connect shippers and shoppers across Southeast Asia to a world of new possibilities.
More about us:
  • We process 250 million API requests and 3TB of data every day.
  • We deliver more than 2 million parcels every day.
  • 100% network coverage with 2,600+ hubs and stations in 6 SEA markets (Singapore, Malaysia, Indonesia, Thailand, Vietnam and the Philippines), reaching 500 million consumers.
  • 2 million active shippers across all e-commerce segments, from the largest marketplaces to individual social commerce sellers.
  • Raised more than US$500 million over five rounds.
We are looking for world-class talent to join our crack team of engineers, product managers and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize team-first mentality, personal responsibility and tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company.
This role will lead the design, development and implementation of data solutions to business problems. The Data Engineer will be expected to evaluate the performance of current data solutions and to design and implement cloud and hybrid data solutions. The ability to adapt to and learn new technologies as business requirements evolve is also needed.

Requirements

  • Minimum 7 years’ experience working with one or more languages commonly used for data operations, including SQL, Python, Scala and R
  • Experience designing, using and maintaining relational databases such as PostgreSQL, MySQL and SQL Server
  • Experience working with NoSQL databases such as Redis, MongoDB
  • Familiarity with HTTP, HTML, JavaScript and networking
  • Excellent problem-solving skills and ability to learn through scattered resources
  • Thorough understanding of the responsibilities and duties of a data engineer, as well as established industry standards/best practices and documentation guidelines
  • Outstanding communication skills and the ability to stay self-motivated and work with little or no supervision

Added Advantage, if you meet any of these requirements

  • Experience running large-scale web scrapes
  • Familiarity with techniques and tools for crawling, extracting and processing data, e.g. Scrapy, Pandas, MapReduce, SQL, BeautifulSoup, Selenium (see the sketch after this list)
  • Experience with cloud-based data technologies
  • Experience with distributed systems utilizing tools such as Apache Hadoop, Spark or Kafka.
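
For illustration only, here is a minimal sketch of the kind of crawling and extraction work referenced above, using requests, BeautifulSoup and Pandas. The URL and CSS selectors are hypothetical placeholders, not part of Ninja Van's actual stack or workflows.

import pandas as pd
import requests
from bs4 import BeautifulSoup

# Hypothetical listing page; a real scrape would target an actual source.
URL = "https://example.com/products"

def scrape_listings(url):
    """Fetch one page and extract name/price pairs into a DataFrame."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    rows = []
    # The CSS selectors below are placeholders for whatever the real page uses.
    for item in soup.select("div.product"):
        rows.append({
            "name": item.select_one("h2.title").get_text(strip=True),
            "price": item.select_one("span.price").get_text(strip=True),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    print(scrape_listings(URL).head())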

Responsibilities

  • Lead the design, development and implementation of data architecture, pipelines and solutions using industry best practices
  • Perform ETL and ELT operations and administer data and systems securely, in accordance with enterprise data governance standards
  • Design and implement web scraping workflows
  • Monitor, maintain and optimize data pipelines proactively to ensure high service availability
  • Work with Data Scientists and ML Engineers to understand mathematical models and optimize data solutions accordingly
  • Create scripts and programs to automate data operations (a minimal sketch of such automation follows below).
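
As a rough illustration of the last point, below is a minimal Apache Airflow DAG (Airflow appears in the Tech Stack section that follows) that automates a daily extract-and-load step. The DAG name, tasks and data are hypothetical, not a description of Ninja Van's pipelines.

from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule_interval="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_parcel_load():
    @task
    def extract():
        # Placeholder: in practice this would query an upstream source system.
        return [{"parcel_id": 1, "status": "delivered"}]

    @task
    def load(rows):
        # Placeholder: in practice this would write to a warehouse table.
        print(f"Loaded {len(rows)} rows")

    load(extract())

daily_parcel_load()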
Tech Stack
  • Backend: Play (Java 8+), Golang, Node.js, Python, FastAPI
  • Frontend: AngularJS, ReactJS
  • Mobile: Android, Flutter, React Native
  • Cache: Hazelcast, Redis
  • Data storage: MySQL, TiDB, Elasticsearch, Delta Lake
  • Infrastructure monitoring: Prometheus, Grafana
  • Orchestrator: Kubernetes
  • Containerization: Docker, Containerd
  • Cloud Provider: GCP, AWS
  • Data pipelines: Apache Kafka, Spark Streaming, Maxwell/Debezium, PySpark, TiCDC
  • Workflow manager: Apache Airflow
  • Query engines: Apache Spark, Trino
Submit a job application
By applying to the job, you acknowledge that you have read, understood and agreed to our Privacy Policy Notice (the “Notice”) and consent to the collection, use and/or disclosure of your personal data by Ninja Logistics Pte Ltd (the “Company”) for the purposes set out in the Notice. In the event that your job application or personal data was received from any third party pursuant to the purposes set out in the Notice, you warrant that such third party has been duly authorised by you to disclose your personal data to us for the purposes set out in the Notice.


Category: Engineering Jobs

Tags: Angular APIs Architecture Data governance DataOps Data pipelines Distributed Systems Docker E-commerce Elasticsearch ELT ETL GCP Golang Hadoop Java JavaScript Kafka Machine Learning MongoDB MySQL Node.js NoSQL Pandas Pipelines PostgreSQL Privacy PySpark Python R React Scala Selenium Spark SQL Streaming

Perks/benefits: Startup environment

Region: Asia/Pacific
Country: India
