Senior Data Engineer

Trivandrum, India

Applications have closed

Armada

Armada’s platform combines connectivity, compute, and real-world AI to solve your toughest challenges right where your data is generated.


About the Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We’re looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.
About the Job
We are seeking a highly motivated Senior Data Engineer to join the Data Platform team for our Edge Computing AI Platform. As a Data Engineer on this team, you will help shape the future of data ingestion, processing, and analysis, while maintaining and improving our existing data systems.
If you are a highly motivated individual with a passion for cutting-edge AI, cloud, edge, and infrastructure technology and are ready to take on the challenge of defining and delivering a new computing and AI platform, we would love to hear from you.
Key Responsibilities
  • Build new tools and services that support other teams’ data workflows, ingestion, processing, and distribution.
  • Design, discuss, propose, and implement improvements to our existing data tooling and services.
  • Collaborate with a diverse group of people, giving and receiving feedback for growth.
  • Execute on big opportunities and help build a company culture that rises to the top of the AI and edge computing industry.
Preferred Qualifications
  • 6+ years of experience in software development.
  • Experience with data modeling, ETL/ELT processes, streaming data pipelines.
  • Familiarity with data warehousing technologies such as Databricks, Snowflake, BigQuery, or Redshift; data processing platforms such as Spark; and data file formats such as Avro and Parquet.
  • Strong understanding of the storage (object stores, data virtualization) and compute (Spark on Kubernetes, Databricks, AWS EMR, and similar) architectures used by data stack solutions and platforms.
  • Experience with scheduler tooling like Airflow. 
  • Experience with version control systems like Git and working within a standardized Git workflow.
  • Strong analytical and problem-solving skills, with the ability to work independently and collaboratively in a team environment.
  • Professional experience developing data-heavy platforms and/or APIs. 
  • A strong understanding of distributed systems and how architectural decisions affect performance and maintainability.
  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field.
Additional Skills and Experience
  • Experience analyzing ML algorithms that could be used to solve a given problem and ranking them by their success probability.  
  • Proficiency with a deep learning framework such as TensorFlow or Keras.
  • Understanding of MLOps practices and practical experience with platforms like Kubeflow or SageMaker.
Our Company is an equal opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. Our Company makes hiring decisions based solely on qualifications, merit, and business needs at the time.



