Intermediate Data Engineer
Remote
This is a remote position.
At Softgic we work with the coolest: with those who build, those who love what they do, and those who bring 100% attitude, because that's our #Cooltura. Join our purpose of making life easier with technology and be part of our team as a Data Engineer.
Compensation:
USD 20 - 25/hour.
Location:
Remote (for residents of Mexico, Guatemala, Colombia, Peru, Chile, Argentina, Paraguay, Brazil, Honduras, Jamaica, the Dominican Republic, Belize, Spain, the United States, Canada, Kenya, South Africa, India, and the Philippines).
Mission of Softgic:
At Softgic S.A.S. we work for the digital and cognitive transformation of our clients. Because quality is an essential factor for us, we incorporate the following principles into our policy:
- Deliver quality products and services.
- Achieve the satisfaction of our internal and external clients.
- Encourage in our team the importance of training to grow professionally and personally through development plans.
- Comply with the applicable legal and regulatory requirements.
- Promote continuous improvement of the quality management system.
- You have 3+ years of experience in data framework and data pipeline development.
- You are proficient in Scala.
- You have beginner-level experience with Apache Spark.
- English: native or fully fluent.
This vacancy is 100% remote for residents of: Colombia, Guatemala, Mexico, Peru, Chile, Belize, United States, Canada, Spain, Dominican Republic, Jamaica, Honduras, Brazil, Paraguay, Argentina, South Africa, Kenya, India, and the Philippines.
We are seeking a Data Engineer to help transform our data infrastructure, migrating from relational databases to a modern big data architecture. You will play a key role in defining event-driven data feeds, improving automation, and enhancing observability, alerting, and performance.
We’ve built a strong data engineering team to date, but we have a lot of work ahead of us, including:
- Migrating from relational databases to a streaming and big data architecture, including a complete overhaul of our data feeds.
- Defining streaming event data feeds required for real-time analytics and reporting.
- Leveling up our platform, including enhancing our automation, test coverage, observability, alerting, and performance.
- Building our next-generation data warehouse.
- Building our event stream platform.
- Translating user requirements for reporting and analysis into actionable deliverables.
- Enhancing the automation, operation, and expansion of our real-time and batch data environments.
- Managing numerous projects in an ever-changing work environment.
- Extracting, transforming, and loading complex data into the data warehouse using cutting-edge technologies.
- Building processes for top-notch security, performance, reliability, and accuracy.
- Providing mentorship and collaborating with fellow team members.
Requirements
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Operations Research, or a related field (required).
- 3+ years of experience building data pipelines.
- 3+ years of experience building data frameworks for unit testing, data lineage tracking, and automation.
- Fluency in Scala is required.
- Working knowledge of Apache Spark.
- Familiarity with streaming technologies (e.g., Kafka, Kinesis, Flink).
Nice-to-Haves:
- Experience with Machine Learning.
- Familiarity with Looker.
- Knowledge of additional server-side programming languages (e.g., Golang, C#, Ruby).
Benefits
- We're certified as a Great Place to Work.
- Opportunities for advancement and growth.
- Paid time off.
- Formal education and certifications support.
- Benefits with partner companies.
- Referral program.
- Flexible working hours.