SSr. Data Engineer
Buenos Aires / Córdoba, Argentina; São Paulo, Brazil; Mexico City; Guatemala City
Yalo Inc.
Sell more, engage, and build deep relationships through Conversational Commerce on WhatsApp and other messaging apps.
Hi! We’re Yalo! We’re on a mission to revolutionize how businesses sell in an omnichannel way with our intelligent sales platform and intelligent agents powered by cutting-edge AI.
Imagine a world where businesses seamlessly connect with their customers across every channel—offering personalized experiences, anticipating needs, and delivering what they want with ease. That’s the reality we’re building at Yalo.
Born in Latin America and driven by its spirit of innovation, we’re transforming sales for businesses around the globe. From empowering businesses in emerging markets to helping enterprises scale intelligently, we’re redefining how companies engage with their customers and drive growth.
At Yalo, we believe the future of sales is personalized, omnichannel, intelligent, and conversational. Join us as we empower businesses to build stronger relationships and achieve remarkable results worldwide!
Job Summary 🧾
We are seeking a seasoned SSr. Data Engineer with a solid understanding of data model management, BI, data pipeline orchestration, data quality, monitoring, and data integrity. You will be responsible for maintaining data pipelines and models, collaborating with Analytics Engineers to deliver insights for decision-making, improving efficiency, enhancing the architecture, and managing risk by interpreting complex data sets. Succeeding in data engineering demands a blend of technical skills and soft skills such as critical thinking and communication.
Your mission?
Build and maintain the infrastructure that enables a world-class Conversational AI platform to collect, store, and process large volumes of data efficiently. By ensuring data flows seamlessly from various sources to storage systems, you will empower data analysts and scientists to derive meaningful insights that drive business decisions.
What are the responsibilities for this role? 🧠
- Design, build, and maintain batch and real-time data pipelines in production.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Build and maintain Kafka stream pipelines.
- Support and participate in data architecture decisions and assist in data strategy.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Develop ELT processes to help extract and manipulate data from multiple sources.
- Help to design and maintain a semantic layer.
- Automate data workflows such as data ingestion, aggregation, and ELT processing.
- Transform raw data in the data warehouse into consumable datasets for both technical and non-technical stakeholders.
- Partner with data scientists and data analysts to deploy machine learning and data models in production.
- Build, maintain, and deploy data products for analytics and data science teams on the GCP platform.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Collaboration: Work closely with cross-functional teams, product managers, and stakeholders to ensure the delivery of high-quality software.
- Continuous Learning: Stay updated with the latest trends and technologies in data systems, ensuring that our systems remain state-of-the-art.
Job Requirements 💻
- Bachelor’s/Master’s degree in Computer Science, Information Systems, or a related field.
- Experience with Confluent is a plus.
- Experience working with the BigQuery cloud data warehouse and, ideally, other data platforms such as Databricks.
- Advanced SQL skills and experience with relational databases and database design.
- Strong foundation in data structures, algorithms, and software design.
- Working knowledge of object-oriented languages (e.g., Python, Java).
- Strong proficiency in data pipeline and workflow management tools (e.g., Airflow / Cloud Composer).
- Strong project management and organizational skills.
- Ideally, working knowledge of designing and implementing a BI semantic layer.
- Excellent problem-solving, communication, and organizational skills.
- Proven ability to work both independently and as part of a team.
What do we offer? 🥰
- Unlimited PTO policy
- Competitive, market-rate compensation
- Work–life balance
- Start-up environment
- International teamwork
- You, and nothing else, set the limits of your career here
We care,
We keep it simple,
We make it happen,
We strive for excellence.
At Yalo, we are dedicated to creating a workplace that embodies our core values: caring, initiative, excellence, and simplicity. We believe in the power of diversity and inclusivity, where everyone's unique perspectives, experiences, and talents contribute to our collective success. As we embrace and respect our differences, we strive to create something extraordinary for the benefit of all.
We are proud to be an Equal Opportunity Employer, providing equal opportunities to individuals regardless of race, color, religion, national or ethnic origin, gender, sexual orientation, gender identity or expression, age, disability, protected veteran status, or any other legally protected characteristic. Our commitment to fairness and equality is a fundamental pillar of our company.
At Yalo, we uphold a culture of excellence. We constantly challenge ourselves to go above and beyond, delivering remarkable results and driving innovation. We encourage each team member to take initiative and make things happen, empowering them to bring their best ideas forward and contribute to our shared goals.