Databricks Architect
Ciudad de México, CDMX, MX
Sequoia Connect
Discover global tech talent through our IT headhunting services, connecting companies with top digital transformation and IT advisory talent.
Description
Our client represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates, and Society to Rise™.
They are a USD 6 billion company with 163,000+ professionals across 90 countries, serving 1,279 global customers, including Fortune 500 companies. They focus on leveraging next-generation technologies, including 5G, Blockchain, Metaverse, Quantum Computing, Cybersecurity, Artificial Intelligence, and more, to enable end-to-end digital transformation for global customers.
Our client is one of the fastest-growing brands and among the top 7 IT service providers globally. Our client has consistently emerged as a leader in sustainability and is recognized in the ‘2021 Global 100 Most Sustainable Corporations in the World’ list by Corporate Knights.
We are currently searching for a Databricks Architect:
Responsibilities:
- Design and develop modern cloud data platforms.
- Perform complex coding using SQL, Spark, Python, and T-SQL.
- Develop and maintain solutions using Azure stack (Azure Data Lake, Azure Data Factory, and Databricks).
- Implement ETL processes using tools such as Talend, SSIS, and Informatica.
- Build data analytics solutions with a focus on performance and scalability.
- Implement data warehousing solutions.
- Work as a Data Engineer within an Azure Big Data environment.
- Develop and optimize data models and data engineering solutions.
- Utilize Scala or Python programming for data processing.
- Leverage Azure services such as Azure Data Lake Analytics, U-SQL, and Azure SQL Data Warehouse.
- Apply analytical and problem-solving skills in a big data environment.
- Utilize Lambda Architecture and data warehousing concepts.
- Manage source code using version control systems like Git.
- Deploy Azure resources using Azure ARM Templates.
Requirements:
- Bachelor's degree in Computer Science, Engineering, IT, Mathematics, or a related field.
- 8 to 10 years of experience in a Data Engineering role.
- Strong background in Data Science, Data Engineering, and automation within cloud-based global platforms.
- Experience with MS SQL, Azure Data Factory, Docker, and containerization tools.
- Expertise in cataloging and governance solutions for streaming with Databricks Delta Live Tables.
- Experience with multi-geo, multi-tier service design and operations.
- Proficiency in modern delivery methodologies such as SAFe Agile, Iterative, and Waterfall.
- Strong verbal and written communication skills in English.
- Excellent interpersonal and organizational skills for collaborating in distributed global teams.
- Ability to break down complex technical challenges into measurable and explainable decisions.
- Willingness to learn and adopt emerging technologies, defining engineering standards and automation processes.
- Operational experience, including early life support and root cause analysis.
Languages:
- Advanced oral English.
- Native Spanish.
Notes:
- Fully remote.
If you meet these qualifications and are pursuing new challenges, start your application to join an award-winning employer. Explore all our job openings on the Sequoia Careers page: https://www.sequoia-connect.com/careers/.