Principal Engineer, Data
Remote, United States
Verint
Verint is a leader in CX automation. The world’s most iconic brands rely on our open CCaaS platform and team of AI-powered bots to create tangible AI business outcomes, now.

At Verint, we believe customer engagement is the core of every global brand. Our mission is to help organizations elevate Customer Experience (CX) and increase workforce productivity by delivering CX Automation. We hire innovators with the passion, creativity, and drive to answer constantly shifting market challenges and deliver impactful results for our customers. Our commitment to attracting and retaining a talented, diverse, and engaged team creates a collaborative environment that openly celebrates all cultures and affords personal and professional growth opportunities. Learn more at www.verint.com.
Overview of Job Function:
The Principal Data Engineer will be part of the Big Data team and will be responsible for developing batch and event-driven data pipelines in the cloud (AWS) while improving the reliability, security, resilience, and performance of the Big Data pipeline. The Principal Engineer will bring software development expertise in Java and Spark and will contribute to daily operations, including CI/CD.
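As a rough illustration of the work described above, here is a minimal sketch of a Spark batch pipeline written with the Spark Java API. The S3 paths, column names, and quality rules are placeholder assumptions for illustration, not details of Verint's actual pipeline.

```java
// Minimal sketch of a batch pipeline: ingest raw JSON from S3, apply a
// quality gate, and write partitioned Parquet to a data lake location.
// All paths and columns below are hypothetical.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class InteractionBatchJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("interaction-batch-job")
                .getOrCreate();

        // Ingest raw JSON events from S3 (bucket/prefix are placeholders).
        Dataset<Row> raw = spark.read().json("s3a://example-bucket/raw/interactions/");

        // Basic quality gate: drop records missing required keys.
        Dataset<Row> clean = raw.filter(col("interaction_id").isNotNull()
                .and(col("event_time").isNotNull()));

        // Write to the lake as Parquet, partitioned by event date so
        // downstream reads can prune partitions.
        clean.withColumn("event_date", to_date(col("event_time")))
             .write()
             .mode("overwrite")
             .partitionBy("event_date")
             .parquet("s3a://example-bucket/curated/interactions/");

        spark.stop();
    }
}
```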
Principal Duties and Essential Responsibilities:
- Design and develop overall data architecture to ensure the effective storage, retrieval, and analysis of large-scale data sets
- Build and maintain scalable data pipelines for ingesting, processing, and transforming data from various sources into our data warehouse or data lake. Ensure data quality and data consistency throughout the pipeline
- Optimize data storage, retrieval, and processing performance through indexing, partitioning, and caching techniques (see the sketch after this list)
- Evaluate and select appropriate technologies, tools, and platforms for data processing
- Improve the availability and reliability of the data streaming pipeline
- Evaluate the scalability of data architecture to accommodate future growth and changing business needs
- Develop build and deployment automation for microservices using CI/CD
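The partitioning and caching techniques named in the duties above might look like the following minimal Spark Java sketch; the paths, column names, and partition count are placeholder assumptions. Repartitioning on the grouping key and caching the result lets several aggregations reuse a single shuffle.

```java
// Minimal sketch of partitioning and caching for reuse across
// aggregations. Paths and columns (customer_id, event_date) are
// hypothetical, not from this posting.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class CustomerRollups {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("customer-rollups")
                .getOrCreate();

        Dataset<Row> events = spark.read()
                .parquet("s3a://example-bucket/curated/interactions/");

        // Repartition on the grouping key so downstream aggregations
        // shuffle the data once, up front.
        Dataset<Row> byCustomer = events.repartition(200, col("customer_id"));

        // Cache the repartitioned dataset because two jobs reuse it.
        byCustomer.cache();

        Dataset<Row> dailyCounts =
                byCustomer.groupBy(col("customer_id"), col("event_date")).count();
        Dataset<Row> totalCounts =
                byCustomer.groupBy(col("customer_id")).count();

        dailyCounts.write().mode("overwrite")
                .parquet("s3a://example-bucket/marts/daily_counts/");
        totalCounts.write().mode("overwrite")
                .parquet("s3a://example-bucket/marts/total_counts/");

        spark.stop();
    }
}
```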
Essential Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 8+ years developing backend services with Java, Scala, or Python
- Strong knowledge of modern data stores, including traditional data warehouses and data lakes
- Experience designing microservices-based solutions that handle large-scale datasets
- Experience automating operational tasks through development and coding
- Hands-on experience using Maven, Jenkins, Git, JUnit
- 5+ years of experience using cloud services on AWS or another cloud provider
- Excellent communication skills
Preferred Requirements:
- Knowledge of Spark (Java, Scala)
- Knowledge of Databricks
- Hands-on experience with Docker and Kubernetes
- Experience working in cloud environments: AWS and/or Azure
- Familiarity with performance monitoring using Datadog
- Understanding of asynchronous Java programming (see the sketch after this list)
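Asynchronous Java programming, listed above, commonly means composing non-blocking work with CompletableFuture. The sketch below uses a hypothetical fetchProfile call (simulated locally) to show the pattern; it is not an API from this posting.

```java
// Minimal sketch of asynchronous composition with CompletableFuture:
// run a blocking lookup off the calling thread, transform the result,
// and supply a fallback on failure.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncLookup {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        CompletableFuture<String> profile =
                CompletableFuture.supplyAsync(() -> fetchProfile("cust-42"), pool)
                        .thenApply(String::toUpperCase)
                        .exceptionally(ex -> "fallback-profile");

        System.out.println(profile.join());
        pool.shutdown();
    }

    // Placeholder for a remote service call; simulated with a fixed value.
    static String fetchProfile(String customerId) {
        return "profile:" + customerId;
    }
}
```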
#LI-KD1
Salary range: $135K–$190K
Tags: Architecture AWS Azure Big Data CI/CD Computer Science CX Data pipelines Data quality Data warehouse Docker Engineering Git Java Jenkins Kubernetes Maven Microservices Pipelines Python Scala Security Spark Streaming
Perks/benefits: Career development