Senior Data Engineer
US Remote
Full Time Senior-level / Expert USD 125K - 176K
Coupa Software, Inc.
See all of your business spend in one place with Coupa to make cost control, compliance, and anything spend management related easier and more effective.
Why join Coupa?
- Pioneering Technology: At Coupa, we're at the forefront of innovation, leveraging the latest technology to empower our customers with greater efficiency and visibility in their spend.
- Collaborative Culture: We value collaboration and teamwork, and our culture is driven by transparency, openness, and a shared commitment to excellence.
- Global Impact: Join a company where your work has a global, measurable impact on our clients, the business, and each other.
Learn more on the Life at Coupa blog and hear from our employees about their experiences working at Coupa.
The Impact of a Data Engineer at Coupa:
The Data Engineer is a key role at Coupa, responsible for designing, building, and maintaining the data infrastructure that powers our business. The individual will work closely with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to develop data pipelines, transform raw data into usable formats, and ensure data quality and consistency across our platform. The Data Engineer will be responsible for designing and implementing robust data architectures that can handle large and complex datasets, and for creating and maintaining data warehouses, data lakes, and other data storage solutions.
Suitable candidates will have a strong background in data engineering, with experience in data modeling and ETL development. They will also have experience in programming languages such as Python or Java, as well as in cloud-based data storage and processing technologies such as AWS, Azure, or GCP.
The impact of a skilled Data Engineer at Coupa will be significant, ensuring that our platform is powered by reliable and accurate data and enabling us to deliver innovative solutions to our customers and partners. Their work will contribute to the overall success and growth of the company, enabling Coupa to continue to lead the market in cloud-based spend management solutions.
What You'll Do:
- Create and maintain optimal data pipeline architecture
- Optimize Spark clusters for efficiency and performance by implementing robust monitoring to identify bottlenecks using data and metrics, and provide actionable recommendations for continuous improvement
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies (a minimal pipeline sketch follows this list)
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
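To give a concrete flavor of the pipeline work described above, here is a minimal sketch of a daily extract-transform-load job defined with Airflow, one of the workflow tools listed in the requirements below. The DAG name, task bodies, and schedule are illustrative placeholders (Airflow 2.4+ assumed), not a description of Coupa's actual pipelines.

```python
# Minimal Airflow DAG sketch for a daily ETL run (Airflow 2.4+ assumed).
# All names and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the day's raw records from a source system.
    print(f"extracting data for {context['ds']}")


def transform(**context):
    # Placeholder: clean and reshape the raw extract into analytics-ready form.
    print("transforming raw extract")


def load(**context):
    # Placeholder: copy the transformed data into the warehouse.
    print("loading into the warehouse")


with DAG(
    dag_id="example_spend_etl",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in order each day.
    extract_task >> transform_task >> load_task
```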
What You Will Bring to Coupa:
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases
- Experience with processing large workloads and complex code on Spark clusters (a brief PySpark sketch follows this list)
- Proven experience in setting up monitoring for Spark clusters and driving optimization based on insights and findings
- Experience in designing and implementing scalable Data Warehouse solutions to support analytical and reporting needs
- Experience with API development and design using REST or GraphQL
- Experience building and optimizing "big data" data pipelines, architectures, and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores
- Strong project management and organizational skills
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 6-10 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and who has experience with the following software/tools:
- Experience with object-oriented/object function scripting languages: Python, Java, etc. Expertise with Python is a must
- Experience with big data tools: Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Working knowledge of stream-processing systems: Storm, Spark-Streaming, etc.
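As an illustration of the kind of Spark-based batch processing called out above, the following is a minimal PySpark sketch that aggregates a hypothetical invoice dataset stored on S3. The bucket paths, column names, and job name are assumptions made for the example only, not a description of Coupa's systems.

```python
# Minimal PySpark sketch (Spark 3.x assumed): aggregate a hypothetical raw
# invoice dataset from S3 into daily per-supplier spend totals.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("spend-aggregation-example")  # hypothetical job name
    .getOrCreate()
)

# Read the (hypothetical) raw invoice data; s3a paths require the hadoop-aws package.
invoices = spark.read.parquet("s3a://example-bucket/raw/invoices/")

# Aggregate spend per supplier per day.
daily_spend = (
    invoices
    .withColumn("invoice_date", F.to_date("invoice_ts"))
    .groupBy("supplier_id", "invoice_date")
    .agg(
        F.sum("amount_usd").alias("total_spend_usd"),
        F.count("*").alias("invoice_count"),
    )
)

# Write back partitioned by date so downstream reporting queries can prune partitions.
(
    daily_spend
    .repartition("invoice_date")
    .write
    .mode("overwrite")
    .partitionBy("invoice_date")
    .parquet("s3a://example-bucket/curated/daily_spend/")
)

spark.stop()
```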
- Based in Bay Area, California: $155,125 - $182,500
- Based in California: $149,600 - $176,000
- Based in Colorado: $125,800 - $148,000
- Based in New Jersey: $149,600 - $176,000
- Based in New York: $149,600 - $176,000
- Based in Washington: $137,275 - $161,500
The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location within the state.
At Coupa, we celebrate diversity and recognize its value to our customers and employees. Coupa is proud to be an equal-opportunity workplace and affirmative-action employer. All qualified applicants will receive consideration for employment regardless of age, race, color, religion, sex, sexual orientation, gender identity, national origin, genetic information, disability, veteran status, or any other applicable status protected by state or local law.
Please be advised that inquiries or resumes from recruiters will not be accepted.
By submitting your application, you acknowledge that you have read Coupa's Privacy Policy and understand that Coupa receives/collects your application, including your personal data, for the purposes of managing Coupa's ongoing recruitment and placement activities, including for employment purposes in the event of a successful application and for notification of future job opportunities if you did not succeed the first time. You will find more details about how your application is processed, the purposes of processing, and how long we retain your application in our Privacy Policy.
Tags: Airflow API Development APIs Architecture AWS Azkaban Azure Big Data Cassandra Computer Science Data pipelines Data quality Data warehouse EC2 Engineering ETL GCP GraphQL Java Kafka NoSQL Pipelines PostgreSQL Privacy Python RDBMS Redshift Spark SQL Statistics Streaming
Perks/benefits: Career development Team events Transparency