Senior Data Analyst
Remote job
We are looking for an experienced Senior Data Analyst with a strong background in ETL processes, Big Data ecosystems, Python, and AWS Lambda. The ideal candidate will design, build, and maintain scalable data pipelines, develop data-processing algorithms, and deliver actionable insights that directly impact our end users. You will collaborate with cross-functional teams to improve data accuracy, efficiency, and usability.
Job Description:
Data Pipeline Development: Design, implement, and manage ETL processes using modern Big Data technologies to extract, transform, and load data efficiently.
Big Data Management: Work with large datasets in distributed environments, ensuring proper storage, access, and performance optimization.
Cloud Integration: Leverage AWS services, specifically AWS Lambda, for data processing, automation, and integration with various data sources.
Data Analysis & Insights: Conduct in-depth data analysis using Python to uncover trends, patterns, and actionable insights for end-user consumption.
Algorithm Development: Develop and apply data algorithms to improve data models, ensure data quality, and drive better decision-making processes.
Collaboration: Work closely with engineering, product, and business teams to understand data needs and translate them into technical solutions.
Visualization & Reporting: Develop intuitive dashboards and reports to visualize data insights and present them to business stakeholders.
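To illustrate the kind of pipeline work this role involves, here is a minimal extract-transform-load sketch in Python. The function names, record layout, and in-memory "store" are hypothetical and purely illustrative; real pipelines here run on Big Data and AWS tooling.

```python
# Minimal ETL sketch (illustrative only; names and record layout are hypothetical).

def extract(raw_rows):
    """Extract: parse raw CSV-like rows into dicts."""
    return [dict(zip(["user_id", "amount"], row.split(","))) for row in raw_rows]

def transform(records):
    """Transform: cast types and drop malformed records."""
    clean = []
    for rec in records:
        try:
            clean.append({"user_id": rec["user_id"], "amount": float(rec["amount"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return clean

def load(records, store):
    """Load: aggregate amounts per user into the target store."""
    for rec in records:
        store[rec["user_id"]] = store.get(rec["user_id"], 0.0) + rec["amount"]
    return store

store = load(transform(extract(["u1,10.5", "u2,not_a_number", "u1,4.5"])), {})
```

The malformed `u2` row is dropped during the transform step, so only validated records reach the store.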
Requirements
Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
5+ years of experience in data analysis, data engineering, or a similar role.
Experience working with Big Data frameworks and cloud-native ETL processes.
Proven track record of delivering data-driven insights that impact customer experience.
Nice to Have:
Knowledge of machine learning techniques and how they can be applied to data.
Familiarity with CI/CD pipelines and automation tools.
Understanding of security best practices for handling sensitive data.
Required Skills
ETL: Proficiency in designing and managing end-to-end ETL pipelines.
Big Data: Hands-on experience with Big Data technologies such as Hadoop, Spark, or equivalent.
Cloud Computing: Proficiency with AWS services, especially Lambda, S3, and Redshift.
Programming: Strong Python skills for data manipulation, automation, and algorithm development.
Data Warehousing: Experience with relational databases (e.g., MySQL, PostgreSQL) and data lakes.
Data Modeling: Knowledge of data modeling techniques and best practices for structured and unstructured data.
Algorithms: Experience in developing and implementing algorithms for data processing and analysis.
Visualization Tools: Proficiency in visualization tools such as Tableau, Power BI, or similar.
Problem-Solving: Strong analytical and problem-solving skills with attention to detail.
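As a hedged sketch of how AWS Lambda and Python come together in this kind of role: the handler signature below follows the AWS Python runtime convention, but the event shape and processing logic are hypothetical, for illustration only.

```python
import json

def lambda_handler(event, context):
    """AWS Lambda entry point (Python runtime convention: handler(event, context)).

    The event shape below is hypothetical: a batch of JSON-encoded records
    to validate and summarize before loading downstream.
    """
    records = [json.loads(r["body"]) for r in event.get("records", [])]
    valid = [r for r in records if "amount" in r]
    total = sum(float(r["amount"]) for r in valid)
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(valid), "total": total}),
    }

# Local invocation with a stub event (the context argument is unused here):
result = lambda_handler(
    {"records": [{"body": '{"amount": "2.5"}'}, {"body": '{"id": 1}'}]}, None
)
```

Keeping the handler a pure function of its event, as above, makes it easy to test locally with stub events before wiring it to real triggers such as S3 or SQS.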