Software Engineer II - Data Engineer (Python)
Pune, Maharashtra, India
Uplight
Uplight supports energy providers around the world with clean energy solutions for customer engagement and grid flexibility management.

We are seeking a skilled and passionate Data Engineer with expertise in Python to join our development team. As a Data Engineer, you will play a crucial role in developing different components, harnessing the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and processing, and to identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets. You will coordinate with the rest of the team working on different layers of the infrastructure, so a commitment to collaborative problem solving, sophisticated design, and a quality product is important.
You will own development and its quality independently and be responsible for high-quality deliverables. And you will work with a great team with excellent benefits.

Responsibilities & Skills

You should:
- Be excited to work with talented, committed people in a fast-paced environment.
- Use a data-driven approach and actively work on the product & technology roadmap, both at the strategic level and at the day-to-day tactical level.
- Have proven experience as a Data Engineer with a focus on Python.
- Design, build, and maintain high-performance solutions with reusable, reliable code.
- Use a rigorous approach for product improvement and customer satisfaction.
- Love developing great software as a seasoned product engineer.
- Be ready, able, and willing to jump onto a call with a partner or customer to help solve problems.
- Be able to deliver against several initiatives simultaneously.
- Have a strong eye for detail and code quality.
- Have an agile mindset.
- Have strong problem-solving skills and attention to detail.
- Be an experienced developer, ideally with 4 or more years of professional experience.
- Design, build, and maintain scalable data pipelines and ETL processes to support business analytics and reporting needs.
- Strong proficiency in Python for building and automating data pipelines, ETL processes, and data integration workflows.
- Strong experience with SQL for querying and transforming large datasets, and for optimizing query performance in relational databases.
- Familiarity with big data frameworks such as Apache Spark or PySpark for distributed data processing.
- Hands-on experience with data pipeline orchestration tools like Apache Airflow or Prefect for workflow automation.
- Strong understanding of data modeling principles for building scalable and efficient data architectures (e.g., star schema, snowflake schema).
- Good to have: experience with Databricks for managing and processing large datasets, implementing Delta Lake, and leveraging its collaborative environment.
- Knowledge of Google Cloud Platform (GCP) services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage for end-to-end data engineering solutions.
- Familiarity with version control systems such as Git and CI/CD pipelines for managing code and deploying workflows.
- Awareness of data governance and security best practices, including access control, data masking, and compliance with industry standards.
- Exposure to monitoring and logging tools like Datadog, Cloud Logging, or ELK stack for maintaining pipeline reliability.
- Ability to understand business requirements and translate them into technical requirements.
- Expertise in solutions design.
- Demonstrable experience with writing unit and functional tests.
- Ability to deliver against several initiatives simultaneously as a multiplier.
- You are an experienced developer with a minimum of 4 years of professional experience.
- Python experience, preferably both 2.7 and 3.x
- Strong Python knowledge - familiar with OOP, data structures, and algorithms
- Work experience and strong proficiency in Python and its associated frameworks (e.g., Flask, FastAPI)
- Experience in designing and implementing scalable microservice architecture
- Familiarity with RESTful APIs and integration of third-party APIs
- 2+ years building and managing APIs to industry-accepted RESTful standards
- Application of industry security best practices to application and system development
- Experience with database systems such as PostgreSQL, MySQL, or MongoDB
- Experience with cloud infrastructure such as AWS, GCP, or another cloud service provider
- Serverless architecture, preferably AWS Lambda
- Solid CI/CD experience
- You are a Git guru and revel in collaborative workflows
- You work confidently on the command line and are familiar with all the goodies the Linux toolkit can provide
- Knowledge of modern authorization mechanisms, such as JSON Web Tokens (JWT)
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Perks/benefits: Career development, flex hours