Sr. Software Engineer - AWS Job

Indore, MP, IN

Yash Technologies




YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.

 

At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future.

 

We are looking to hire AWS Professionals in the following areas:

 

Experience

2 - 4 Years

Job Description

•    Master's/Bachelor's degree in Computer Science, Information Technology, or other relevant fields.
•    Minimum 2+ years of mandatory experience in advanced Data & Analytics architecture, ETL, and data engineering solutions using the following skills, tools, and technologies:
•    AWS Data & Analytics services: Athena, Glue, DynamoDB, Redshift, Kinesis, Lambda
•    Databricks Lakehouse Platform
•    PySpark, Spark SQL, Spark Streaming
•    Experience with any NoSQL database
•    2+ years of coding experience with a modern programming or scripting language (Python).
•    Expert-level skills in writing and optimizing SQL.
•    Experience operating very large data warehouses, data lakes, or data platforms.
•    Efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.
•    Data modelling: star schema, derived measures.
•    Experience working in Agile delivery.
•    Experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.
•    Excellent business and communication skills to work with business owners to understand data requirements.
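To give candidates a concrete sense of the data modelling and SQL skills listed above, here is a minimal, self-contained sketch of a star schema with a derived measure. It uses SQLite purely for illustration (in practice this would run on Redshift, Athena, or Databricks SQL), and all table and column names are invented for the example:

```python
import sqlite3

# Illustrative star schema: one fact table keyed to a date dimension.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")
cur.execute("CREATE TABLE fact_sales (date_key INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20240101, 2024, 1), (20240201, 2024, 2)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(20240101, 100.0), (20240101, 50.0), (20240201, 75.0)])

# A derived measure (monthly revenue), computed by joining the fact
# table to the dimension and aggregating.
rows = cur.execute("""
    SELECT d.year, d.month, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY d.year, d.month
    ORDER BY d.month
""").fetchall()
print(rows)  # [(2024, 1, 150.0), (2024, 2, 75.0)]
```

The same fact/dimension join pattern carries over directly to Spark SQL on Databricks; only the engine and scale change.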

Responsibilities :

•    You will be responsible for understanding client requirements and architecting robust data platforms on AWS cloud technologies and Databricks.
•    You will be responsible for identifying and creating reusable data solutions (accelerators, MVPs).
•    You will be responsible for creating reusable components for rapid development of the data platform.
•    Work closely with Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution.
•    Work with leadership to set the standards for software engineering practices within the data engineering team and support across other disciplines.
•    Play an active role in leading team meetings and workshops with clients.
•    Deliver and present proofs of concept of key technology components to project stakeholders.
•    Educate clients on cloud technologies and influence the direction of the solution.
•    Choose and use the right analytical libraries, programming languages, and frameworks for each task.
•    Help us shape the next generation of our products.
•    Explore and learn the latest AWS Data & Analytics and Databricks Platform features and technologies to provide new capabilities and increase efficiency.
•    Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Databricks, Spark, DynamoDB, Redshift, Kinesis, Lambda, Glue, Athena.
•    Analyze, re-architect, re-platform, and migrate on-premises data warehouses to data platforms on AWS cloud using AWS or third-party services.
•    Design and build production data pipelines from ingestion to consumption within a big data architecture, using Python, PySpark, and Databricks.
•    Design and implement data engineering, ingestion, and curation functions on AWS cloud using AWS-native or custom programming.
•    Perform detailed assessments of current-state data platforms and create an appropriate transition path to AWS cloud and Databricks.
•    Build and deliver high-quality data architecture to support business analysts, data scientists, and customer reporting needs.
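The ingestion-to-consumption pipeline work described above can be sketched in miniature as three stages: ingest, curate (apply a data-quality rule), and consume (aggregate for reporting). This is a plain-Python stand-in; in the actual role these stages would run on Glue or PySpark against S3 data, and the feed, field names, and quality rule here are invented for illustration:

```python
import csv
import io

# Hypothetical raw feed standing in for an ingested file (e.g. from S3).
RAW = "user_id,event,ts\n1,click,2024-01-01\n2,,2024-01-02\n1,view,2024-01-03\n"

def ingest(raw: str) -> list:
    """Ingestion: parse the raw feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def curate(records: list) -> list:
    """Curation: enforce a simple data-quality rule (drop rows missing 'event')."""
    return [r for r in records if r["event"]]

def consume(records: list) -> dict:
    """Consumption: aggregate event counts per user for reporting."""
    counts = {}
    for r in records:
        counts[r["user_id"]] = counts.get(r["user_id"], 0) + 1
    return counts

report = consume(curate(ingest(RAW)))
print(report)  # {'1': 2}
```

Separating the stages this way keeps each step independently testable, which is the same design motivation behind splitting ingestion and curation functions in a production pipeline.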
 

Required Behavioral Competencies

Accountability: Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team.

Collaboration: Participates in team activities and reaches out to others in team to achieve common goals.

Agility: Demonstrates a willingness to accept and embrace differing ideas or perceptions which are beneficial to the organization.

Customer Focus: Displays awareness of customers' stated needs and gives priority to meeting and exceeding customer expectations at or above the expected quality within the stipulated time.

Communication: Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision.

Drives Results: Sets realistic stretch goals for self and others to achieve and exceed defined goals/targets.

 

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale.

 

Our Hyperlearning workplace is grounded upon four principles:

  • Flexible work arrangements, free spirit, and emotional positivity
  • Agile self-determination, trust, transparency, and open collaboration
  • All support needed for the realization of business goals
  • Stable employment with a great atmosphere and an ethical corporate culture