Sr. Data Engineer (Platform)
Taipei
Gogolook
As the industry’s leading Trust-Tech company, Gogolook uses its extensive database and advanced AI technology to provide services in communication fraud prevention, financial technology, and SaaS risk management.
Why you should join Gogolook
1. Influential products: We build meaningful products that create value for society and defend against fraud.
2. Emphasis on self-growth: We encourage participation in technical communities and subsidize tickets for conferences and workshops, so your learning is continuously supported by the company.
3. Unleash your talent: We respect everyone's professional opinions and encourage team members to discuss ideas with one another and build great products together.
4. Transparent culture: Company information is shared openly with everyone; every member can read it, give feedback, and take part in proposals.
As a Data Platform Engineer at Gogolook, you will play a critical role in designing, building, and maintaining our data platform infrastructure. You will collaborate with cross-functional teams to ensure data availability, reliability, and scalability as the organization's data needs grow. If you are passionate about implementing DataOps practices that foster collaboration and automation in data workflows, we invite you to join our team as a Data Platform Engineer.
Responsibilities
- Design and construct comprehensive ETL pipelines for high-performance, large-volume applications.
- Build and enhance the data architecture to support business intelligence systems and product applications.
- Implement data governance and security measures to ensure data privacy and compliance with relevant regulations.
- Build automated processes to reduce development time and increase data reliability.
Minimum qualifications
- Minimum of 3 years of hands-on software development experience in languages such as Python, Scala, Java, Golang, etc.
- Expertise in optimizing data flow and processing efficiency, with experience handling data throughput of up to 1 million records per hour.
- Hands-on experience with cloud platforms like AWS, GCP or Azure.
- Proficiency in developing robust data pipelines, including data collection and ETL (Extract, Transform, Load) processes.
- Experience designing and implementing various components of a data platform, including data ingestion, storage, data warehousing, and data orchestration.
- Experience deploying infrastructure as code using CloudFormation or Terraform in AWS, GCP or Azure.
- Experience with continuous integration and continuous deployment (CI/CD) processes.
Preferred qualifications
- Experience with Docker, Kubernetes, or Helm charts.
- Experience with machine learning platforms such as Kubeflow, MLflow, or Vertex AI.
- Experience with API design and implementation.
- Experience with streaming tools such as Kafka, AWS Kinesis, Spark Streaming, or Flink.
Tags: APIs Architecture AWS Azure Business Intelligence CI/CD CloudFormation Data governance DataOps Data pipelines Data Warehousing Docker ETL FinTech Flink GCP Golang Helm Java Kafka Kinesis Kubeflow Kubernetes Machine Learning MLFlow Pipelines Privacy Python Scala Security Spark Streaming Terraform Vertex AI
Perks/benefits: Career development Conferences Startup environment Team events