Senior Data Engineer
Athens
Kpler
Unlock global trade intelligence with Kpler. Real-time data for businesses to plan, grow, and thrive sustainably.
At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in commodities, energy, and maritime sectors.
Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 500 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.
The Price and Arbitrage team is responsible for providing real-time arbitrage opportunities by mining Kpler shipping and trade data. The team is also responsible for ingesting, cleaning, storing, and publishing our price datasets. As a staff engineer, you will participate in the evolution of the data platform and related services, with the aim of extracting and consolidating disparate data sources, applying algorithms to large datasets, and making them available to both internal and external clients.
We make things happen: We act decisively and with purpose, going the extra mile.
We build together: We foster relationships and develop creative solutions to address market challenges.
We are here to help: We are accessible and supportive to colleagues and clients, with a friendly approach.
Our People Pledge
Don’t meet every single requirement? Research shows that women and people of color are less likely than others to apply if they feel like they don’t match 100% of the job requirements. Don’t let the confidence gap stand in your way, we’d love to hear from you! We understand that experience comes in many different forms and are dedicated to adding new perspectives to the team.
Kpler is committed to providing a fair, inclusive and diverse work-environment. We believe that different perspectives lead to better ideas, and better ideas allow us to better understand the needs and interests of our diverse, global community. We welcome people of different backgrounds, experiences, abilities and perspectives and are an equal opportunity employer.
By applying, I confirm that I have read and accept the Staff Privacy Notice
Responsibilities:
- Provide architectural guidance and technical hands-on leadership.
- Drive the very best engineering practices and architectural standards.
- Demonstrate strong software development skills in back-end and database technologies.
- Shape the roadmap in collaboration with the product team. Help the team build ambitious yet sustainable plans.
- Push for operational excellence: build robust, scalable, cost-efficient applications.
- Make sure the team commits to its SLOs, and help improve them.
You are or have:
- BSc/MSc in computer science, computer engineering or equivalent.
- Significant experience working with Python, Scala (or an equivalent language), and Kafka.
- Proven track record of architecting and developing high-throughput, low-latency data pipelines.
- Experience with DevOps and Infrastructure-as-Code practices.
- Proficiency in building and consuming RESTful APIs.
- Comfort with SQL and NoSQL databases for both OLTP and OLAP workloads.
Nice to have:
- Knowledge of Spark.
- Experience with Elasticsearch.
- Experience with AWS (or another cloud provider) using Terraform.
- Some experience with TradingView.