Data Engineer

Sloka IT Solutions

Views: 145

Updated: 26-11-2025

Location: Schaerbeek, Brussels Capital

Category: IT - Software

Industry: IT Services, IT Consulting

Position: Mid-Senior level

Employment type: Full-time


Job description

Hello,

Greetings from Sloka IT Solutions,

We are hiring a Data Engineer for our client in Belgium.

If you are available and interested, kindly apply or connect with us to discuss further.

I’ll be happy to provide more information.

Please find the job description for your reference.

Thank you, and have a nice day.

As a Data Engineer, you will play a key role in preparing the infrastructure and data used to deliver high-quality data products. You will help us design, develop, and maintain data pipelines that deliver insights. Using a DevOps approach, you will make sure the overall system is running at all times by automating tasks, so you can spend your time on creating rather than deploying. You will also make sure the system is appropriately tested and monitored using suitable methods and tools. You will collaborate with the other data engineers and data scientists of the Advanced Analytics team to create the simplest possible effective data landscape and improve the delivery speed of future AI use cases.

You will be trusted to

• Conceive and build data architectures

• Participate in the short/mid/long term vision of the overall system

• Simplify & optimize existing pipelines if needed

• Execute ETL (extract/transform/load) processes from complex and/or large data sets

• Ensure data is easily accessible and can be used with the required performance, even at high scale

• Participate in the architecture and planning of the big data platform to optimize the ecosystem’s performance

• Create large data warehouses fit for further reporting or advanced analytics

• Collaborate with machine learning engineers for the implementation, deployment, scheduling and monitoring of different solutions

• Ensure robust CI/CD processes are in place

• Promote DevOps best practices in the team

Requirements

We are looking for strong candidates with the following academic and professional experience:

• A Master’s degree in Informatics, Engineering, Mathematics, or a related field

• Demonstrable relevant experience (3+ years) with big data platforms (Hadoop, Cloudera, EMR, Databricks, ...)

• Technical knowledge in:

• Data pipeline management

• Cluster management

• Workflow management (Oozie, Airflow)

• Management of SQL and NoSQL databases

• Large file storage (HDFS, data lakes, S3, Blob Storage, ...)

• Strong knowledge of Scala and Python

• Strong knowledge of and experience with Spark (Scala and PySpark)

• Strong knowledge of CI/CD concepts

• Experience with stream processing technologies such as Kafka, Kinesis, or Elasticsearch

• Good knowledge of a cloud environment

• A high-level understanding of data science concepts

• Knowledge of a data visualisation framework such as Qlik Sense is a plus

Your Profile

• You’re quality-oriented

• You are multi-disciplined, able to work with diverse APIs, and you understand multiple languages well enough to work with them

• You are an excellent problem analyser and solver

• You’re open-minded, collaborative, a team player, and ready to adapt to changing needs

• You are curious about new techniques and tools and eager to keep learning

• You’re committed to delivering, pragmatic, and solution-oriented

• Experience in telecom and/or financial sector is a plus

• Experience with an agile way of working is a plus

• Languages: English (very good reading, writing, and speaking skills) is a must


Application deadline: 10-01-2026
