Data Warehouse & Reporting Developer

Posting Start Date: 26 Feb 2026

Location: Brussels, BE

Company: Serco Plc

Package Description

Joining Serco's amazing team offers:

  • Competitive salary
  • Great career opportunities
  • Corporate benefits package
  • International environment
  • Possibility of hybrid work

Job Introduction

Serco is the preferred partner for European institutions and international organisations, offering ICT services and contact centre management across Europe. Our expertise extends from information system support to consultancy under government frameworks, supporting key entities such as the European Commission and executive agencies. As a leader in service integration and management, we ensure seamless delivery of services across various platforms, embodying our commitment to excellence in every aspect of our work.

We are hiring a Data Warehouse & Reporting Developer for the European Commission in Brussels, Belgium, and at various locations across Europe. The position is open to both employees and freelance contractors.

Key Responsibilities

  • Development and maintenance of a fully open-source data lakehouse.
  • Design and development of data pipelines to support scalable and reliable data workflows, processing large volumes of both structured and unstructured data (see the pipeline sketch after this list).
  • Integration of data from multiple sources, including databases, APIs, data streaming services, and cloud platforms.
  • Optimization of queries and workflows to improve performance and efficiency.
  • Writing modular, testable, and production-grade code.
  • Ensuring data quality through monitoring, validation, and quality checks, maintaining accuracy and consistency across the data platform.
  • Development of test programs.
  • Comprehensive documentation of processes to support effective data pipeline management and troubleshooting.
  • Assistance with system deployment and configuration.
  • Participation in meetings with other project teams.
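
As an illustration of the pipeline work described above, here is a minimal sketch of a daily ELT flow using Airflow's TaskFlow API. It is a hypothetical example, not project code: the DAG, table name, and task bodies are invented placeholders.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def example_elt_pipeline():
    """Hypothetical ELT flow: land raw data, then transform it in place."""

    @task
    def extract_and_load() -> str:
        # Placeholder: pull records from a source system (database, API,
        # or stream) and land them in a raw lakehouse table.
        return "raw.events"

    @task
    def transform(raw_table: str) -> None:
        # Placeholder: in an ELT pattern the transformation runs inside
        # the warehouse/lakehouse itself (e.g. a dbt model or SQL job).
        print(f"Transforming {raw_table} into curated models")

    transform(extract_and_load())


example_elt_pipeline()
```

In practice the transform step would invoke the platform itself (for example a dbt model or a Spark job) rather than print a message.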

Skills

  • Extensive hands-on experience as a Data Engineer or Data Architect working with modern, cloud-based, open-source data platform solutions and data analytics tools.
  • Excellent knowledge of data warehouse and/or data lakehouse design and architecture.
  • Excellent knowledge of open-source, code-based data transformation tools such as dbt, Spark, and Trino (see the transformation sketch after this list).
  • Excellent knowledge of SQL.
  • Good knowledge of Python.
  • Good knowledge of open-source orchestration tools such as Airflow, Dagster, or Luigi.
  • Experience with AI-powered assistants (e.g., Amazon Q) that support or streamline data engineering processes.
  • Good knowledge of relational database systems.
  • Good knowledge of event streaming platforms and message brokers such as Kafka and RabbitMQ.
  • Extensive experience in building end-to-end data pipelines and implementing ELT frameworks.
  • Understanding of open table formats such as Apache Iceberg and Delta Lake.
  • Proficiency with Kubernetes, and with Docker or Podman.
  • Good knowledge of data modelling tools.
  • Good knowledge of online analytical processing (OLAP) and data mining tools.
  • Ability to participate effectively in multilingual meetings.
  • Ability to work with a high degree of rigor and method, including adherence to naming conventions and coding standards.
  • Bachelor’s degree or higher in Computer Science, Information Technology, or a related field.
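
To illustrate the SQL-first transformation work referenced in the skills above, the following is a minimal PySpark sketch that aggregates invented raw events into a curated model; all table, column, and application names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch with invented data; in the role this would read from
# and write to lakehouse tables (e.g. Iceberg or Delta Lake).
spark = SparkSession.builder.appName("daily-aggregates").getOrCreate()

events = spark.createDataFrame(
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
    ["user_id", "amount"],
)

# Aggregate raw events into an analytics-ready model, the kind of
# transformation typically expressed in dbt, Spark, or Trino SQL.
daily = events.groupBy("user_id").agg(F.sum("amount").alias("total_amount"))
daily.show()

spark.stop()
```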