


With the Large Hadron Collider (LHC) at CERN being upgraded to the High-Luminosity LHC (HL-LHC), the four major experiments (ATLAS, ALICE, CMS and LHCb) are using the opportunity to upgrade their systems as well. In these so-called Phase II upgrades, the systems of each experiment are reworked or even replaced to enable the recording of a considerably increased event rate. In parallel, the resolution of the tracking subsystems will be increased to achieve a better event separation; the ITk Pixel detector of the ATLAS experiment serves as an example here. The increased resolution and the larger number of front-end (FE) chips also lead to a drastically increased data rate, which must be transferred out of the detector in a timely manner.

The upgrade is also used to replace the readout and processing chain outside of the detector volume. Here, a unification of the different subsystems as well as a shift towards commercial products and software can be observed. This is driven by the immense costs and by the increasing difficulty of maintaining custom systems over long time periods due to a lack of system experts. However, some parts of the processing still need to be done in hardware, e.g. within a field-programmable gate array (FPGA). Therefore, an interface between the hardware systems and the software running on a remote server needs to be implemented. While the PCIe interface has been quite common for this purpose, it limits the bandwidth, as each processing card provides only a single PCIe interface. Since a server hosting such cards is already busy forwarding the data via a commercial network, it cannot provide any data processing capability. It would therefore be more efficient to send the data directly from the FPGA, which would also allow the bandwidth to scale better. However, the standard protocols for reliable data transmission were designed with a software implementation in mind.
Their implementation in hardware would therefore be rather complicated, which raised the question of whether guaranteed data transmission is necessary at all. To answer this question, a network stack was implemented within this thesis to investigate the level of packet drop occurring in the transmission and how it could be reduced to an acceptable level. In this context, an emerging technique named eXpress Data Path (XDP) was evaluated. With its help, the transmission of 5.2 PB (i.e. 2.92 × 10^12 packets) within 168 h (i.e. one week) without a single missing packet was demonstrated.
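A common way to verify such a loss-free transmission is to embed a monotonically increasing sequence number in every packet and count the gaps on the receiving side. The sketch below illustrates this counting step in Python; it is a simplified, hypothetical example (the function name and payload layout are assumptions), not the implementation used in the thesis.

```python
def count_missing(received_seqs):
    """Count lost packets from the sequence numbers of received packets.

    Assumes the sender stamps each packet with a strictly increasing
    sequence number and that packets arrive in order (as with a single
    in-order stream); every skipped number then corresponds to exactly
    one dropped packet.
    """
    missing = 0
    expected = received_seqs[0]  # start counting at the first seen number
    for seq in received_seqs:
        missing += seq - expected  # gap size = number of lost packets
        expected = seq + 1
    return missing

# Example: out of sequence numbers 0..5, packet 3 never arrived.
print(count_missing([0, 1, 2, 4, 5]))  # → 1
```

In an XDP-based receiver, this kind of bookkeeping would typically run in the kernel-side program or on batches of frames delivered to user space, so that per-packet overhead stays low at multi-gigabit rates.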