In the following sections, we will first get a glimpse of HyperMake by running a simple "Hello, world" task.
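As a quick preview, a HyperMake task is declared in a `.hm` script as a named block of shell commands. The sketch below is illustrative only; it assumes the `task` declaration syntax and the hypothetical file name `hello.hm`, both of which the next section walks through in detail.

```
# hello.hm — a minimal HyperMake script (syntax previewed here; details follow)
task hello:
  echo "Hello, world!"
```

Invoking this task with the `hypermake` command-line tool would print the greeting; the exact invocation is shown in the next section.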

Then, we will gradually introduce more advanced features of HyperMake to build a pipeline for running the BEIR (paper) benchmark.[^1]

[^1]: BEIR is a robust and heterogeneous evaluation benchmark for zero-shot information retrieval. It includes a diverse set of retrieval tasks, such as web search, question answering, and entity retrieval. The benchmark is designed to evaluate the generalization capabilities of retrieval models across different tasks and domains.