LLM4Eval: Large Language Model for Evaluation in IR
- Hossein A. Rahmani
- Clemencia Siro
- Mohammad Aliannejadi
- Nick Craswell
- Charles L. A. Clarke
- Guglielmo Faggioli
- Bhaskar Mitra
- Paul Thomas
- Emine Yilmaz
2024 International ACM SIGIR Conference on Research and Development in Information Retrieval
Workshop description
Large language models (LLMs) have demonstrated increasing task-solving abilities not present in smaller models. Utilizing the capabilities of LLMs for automated evaluation (LLM4Eval) has recently attracted considerable attention in multiple research communities. For instance, LLM4Eval approaches have been studied in the context of automated relevance judgments, natural language generation, and retrieval-augmented generation systems. We believe that the information retrieval community can contribute significantly to this growing research area by designing, implementing, analyzing, and evaluating various aspects of LLMs applied to LLM4Eval tasks. The main goal of the LLM4Eval workshop is to bring together researchers from industry and academia to discuss LLMs for evaluation in information retrieval, including automated judgments, retrieval-augmented generation pipeline evaluation, the changing role of human evaluation, and the robustness and trustworthiness of LLM-based evaluation, as well as its impact on real-world applications. We also plan to run an automated judgment challenge prior to the workshop, in which participants will be asked to generate labels for a given dataset while maximizing correlation with human judgments. The workshop format is interactive, including roundtable and keynote sessions, and is designed to avoid the one-sided dialogue of a mini-conference.
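As a rough illustration of how challenge submissions might be scored against human judgments, the sketch below compares LLM-generated relevance labels with human labels using standard agreement and correlation measures. This is not the official challenge evaluation script; the label scale, data, and metric choices here are assumptions for illustration only.

```python
# Minimal sketch (not the official LLM4Eval challenge code): comparing
# LLM-generated relevance labels against human judgments. The 0-3 graded
# relevance scale and the example labels are illustrative assumptions.
from scipy.stats import kendalltau, spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel label lists: one label per query-document pair.
human_labels = [2, 0, 3, 1, 0, 2, 3, 1]
llm_labels   = [2, 1, 3, 1, 0, 2, 2, 1]

# Label agreement; quadratic weighting penalizes larger disagreements more.
kappa = cohen_kappa_score(human_labels, llm_labels, weights="quadratic")

# Rank correlation: how well the LLM preserves the ordering implied by humans.
tau, _ = kendalltau(human_labels, llm_labels)
rho, _ = spearmanr(human_labels, llm_labels)

print(f"Cohen's kappa (quadratic): {kappa:.3f}")
print(f"Kendall's tau: {tau:.3f}")
print(f"Spearman's rho: {rho:.3f}")
```

In practice, system-level correlations (e.g., comparing how runs are ranked under LLM labels versus human labels) are also commonly reported alongside such label-level agreement measures.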