Re-Ex Accepted at ICLR 2024: Enhancing LLM Trustworthiness

Author

Jacob Choi

Date Published

May 12, 2024

Exciting News! Our paper, "Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM Responses," has been accepted at the ICLR 2024 Workshop on Reliable and Responsible Foundation Models. We believe this work is crucial for enhancing the trustworthiness of LLM applications in industries such as finance.


🔍 About Re-Ex

Re-Ex addresses a critical challenge in deploying LLMs—hallucination, where models generate incorrect information—through:

  1. Identifying Errors: Using external tools to pinpoint inaccuracies in LLM responses.

  2. Explaining Errors: Prompting the LLM to explain inaccuracies using the identified evidence.

  3. Revising Responses: Revising the initial response based on the explanations, with prompting techniques that significantly reduce token count and inference time.
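The three steps above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: `retrieve_evidence` and `llm` are hypothetical stand-ins for an external fact-checking tool and an LLM API call, stubbed here with canned outputs so the flow is runnable.

```python
# Minimal sketch of the Re-Ex three-step revision pipeline.
# NOTE: `retrieve_evidence` and `llm` are hypothetical stubs, not the
# actual tools or prompts used in the paper.

def retrieve_evidence(response: str) -> list[str]:
    # Step 1: an external tool (e.g., a search API) would return
    # evidence contradicting claims in `response`. Stubbed here.
    return ["Evidence: the Eiffel Tower is located in Paris, France."]

def llm(prompt: str) -> str:
    # Hypothetical LLM call; returns canned text for illustration.
    if prompt.startswith("Explain"):
        return "The response wrongly places the Eiffel Tower in London."
    return "The Eiffel Tower is in Paris."

def re_ex(initial_response: str) -> str:
    evidence = retrieve_evidence(initial_response)        # Step 1: identify errors
    explanation = llm(                                    # Step 2: explain errors
        f"Explain the factual errors in: {initial_response}\n"
        f"Evidence: {evidence}"
    )
    revised = llm(                                        # Step 3: revise response
        f"Revise the response using this explanation: {explanation}\n"
        f"Original response: {initial_response}"
    )
    return revised

print(re_ex("The Eiffel Tower is in London."))
```

The key design point is that revision is conditioned on an explicit natural-language explanation of the errors (step 2), rather than on raw evidence alone.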


📊 Benefits & Industry Impact

Our method, Re-Ex, not only improves accuracy and efficiency but is also valuable for industries aiming to integrate LLMs seamlessly into their operations. The ability to reduce errors and streamline processing makes LLMs more dependable and practical for sectors like finance, healthcare, and legal, where precision is paramount.


🤝 Acknowledgements

A sincere thank you to all co-authors and collaborators for their dedication and contributions to this amazing research: Jy-yong Sohn, Junseong Kim, Juyeon Kim, Jeongeun Lee, and Yoonho Chang.

Learn more about our work: https://arxiv.org/pdf/2402.17097


Abstract

Mitigating hallucination issues is a key challenge that must be overcome to reliably deploy large language models (LLMs) in real-world scenarios. Recently, various methods have been proposed to detect and revise factual errors in LLM-generated texts in order to reduce hallucination. In this paper, we propose RE-EX, a method for post-editing LLM-generated responses. RE-EX introduces a novel reasoning step dubbed the factual error explanation step. RE-EX revises the initial response of LLMs in three steps: first, external tools are used to retrieve evidence of the factual errors in the initial LLM response; next, the LLM is instructed to explain the problematic parts of the response based on the gathered evidence; finally, the LLM revises the initial response using the explanations provided in the previous step. In addition to the explanation step, RE-EX also incorporates new prompting techniques to reduce the token count and inference time required for the response revision process. Compared with existing methods including FacTool, CoVE, and RARR, RE-EX provides better detection and revision performance with less inference time and fewer tokens on multiple benchmarks.

The code for RE-EX is available at: https://github.com/juyeonnn/ReEx