In this paper, we evaluate the capability of transformer-based language models to make inferences over uncertain text that includes uncertain rules of reasoning. We cover both Pretrained Language Models (PLMs) and generative Large Language Models (LLMs). Our evaluation results show that both generations of language models struggle with reasoning over uncertain text. We propose a novel end-to-end fine-tuning approach, Probabilistic Constraint Training (PCT), that utilizes probabilistic logical rules as constraints in the fine-tuning phase without relying on these rules at inference time. To assess the effectiveness of PCT, we use existing related corpora and, additionally, create a new and more challenging benchmark that, unlike the previous ones, uses instance-specific rules. Our study demonstrates that PCT improves transformer-based language models' intrinsic reasoning and makes their probabilistic logical reasoning process more explicit and explainable. Furthermore, PCT equips these models to effectively handle novel situations, including greater reasoning depth, new domains, and complex probabilistic structures.
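The constraint-based fine-tuning idea described above can be illustrated with a minimal sketch: a task loss is augmented with a penalty that fires when the model's predicted probabilities violate an uncertain rule such as "if the premise holds, the conclusion holds with probability r". The function names (`rule_penalty`, `pct_loss`) and the exact penalty form are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of training with a probabilistic rule as a soft
# constraint. The penalty form below (a hinge on the rule's implied
# lower bound) is an assumption for exposition, not the paper's method.

def rule_penalty(p_premise: float, p_conclusion: float, rule_prob: float) -> float:
    """Penalize a violation of the soft rule "premise -> conclusion (prob rule_prob)":
    the predicted conclusion probability should be at least
    rule_prob * predicted premise probability."""
    return max(0.0, rule_prob * p_premise - p_conclusion)

def pct_loss(task_loss: float, p_premise: float, p_conclusion: float,
             rule_prob: float, weight: float = 1.0) -> float:
    """Total fine-tuning objective: ordinary task loss plus a weighted
    constraint penalty. At inference time only the model is used; the
    rule appears solely in this training objective."""
    return task_loss + weight * rule_penalty(p_premise, p_conclusion, rule_prob)
```

For example, if the model assigns probability 1.0 to the premise but only 0.5 to the conclusion of a rule with strength 0.9, the penalty is 0.4; if the conclusion probability already exceeds the bound, the penalty is zero and only the task loss remains.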
Files and links (2)
- PDF: Teaching Probabilistic Logical Reasoning to Transformers (837.32 kB) — Published (version of record), conference proceeding, Open Access
- URL: Teaching Probabilistic Logical Reasoning to Transformers — Published (version of record), link to proceedings paper, Open Access
Details
Title
Teaching Probabilistic Logical Reasoning to Transformers
Publication Details
Findings of the Association for Computational Linguistics: EACL 2024, pp. 1615–1632
Resource Type
Conference proceeding
Conference
18th Conference of the European Chapter of the Association for Computational Linguistics (St. Julian's, Malta, March 17–22, 2024)
Publisher
Association for Computational Linguistics (ACL); ACL Anthology