
Recurrent Neural Networks for Guiding Proof Search in Propositional Logic / Gianluca Amato; Nicola Balestra; Marco Maggesi; Maurizio Parton. - Electronic. - Vol. 4039 (2025), pp. 113-126. (26th Italian Conference on Theoretical Computer Science, ICTCS 2025, Campus of Pescara, Department of Economic Studies of the University of Chieti-Pescara, Italy, 2025).

Recurrent Neural Networks for Guiding Proof Search in Propositional Logic

Nicola Balestra; Marco Maggesi; Maurizio Parton
2025

Abstract

DeepSAT is an ongoing research project investigating the use of deep reinforcement learning for automated theorem proving in propositional logic. In contrast to traditional SAT solvers, which focus on satisfiability checking, our aim is to construct formal proofs of validity within the sequent calculus framework. As a preliminary step toward this goal, we have focused on training recurrent neural networks to predict whether a given propositional formula is a tautology. These models will form the basis of the value function in the planned reinforcement learning architecture. Although in its early stages, DeepSAT lays the groundwork for a logic-agnostic, explainable, and energy-efficient alternative to existing neural approaches based on large language models, which require substantial computational resources.
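The abstract describes training networks to predict whether a propositional formula is a tautology. As a purely illustrative sketch (not taken from the paper), ground-truth labels for such a classifier could be generated for small formulas by exhaustive truth-table evaluation; the `is_tautology` helper and the formula encoding below are assumptions for this example:

```python
from itertools import product

def is_tautology(formula, variables):
    """Return True if the formula holds under every truth assignment.

    `formula` is a predicate over an assignment dict; `variables` lists
    the propositional variable names it reads. Exhaustive over 2^n rows.
    """
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Formulas represented as Python predicates over an assignment dict.
excluded_middle = lambda a: a["p"] or not a["p"]   # p ∨ ¬p, valid
contingent      = lambda a: a["p"] or a["q"]       # p ∨ q, not valid

print(is_tautology(excluded_middle, ["p"]))        # True
print(is_tautology(contingent, ["p", "q"]))        # False
```

Exhaustive evaluation is exponential in the number of variables, which is precisely why a learned value function is attractive for guiding proof search at scale.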
Year: 2025
Series: CEUR Workshop Proceedings
Conference: 26th Italian Conference on Theoretical Computer Science, ICTCS 2025
Venue: Campus of Pescara, Department of Economic Studies of the University of Chieti-Pescara, Italy
Authors: Gianluca Amato; Nicola Balestra; Marco Maggesi; Maurizio Parton
Files in this item:

Amato, Balesta, Maggesi, Parton (2025) Recurrent Neural Networks.pdf

Open access

Description: Copy of the file publicly available on the publisher's website
Type: Publisher's PDF (Version of record)
License: Creative Commons
Size: 1.04 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1439595
Citations
  • Scopus: 0