VERDI 2025

AI for V&V
Chair: Peter Popov

Dependability Assurance with Symbolic Reasoning in LLM-enabled Systems

András Gergely Deé-Lukács, András Földvári

at 16:00 in VERDI for 30 min

The rapid advancement of large language models (LLMs) creates new opportunities in cyber-physical systems, particularly in natural language interaction and decision support. However, the safety-critical nature of these systems requires LLMs to operate in a deterministic, verifiable, and reliable way. The solution proposed in the paper combines the flexibility of LLMs with the deterministic capabilities of formal approaches based on symbolic logic reasoning. The structured outputs generated by the LLMs are processed by a logic reasoning engine capable of handling contradictions, priorities, and fault-tolerance patterns. This validation process enables formally valid and auditable LLM-based decisions, even when different models produce conflicting outputs. The proposed approach is illustrated through case studies that demonstrate practical examples of symbolic evaluation.
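To make the pipeline described above concrete, the following is a minimal sketch (not the authors' implementation) of the general pattern: structured claims emitted by LLMs are checked symbolically for contradictions and resolved deterministically by priority before a decision is acted on. The claim schema, the priority scheme, and the example propositions are illustrative assumptions only.

```python
# Hypothetical sketch of symbolic validation of structured LLM outputs.
# The Claim schema and priority-based resolution rule are assumptions,
# not the method from the paper.

from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    proposition: str   # e.g. "valve_open" (illustrative name)
    value: bool        # truth value asserted by the model
    source: str        # which LLM produced the claim
    priority: int      # higher priority wins on conflict


def resolve(claims: list[Claim]) -> dict[str, bool]:
    """Detect contradictions between model outputs and resolve them
    deterministically by priority, so the final decision is auditable."""
    resolved: dict[str, Claim] = {}
    for claim in claims:
        prior = resolved.get(claim.proposition)
        if prior is None or claim.priority > prior.priority:
            resolved[claim.proposition] = claim
        elif claim.priority == prior.priority and claim.value != prior.value:
            # Equal-priority contradiction: refuse rather than guess.
            raise ValueError(
                f"Unresolvable contradiction on '{claim.proposition}' "
                f"between {prior.source} and {claim.source}"
            )
    return {p: c.value for p, c in resolved.items()}


if __name__ == "__main__":
    outputs = [
        Claim("valve_open", True, source="llm_a", priority=1),
        Claim("valve_open", False, source="llm_b", priority=2),
        Claim("pump_running", True, source="llm_a", priority=1),
    ]
    print(resolve(outputs))  # {'valve_open': False, 'pump_running': True}
```

In this toy run the two models disagree on "valve_open"; the higher-priority claim wins and the outcome is reproducible, which mirrors the abstract's point that conflicting model outputs can still yield a formally valid, auditable decision.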
