Translating natural language to first-order logic for logical fallacy detection
Implementation & Models
- The repository exposes a script taking --model_name (the LLM used for translation) and --nli_model_name (a HuggingFace NLI classifier), but it doesn't ship a pretrained NLI model, causing confusion about how to run it.
- Some commenters wish this were a fully trained, open model with RL-based translation rather than a wrapper around external models.
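The shape of that CLI can be sketched as follows. This is a hypothetical reconstruction, not the repository's actual code: only the two flag names come from the thread; the descriptions, defaults, and everything else are assumptions.

```python
import argparse

def build_parser():
    # Sketch of the script's interface, assuming the two flags named in
    # the discussion; the real script may differ.
    parser = argparse.ArgumentParser(
        description="Translate natural language to FOL and flag fallacies"
    )
    parser.add_argument("--model_name", required=True,
                        help="LLM used for the NL-to-FOL translation")
    parser.add_argument("--nli_model_name", required=True,
                        help="HuggingFace NLI classifier checkpoint")
    return parser

# Example invocation with illustrative (not confirmed) model names:
args = build_parser().parse_args(
    ["--model_name", "gpt-4", "--nli_model_name", "roberta-large-mnli"]
)
print(args.model_name, args.nli_model_name)
```

The confusion in the thread stems from --nli_model_name having no shipped default: users must locate or train a compatible NLI checkpoint themselves.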
Anti‑Propaganda Ambitions vs Reality
- Several participants are excited about using FOL to expose fallacies, decompose arguments, and help people see inconsistencies in propaganda and value systems.
- Others argue propaganda relies more on selective framing, omission, emotional salience, and identity than on formal fallacies; logical checking alone cannot capture those.
- Idea emerges of user‑side “filters” (LLM/FOL layers) that rewrite, annotate, and debias incoming media streams, but this is seen as only one part of a larger solution.
Human Rationality, Values, and Virtues
- Debate over how rational people are: some say humans largely reason within their value systems; others claim people flex values to preserve identities or idols.
- Distinction drawn between “values” (evaluated outcomes) and “virtues” (presumed goods). One view: conservative politics especially centers on virtues (e.g., “capitalism vs socialism”), which makes virtue‑framed propaganda powerful and relatively logic‑proof.
- Others counter that motivated reasoning and denial occur across political tribes and that empirical psychology on asymmetries is fragile.
First‑Order Logic: Capabilities and Limits
- Some praise FOL as the standard formalism worth learning; others note it’s poor at modeling time, belief, negation, and real human reasoning.
- Gödel and undecidability are discussed: they don’t forbid proof checking, but they limit universal decision procedures; heuristics still useful.
- Multiple comments stress that logic can verify consistency relative to a spec but not whether the spec (or moral axioms) is correct—echoing the “formal specification problem.”
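The verify-against-a-spec point can be made concrete in the decidable propositional fragment (full FOL validity is only semi-decidable, which is the limitation the Gödel comments circle around). A minimal sketch: a brute-force validity checker that confirms modus ponens is valid while "affirming the consequent," a textbook formal fallacy, is not. The checker and examples are illustrative, not from the paper or repository.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    # An argument is valid iff every truth assignment that satisfies all
    # premises also satisfies the conclusion. Enumerating assignments only
    # works for the propositional fragment; quantified FOL has no such
    # universal decision procedure.
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: P, P -> Q |= Q  (valid)
mp = is_valid(
    [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
    lambda e: e["Q"], ["P", "Q"],
)
# Affirming the consequent: Q, P -> Q |= P  (a formal fallacy)
ac = is_valid(
    [lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]],
    lambda e: e["P"], ["P", "Q"],
)
print(mp, ac)  # True False
```

Note what this does and does not establish: the checker verifies consistency of the inference relative to the formalization, but it cannot say whether the formalization (the "spec") faithfully captures the original natural-language argument.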
Datasets, Semantics, and Practicality
- Strong criticism of the LOGIC and LOGICCLIMATE benchmarks: examples mislabel tautologies and legitimate causal claims as fallacies and even quote a climate op‑ed selectively to manufacture a “false causality” case.
- Linguists and NLP veterans argue natural language semantics (Montague grammar, DRT, pragmatics, implicature) are far richer than the paper’s ad‑hoc FOL mappings, so robustness on real text is doubtful.
- Nonetheless, commenters see niche utility for constrained domains: contracts, laws, technical specs, classroom logic assistants, and pinpointing isolated fallacious sentences in articles.
Relation to LLMs and Informal Fallacies
- People share prompts already used with current LLMs to extract premises, conclusions, and fallacies and to “steelman” arguments; some examples show that state‑of‑the‑art LLMs handle textbook fallacies well.
- Several note that many real arguments are persuasive, informal, and probabilistic; detecting formal fallacies doesn’t settle truth or persuade opponents and can be waved away as “biased logic.”
- Informal fallacies like ad hominem or strawman are seen as pattern‑of‑rhetoric issues rather than pure logic; they may sometimes carry relevant information (e.g., about credibility), complicating rigid classification.
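The prompt-based approach commenters describe can be sketched as a simple template. The wording below is illustrative, written for this summary; it is not a prompt quoted from the thread, and the helper name build_prompt is hypothetical.

```python
# Hypothetical prompt along the lines commenters describe: extract premises
# and conclusion, name candidate fallacies, then steelman the argument.
FALLACY_PROMPT = """\
Analyze the following argument.
1. List each premise on its own line.
2. State the conclusion.
3. Name any formal or informal fallacies present, or write "none".
4. Steelman the argument: give its strongest honest version.

Argument:
{argument}
"""

def build_prompt(argument: str) -> str:
    return FALLACY_PROMPT.format(argument=argument)

print(build_prompt("He's wrong about taxes because he's rich."))
```

As the thread notes, even a correct fallacy label from such a prompt settles neither the truth of the conclusion nor the rhetorical question of whether, say, the speaker's wealth is genuinely relevant to their credibility.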