TU Wien Team Pairs ChatGPT-Style AI With Logic to Set Records

By teaming up two very different kinds of artificial intelligence, Vienna University of Technology researchers have solved tough logic problems faster than ever before. Their work shows how ChatGPT-style models can guide traditional algorithms, opening new possibilities for science and everyday decision-making.

Anyone who has stared at a half-finished Sudoku grid knows the power of a single, well-timed hint. Researchers at the Vienna University of Technology (TU Wien) have now shown that large language models, the kind of AI behind tools like ChatGPT, can provide that kind of nudge for some of the hardest logic problems in computer science — even though they cannot actually solve those problems themselves.

By letting a language model suggest extra rules for classic logic-based programs, the team sped up solutions and, in at least one case, found better answers than anyone had ever seen before.

The work, published in the Journal of Artificial Intelligence Research, was carried out within TU Wien’s iCAIML doctoral program.

The project brings together two branches of artificial intelligence that are usually kept apart.

“To understand why our discovery is so surprising, it helps to take a look at two completely different worlds of artificial intelligence,” Florentina Voboril, a doctoral student in TU Wien’s Institute of Logic and Computation, said in a news release.

On one side is symbolic AI, which tackles problems that can be written down in a precise, mathematical form. These systems follow strict logical rules to search through many possible options and pick those that satisfy all the constraints.

Classic examples include filling in a Sudoku grid, scheduling workers for shifts, routing deliveries or checking whether a complex digital circuit will behave correctly. Modern symbolic solvers can handle enormous search spaces, but they can still get bogged down when there are too many possibilities to explore.
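To make the idea concrete, here is a toy sketch of how a symbolic system works, assuming nothing beyond the article's description: it exhaustively searches every candidate assignment and keeps only those satisfying all constraints. The workers, shifts, and availability rules below are invented for illustration; real solvers use far more sophisticated search than this brute force.

```python
from itertools import product

# Toy scheduling problem: assign three workers to three shifts,
# by exhaustively checking every candidate assignment (illustrative only).
workers = ["Ana", "Ben", "Cara"]
shifts = ["morning", "evening", "night"]

def valid(assignment):
    """assignment maps each shift to one worker; check all constraints."""
    # Each shift must be covered by a different worker.
    if len(set(assignment.values())) != len(workers):
        return False
    # Hypothetical constraint: Ben is unavailable at night.
    if assignment["night"] == "Ben":
        return False
    # Hypothetical constraint: Ana cannot take the morning shift.
    if assignment["morning"] == "Ana":
        return False
    return True

# Brute-force search: try every way to fill the shifts.
solutions = [
    dict(zip(shifts, perm))
    for perm in product(workers, repeat=len(shifts))
    if valid(dict(zip(shifts, perm)))
]
```

The search space here is tiny (27 candidates), but it grows exponentially with problem size, which is exactly why solvers bog down on large instances.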

On the other side are large language models (LLMs) such as ChatGPT or Copilot. These systems are trained on massive amounts of text and learn to predict what words are likely to come next. Their internal workings are not based on explicit, human-readable rules, and their answers are often hard to explain step by step.

Because of that, LLMs are usually seen as a poor fit for tasks that require airtight logical reasoning. They are powerful at generating and summarizing language, but they are not designed to execute code or guarantee that every step of a solution is correct.

Voboril and colleagues posed a different question: rather than having a language model solve a logic problem directly, what if it could help set up the problem in a smarter way?

“We examined how symbolic and sub-symbolic AI can be combined to make use of the strengths of both worlds,” Voboril added.

In symbolic AI, a common challenge is that there are simply too many options to check. A solver might, in theory, try every possible way to assign numbers to a Sudoku grid or every possible way to assign nurses to shifts, but that quickly becomes impossible as the problem grows.

“Often you cannot simply try them all. That’s why it’s extremely helpful to have certain rules that eliminate parts of the search space from the start,” added Voboril.

Researchers sometimes add such extra rules, known as “streamliners,” by hand. A streamliner does not change what counts as a valid solution, but it prunes away regions of the search space that are unlikely to lead anywhere useful. That can make the solver much faster and can even guide it toward especially elegant or efficient solutions.
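The effect of a streamliner can be sketched with the classic n-queens puzzle (my example, not the paper's benchmark). The extra rule below, "place the first-row queen in the left half of the board," is a guess rather than a logical necessity: it discards some solutions, but every solution that survives still satisfies the original constraints, and the solver examines far fewer candidates.

```python
from itertools import permutations

def n_queens(n, streamliner=None):
    """Brute-force n-queens over column permutations.
    Returns (solutions found, candidates fully examined)."""
    examined = 0
    solutions = []
    for cols in permutations(range(n)):
        if streamliner and not streamliner(cols):
            continue  # streamliner prunes this candidate before the full check
        examined += 1
        # No two queens may share a diagonal.
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            solutions.append(cols)
    return solutions, examined

def left_half(cols):
    """Candidate streamliner: first-row queen in the left half."""
    return cols[0] < len(cols) // 2

full, full_examined = n_queens(6)
slim, slim_examined = n_queens(6, streamliner=left_half)
```

On a 6x6 board the streamlined search examines half as many candidates, and everything it finds is a genuine solution of the original puzzle.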

Voboril compares it to navigating a maze.

“Imagine we’re trying to find the shortest path out of a maze. If I already know that certain parts of the maze are not connected to any exit, I can block off those areas and focus on the rest. That way, you find a better solution faster,” she said.

The TU Wien team used a large language model to automatically propose such streamliners.

First, they took the code that a symbolic AI system would normally process — a formal description of the problem and its constraints. They then fed this code to an LLM. The language model did not run the code or compute a solution. Instead, it treated the code as text and tried to spot patterns and regularities.

Based on those patterns, the LLM suggested additional constraints that might safely narrow down the search. These candidate streamliners were then added back into the symbolic solver, which checked whether they helped or hurt performance.

In effect, the language model acted like a creative assistant, brainstorming promising shortcuts that human experts had not thought of, while the symbolic solver remained the rigorous engine that guaranteed correctness.
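The division of labor described above can be sketched as a generate-and-test loop. Everything here is hypothetical scaffolding: `propose_streamliners` stands in for an actual LLM call, the toy problem and the candidate rules are invented, and the "solver" is plain exhaustive search. The key point the sketch shows is that candidates are kept only if the rigorous solver confirms they still find a correct solution, and faster.

```python
# Toy problem: find integers x, y in 1..9 with x * y == 12 and x < y.

def base_constraints(x, y):
    return x * y == 12 and x < y

def propose_streamliners():
    """Stand-in for the LLM: returns candidate extra rules as (name, check).
    In the real system these would be generated from the problem's code."""
    return [
        ("x is even", lambda x, y: x % 2 == 0),
        ("x >= 3",    lambda x, y: x >= 3),
        ("y == 9",    lambda x, y: y == 9),  # too strong: kills all solutions
    ]

def solve(extra=None):
    """The 'symbolic solver': exhaustive, guaranteed-correct search.
    Returns (first solution or None, candidates fully checked)."""
    checked = 0
    for x in range(1, 10):
        for y in range(1, 10):
            if extra and not extra(x, y):
                continue  # streamliner prunes before the expensive check
            checked += 1
            if base_constraints(x, y):
                return (x, y), checked
    return None, checked

baseline, baseline_checked = solve()
kept = []
for name, check in propose_streamliners():
    sol, checked = solve(extra=check)
    if sol is not None and checked < baseline_checked:
        kept.append(name)  # correct solution found with less search: keep it
```

Correctness never depends on the language model: a bad suggestion (like `y == 9`) is simply discarded when the solver finds it unhelpful.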

“In this way, we were able to solve certain problems significantly faster than symbolic AI had been able to do so far. For one of these problems, we even set new world records — finding solutions that are better than any previously known ones,” Voboril added.

Those record-setting results highlight a broader point: even without step-by-step logical understanding, language models can still capture useful structure in highly technical domains. Their pattern-recognition abilities, honed on vast training data, can reveal regularities that are hard for humans to see.

The study suggests that combining symbolic and sub-symbolic AI could be a powerful strategy well beyond academic benchmarks. Many real-world tasks — from optimizing supply chains and public transit to planning hospital staff schedules or operating rooms — boil down to huge, rule-based search problems. Small improvements in how those problems are solved can translate into major savings of time, money and energy, or better service for patients and customers.

By letting language models propose streamliners and letting symbolic solvers enforce the rules, future systems could tackle these challenges more efficiently and flexibly than either approach alone.

For students and researchers, the work is also a reminder that the most exciting advances in AI may come not from choosing one camp over the other, but from finding creative ways to connect them.

Source: Vienna University of Technology