How LLMs are transforming software development
6 min read
AI · LLM · Software Development · Reasoning · Automation
LLM Reasoning: The New Puzzle Piece in Software Development

One aspect that excites me about LLM reasoning is that it's the missing piece of the puzzle for many "fuzzy" problems in software development. To date, most software solutions have been developed with deterministic logic. While specialized reasoning frameworks like expert systems, Bayesian inference networks, and constraint satisfaction solvers have existed for decades, they've required domain-specific expertise to implement effectively. Because of the high cost associated with such solutions, only the most valuable reasoning problems were considered worth solving.

Now, LLMs are democratizing reasoning capabilities, making it straightforward to apply contextual reasoning to problems that would be complex to model with explicit algorithmic logic. As software engineers, one of our most resource-intensive tasks is identifying every edge case and odd user journey so we can define an appropriate response for each scenario. Accommodating these edge cases drives up development complexity and implementation cost sharply. Experienced engineers and product managers typically begin by systematically identifying potential exceptions and deciding which to handle properly, which to fail loudly on, and which to deliberately leave out of scope.

LLMs fundamentally transform this paradigm. Rather than explicitly coding exhaustive deterministic logic, we can harness LLM-powered solutions to manage ambiguity and edge cases through contextual understanding and intent recognition. This approach enables systems to handle unexpected inputs gracefully. Workflow solutions that used to be too fuzzy to systematize can now be dumped into the fuzzy web of LLM reasoning to produce a low-cost, fast, and likely correct result. Likely correct is the key phrase here, but remember: at best, the work of humans is also only likely correct ;p
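As a minimal sketch of what intent recognition over ambiguous input can look like, assume a generic call_llm helper that stands in for whichever model provider you use; the intent categories and the canned response are made up purely for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (hosted API or local model).
    # Returns a canned response here so the sketch runs end to end.
    return ('{"intent": "shipping_update", '
            '"clarifying_questions": ["What is your new address?"]}')

def route_request(user_message: str) -> dict:
    """Ask the model to interpret an ambiguous request instead of
    hand-coding a branch for every possible phrasing."""
    prompt = (
        "Classify the user's request as one of: refund, shipping_update, "
        "account_change, other. If key details are missing, list the "
        "clarifying questions you would ask. Respond as JSON with the keys "
        "'intent' and 'clarifying_questions'.\n\n"
        f"User message: {user_message}"
    )
    return json.loads(call_llm(prompt))

# A message a rule-based router would likely mishandle:
print(route_request("hey, the thing I ordered last month still isn't here and I moved"))
```

The point isn't the specific prompt; it's that the branch-per-phrasing routing logic disappears, and the ambiguity is absorbed by the model plus a small amount of glue code.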

The implications for the software development lifecycle are substantial: development cycles can be shortened, features previously descoped due to complexity can now be implemented efficiently, and problems that used to be too expensive to solve can now be solved economically.

Systems can now handle unexpected situations much better. Take chatbots, for example: old-school bots needed programmers to manually define every possible user question and response path. With LLM reasoning, chatbots can infer what users mean, even when questions are unclear or unexpected. The same approach helps with complex workflows like:

  • Processing inconsistently formatted invoices from different vendors (see the sketch after this list)
  • Interpreting ambiguous project requirements and suggesting clarifications
  • Managing exceptions in approval processes without rigid escalation rules
  • Analyzing customer feedback without predefined sentiment categories
  • Translating technical jargon to plain language for different audiences
  • Adapting form validation based on context rather than strict rules
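To make the first bullet concrete, here's a rough sketch of invoice normalization under the same assumptions as before: a stand-in call_llm helper, an illustrative set of fields, and a canned response so the example runs without credentials. A production version would add schema validation, retries, and human review for low-confidence extractions.

```python
import json
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    invoice_number: str
    total: float
    currency: str

def call_llm(prompt: str) -> str:
    # Stand-in for your model provider; canned response for illustration.
    return ('{"vendor": "Acme Ltd", "invoice_number": "INV-0042", '
            '"total": "1299.50", "currency": "EUR"}')

def extract_invoice(raw_text: str) -> Invoice:
    """Normalize an inconsistently formatted invoice into fixed fields,
    letting the model absorb layout and wording differences between vendors."""
    prompt = (
        "Extract vendor, invoice_number, total (number only) and currency "
        "from the invoice text below. Respond as JSON with exactly those keys.\n\n"
        f"{raw_text}"
    )
    data = json.loads(call_llm(prompt))
    return Invoice(
        vendor=data["vendor"],
        invoice_number=data["invoice_number"],
        total=float(data["total"]),
        currency=data["currency"],
    )

print(extract_invoice("ACME Ltd. | Rechnung Nr. INV-0042 | Gesamtbetrag: 1.299,50 EUR"))
```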

Instead of breaking when faced with something new, LLM-powered systems can adapt and respond intelligently.

We're entering an era where software can also employ reasoning capabilities, rather than requiring engineers to anticipate and hardcode every path. This paradigm shift promises a new range of capable and adaptive applications, faster development, and superior user experiences.

New capability unlocked. Now we're challenged to set aside our old habits and ask how we adapt and apply this new superpower.

Gerrie van Wyk (MEng)
Cofounder of Resonancy

As a cofounder, Gerrie architects scalable software solutions at Resonancy. He specializes in streamlining complex business processes, building robust systems that drive efficiency and create new opportunities for innovation.