Instead of just generating the next response, it simulates entire conversation trees to find paths that achieve long-term goals.

How it works:

  • Generates multiple response candidates at each conversation state
  • Simulates how conversations might unfold down each branch (using the LLM to predict user responses)
  • Scores each trajectory on metrics like empathy, goal achievement, coherence
  • Uses MCTS with UCB1 to efficiently explore the most promising paths
  • Selects the response that leads to the best expected outcome
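The loop above can be sketched as a minimal MCTS with UCB1 over a conversation tree. All the LLM calls here (`generate_candidates`, `simulate_user_reply`, `score_trajectory`) are hypothetical stubs standing in for real model calls; the names are illustrative, not the project's actual API.

```python
import math
import random

random.seed(0)  # deterministic stubs for the sketch

# --- Hypothetical stand-ins for the real LLM calls ---
def generate_candidates(state, k=3):
    """Generate k candidate assistant responses for a conversation state."""
    return [f"{state}/r{i}" for i in range(k)]

def simulate_user_reply(state):
    """Predict how the user might respond (stubbed)."""
    return f"{state}/u{random.randint(0, 1)}"

def score_trajectory(state):
    """Score a finished trajectory on empathy / goal / coherence (stubbed)."""
    return random.random()

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0
        self.untried = generate_candidates(state)

def ucb1(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent_visits) / child.visits))

def mcts(root_state, iterations=200, depth=3):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend by UCB1 while the node is fully expanded
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # Expansion: try one unexplored candidate response
        if node.untried:
            resp = node.untried.pop()
            child = Node(simulate_user_reply(resp), parent=node)
            node.children.append(child)
            node = child
        # Rollout: simulate the rest of the conversation to a fixed depth
        state = node.state
        for _ in range(depth):
            state = simulate_user_reply(random.choice(generate_candidates(state)))
        reward = score_trajectory(state)
        # Backpropagation
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Pick the response branch with the best expected outcome (most visits)
    return max(root.children, key=lambda ch: ch.visits).state

print(mcts("hi"))
```

The most-visited child is used as the selection rule, a common robust alternative to picking the highest mean value.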

Limitations:

  • Scoring is done by the same LLM that generates responses
  • Branch pruning is naive - just threshold-based instead of something smarter like progressive widening
  • Memory usage grows with tree size; there is currently no node recycling
  • ☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP)

    I mean, LLMs have gotten orders of magnitude more efficient in just the past year, but also, using these types of approaches might make it possible to use much smaller models and iterate on the result.