agent: let baseFlow run loops until llm final response arrives (#26)
Commit: 73f0333
Date: July 11, 2025 at 1:07 PM
For example, a typical function tool call sequence needs:
1) runOneStep (= 1 LLM call)
- the client calls the LLM with info about the available tools.
- the model responds with the picked function tool and its args.
- the client calls the function with those args and yields the event.
2) next iteration of the for loop.
3) runOneStep (= 1 LLM call)
- the client calls the LLM with the function call result (along with the previous history).
- the model responds based on the history and the function call result.
4) next iteration, or exit the loop (a sketch of this loop is below).
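Roughly, that loop looks like the following sketch (TypeScript; Event, InvocationContext, and runOneStep are illustrative names and signatures assumed for the example, not necessarily the actual API):

```typescript
interface Event {
  partial?: boolean;
  isFinalResponse(): boolean;
}
interface InvocationContext { /* request, history, tools, ... */ }

// One step = one LLM call plus any tool execution it triggers
// (assumed signature for illustration).
declare function runOneStep(ctx: InvocationContext): AsyncGenerator<Event>;

async function* runLoop(ctx: InvocationContext): AsyncGenerator<Event> {
  while (true) {
    let lastEvent: Event | undefined;
    for await (const event of runOneStep(ctx)) {
      lastEvent = event; // remember the last event of this step
      yield event;       // forward each event to the caller as it happens
    }
    // Leave the loop once the model returned a final response,
    // i.e. there is no further function call to execute.
    if (!lastEvent || lastEvent.isFinalResponse()) {
      break;
    }
  }
}
```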
- fix misuse of yield in the loop.
(When looping, we need to check yield's return value and exit the loop when asked to.)
- if partial content arrives, yield an error and exit.
(Note: the Python version used to throw an error here, but it was recently changed to just log and exit. A sketch of both fixes follows.)
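A minimal sketch of both fixes under the same assumptions as above (callLlm and makeErrorEvent are hypothetical helpers, not the actual API):

```typescript
interface Event { partial?: boolean; }
interface InvocationContext { /* request, history, tools, ... */ }

declare function callLlm(ctx: InvocationContext): AsyncGenerator<Event>;
declare function makeErrorEvent(message: string): Event;

// Third type parameter: the value the consumer may pass back via next().
async function* runOneStep(
  ctx: InvocationContext
): AsyncGenerator<Event, void, boolean | undefined> {
  for await (const event of callLlm(ctx)) {
    if (event.partial) {
      // Unexpected partial (streamed) content: yield an error event and
      // stop, instead of throwing (matching the recent Python behavior
      // of logging and exiting).
      yield makeErrorEvent('unexpected partial content from LLM');
      return;
    }
    // Check the value the consumer passed back through yield; if it asked
    // us to stop, get out of the loop instead of ignoring it.
    const stopRequested = yield event;
    if (stopRequested) {
      return;
    }
  }
}
```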