Goal
When a user talks to the bot via the reply feature, the text of the replied-to message must be included in the prompt that is sent to the LLM.
Right now only the user’s own message reaches the provider, so important context is lost.
Motivation
• Preserves conversational context (especially in busy group chats).
• Aligns behaviour with Hiroshi implementation (see s-nagaev/hiroshi#24).
Tasks
- Update message-building logic
  • Detect `message.reply_to_message` in the incoming Telegram update.
  • Concatenate the original text and the user’s reply into one prompt, e.g.:

    > {replied_to_message.text}
    {user_message.text}
- Provider-agnostic
• Ensure the combined prompt is passed unchanged to any LLM provider (OpenAI, BigModel, etc.).
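The tasks above can be sketched as a single helper. This is an illustrative sketch, not the actual implementation: the `build_prompt` name is hypothetical, and the message objects are assumed to expose `text` and an optional `reply_to_message` attribute, as in Telegram Bot API updates.

```python
def build_prompt(message) -> str:
    """Build the LLM prompt, prepending quoted reply context when present.

    `message` is assumed (hypothetically) to expose `text` and an optional
    `reply_to_message` with its own `text`, mirroring Telegram updates.
    """
    replied = getattr(message, "reply_to_message", None)
    if replied is not None and replied.text:
        # Quote the replied-to text so the provider sees the full context;
        # the combined string is passed unchanged to any provider.
        return f"> {replied.text}\n{message.text}"
    # No reply context: send the user's message as-is.
    return message.text
```

Because the helper only rewrites the prompt string, nothing downstream (OpenAI, BigModel, etc.) needs to change.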
Reference
Similar fix in Hiroshi: s-nagaev/hiroshi#24