Conversation
Force-pushed 3538efc to 38603d0
Force-pushed 9080243 to bba6687
Force-pushed bba6687 to f3ca5e1
jamadeo left a comment:
Definitely the right way to go for reducing context. I left some questions about response parsing.
@katzdave has some recent additions to trim the context window when you hit the limit, maybe we should invoke this when we hit that?
        "excluded_examples": [],
    }
)
except Exception as e:
avoid catching Exception, the stack trace should have enough info here
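To illustrate the point, a narrower handler might look like the sketch below. The function name and the fallback dict are hypothetical, and the specific exception types are assumptions about what the JSON parsing here can actually raise:

```python
import json

def parse_selection(raw: str) -> dict:
    """Parse the model's selection response, catching only the errors
    we expect from JSON parsing instead of a blanket `except Exception`."""
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError) as e:
        # Known failure modes get a fallback; anything else propagates
        # with its full stack trace intact.
        return {"excluded_examples": [], "error": str(e)}
```

Anything unexpected (a bug in our own code, a cancelled task) still surfaces with a full traceback rather than being swallowed.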
from .example_selector import select_relevant_examples

log("[agent] Running example selection analysis...")
selection_result = await select_relevant_examples(target_files, examples, client)
should we put it behind a flag?
hm good point. which would be the default tho?
if "choices" in response and response["choices"]:
    message = response["choices"][0].get("message", {})
    if isinstance(message, dict):
        content = message.get("content", "")
    else:
        content = str(message)
else:
    content = str(response)
this is different format handling, but now I realize it's redundant.
deletions: int

@dataclass
FWIW I am not a fan of having files called utils. It can mean just about anything. Maybe we can call this code_handling and move all code handling here? Right now it's just a mixed bag of things max built.
        "excluded_examples": [],
    }
)
except Exception as e:
come on, don't catch general Exceptions
if not examples:
    raise FileNotFoundError("No valid example pairs found in examples directory")

from .example_selector import select_relevant_examples
from .example_selector import select_relevant_examples

log("[agent] Running example selection analysis...")
selection_result = await select_relevant_examples(target_files, examples, client)
this can throw exceptions, which would stop the whole thing. is that what we want?
If it is not under a feature flag, probably yes, as there will be no examples otherwise.
If we do put it under a flag, probably not.
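One way to reconcile the two cases is a fallback wrapper, sketched below. The flag name, the helper, and the caught exception types are all hypothetical — this is not the PR's code, just a shape for the flag-on/flag-off behavior being discussed:

```python
import asyncio  # used by callers to drive the coroutine

EXAMPLE_SELECTION_ENABLED = True  # hypothetical feature flag

async def select_or_fallback(target_files, examples, select_fn):
    """Run example selection; when it's behind a flag, a selection
    failure degrades to using all examples instead of aborting the run."""
    if not EXAMPLE_SELECTION_ENABLED:
        return examples
    try:
        return await select_fn(target_files, examples)
    except (ValueError, RuntimeError):  # assumed failure modes of selection
        # Flag is on but selection failed: fall back to the full set
        # rather than stopping the whole agent run.
        return examples
```

With the flag off the call is skipped entirely; with it on, a failure is survivable because the full example set is a valid (if larger) context.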
]

response, _ = await client.generate_completion(messages=messages, temperature=0.1)
OpenAI allows you to specify JSON as an output format, which is more reliable than asking for it in the prompt. You can even specify the format.
yeah, but we aren't using Claude? or do you mean an OpenAI-style client?
We are using Databricks, which I think provides an OpenAI-compatible client independent of the model we choose. Of course that's a leaky abstraction so it might not work, but we should give it a shot — it's a useful trick in general.
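The trick being suggested can be sketched as a request payload for an OpenAI-compatible chat completions endpoint. `response_format={"type": "json_object"}` is a real parameter of the OpenAI API; whether the Databricks endpoint honors it for every model is exactly the leaky-abstraction caveat above, and `build_selection_request` is a hypothetical helper:

```python
def build_selection_request(messages, temperature=0.1):
    """Build a chat-completion payload that asks an OpenAI-style
    endpoint for strict JSON via `response_format`, instead of
    relying on the prompt alone to produce parseable output."""
    return {
        "messages": messages,
        "temperature": temperature,
        # JSON mode on OpenAI-compatible endpoints; models or gateways
        # that don't support it may ignore it or reject the request.
        "response_format": {"type": "json_object"},
    }
```

Note that JSON mode still requires the prompt to mention JSON, so the existing instruction in the messages stays; the parameter just makes the guarantee enforceable on the server side.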
This PR enables intelligent example selection to minimize the impact on the context window.