When running the default basic.py evaluation with additional config parameters, the following error arises:

TypeError: 'NoneType' object is not iterable
  in dependency_graph.py, line 131, in get_parent_metrics

I'm defining the EvalRun in Python code as follows:
data_sources = [
    FileDataSource(
        path=Path(".../conversations.jsonl"),
        name="test function metric",
        notes="test function metric",
        format="jsonl",
    )
]

eval = Eval(
    metrics=Metrics(
        function=[
            FunctionItem(
                name="index_in_thread",
                metric_level="Turn",
                kwargs={},
            )
        ],
        rubric=None,
    ),
    do_completion=False,
    grader_llm=None,
    name="basic_eval_test",
    notes="This is a basic evaluation test to demonstrate FlexEval functionality.",
    completion_llm=None,
)

config = Config(
    clear_tables=True,
    logs_path=Path(".../logs"),
    env_filepath=Path("/.env"),
    max_workers=1,
    random_seed_conversation_sampling=42,
    max_n_conversation_threads=50,
    nb_evaluations_per_thread=1,
    raise_on_completion_error=True,
    raise_on_metric_error=True,
)

eval_run = EvalRun(
    data_sources=data_sources,
    database_path=Path(".../evaluation.db"),
    eval=eval,
    config=config,
)