
Why are the Macro F Score and Micro F Score so low in validation.log? #3

@shaomai00

Description


I saw this in validation.log:
Test : Coverage = 719.66, Average Precision = 0.18053248555916795, Micro Precision = 0.06627056672003306, Micro Recall = 0.6256742172322824, Micro F Score = 0.11984691841392547
=> Test : Macro Precision = 0.011802305630313965, Macro Recall = 0.14954214765979945, Macro F Score = 0.021877804367018278
=> P@K = [ 0. 0.02093145 0.0188383 ]
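As a quick sanity check on the log itself (not on the model), the logged F scores are at least internally consistent with the standard harmonic-mean formula F1 = 2PR / (P + R), so the numbers aren't a logging bug. A minimal sketch, plugging in the precision/recall values from the log above:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values copied from validation.log
micro_p, micro_r = 0.06627056672003306, 0.6256742172322824
macro_p, macro_r = 0.011802305630313965, 0.14954214765979945

print(f1(micro_p, micro_r))  # ~0.1198, matches the logged Micro F Score
print(f1(macro_p, macro_r))  # ~0.0219, matches the logged Macro F Score
```

The very low macro precision (~0.012) is what drags the Macro F Score down: macro averaging weights every label equally, so many rarely-predicted labels with near-zero precision dominate the average.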

I'm a little confused about why these F scores are so low, while your paper reports C-F1 = 48.6 and O-F1 = 67.6. Also, why do the P@K values look so strange?
How can I verify that this model actually works on the 'delicious' dataset?

Please help, thank you very much!
