Commit af7fabe

added benchmark tutorial, added consensus/benchmark api reference
1 parent 1755960 commit af7fabe

File tree: 5 files changed, +44 −1 lines changed

docs/source/benchmark_annotators_box.png

40.6 KB (binary image)

docs/source/benchmark_projects_box.png

25.3 KB (binary image)

docs/source/benchmark_scatter.png

60.3 KB (binary image)
docs/source/superannotate.sdk.rst

Lines changed: 2 additions & 0 deletions

@@ -322,3 +322,5 @@ Utility functions
 --------------------------------
 
 .. autofunction:: superannotate.dicom_to_rgb_sequence
+.. autofunction:: superannotate.consensus
+.. autofunction:: superannotate.benchmark

docs/source/tutorial.sdk.rst

Lines changed: 42 additions & 1 deletion

@@ -544,4 +544,45 @@ Analogically the box plots of consensus scores for each project are colored acco
 Scatter plot of consensus score vs instance area is separated by projects. Hovering on a point reveals its annotator and image name.
 The points are colored according to class name. Each annotator is represented with a separate symbol.
 
-.. image:: consensus_scatter.png
+.. image:: consensus_scatter.png
+
+----------
+
+Computing benchmark scores for instances between a ground truth project and a given project list
+____________________________________________________________________________________________________
+
+Benchmark is a tool for comparing the quality of annotations of the same image that is present in several projects
+against the ground truth annotation of that image, which lives in a separate project.
+
+To compute the benchmark scores:
+
+.. code-block:: python
+
+   res_df = sa.benchmark("<ground_truth_project_name>", [project_names], "<path_to_export_folder>", [image_list], "<annotation_type>")
+
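+For illustration, a call with concrete (hypothetical) project and image names might look like
+the sketch below; the argument order follows the template above, and ``"bbox"`` is assumed here
+to be an accepted annotation type:
+
+.. code-block:: python
+
+   # Hypothetical example: the project and image names below are placeholders,
+   # not taken from a real workspace.
+   res_df = sa.benchmark(
+       "GT Project",                    # project holding the ground truth annotations
+       ["Annotator 1", "Annotator 2"],  # projects to benchmark against the ground truth
+       "./exports",                     # folder to hold the project exports
+       ["image_1.jpg", "image_2.jpg"],  # images to include in the comparison
+       "bbox",                          # annotation type to compare
+   )
+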
+A pandas DataFrame with exactly the same structure as in the consensus computation is returned.
+
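+As a minimal sketch of working with the result (the column names ``creatorEmail`` and ``score``
+are assumptions carried over from the consensus DataFrame, not confirmed on this page):
+
+.. code-block:: python
+
+   # Assumed columns: "creatorEmail" (the annotator) and "score" (per-instance score).
+   mean_scores = res_df.groupby("creatorEmail")["score"].mean()
+   print(mean_scores.sort_values(ascending=False))
+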
+Besides the pandas DataFrame, there is an option to get the following plots by setting the show_plots flag to True:
+
+* Box plot of benchmark scores for each annotator
+* Box plot of benchmark scores for each project
+* Scatter plot of benchmark score vs instance area for each project
+
+.. code-block:: python
+
+   sa.benchmark("<ground_truth_project_name>", [project_names], "<path_to_export_folder>", [image_list], "<annotation_type>", show_plots=True)
+
+To the left of each box plot the original score points of that annotator are depicted; the box plots are colored by annotator.
+
+.. image:: benchmark_annotators_box.png
+
+Analogously, the box plots of benchmark scores for each project are colored according to project name.
+
+.. image:: benchmark_projects_box.png
+
+The scatter plot of benchmark score vs instance area is separated by project. Hovering on a point reveals its annotator and image name.
+The points are colored according to class name. Each annotator is represented with a separate symbol.
+
+.. image:: benchmark_scatter.png
