Commit

Fix Cloud-CV#1191 Cloud-CV#1345: Add section about compiled libraries and evaluation script
deshraj authored Feb 11, 2018
1 parent a246785 commit 97709ee
Showing 4 changed files with 12 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/source/architecture_decisions.md
@@ -1,4 +1,4 @@
-## Architecture Decisions
+## Architectural Decisions

This is a collection of records for architecturally significant decisions.

8 changes: 4 additions & 4 deletions docs/source/conf.py
@@ -61,17 +61,17 @@

# General information about the project.
project = u'EvalAI'
-copyright = u'2017, CloudCV Team'
+copyright = u'2018, CloudCV Team'
author = u'CloudCV Team'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
-version = u'1.0'
+version = u'1.1'
# The full version, including alpha/beta/rc tags.
-release = u'1.0'
+release = u'1.1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -329,7 +329,7 @@
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'EvalAI', u'EvalAI Documentation',
-author, 'EvalAI', 'One line description of project.',
+author, 'EvalAI', 'Evaluating state of the art in AI',
'Miscellaneous'),
]

6 changes: 4 additions & 2 deletions docs/source/evaluation_scripts.md
@@ -1,12 +1,12 @@
-## Evaluation Script
+## Writing Evaluation Script

Each challenge has an evaluation script, which evaluates participants' submissions and returns the scores that populate the leaderboard.

The logic for evaluating and judging a submission is customizable and varies from challenge to challenge, but the overall structure of evaluation scripts is fixed for architectural reasons.

Evaluation scripts are required to have an `evaluate` function. This is the main function, which is used by workers to evaluate the submission messages.

-The syntax of evaluate function is
+The syntax of the `evaluate` function is:

```
@@ -53,3 +53,5 @@ output['result'] = [
```

`output` should contain a key named `result`, which is a list containing one entry per challenge phase split. Each challenge phase split object contains various keys, which are then displayed as columns in the leaderboard.

+**Note**: If your evaluation script uses precompiled libraries (for example <a href="https://github.com/pdollar/coco/">MSCOCO</a>), make sure the library is compiled against a Linux distribution (Ubuntu 14.04 recommended). Libraries compiled against OS X or Windows may not work properly.
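
For reference, the contract described in evaluation_scripts.md (an `evaluate` entry point that returns an `output` dict whose `result` list holds one entry per challenge phase split) can be sketched as follows. The parameter names, the toy accuracy metric, and the `my_split` and metric keys are illustrative assumptions, not the exact EvalAI interface:

```
# Minimal illustrative evaluation script (a sketch; the parameter names and the
# 'my_split'/metric keys are assumptions, not the exact EvalAI contract).

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Compare a submission against the ground truth and return leaderboard scores."""
    # Load ground truth and predictions (the file format is challenge-specific).
    with open(test_annotation_file) as f:
        ground_truth = f.read().splitlines()
    with open(user_submission_file) as f:
        predictions = f.read().splitlines()

    # Toy metric: fraction of lines that match exactly.
    matches = sum(1 for gt, pred in zip(ground_truth, predictions) if gt == pred)
    accuracy = matches / max(len(ground_truth), 1)

    # `result` holds one entry per challenge phase split; the keys of the inner
    # dict ('Accuracy', 'Total') become the leaderboard columns for that split.
    output = {}
    output['result'] = [
        {
            'my_split': {
                'Accuracy': accuracy,
                'Total': len(ground_truth),
            }
        }
    ]
    return output
```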
5 changes: 3 additions & 2 deletions docs/source/index.rst
@@ -12,11 +12,12 @@ Contents:
:maxdepth: 2

setup
-challenge_creation
+evaluation_scripts
-submission
architecture
architecture_decisions
directory_structure
+challenge_creation
+submission
migrations
contribution
pull_request