
Commit 1e23bc5

hizkifw authored and deshraj committed
Docs: Fix minor typos in EvalAI docs (Cloud-CV#1387)
* docs/architecture: Fix grammar
* docs/challenge_creation: Fix grammar
* docs/contribution: Fix grammar
* docs/directory_structure: Fix grammar
* docs/evaluation_scripts: Fix grammar
* docs/glossary: Fix grammar
* docs/migrations: Fix grammar
* docs/pull_request: Fix grammar
* docs/setup: Fix grammar
* docs/submission: Fix grammar
* docs/architecture_decisions: Fix grammar and spelling
1 parent 1a6f72e commit 1e23bc5

11 files changed (+165, -153 lines)

docs/source/architecture.md

Lines changed: 3 additions & 3 deletions
@@ -1,12 +1,12 @@
## Architecture

- EvalAI helps researchers, students, and data-scientists to create, collaborate and participate in various AI challenges organized around the globe. To achieve this we leverage some of the best open source tools and technologies.
+ EvalAI helps researchers, students, and data-scientists to create, collaborate and participate in various AI challenges organized around the world. To achieve this we leverage some of the best open source tools and technologies.

### Technologies that the project use:

#### Django

- Django is the heart of the application. It powers our complete backend. We use Django version 1.10.
+ Django is the heart of the application, which powers our backend. We use Django version 1.10.

#### Django Rest Framework

@@ -18,7 +18,7 @@ We currently use RabbitMQ for queueing submission messages which are then later

#### PostgreSQL

- PostgreSQL is used as our primary datastore. All our tables currently reside in a single database named as `evalai`.
+ PostgreSQL is used as our primary datastore. All our tables currently reside in a single database named `evalai`.
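
For reference, a Django 1.10 `DATABASES` setting pointing at a single PostgreSQL database like this might look as follows; the credentials and host below are placeholders, not EvalAI's actual configuration:

```python
# Sketch of a settings.py entry for the `evalai` PostgreSQL database.
# USER/PASSWORD/HOST are placeholders, not EvalAI's real values.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'evalai',
        'USER': 'postgres',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```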

#### Angular JS

docs/source/architecture_decisions.md

Lines changed: 11 additions & 11 deletions
@@ -4,9 +4,9 @@ This is a collection of records for architecturally significant decisions.
### URL Patterns

- We follow a very basic, yet strong convention for URL so that our rest APIs are properly namespaced. First of all, we rely heavily on HTTP verbs to perform **CRUD** actions.
+ We follow a very basic, yet strong convention for URLs, so that our REST APIs are properly namespaced. First of all, we rely heavily on HTTP verbs to perform **CRUD** actions.

- For example, to perform **CRUD** operation on _Challenge Host Model_, following will be the URL patterns.
+ For example, to perform **CRUD** operations on _Challenge Host Model_, the following URL patterns will be used.

* `GET /hosts/challenge_host_team` - Retrieves a list of challenge host teams

@@ -20,38 +20,38 @@ For example, to perform **CRUD** operation on _Challenge Host Model_, following
* `DELETE /hosts/challenge_host_team/<challenge_host_team_id>` - Deletes a specific challenge host team

- Also, we have namespaced the URL patterns on an app basis, so URLs for _Challenge Host Model_ which is in _hosts_ app will be
+ Also, we have namespaced the URL patterns on a per-app basis, so URLs for _Challenge Host Model_, which is in the _hosts_ app, will be

```
/hosts/challenge_host_team
```

- This way one can easily identify where a particular API is located.
+ This way, one can easily identify where a particular API is located.

We use underscore **_** in URL patterns.
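
To make the convention concrete, a Django 1.10 URLconf for the _hosts_ app might look like the sketch below; the view names are illustrative assumptions, not EvalAI's actual code:

```python
# hosts/urls.py -- illustrative sketch of the namespaced URL convention.
from django.conf.urls import url

from . import views

urlpatterns = [
    # GET lists teams, POST creates one; the HTTP verb selects the CRUD action.
    url(r'^challenge_host_team$', views.challenge_host_team_list),
    # GET/PUT/PATCH/DELETE operate on one team by id.
    url(r'^challenge_host_team/(?P<challenge_host_team_id>[0-9]+)$',
        views.challenge_host_team_detail),
]
```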

### Processing submission messages asynchronously

- When a submission message is made, a REST API is called which saves the data related to submission in the database. A submission involves the processing and evaluation of `input_file`. This file is used to evaluate the submission and then decides the status of the submission whether it is _FINISHED_ or _FAILED_.
+ When a submission is made, a REST API is called which saves the data related to the submission in the database. A submission involves the processing and evaluation of `input_file`. This file is used to evaluate the submission and then decide the status of the submission, whether it is _FINISHED_ or _FAILED_.

- One way to process the submission was to evaluate it as soon as it was made and hence blocking the request of the participant. Blocking the request here means to send the response to the participant only when the submission has been submitted and its output is known. This would have worked fine if the number of the submissions made is very low, but this is not the case.
+ One way to process the submission is to evaluate it as soon as it is made, hence blocking the participant's request. Blocking the request here means to send the response to the participant only when the submission has been made and its output is known. This would work fine if the number of submissions made is very low, but this is not the case.

- Hence we decided to process and evaluate submission message in an asynchronous manner. To process the message in this way, we need to change our architecture a bit and add a Message Framework, along with a worker so that it can process the message.
+ Hence we decided to process and evaluate submission messages in an asynchronous manner. To process the messages this way, we need to change our architecture a bit and add a Message Framework, along with a worker so that it can process the message.

Out of all the awesome messaging framework available, we have chosen RabbitMQ, because of its transactional nature and reliability. Also, RabbitMQ is easily horizontally scalable, which means we can easily handle the heavy load by simply adding more nodes to the cluster.

For the worker, we went ahead with a normal python worker, which simply runs a process and loads all the required data in its memory. As soon as the worker starts, it listens on a RabbitMQ queue named `submission_task_queue` for new submission messages.
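
For instance, publishing a submission message to this queue with the `pika` RabbitMQ client could look roughly like the sketch below; the queue name comes from these docs, while the connection details, durability and message fields are assumptions:

```python
# Rough sketch of publishing a submission message to RabbitMQ with pika.
# Queue name is from the docs; host, durability and message fields are assumed.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='submission_task_queue', durable=True)

channel.basic_publish(
    exchange='',                          # default exchange routes by queue name
    routing_key='submission_task_queue',
    body=json.dumps({'submission_id': 42, 'challenge_id': 1}),
)
connection.close()
```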

### Submission Worker

- Submission worker is responsible for processing submission messages. It listens on a queue named `submission_task_queue` and on receiving a message for a submission it processes and evaluates the submission.
+ The submission worker is responsible for processing submission messages. It listens on a queue named `submission_task_queue`, and on receiving a message for a submission, it processes and evaluates the submission.

- One of the major design changes that we decided to implement in submission worker was to load all the data related to challenge in the memory of the worker instead of fetching it every time whenever a submission message is there for any challenge. So the worker, when starting, fetches the list of active challenges from the database and then loads it into memory by maintaining a map `EVALUATION_SCRIPTS` on challenge id. This was actually a major performance improvement.
+ One of the major design changes that we decided to implement in the submission worker was to load all the data related to the challenge in the worker's memory, instead of fetching it every time a new submission message arrives. So the worker, when starting, fetches the list of active challenges from the database and then loads it into memory by maintaining the map `EVALUATION_SCRIPTS`, keyed on challenge id. This was actually a major performance improvement.

- Another major design that we incorporated here was dynamically importing the challenge module and loading it in the map instead of invoking a new python process every time a submission message arrives. So now whenever a new message for a submission is received, we already have its corresponding challenge module being loaded in a map `EVALUATION_SCRIPTS`, and we just need to call
+ Another major design change that we incorporated here was to dynamically import the challenge module and to load it in the map instead of invoking a new python process every time a submission message arrives. So now whenever a new message for a submission is received, we already have its corresponding challenge module loaded in the map `EVALUATION_SCRIPTS`, and we just need to call

```
EVALUATION_SCRIPTS[challenge_id].evaluate(*params)
```

- This was again a major performance improvement, wherein we saved us from the task of invoking and managing Python processes to evaluate submission messages. Also invoking a new python process every time for a new submission would have been really slow.
+ This was again a major performance improvement, which saved us from the task of invoking and managing Python processes to evaluate submission messages. Also, invoking a new python process every time for a new submission would be really slow.
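
A minimal sketch of this caching pattern is below; the `importlib`-based dynamic import matches what the text describes, but the module path convention is an assumption, not EvalAI's actual layout:

```python
# Minimal sketch of the worker's in-memory module cache.
# The 'challenge_data.challenge_<id>.main' path is assumed for illustration.
import importlib

EVALUATION_SCRIPTS = {}

def load_challenge(challenge_id):
    # Import the challenge's evaluation module once, at worker startup.
    module = importlib.import_module(
        'challenge_data.challenge_{}.main'.format(challenge_id))
    EVALUATION_SCRIPTS[challenge_id] = module

def handle_submission_message(challenge_id, *params):
    # No new Python process per submission: reuse the cached module.
    return EVALUATION_SCRIPTS[challenge_id].evaluate(*params)
```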

docs/source/challenge_creation.md

Lines changed: 27 additions & 28 deletions
@@ -1,38 +1,37 @@
- # Create Challenge
+ # Creating a Challenge

- One can create a challenge in EvalAI using following methods:
+ One can create a challenge in EvalAI using either:

- * Challenge creation using zip configuration
-
- * Challenge creation using web interface
+ 1. zip configuration
+ 2. web interface

## Challenge creation using zip configuration

### Getting Started

- Creating a challenge on EvalAI is a three step process. You just need to upload the challenge details in a challenge configuration file (**YAML file**) and we will take care of the rest.
+ Creating a challenge on EvalAI is a three-step process. You just need to upload the challenge details in a challenge configuration file (**YAML file**) and we will take care of the rest.

The challenge configuration file on EvalAI consists of following fields:

* **title**: Title of the challenge

* **short_description**: Short description of the challenge (preferably 140 characters max)

- * **description**: Long description of the challenge (set relative path of the html file. For eg. `challenge_details/description.html`)
+ * **description**: Long description of the challenge (use a relative path for the html file, e.g. `challenge_details/description.html`)

- * **evaluation_criteria**: Evaluation criteria and details of the challenge (set relative path of the html file. For eg. `challenge_details/evaluation.html`)
+ * **evaluation_criteria**: Evaluation criteria and details of the challenge (use a relative path for the html file, e.g. `challenge_details/evaluation.html`)

- * **terms_and_conditions**: Terms and conditions of the challenge (set relative path of the html file. For eg. `challenge_details/tnc.html`)
+ * **terms_and_conditions**: Terms and conditions of the challenge (use a relative path for the html file, e.g. `challenge_details/tnc.html`)

- * **image**: Logo of the challenge (set relative path of the logo in the zip configuration. For eg. `images/logo/challenge_logo.jpg`). **Note**: The image must be in jpg, jpeg or png format.
+ * **image**: Logo of the challenge (use a relative path for the logo in the zip configuration, e.g. `images/logo/challenge_logo.jpg`). **Note**: The image must be in jpg, jpeg or png format.

- * **submission_guidelines**: Submission guidelines of the challenge (set relative path of the html file. For eg. `challenge_details/submission_guidelines.html`)
+ * **submission_guidelines**: Submission guidelines of the challenge (use a relative path for the html file, e.g. `challenge_details/submission_guidelines.html`)

- * **evaluation_script**: The evaluation script using which the submissions will be evaluated (relative path of the evaluation script file or folder from this YAML file.)
+ * **evaluation_script**: The evaluation script using which the submissions will be evaluated (path of the evaluation script file or folder relative to this YAML file.)

- * **start_date**: Start DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+ * **start_date**: Start DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

- * **end_date**: End DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+ * **end_date**: End DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

* **published**: True/False (Boolean field that gives the flexibility to publish the challenge once approved by EvalAI Admin. Default is `False`)

@@ -72,23 +71,23 @@ The challenge configuration file on EvalAI consists of following fields:
* **name**: Name of the challenge phase

- * **description**: Long description of the challenge phase (set relative path of the html file. For eg. `challenge_details/phase1_description.html`)
+ * **description**: Long description of the challenge phase (use a relative path for the html file, e.g. `challenge_details/phase1_description.html`)

* **leaderboard_public**: True/False (Boolean field that gives the flexibility to Challenge Hosts to make their leaderboard public or private. Default is `False`)

* **is_public**: True/False (Boolean field that gives the flexibility to Challenge Hosts to either hide or show the challenge phase to participants. Default is `False`)

- * **start_date**: Start DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+ * **start_date**: Start DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

- * **end_date**: End DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+ * **end_date**: End DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

- * **test_annotation_file**: This file is used for ranking the submission made by a participant. An annotation file can be shared by more than one challenge phase. (Relative path of the test annotation file from this yaml file. Eg: `challenge_details/test_annotation.txt`)
+ * **test_annotation_file**: This file is used for ranking the submission made by a participant. An annotation file can be shared by more than one challenge phase. (Path of the test annotation file relative to this YAML file, e.g. `challenge_details/test_annotation.txt`)

- * **codename**: Challenge phase codename. Note that the codename of a challenge phase is used to map the results returned by the evaluation script to a particular challenge phase and the codename specified here should match with the codename specified in the evaluation script to make sure that the mapping is perfect.
+ * **codename**: Challenge phase codename. Note that the codename of a challenge phase is used to map the results returned by the evaluation script to a particular challenge phase. The codename specified here should match the codename specified in the evaluation script to ensure the mapping is correct.

- * **max_submissions_per_day**: Positive integer number which tells the maximum number of submissions per day to a challenge phase
+ * **max_submissions_per_day**: Positive integer which tells the maximum number of submissions per day to a challenge phase.

- * **max_submissions**: Positive integer number that decides the overall maximum number of submissions that can be done to a challenge phase.
+ * **max_submissions**: Positive integer that decides the overall maximum number of submissions that can be done to a challenge phase.
* **dataset_splits**:
@@ -103,7 +102,7 @@ The challenge configuration file on EvalAI consists of following fields:
* **name**: Name of the dataset split (it must be unique for every dataset split)

- * **codename**: Codename of dataset split. Note that the codename of dataset split is used to map the results returned by the evaluation script to a particular dataset split in EvalAI's database. Please make sure that no two dataset splits have the same codename. Again, make sure that the dataset split's codename match with what is there in the evaluation script provided by the challenge host.
+ * **codename**: Codename of dataset split. Note that the codename of a dataset split is used to map the results returned by the evaluation script to a particular dataset split in EvalAI's database. Please make sure that no two dataset splits have the same codename. Again, make sure that the dataset split's codename matches what is in the evaluation script provided by the challenge host.
* **challenge_phase_splits**:
@@ -126,7 +125,7 @@ The challenge configuration file on EvalAI consists of following fields:
### Sample zip configuration file

- Here is the sample configuration file for a challenge with 1 phase and 2 dataset split:
+ Here is a sample configuration file for a challenge with 1 phase and 2 dataset splits:

```yaml
@@ -150,7 +149,7 @@ leaderboard:
challenge_phases:
  - id: 1
-   name: Challenge Name of the challenge phase
+   name: Challenge name of the challenge phase
    description: challenge_phase_description.html
    leaderboard_public: True
    is_public: True
@@ -182,16 +181,16 @@ challenge_phase_splits:
```

### Challenge Creation Examples

- Please see this [repository](https://github.com/Cloud-CV/EvalAI-Examples) to know how to create different types of challenges on EvalAI.
+ Please see this [repository](https://github.com/Cloud-CV/EvalAI-Examples) for examples of the different types of challenges on EvalAI.

### Next Steps

- The next step is to create a zip file that contains YAML config file, all the HTML templates for challenge description, challenge phase description, evaluation criteria, submission guidelines, evaluation script, test annotation file(s) and challenge logo(optional).
+ The next step is to create a zip file that contains the YAML config file, all the HTML templates for the challenge description, challenge phase description, evaluation criteria, submission guidelines, evaluation script, test annotation file(s) and challenge logo (optional).

- The final step is to create a challenge host team for the challenge on EvalAI and then after selecting that team just upload the zip folder created in the above step and the challenge will be created.
+ The final step is to create a challenge host team for the challenge on EvalAI. After that, just upload the zip folder created in the above steps and the challenge will be created.
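
As an illustration only, a zip whose contents follow the relative paths used in the examples above might be laid out like this (the file names are examples drawn from this page, not required names):

```
challenge_config.zip
├── challenge_config.yaml            (the YAML config file, name assumed)
├── challenge_details/
│   ├── description.html
│   ├── evaluation.html
│   ├── submission_guidelines.html
│   ├── tnc.html
│   └── test_annotation.txt
├── evaluation_script/
└── images/
    └── logo/
        └── challenge_logo.jpg
```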

If you have issues in creating a challenge on EvalAI, please feel free to create an issue on our Github Issues Page.

## Create challenge using web interface

- Todo: We are working on this feature and will keep you updated about the same.
+ Todo: We are working on this feature and will keep you updated.

docs/source/contribution.rst

Lines changed: 7 additions & 7 deletions
@@ -2,25 +2,25 @@ Contributing guidelines
-----------------------

Thank you for your interest in contributing to EvalAI! Here are a few
- pointers about how you can help.
+ pointers on how you can help.

Setting things up
~~~~~~~~~~~~~~~~~

To set up the development environment, follow the instructions in
- README.
+ our README.

Finding something to work on
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The issue tracker of EvalAI a good place to start. If you find something
+ EvalAI's issue tracker is a good place to start. If you find something
that interests you, comment on the thread and we’ll help get you
started.

Alternatively, if you come across a new bug on the site, please file a
- new issue and comment if you would like to be assigned. The existing
+ new issue and comment if you would like to be assigned. Existing
issues are tagged with one or more labels, based on the part of the
- website it touches, its importance etc., that can help you in selecting
+ website it touches, its importance etc., which can help you select
one.

If neither of these seem appealing, please post on our channel and we
@@ -32,12 +32,12 @@ Instructions to submit code
Before you submit code, please talk to us via the issue tracker so we
know you are working on it.

- Our central development branch is development. Coding is done on feature
+ Our central development branch is `development`. Coding is done on feature
branches based off of development and merged into it once stable and
reviewed. To submit code, follow these steps:

1. Create a new branch off of development. Select a descriptive branch
-  name. We highly encourage to use `autopep8` to follow the PEP8 styling. Run the following command before creating the pull request:
+  name. We highly encourage you to use `autopep8` to follow the PEP8 styling. Run the following command before creating the pull request:

::
