docs/source/architecture.md: 3 additions & 3 deletions
@@ -1,12 +1,12 @@
## Architecture
-EvalAI helps researchers, students, and data-scientists to create, collaborate and participate in various AI challenges organized around the globe. To achieve this we leverage some of the best open source tools and technologies.
+EvalAI helps researchers, students, and data-scientists to create, collaborate and participate in various AI challenges organized around the world. To achieve this we leverage some of the best open source tools and technologies.
### Technologies that the project use:
#### Django
-Django is the heart of the application. It powers our complete backend. We use Django version 1.10.
+Django is the heart of the application, which powers our backend. We use Django version 1.10.
#### Django Rest Framework
@@ -18,7 +18,7 @@ We currently use RabbitMQ for queueing submission messages which are then later
#### PostgreSQL
-PostgreSQL is used as our primary datastore. All our tables currently reside in a single database named as `evalai`.
+PostgreSQL is used as our primary datastore. All our tables currently reside in a single database named `evalai`.
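For readers wiring this up locally, here is a minimal sketch of how the single `evalai` PostgreSQL database might be declared in a Django 1.10-era `settings.py`. The credentials, host, and port are placeholders, not EvalAI's actual configuration.

```python
# Hypothetical Django DATABASES setting for the single `evalai` database.
# User, password, host, and port below are placeholder values.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',  # PostgreSQL backend
        'NAME': 'evalai',                           # the single database named `evalai`
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```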
docs/source/architecture_decisions.md: 11 additions & 11 deletions
@@ -4,9 +4,9 @@ This is a collection of records for architecturally significant decisions.
### URL Patterns
-We follow a very basic, yet strong convention for URL so that our rest APIs are properly namespaced. First of all, we rely heavily on HTTP verbs to perform **CRUD** actions.
+We follow a very basic, yet strong convention for URLs, so that our rest APIs are properly namespaced. First of all, we rely heavily on HTTP verbs to perform **CRUD** actions.

-For example, to perform **CRUD** operation on _Challenge Host Model_, following will be the URL patterns.
+For example, to perform **CRUD** operation on _Challenge Host Model_, the following URL patterns will be used.
*`GET /hosts/challenge_host_team` - Retrieves a list of challenge host teams
@@ -20,38 +20,38 @@ For example, to perform **CRUD** operation on _Challenge Host Model_, following
*`DELETE /hosts/challenge_host_team/<challenge_host_team_id>` - Deletes a specific challenge host team
-Also, we have namespaced the URL patterns on an app basis, so URLs for _Challenge Host Model_ which is in _hosts_ app will be
+Also, we have namespaced the URL patterns on a per-app basis, so URLs for _Challenge Host Model_, which is in the _hosts_ app, will be
```
/hosts/challenge_host_team
```
-This way one can easily identify where a particular API is located.
+This way, one can easily identify where a particular API is located.
We use underscore **_** in URL patterns.
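To make the convention above concrete, here is a hedged sketch of what the namespaced routing for the _hosts_ app might look like in a Django 1.10-era `urls.py`. The view names are hypothetical and only illustrate the pattern; they are not EvalAI's actual code.

```python
# hosts/urls.py (sketch): patterns for the hypothetical challenge host team views.
from django.conf.urls import url

from . import views

urlpatterns = [
    # GET lists challenge host teams, POST creates one
    url(r'^challenge_host_team$', views.challenge_host_team_list),
    # GET/PUT/PATCH/DELETE act on a specific challenge host team
    url(r'^challenge_host_team/(?P<challenge_host_team_id>[0-9]+)$',
        views.challenge_host_team_detail),
]

# Project-level urls.py (sketch): every hosts URL is namespaced under /hosts/
# from django.conf.urls import include, url
# urlpatterns = [url(r'^hosts/', include('hosts.urls', namespace='hosts'))]
```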
### Processing submission messages asynchronously
-When a submission message is made, a REST API is called which saves the data related to submission in the database. A submission involves the processing and evaluation of `input_file`. This file is used to evaluate the submission and then decides the status of the submission whether it is _FINISHED_ or _FAILED_.
+When a submission message is made, a REST API is called which saves the data related to the submission in the database. A submission involves the processing and evaluation of `input_file`. This file is used to evaluate the submission and then decide the status of the submission, whether it is _FINISHED_ or _FAILED_.

-One way to process the submission was to evaluate it as soon as it was made and hence blocking the request of the participant. Blocking the request here means to send the response to the participant only when the submission has been submitted and its output is known. This would have worked fine if the number of the submissions made is very low, but this is not the case.
+One way to process the submission is to evaluate it as soon as it is made, hence blocking the participant's request. Blocking the request here means to send the response to the participant only when the submission has been made and its output is known. This would work fine if the number of submissions made is very low, but this is not the case.

-Hence we decided to process and evaluate submission message in an asynchronous manner. To process the message in this way, we need to change our architecture a bit and add a Message Framework, along with a worker so that it can process the message.
+Hence we decided to process and evaluate submission messages in an asynchronous manner. To process the messages this way, we need to change our architecture a bit and add a Message Framework, along with a worker so that it can process the message.
Out of all the awesome messaging framework available, we have chosen RabbitMQ, because of its transactional nature and reliability. Also, RabbitMQ is easily horizontally scalable, which means we can easily handle the heavy load by simply adding more nodes to the cluster.
For the worker, we went ahead with a normal python worker, which simply runs a process and loads all the required data in its memory. As soon as the worker starts, it listens on a RabbitMQ queue named `submission_task_queue` for new submission messages.
### Submission Worker
-Submission worker is responsible for processing submission messages. It listens on a queue named `submission_task_queue` and on receiving a message for a submission it processes and evaluates the submission.
+The submission worker is responsible for processing submission messages. It listens on a queue named `submission_task_queue`, and on receiving a message for a submission, it processes and evaluates the submission.

-One of the major design changes that we decided to implement in submission worker was to load all the data related to challenge in the memory of the worker instead of fetching it every time whenever a submission message is there for any challenge. So the worker, when starting, fetches the list of active challenges from the database and then loads it into memory by maintaining a map `EVALUATION_SCRIPTS` on challenge id. This was actually a major performance improvement.
+One of the major design changes that we decided to implement in the submission worker was to load all the data related to the challenge in the worker's memory, instead of fetching it every time a new submission message arrives. So the worker, when starting, fetches the list of active challenges from the database and then loads it into memory by maintaining the map `EVALUATION_SCRIPTS` on challenge id. This was actually a major performance improvement.

-Another major design that we incorporated here was dynamically importing the challenge module and loading it in the map instead of invoking a new python process every time a submission message arrives. So now whenever a new message for a submission is received, we already have its corresponding challenge module being loaded in a map `EVALUATION_SCRIPTS`, and we just need to call
+Another major design change that we incorporated here was to dynamically import the challenge module and to load it in the map instead of invoking a new python process every time a submission message arrives. So now whenever a new message for a submission is received, we already have its corresponding challenge module being loaded in a map called `EVALUATION_SCRIPTS`, and we just need to call

-This was again a major performance improvement, wherein we saved us from the task of invoking and managing Python processes to evaluate submission messages. Also invoking a new python process every time for a new submission would have been really slow.
+This was again a major performance improvement, which saved us from the task of invoking and managing Python processes to evaluate submission messages. Also invoking a new python process every time for a new submission would be really slow.
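As a rough illustration of the worker design described in this file's diff, the sketch below caches each active challenge's evaluation module in `EVALUATION_SCRIPTS` at startup and then handles incoming messages from `submission_task_queue` with a plain dictionary lookup. The module path, message fields, and `evaluate()` signature are assumptions for illustration, not EvalAI's actual implementation.

```python
# Sketch of the submission worker pattern (illustrative, not EvalAI's actual code).
import importlib

# Map of challenge id -> dynamically imported evaluation module.
EVALUATION_SCRIPTS = {}


def load_active_challenges(active_challenges):
    """Import every active challenge's evaluation script once, at worker startup."""
    for challenge in active_challenges:
        # Assumed module layout; the real path depends on where each
        # challenge's evaluation script is unpacked.
        module_path = "challenge_data.challenge_{}.main".format(challenge.id)
        EVALUATION_SCRIPTS[challenge.id] = importlib.import_module(module_path)


def process_submission_message(message):
    """Callback run for each message consumed from `submission_task_queue`."""
    challenge_id = message["challenge_id"]
    # No new process is spawned: the already-imported module is reused.
    return EVALUATION_SCRIPTS[challenge_id].evaluate(
        message["annotation_file_path"],
        message["input_file_path"],
        message["challenge_phase_codename"],
    )
```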
docs/source/challenge_creation.md: 27 additions & 28 deletions
@@ -1,38 +1,37 @@
-# Create Challenge
+# Creating a Challenge

-One can create a challenge in EvalAI using following methods:
+One can create a challenge in EvalAI using either:

-* Challenge creation using zip configuration
-
-* Challenge creation using web interface
+1. zip configuration
+2. web interface
## Challenge creation using zip configuration
### Getting Started
-Creating a challenge on EvalAI is a threestep process. You just need to upload the challenge details in a challenge configuration file (**YAML file**) and we will take care of the rest.
+Creating a challenge on EvalAI is a three-step process. You just need to upload the challenge details in a challenge configuration file (**YAML file**) and we will take care of the rest.
The challenge configuration file on EvalAI consists of following fields:
***title**: Title of the challenge
***short_description**: Short description of the challenge (preferably 140 characters max)
-***description**: Long description of the challenge (set relative path of the html file. For eg. `challenge_details/description.html`)
+***description**: Long description of the challenge (use a relative path for the html file, e.g. `challenge_details/description.html`)

-***evaluation_criteria**: Evaluation criteria and details of the challenge (set relative path of the html file. For eg. `challenge_details/evaluation.html`)
+***evaluation_criteria**: Evaluation criteria and details of the challenge (use a relative path for the html file, e.g. `challenge_details/evaluation.html`)

-***terms_and_conditions**: Terms and conditions of the challenge (set relative path of the html file. For eg. `challenge_details/tnc.html`)
+***terms_and_conditions**: Terms and conditions of the challenge (use a relative path for the html file, e.g. `challenge_details/tnc.html`)
-***image**: Logo of the challenge (set relative path of the logo in the zip configuration. For eg.`images/logo/challenge_logo.jpg`). **Note**: The image must be in jpg, jpeg or png format.
+***image**: Logo of the challenge (use a relative path for the logo in the zip configuration, e.g. `images/logo/challenge_logo.jpg`). **Note**: The image must be in jpg, jpeg or png format.
-***submission_guidelines**: Submission guidelines of the challenge (set relative path of the html file. For eg. `challenge_details/submission_guidelines.html`)
+***submission_guidelines**: Submission guidelines of the challenge (use a relative path for the html file, e.g. `challenge_details/submission_guidelines.html`)

-***evaluation_script**: The evaluation script using which the submissions will be evaluated (relative path of the evaluation script file or folder from this YAML file.)
+***evaluation_script**: The evaluation script using which the submissions will be evaluated (path of the evaluation script file or folder relative to this YAML file.)

-***start_date**: Start DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+***start_date**: Start DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

-***end_date**: End DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+***end_date**: End DateTime of the challenge (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)
***published**: True/False (Boolean field that gives the flexibility to publish the challenge once approved by EvalAI Admin. Default is `False`)
@@ -72,23 +71,23 @@ The challenge configuration file on EvalAI consists of following fields:
***name**: Name of the challenge phase
-***description**: Long description of the challenge phase (set relative path of the html file. For eg. `challenge_details/phase1_description.html`)
+***description**: Long description of the challenge phase (set relative path of the html file, e.g. `challenge_details/phase1_description.html`)
***leaderboard_public**: True/False (Boolean field that gives the flexibility to Challenge Hosts to make their leaderboard public or private. Default is `False`)
***is_public**: True/False (Boolean field that gives the flexibility to Challenge Hosts to either hide or show the challenge phase to participants. Default is `False`)
-***start_date**: Start DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+***start_date**: Start DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)

-***end_date**: End DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS. For eg. 2017-07-07 10:10:10)
+***end_date**: End DateTime of the challenge phase (Format: YYYY-MM-DD HH:MM:SS, e.g. 2017-07-07 10:10:10)
-***test_annotation_file**: This file is used for ranking the submission made by a participant. An annotation file can be shared by more than one challenge phase. (Relative path of the test annotation file from this yaml file. Eg:`challenge_details/test_annotation.txt`)
+***test_annotation_file**: This file is used for ranking the submission made by a participant. An annotation file can be shared by more than one challenge phase. (Path of the test annotation file relative to this YAML file, e.g. `challenge_details/test_annotation.txt`)
-***codename**: Challenge phase codename. Note that the codename of a challenge phase is used to map the results returned by the evaluation script to a particular challenge phase and the codename specified here should match with the codename specified in the evaluation script to make sure that the mapping is perfect.
+***codename**: Challenge phase codename. Note that the codename of a challenge phase is used to map the results returned by the evaluation script to a particular challenge phase. The codename specified here should match the codename specified in the evaluation script to ensure a perfect mapping.
-***max_submissions_per_day**: Positive integer number which tells the maximum number of submissions per day to a challenge phase
+***max_submissions_per_day**: Positive integer which tells the maximum number of submissions per day to a challenge phase.

-***max_submissions**: Positive integer number that decides the overall maximum number of submissions that can be done to a challenge phase.
+***max_submissions**: Positive integer that decides the overall maximum number of submissions that can be made to a challenge phase.
***dataset_splits**:
@@ -103,7 +102,7 @@ The challenge configuration file on EvalAI consists of following fields:
***name**: Name of the dataset split (it must be unique for every dataset split)
-***codename**: Codename of dataset split. Note that the codename of dataset split is used to map the results returned by the evaluation script to a particular dataset split in EvalAI's database. Please make sure that no two dataset splits have the same codename. Again, make sure that the dataset split's codename match with what is there in the evaluation script provided by the challenge host.
+***codename**: Codename of dataset split. Note that the codename of a dataset split is used to map the results returned by the evaluation script to a particular dataset split in EvalAI's database. Please make sure that no two dataset splits have the same codename. Again, make sure that the dataset split's codename matches what is in the evaluation script provided by the challenge host.
***challenge_phase_splits**:
@@ -126,7 +125,7 @@ The challenge configuration file on EvalAI consists of following fields:
### Sample zip configuration file
-Here is the sample configuration file for a challenge with 1 phase and 2 dataset split:
+Here is a sample configuration file for a challenge with 1 phase and 2 dataset splits:
```yaml
@@ -150,7 +149,7 @@ leaderboard:
challenge_phases:
- id: 1
-name: Challenge Name of the challenge phase
+name: Challenge name of the challenge phase
description: challenge_phase_description.html
leaderboard_public: True
is_public: True
@@ -182,16 +181,16 @@ challenge_phase_splits:
```
### Challenge Creation Examples
-Please see this [repository](https://github.com/Cloud-CV/EvalAI-Examples)to know how to create different types of challenges on EvalAI.
+Please see this [repository](https://github.com/Cloud-CV/EvalAI-Examples) for examples of the different types of challenges on EvalAI.
### Next Steps
-The next step is to create a zip file that contains YAML config file, all the HTML templates for challenge description, challenge phase description, evaluation criteria, submission guidelines, evaluation script, test annotation file(s) and challenge logo(optional).
+The next step is to create a zip file that contains the YAML config file, all the HTML templates for the challenge description, challenge phase description, evaluation criteria, submission guidelines, evaluation script, test annotation file(s) and challenge logo (optional).

-The final step is to create a challenge host team for the challenge on EvalAI and then after selecting that team just upload the zip folder created in the above step and the challenge will be created.
+The final step is to create a challenge host team for the challenge on EvalAI. After that, just upload the zip folder created in the above steps and the challenge will be created.
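For reference, one possible layout of that zip, using placeholder file names that mirror the fields listed earlier (the actual names are up to the challenge host, as long as the paths in the YAML file match):

```
challenge_config.zip
├── challenge_config.yaml            # the challenge configuration file
├── challenge_details/
│   ├── description.html
│   ├── evaluation.html
│   ├── submission_guidelines.html
│   ├── tnc.html
│   └── test_annotation.txt
├── evaluation_script/
│   └── main.py
└── images/
    └── logo/
        └── challenge_logo.jpg       # optional challenge logo
```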
If you have issues in creating a challenge on EvalAI, please feel free to create an issue on our Github Issues Page.
## Create challenge using web interface
-Todo: We are working on this feature and will keep you updated about the same.
+Todo: We are working on this feature and will keep you updated.