- "TCP segment of a reassembled PDU" in wireshark.

- Jim F. would like the server to log request start, request queue (to thread
  pool), app start, app finish, and request finish (all data has been
  flushed to client) events.

  Some challenges exist in trying to divine per-request end time. We
  currently have the potential for request pipelining; the channel might
  service more than one request before it closes. While flushing response
  data from a connection's output buffer, we currently don't preserve any
  information about which request the data belongs to.

  While accepting request data from a client, Waitress will obtain N request
  bodies and schedule all the requests it receives with the task manager.
  For example, if it obtains two request bodies in a single recv() call, it
  will create two request objects and schedule *both* of these requests to be
  serviced by the task manager immediately.

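As a toy sketch of the recv() behaviour described above (not Waitress's
actual parser, which is stateful and incremental), two complete requests
arriving in one payload both get split out and would both be scheduled
immediately:

```python
def split_pipelined(data):
    """Split one recv() payload into complete header-only requests.

    Illustration only: assumes bodyless requests delimited by a blank
    line; real HTTP parsing must honour Content-Length and chunked
    bodies.
    """
    requests = []
    while b"\r\n\r\n" in data:
        head, _, data = data.partition(b"\r\n\r\n")
        requests.append(head + b"\r\n\r\n")
    return requests

# Two pipelined requests read in a single recv() call:
payload = (
    b"GET /a HTTP/1.1\r\nHost: example\r\n\r\n"
    b"GET /b HTTP/1.1\r\nHost: example\r\n\r\n"
)
# split_pipelined(payload) yields two requests; both would become
# request objects handed to the task manager at once.
```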
  The task thread manager will service these synchronously: the first request
  will run first, then the second. When the first request runs, it will push
  data to the output buffer, then it will end. Then the second request will
  run, push data to the same output buffer, and end. While these requests are
  executing, the channel whence they came stops accepting requests until the
  previously scheduled requests have actually been serviced. The
  request-burdened channel will be *sending* data to the client while the
  requests are being serviced; it just won't accept any more data until its
  existing requests have been serviced. In the meantime, other channels may
  still be generating requests and adding tasks to the task manager.

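The FIFO servicing described above can be sketched with a single worker
thread draining a queue. This is a simplification, not Waitress's actual
task manager; it only illustrates that a channel's first request finishes
before its second one starts:

```python
import queue
import threading

# Minimal sketch: one worker thread drains the queue in FIFO order,
# so request 1 always finishes before request 2 starts.
tasks = queue.Queue()
order = []

def worker():
    while True:
        task = tasks.get()
        if task is None:      # sentinel: stop the worker
            break
        task()                # run the app; output lands in the outbuf

t = threading.Thread(target=worker)
t.start()
tasks.put(lambda: order.append("request 1 serviced"))
tasks.put(lambda: order.append("request 2 serviced"))
tasks.put(None)
t.join()
# order == ["request 1 serviced", "request 2 serviced"]
```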
  To capture request-end time we could create an output buffer per request,
  or we could keep a dictionary mapping each request id to the final
  bytestream position of its response in the output buffer; either would be
  a straightforward way to capture the fact that a particular request's
  response data has been flushed. We currently don't do that, though.

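The second idea (the position dictionary) could look something like the
following hypothetical sketch; none of these names exist in Waitress today:

```python
class OutbufTracker:
    """Hypothetical sketch: record where each response ends in the
    output buffer, and report a request as finished once the flush
    pointer passes that mark."""

    def __init__(self):
        self.written = 0    # total bytes appended to the outbuf
        self.flushed = 0    # total bytes flushed to the client
        self.end_pos = {}   # request id -> final outbuf position

    def append(self, request_id, data):
        self.written += len(data)
        self.end_pos[request_id] = self.written

    def flush(self, nbytes):
        """Advance the flush pointer; return ids now fully sent."""
        self.flushed += nbytes
        done = sorted(rid for rid, pos in self.end_pos.items()
                      if pos <= self.flushed)
        for rid in done:
            del self.end_pos[rid]
        return done
```

When `flush()` reports a request id, that request's response data has been
fully sent and a "request finish" event could be logged for it.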
  Here's what we can currently log without changing anything:

  An example of the events logged for a connection that receives two
  requests, where both requests succeed and the connection is closed after
  all data has been sent::

    channel created: channel 1 at time 10
    request created: channel 1, request id 1 at time 11
    request created: channel 1, request id 2 at time 12
    channel requests queued: channel 1, request ids 1,2 at time 13
    request started: request id 1 at time 14
    request serviced: request id 1 at time 15
    request started: request id 2 at time 16
    request serviced: request id 2 at time 17
    channel closed: channel 1 at time 18

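A hypothetical helper that would format event lines like the ones shown in
these examples (Waitress has no such function today):

```python
def format_event(event, detail, clock):
    # Hypothetical; matches the sample log lines in these notes.
    return "%s: %s at time %d" % (event, detail, clock)

# format_event("request started", "request id 1", 14)
# -> "request started: request id 1 at time 14"
```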
  An example of the events logged for a connection that receives two requests
  where the first request fails in such a way that the next request cannot
  proceed (for example, the content-length header of the first response does
  not match the number of bytes sent in response to the first request)::

    channel created: channel 1 at time 10
    request created: channel 1, request id 1 at time 11
    request created: channel 1, request id 2 at time 12
    channel requests queued: channel 1, request ids 1,2 at time 13
    request started: request id 1 at time 14
    request serviced: request id 1 at time 15
    request cancelled: request id 2 at time 17
    channel closed: channel 1 at time 18

  An example of the events logged for a connection that receives four
  requests (all of which generate successful responses) but where the client
  waits for the first two responses before sending the second two requests::

    channel created: channel 1 at time 10
    request created: channel 1, request id 1 at time 11
    request created: channel 1, request id 2 at time 12
    channel requests queued: channel 1, request ids 1,2 at time 13
    request started: request id 1 at time 14
    request serviced: request id 1 at time 15
    request started: request id 2 at time 15
    request serviced: request id 2 at time 16
    request created: channel 1, request id 3 at time 17
    request created: channel 1, request id 4 at time 18
    channel requests queued: channel 1, request ids 3,4 at time 18
    request started: request id 3 at time 19
    request serviced: request id 3 at time 20
    request started: request id 4 at time 21
    request serviced: request id 4 at time 22
    channel closed: channel 1 at time 23