This repository was archived by the owner on Apr 14, 2024. It is now read-only.
This fixes the Python 2 breakage introduced by the last attempt
and further improves Python 3 compatibility.
Python 3's `print` function accepts a `file` argument that lets you
specify the stream to print to, and the code was using it to print
to stderr. Python 2's `print` is a statement and does not accept
this argument, so we now use `sys.stderr.write` instead.
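The pattern can be sketched with a small helper; `emit` and its `stream` parameter are hypothetical names for illustration, not part of the original code:

```python
import sys

def emit(msg, stream=None):
    # Write a line to the given stream (stderr by default) without
    # relying on print's `file` keyword, which Python 2's print
    # statement does not support.
    stream = stream if stream is not None else sys.stderr
    stream.write(msg + "\n")

emit("Threadpool: done")  # goes to stderr on both Python 2 and 3
```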
Python 2's `Queue` module was renamed to `queue` in Python 3. To
keep the code running on both versions, we try the Python 3 name
first and fall back to the old name on `ImportError`.
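A minimal sketch of that import fallback, binding the module under the Python 3 name either way:

```python
try:
    import queue            # Python 3 name, tried first
except ImportError:
    import Queue as queue   # Python 2 fallback, aliased to match

# Either way, the rest of the code can use `queue` uniformly.
q = queue.Queue()
q.put(1)
```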
`threading._sleep` was a private function and was removed in
Python 3. It has been replaced by `time.sleep` for now, as that is
both public and consistent across versions.
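The replacement is a direct swap; a quick sketch showing `time.sleep` pausing for at least the requested interval:

```python
import time

start = time.time()
time.sleep(0.01)  # public, version-stable replacement for threading._sleep
elapsed = time.time() - start
```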
When determining chunk sizes and the number of chunks, division
could produce floating-point values. Since `range` raises a
`TypeError` when given a float, we now cast these values to
integers explicitly before building the range.
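A sketch of the casting, using a hypothetical `chunk_bounds` helper (not a function from the original code) to show why the casts are needed:

```python
def chunk_bounds(total_items, num_chunks):
    # total_items / num_chunks is a float under Python 3 (and under
    # Python 2 with true division); range() rejects floats, so we
    # cast both the chunk size and the chunk count to int.
    chunk_size = int(total_items / num_chunks)
    return [i * chunk_size for i in range(int(num_chunks))]

bounds = chunk_bounds(10, 4)  # chunk_size becomes int(2.5) == 2
```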
The iterator was previously changed so `next` became `__next__`,
which is the method name the Python 3 iterator protocol uses. This
restores Python 2 compatibility alongside those changes.
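The usual way to support both protocols is to define `__next__` and alias it as `next`; `Counter` here is an illustrative class, not one from the original code:

```python
class Counter(object):
    def __init__(self, limit):
        self.limit = limit
        self.current = 0

    def __iter__(self):
        return self

    def __next__(self):          # Python 3 iterator protocol
        if self.current >= self.limit:
            raise StopIteration
        self.current += 1
        return self.current

    next = __next__              # Python 2 alias for the same method

items = list(Counter(3))
```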
```diff
-print("Threadpool: processed %i individual items, with %i threads, one at a time, in %f s ( %f items / s )" % (ni, p.size(), elapsed, ni/elapsed), file=sys.stderr)
+sys.stderr.write("Threadpool: processed %i individual items, with %i threads, one at a time, in %f s ( %f items / s )\n" % (ni, p.size(), elapsed, ni/elapsed))
 # it couldn't yet notice that the input is depleted as we pulled exactly
 # ni items - the next one would remove it. Instead, we delete our channel
```
```diff
-print("Threadpool: processed %i individual items in chunks of %i, with %i threads, one at a time, in %f s ( %f items / s )" % (ni, ni/4, p.size(), elapsed, ni/elapsed), file=sys.stderr)
+sys.stderr.write("Threadpool: processed %i individual items in chunks of %i, with %i threads, one at a time, in %f s ( %f items / s)\n" % (ni, ni/4, p.size(), elapsed, ni/elapsed))
 task._assert(ni, ni)
 assert p.num_tasks() == 1 + null_tasks
 assert p.remove_task(task) is p  # del manually this time
```
```diff
-print("Dependent Tasks: evaluated %i items with read(1) of %i dependent in %f s ( %i items / s )" % (ni, aic, elapsed_single, ni/elapsed_single), file=sys.stderr)
+sys.stderr.write("Dependent Tasks: evaluated %i items with read(1) of %i dependent in %f s ( %i items / s )\n" % (ni, aic, elapsed_single, ni/elapsed_single))
```
```diff
-print("Dependent Tasks: evaluated %i items with read(1), min_size=%i, of %i dependent in %f s ( %i items / s )" % (ni, nri, aic, elapsed_minsize, ni/elapsed_minsize), file=sys.stderr)
+sys.stderr.write("Dependent Tasks: evaluated %i items with read(1), min_size=%i, of %i dependent in %f s ( %i items / s )\n" % (ni, nri, aic, elapsed_minsize, ni/elapsed_minsize))
 # it should have been a bit faster at least, and most of the time it is
 # Sometimes, it's not, mainly because:
 # * The test tasks lock a lot, hence they slow down the system
```
```diff
-print("Dependent Tasks: evaluated 2 connected pools and %i items with read(0), of %i dependent tasks in %f s ( %i items / s )" % (ni, aic+aic-1, elapsed, ni/elapsed), file=sys.stderr)
+sys.stderr.write("Dependent Tasks: evaluated 2 connected pools and %i items with read(0), of %i dependent tasks in %f s ( %i items / s )\n" % (ni, aic+aic-1, elapsed, ni/elapsed))
 # lose the handles of the second pool to allow others to go as well
```
```diff
-print("Dependent Tasks: evaluated 2 connected pools and %i items with read(1), of %i dependent tasks in %f s ( %i items / s )" % (ni, aic+aic-1, elapsed, ni/elapsed), file=sys.stderr)
+sys.stderr.write("Dependent Tasks: evaluated 2 connected pools and %i items with read(1), of %i dependent tasks in %f s ( %i items / s )\n" % (ni, aic+aic-1, elapsed, ni/elapsed))
```