Fix assorted typos within assorted comments (DataBiosphere#4023)
Hexotical authored Jan 28, 2022
1 parent 355d152 commit 5c431e5
Showing 11 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -83,7 +83,7 @@ tests=src/toil/test
cov="--cov=toil"
extras=
# You can say make develop packages=xxx to install packages in the same Python
-# environemnt as Toil itself without creating dependency conflicts with Toil
+# environment as Toil itself without creating dependency conflicts with Toil
packages=
sdist_name:=toil-$(shell python version_template.py distVersion).tar.gz

8 changes: 4 additions & 4 deletions attic/README.md
@@ -245,8 +245,8 @@ other jobs in a simple way.

The basic pattern provided by toil is as follows:

-1. You have a job running on your cluster which requires further parallelisation.
-2. You create a list of jobs to perform this parallelisation. These are the 'child' jobs of your process, we call them collectively the 'children'.
+1. You have a job running on your cluster which requires further parallelization.
+2. You create a list of jobs to perform this parallelization. These are the 'child' jobs of your process, we call them collectively the 'children'.
3. You create a 'follow-on' job, to be performed after all the children have successfully completed. This job is responsible for cleaning up the input files created for the children and doing any further processing. Children should not cleanup files created by parents, in case of a batch system failure which requires the child to be re-run (see 'Atomicity' below).
4. You end your current job successfully.
5. The batch system runs the children. These jobs may in turn have children and follow-on jobs.
@@ -312,7 +312,7 @@ Job.makeJobFnJob(setup, (fileToSort, N))

Notice that the child and follow-on jobs have also been refactored as functions, hence the methods **[addChildJobFn](https://github.com/benedictpaten/toil/blob/development/scriptTree/job.py#L82)** and **[setFollowOnFn](https://github.com/benedictpaten/toil/blob/development/scriptTree/job.py#L67)**, which take functions as opposed to Job objects.

-Note, there are two types of functions you can wrap - **job functions**, whose first argument must be the wrapping job object (the setup function above is an excample of a job function), and plain functions that do not have a reference to the wrapping job.
+Note, there are two types of functions you can wrap - **job functions**, whose first argument must be the wrapping job object (the setup function above is an example of a job function), and plain functions that do not have a reference to the wrapping job.
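
The pattern this README describes, and the job-function distinction fixed in the hunk above, read naturally as code. Here is a minimal sketch using only the method names the README itself cites (addChildJobFn and setFollowOnFn); exact signatures from this era of the API may differ, so treat it as illustrative:

```python
def setup(job, fileToSort, N):
    # 'job' is the wrapping job object, so this is a *job function*.
    # Spawn one child per chunk of the input: collectively, the 'children'.
    for i in range(N):
        job.addChildJobFn(sortChunk, fileToSort, i)
    # The follow-on runs only after every child has completed successfully,
    # and is the right place to clean up the input files made for the children.
    job.setFollowOnFn(mergeAndCleanUp, fileToSort, N)

def sortChunk(job, fileToSort, i):
    """Job function: sorts one chunk; must not delete files created by its parent."""

def mergeAndCleanUp(fileToSort, N):
    """Plain function: holds no reference to the wrapping job."""
```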

##Creating a scriptTree script:

@@ -392,7 +392,7 @@ toil replicates the environment in which toil or scriptTree is invoked and provi

Toil checkpoints its state on disk, so that it or the job manager can be wiped out and restarted. There is some gnarly test code to show how this works, it will keep crashing everything, at random points, but eventually everything will complete okay. As a user you needn't worry about any of this, but your child jobs must be atomic (as with all batch systems), and must follow the convention regarding input files.

-* _How scaleable?_
+* _How scalable?_

We have tested having 1000 concurrent jobs running on our cluster. This will depend on the underlying batch system being used.

6 changes: 3 additions & 3 deletions src/toil/__init__.py
@@ -492,7 +492,7 @@ class BotoCredentialAdapter(provider.Provider):
respecting the Boto 3 config files, even when parts of the app still use
Boto 2.
-This class also handles cacheing credentials in multi-process environments
+This class also handles caching credentials in multi-process environments
to avoid loads of processes swamping the EC2 metadata service.
"""

@@ -678,7 +678,7 @@ def _obtain_credentials_from_cache_or_boto3(self):
os.close(fd)
fd = None
log.debug('Failed to obtain credentials, removing %s.', tmp_path)
-# This unblocks the loosers.
+# This unblocks the losers.
os.unlink(tmp_path)
# Bail out. It's too likely to happen repeatedly
raise
@@ -689,7 +689,7 @@ def _obtain_credentials_from_cache_or_boto3(self):
log.debug('Credentials are not temporary. Leaving %s empty and renaming it to %s.',
tmp_path, path)
# No need to actually cache permanent credentials,
-# because we hnow we aren't getting them from the
+# because we know we aren't getting them from the
# metadata server or by assuming a role. Those both
# give temporary credentials.
else:
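The hunks above hint at the caching protocol in _obtain_credentials_from_cache_or_boto3: one process wins the right to hit the metadata service and publishes a cache file; the losers block until the winner's temporary file disappears. A minimal sketch of that idea, not Toil's actual implementation — CACHE_PATH and fetch_credentials are hypothetical names:

```python
import os
import time

CACHE_PATH = "/tmp/credentials.cache"  # hypothetical cache location

def get_credentials(fetch_credentials):
    """Fetch credentials once per host instead of once per process (sketch)."""
    tmp_path = CACHE_PATH + ".tmp"
    try:
        # O_EXCL makes exactly one process the winner of the race.
        fd = os.open(tmp_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        # We lost: wait for the winner's temp file to go away, then read the
        # cache (a real implementation would retry if the winner failed).
        while os.path.exists(tmp_path):
            time.sleep(0.1)
        with open(CACHE_PATH) as f:
            return f.read()
    try:
        creds = fetch_credentials()  # e.g. one call to the metadata service
        os.write(fd, creds.encode())
        os.close(fd)
        fd = None
        os.rename(tmp_path, CACHE_PATH)  # atomic publish
    except Exception:
        if fd is not None:
            os.close(fd)
        # Removing the temp file is what unblocks the losers.
        os.unlink(tmp_path)
        raise
    return creds
```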
6 changes: 3 additions & 3 deletions src/toil/batchSystems/kubernetes.py
@@ -16,7 +16,7 @@
Ony useful with network-based job stores, like AWSJobStore.
-Within non-priveleged Kubernetes containers, additional Docker containers
+Within non-privileged Kubernetes containers, additional Docker containers
cannot yet be launched. That functionality will need to wait for user-mode
Docker
"""
@@ -769,7 +769,7 @@ def getUpdatedBatchJob(self, maxWait):
jobID = int(jobObject.metadata.name[len(self.job_prefix):])
jobObjectListConditions =jobObject.status.conditions
totalPods = jobObject.status.active + jobObject.status.finished + jobObject.status.failed
-# Exit Reason defaults to 'Successfully Finsihed` unless said otherwise
+# Exit Reason defaults to 'Successfully Finished` unless said otherwise
exitReason = BatchJobExitReason.FINISHED
exitCode = 0

@@ -783,7 +783,7 @@ def getUpdatedBatchJob(self, maxWait):
jobObject.status.succeeded, jobObject.status.failed, jobObject.status.active)
# Get termination information of job
termination = jobObjectListConditions[0]
-# Log out succeess/failure given a reason
+# Log out success/failure given a reason
logger.info("%s REASON: %s", termination.type, termination.reason)

# Log out reason of failure and pod exit code
8 changes: 4 additions & 4 deletions src/toil/cwl/cwltoil.py
@@ -1103,7 +1103,7 @@ def open(self, fn: str, mode: str) -> IO[Any]:
return super().open(fn, mode)

def exists(self, path: str) -> bool:
"""Test for file existance."""
"""Test for file existence."""
# toil's _abs() throws errors when files are not found and cwltool's _abs() does not
try:
return os.path.exists(self._abs(path))
@@ -1203,7 +1203,7 @@ def toil_get_file(
:param streaming_allowed: If streaming is allowed
:param pipe_threads: List of threads responsible for streaming the data
-and open file descriptors corresponding to those files. Caller is resposible
+and open file descriptors corresponding to those files. Caller is responsible
to close the file descriptors (to break the pipes) and join the threads
"""
pipe_threads_real = pipe_threads or []
@@ -2720,7 +2720,7 @@ def scan_for_unsupported_requirements(
# If we are using the Toil FileStore we can't do InplaceUpdateRequirement
req, is_mandatory = tool.get_requirement("InplaceUpdateRequirement")
if req and is_mandatory:
-# The tool actualy uses this one, and it isn't just a hint.
+# The tool actually uses this one, and it isn't just a hint.
# Complain and explain.
raise CWL_UNSUPPORTED_REQUIREMENT_EXCEPTION(
"Toil cannot support InplaceUpdateRequirement when using the Toil file store. "
@@ -3558,7 +3558,7 @@ def remove_at_id(doc: Any) -> None:
def find_default_container(
args: argparse.Namespace, builder: cwltool.builder.Builder
) -> Optional[str]:
"""Find the default constuctor by consulting a Toil.options object."""
"""Find the default constructor by consulting a Toil.options object."""
if args.default_container:
return str(args.default_container)
if args.beta_use_biocontainers:
4 changes: 2 additions & 2 deletions src/toil/job.py
@@ -1507,7 +1507,7 @@ def registerPromise(self, path):
def prepareForPromiseRegistration(self, jobStore):
"""
Ensure that a promise by this job (the promissor) can register with the promissor when
-another job referring to the promise (the promissee) is being serialized. The promissee
+another job referring to the promise (the promisee) is being serialized. The promisee
holds the reference to the promise (usually as part of the the job arguments) and when it
is being pickled, so will the promises it refers to. Pickling a promise triggers it to be
registered with the promissor.
@@ -2338,7 +2338,7 @@ def _executor(self, stats, fileStore):
if not self.checkpoint:
for jobStoreFileID in Promise.filesToDelete:
# Make sure to wrap the job store ID in a FileID object so the file store will accept it
-# TODO: talk directly to the job sotre here instead.
+# TODO: talk directly to the job store here instead.
fileStore.deleteGlobalFile(FileID(jobStoreFileID, 0))
else:
# Else copy them to the job description to delete later
6 changes: 3 additions & 3 deletions src/toil/jobStores/utils.py
@@ -115,9 +115,9 @@ def __enter__(self):
return self.writable

def __exit__(self, exc_type, exc_val, exc_tb):
-# Closeing the writable end will send EOF to the readable and cause the reader thread
+# Closing the writable end will send EOF to the readable and cause the reader thread
# to finish.
-# TODO: Can close() fail? If so, whould we try and clean up after the reader?
+# TODO: Can close() fail? If so, would we try and clean up after the reader?
self.writable.close()
try:
if self.thread is not None:
@@ -286,7 +286,7 @@ class ReadableTransformingPipe(ReadablePipe):
The :meth:`.transform` method runs in its own thread, and should move data
chunk by chunk instead of all at once. It should finish normally if it
encounters either an EOF on the readable, or a :class:`BrokenPipeError` on
-the writable. This means tat it should make sure to actually catch a
+the writable. This means that it should make sure to actually catch a
:class:`BrokenPipeError` when writing.
See also: :class:`toil.lib.misc.WriteWatchingStream`.
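The ReadableTransformingPipe docstring above fixes a contract for transform(): move data chunk by chunk, finish normally on EOF from the readable, and catch BrokenPipeError on the writable. A sketch of a subclass honoring that contract — the gzip example and the (readable, writable) parameter shape are our assumptions from the docstring's description, not code taken from this file:

```python
import gzip

from toil.jobStores.utils import ReadableTransformingPipe

class GzippingPipe(ReadableTransformingPipe):
    """Illustrative transform: gzip-compresses data flowing through the pipe."""

    def transform(self, readable, writable):
        try:
            with gzip.GzipFile(fileobj=writable, mode="wb") as out:
                while True:
                    chunk = readable.read(64 * 1024)  # chunk by chunk, not all at once
                    if not chunk:  # EOF on the readable: finish normally
                        break
                    out.write(chunk)
        except BrokenPipeError:
            # The consumer stopped reading; per the docstring, treat as normal.
            pass
```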
8 changes: 4 additions & 4 deletions src/toil/provisioners/abstractProvisioner.py
@@ -415,7 +415,7 @@ def addNodes(self, nodeTypes: Set[str], numNodes, preemptable, spotBid=None):
def addManagedNodes(self, nodeTypes: Set[str], minNodes, maxNodes, preemptable, spotBid=None) -> None:
"""
Add a group of managed nodes of the given type, up to the given maximum.
-The nodes will automatically be launched and termianted depending on cluster load.
+The nodes will automatically be launched and terminated depending on cluster load.
Raises ManagedNodesNotSupportedException if the provisioner
implementation or cluster configuration can't have managed nodes.
@@ -475,7 +475,7 @@ def getNodeShape(self, instance_type: str, preemptable=False):
def destroyCluster(self) -> None:
"""
Terminates all nodes in the specified cluster and cleans up all resources associated with the
-cluser.
+cluster.
:param clusterName: identifier of the cluster to terminate.
"""
raise NotImplementedError
@@ -705,7 +705,7 @@ def addToilService(self, config: InstanceConfiguration, role: str, keyPath: str
:param role: Should be 'leader' or 'worker'. Will not work for 'worker' until leader credentials have been collected.
:param keyPath: path on the node to a server-side encryption key that will be added to the node after it starts. The service will wait until the key is present before starting.
-:param preemptable: Whether a woeker should identify itself as preemptable or not to the scheduler.
+:param preemptable: Whether a worker should identify itself as preemptable or not to the scheduler.
"""

# If keys are rsynced, then the mesos-agent needs to be started after the keys have been
@@ -1157,7 +1157,7 @@ def _getIgnitionUserData(self, role, keyPath=None, preemptable=False, architectu
the worker to the leader.
:param str keyPath: The path of a secret key for the worker to wait for the leader to create on it.
-:param bool preemptable: Set to true for a worker node to identify itself as preemptible in the cluster.
+:param bool preemptable: Set to true for a worker node to identify itself as preemptable in the cluster.
"""

# Start with a base config
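For reference, a hedged usage sketch of addManagedNodes based only on the signature shown in the hunk above; the helper, the provisioner argument, and the instance type string are hypothetical:

```python
def scale_worker_pool(provisioner, instance_type: str = "r5.xlarge") -> None:
    """Hypothetical helper showing the addManagedNodes call documented above."""
    provisioner.addManagedNodes(
        nodeTypes={instance_type},
        minNodes=0,
        maxNodes=10,
        preemptable=True,  # such nodes may be reclaimed by the cloud provider
        spotBid=0.50,      # only meaningful for preemptable (spot) nodes
    )
    # The group then launches and terminates nodes automatically with load.
```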
2 changes: 1 addition & 1 deletion src/toil/test/batchSystems/batchSystemTest.py
@@ -1066,7 +1066,7 @@ def testConcurrencyWithDisk(self):
def testNestedResourcesDoNotBlock(self):
"""
Resources are requested in the order Memory > Cpu > Disk.
-Test that inavailability of cpus for one job that is scheduled does not block another job
+Test that unavailability of cpus for one job that is scheduled does not block another job
that can run.
"""
tempDir = self._createTempDir('testFiles')
2 changes: 1 addition & 1 deletion src/toil/test/src/resourceTest.py
@@ -151,7 +151,7 @@ def _test(self, module_name,
# Now it gets a bit complicated: Ensure that the context manager returned by the
# jobStore's write_shared_file_stream() method is entered and that the file handle yielded
# by the context manager is written to once with the zipped source tree from which
-# 'toil.resource' was orginally imported. Keep the zipped tree around such that we can
+# 'toil.resource' was originally imported. Keep the zipped tree around such that we can
# mock the download later.
file_handle = jobStore.write_shared_file_stream.return_value.__enter__.return_value
# The first 0 index selects the first call of write(), the second 0 selects positional
2 changes: 1 addition & 1 deletion version_template.py
@@ -34,7 +34,7 @@

def version():
"""
-A version identifier that includes the full-legth commit SHA1 and an optional suffix to
+A version identifier that includes the full-length commit SHA1 and an optional suffix to
indicate that the working copy is dirty.
"""
return '-'.join(filter(None, [distVersion(), currentCommit(), ('dirty' if dirty() else None)]))
