Merge pull request #1083 from BD2KGenomics/issues/1076-fix-documentation-build

Fix documentation build bugs (resolves #1076)
hannes-ucsc authored Jul 30, 2016
2 parents 3240cd8 + 80e5d96 commit f770353
Showing 9 changed files with 41 additions and 47 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -92,7 +92,7 @@ clean_pypi:

docs: check_venv check_build_reqs
# Strange, but seemingly benign Sphinx warning floods stderr if not filtered:
cd docs && make html 2>&1 | grep -v "WARNING: duplicate object description of"
cd docs && make html
clean_docs: check_venv
- cd docs && make clean

1 change: 0 additions & 1 deletion docs/Makefile
@@ -50,7 +50,6 @@ help:

clean:
rm -rf $(BUILDDIR)/*
- rm -rf ./generated_rst

html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
3 changes: 0 additions & 3 deletions docs/conf.py
@@ -36,9 +36,6 @@ def real_dir_name(p, n=1):
raise RuntimeError('A virtualenv must be active and Sphinx must be installed in it')
path_to_dir = os.path.dirname(os.path.abspath(__file__))

subprocess.check_call('mkdir -p %s/generated_rst' % path_to_dir, shell=True)
subprocess.check_call('cd %s/generated_rst && sphinx-apidoc -fo . ../../src/' % path_to_dir, shell=True)

assert real_dir_name(__file__, 2) == real_dir_name(toil.version.__file__, 3), \
"Another Toil installation seems to have precedence over this working directory."
toilVersion = toil.version.version
2 changes: 0 additions & 2 deletions docs/index.rst
@@ -98,11 +98,9 @@ Contents:
architecture
batchSystem
jobStore
generated_rst/modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
2 changes: 1 addition & 1 deletion docs/toilAPI.rst
@@ -12,7 +12,7 @@ Job.FileStore
-------------
The FileStore is an abstraction of a Toil run's shared storage.

.. autoclass:: toil.job::Job.FileStore
.. autoclass:: toil.fileStore::FileStore
:members:

Job.Runner
26 changes: 13 additions & 13 deletions src/toil/fileStore.py
@@ -277,21 +277,21 @@ def readGlobalFile(self, fileStoreID, userPath=None, cache=True, mutable=None):
:param FileID fileStoreID: job store id for the file
:param string userPath: a path to the name of file to which the global \
file will be copied or hard-linked (see below).
:param string userPath: a path to the name of file to which the global
file will be copied or hard-linked (see below).
:param boolean cache: If True, a copy of the file will be saved into a cache that can be
used by other workers. caching supports multiple concurrent workers requesting the same
file by allowing only one to download the file while the others wait for it to complete.
used by other workers. caching supports multiple concurrent workers requesting the same
file by allowing only one to download the file while the others wait for it to complete.
:param boolean mutable: If True, the file path returned points to a file that is
modifiable by the user. Using False is recommended as it saves disk by making multiple
workers share a file via hard links. The value defaults to False unless backwards
compatibility was requested.
modifiable by the user. Using False is recommended as it saves disk by making multiple
workers share a file via hard links. The value defaults to False unless backwards
compatibility was requested.
:return: an absolute path to a local, temporary copy of the file keyed \
by fileStoreID.
:rtype : string
:return: an absolute path to a local, temporary copy of the file keyed
by fileStoreID.
:rtype: string
"""
# Check that the file hasn't been deleted by the user
if fileStoreID in self.filesToDelete:
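
As a point of reference for the readGlobalFile docstring above, a minimal usage sketch follows. It relies on the run(self, fileStore) signature and the readGlobalFile parameters documented in this diff; the WordCount class, its resource figures and the inputFileID argument are invented for illustration.

    from toil.job import Job

    class WordCount(Job):
        """Hypothetical job that reads a file previously written to the job store."""

        def __init__(self, inputFileID):
            Job.__init__(self, memory='1G', cores=1, disk='2G')
            self.inputFileID = inputFileID

        def run(self, fileStore):
            # cache=True lets concurrent workers share a single download; with
            # mutable=False the returned path may be a hard link into the cache,
            # so the file must not be modified in place.
            localPath = fileStore.readGlobalFile(self.inputFileID,
                                                 cache=True, mutable=False)
            with open(localPath) as f:
                return len(f.read().split())
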
@@ -757,7 +757,7 @@ def addToCache(self, localFilePath, jobStoreFileID, callingFunc, mutable=None):

def returnFileSize(self, fileStoreID, cachedFileSource, lockFileHandle,
fileAlreadyCached=False):
'''
"""
Returns the fileSize of the file described by fileStoreID to the job requirements pool
if the file was recently added to, or read from cache (A job that reads n bytes from
cache doesn't really use those n bytes as a part of it's job disk since cache is already
@@ -767,9 +767,9 @@ def returnFileSize(fileStoreID, cachedFileSource, lockFileHandle,
:param str cachedFileSource: File being added to cache
:param file lockFileHandle: Open file handle to the cache lock file
:param bool fileAlreadyCached: A flag to indicate whether the file was already cached or
not. If it was, then it means that you don't need to add the filesize to cache again.
not. If it was, then it means that you don't need to add the filesize to cache again.
:return: None
'''
"""
fileSize = os.stat(cachedFileSource).st_size
cacheInfo = self._CacheState._load(self.cacheStateFile)
# If the file isn't cached, add the size of the file to the cache pool. However, if the
48 changes: 24 additions & 24 deletions src/toil/job.py
@@ -54,9 +54,9 @@ def __init__(self, memory=None, cores=None, disk=None, preemptable=None, checkpo
:param disk: the amount of local disk space required by the job, expressed in bytes.
:param preemptable: if the job can be run on a preemptable node.
:param checkpoint: if any of this job's successor jobs completely fails,
exhausting all their retries, remove any successor jobs and rerun this job to restart the
subtree. Job must be a leaf vertex in the job graph when initially defined, see
:func:`toil.job.Job.checkNewCheckpointsAreCutVertices`.
exhausting all their retries, remove any successor jobs and rerun this job to restart the
subtree. Job must be a leaf vertex in the job graph when initially defined, see
:func:`toil.job.Job.checkNewCheckpointsAreCutVertices`.
:type cores: int or string convertable by bd2k.util.humanize.human2bytes to an int
:type disk: int or string convertable by bd2k.util.humanize.human2bytes to an int
:type preemptable: boolean
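
To illustrate how these keyword arguments are typically passed through, here is a hedged sketch of a Job subclass; the AlignReads name, the sampleID field and the resource figures are made up, and fileStore.logToMaster stands in for real work.

    from toil.job import Job

    class AlignReads(Job):
        def __init__(self, sampleID):
            # Strings such as '4G' go through bd2k.util.humanize.human2bytes;
            # plain integers (bytes, or a core count) are accepted as well.
            Job.__init__(self, memory='4G', cores=8, disk='10G',
                         preemptable=True, checkpoint=False)
            self.sampleID = sampleID

        def run(self, fileStore):
            fileStore.logToMaster('aligning sample %s' % self.sampleID)
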
@@ -187,22 +187,22 @@ def addService(self, service, parentService=None):
"""
Add a service.
The :func:`toil.job.Job.Service.start` method of the service will be called \
after the run method has completed but before any successors are run. \
The service's :func:`toil.job.Job.Service.stop` method will be called once \
The :func:`toil.job.Job.Service.start` method of the service will be called
after the run method has completed but before any successors are run.
The service's :func:`toil.job.Job.Service.stop` method will be called once
the successors of the job have been run.
Services allow things like databases and servers to be started and accessed \
Services allow things like databases and servers to be started and accessed
by jobs in a workflow.
:raises toil.job.JobException: If service has already been made the child of a job or another service.
:param toil.job.Job.Service service: Service to add.
:param toil.job.Job.Service parentService: Service that will be started before 'service' is
started. Allows trees of services to be established. parentService must be a service
of this job.
started. Allows trees of services to be established. parentService must be a service
of this job.
:return: a promise that will be replaced with the return value from
:func:`toil.job.Job.Service.start` of service in any successor of the job.
:rtype:toil.job.Promise
:func:`toil.job.Job.Service.start` of service in any successor of the job.
:rtype: toil.job.Promise
"""
if parentService is not None:
# Do check to ensure that parentService is a service of this job
@@ -374,13 +374,13 @@ def allocatePromiseFile(self, path):

def checkJobGraphForDeadlocks(self):
"""
:raises toil.job.JobGraphDeadlockException: if the job graph \
is cyclic, contains multiple roots or contains checkpoint jobs that are
not leaf vertices when defined (see :func:`toil.job.Job.checkNewCheckpointsAreLeaves`).
See :func:`toil.job.Job.checkJobGraphConnected`, \
:func:`toil.job.Job.checkJobGraphAcyclic` and \
See :func:`toil.job.Job.checkJobGraphConnected`,
:func:`toil.job.Job.checkJobGraphAcyclic` and
:func:`toil.job.Job.checkNewCheckpointsAreLeafVertices` for more info.
:raises toil.job.JobGraphDeadlockException: if the job graph
is cyclic, contains multiple roots or contains checkpoint jobs that are
not leaf vertices when defined (see :func:`toil.job.Job.checkNewCheckpointsAreLeaves`).
"""
self.checkJobGraphConnected()
self.checkJobGraphAcylic()
@@ -578,7 +578,7 @@ def stop(self, fileStore):
Stops the service.
:param toil.job.Job.FileStore fileStore: A fileStore object to create temporary files with.
Function can block until complete.
Function can block until complete.
"""
pass

@@ -588,8 +588,8 @@ def check(self):
:raise RuntimeError: If the service failed, this will cause the service job to be labeled failed.
:returns: True if the service is still running, else False. If False then the service job will be terminated,
and considered a success. Important point: if the service job exits due to a failure, it should raise a
RuntimeError, not return False!
and considered a success. Important point: if the service job exits due to a failure, it should raise a
RuntimeError, not return False!
"""
pass
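
Pulling the addService, stop and check hunks together, the following sketch shows a service written against the documented contract. The SleepService and useService names are invented, and start is assumed to take a fileStore argument mirroring stop above.

    import subprocess
    from toil.job import Job

    def useService(job, startValue):
        # Receives whatever SleepService.start() returned, via the promise below.
        return startValue

    class SleepService(Job.Service):
        def start(self, fileStore):            # signature assumed to mirror stop()
            self.popen = subprocess.Popen(['sleep', '300'])
            return 'service-ready'

        def stop(self, fileStore):
            self.popen.terminate()
            self.popen.wait()

        def check(self):
            rc = self.popen.poll()
            if rc is None:
                return True                    # still running
            if rc != 0:
                # Failures must be raised, not returned, so the service job is
                # labeled as failed.
                raise RuntimeError('service exited with status %d' % rc)
            return False                       # clean exit; service job succeeds

    root = Job(memory='1G', cores=1, disk='1G')
    promisedValue = root.addService(SleepService())
    root.addChildJobFn(useService, promisedValue)
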

@@ -1451,13 +1451,13 @@ def __init__(self, valueOrCallable, *args):
For example, let f, g, and h be functions. Then a Toil workflow can be
defined as follows::
A = Job.wrapFn(f)
B = A.addChildFn(g, cores=PromisedRequirement(A.rv()))
C = B.addChildFn(h, cores=PromisedRequirement(lambda x: 2*x, B.rv()))
A = Job.wrapFn(f)
B = A.addChildFn(g, cores=PromisedRequirement(A.rv()))
C = B.addChildFn(h, cores=PromisedRequirement(lambda x: 2*x, B.rv()))
:param valueOrCallable: A single Promise instance or a function that
takes \*args as input parameters.
:param int|Promise *args: variable length argument list
:param int|Promise \*args: variable length argument list
"""
if hasattr(valueOrCallable, '__call__'):
assert len(args) != 0, 'Need parameters for PromisedRequirement function.'
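
The docstring example above, expanded into a slightly fuller sketch. The function bodies, return values and job store path are placeholders; Job.Runner.getDefaultOptions and Job.Runner.startToil are used as the entry points of this era.

    from toil.job import Job, PromisedRequirement

    def f():
        return 2            # consumed below as B's core count

    def g():
        return 3            # doubled by the lambda to give C's core count

    def h():
        return 'done'

    if __name__ == '__main__':
        options = Job.Runner.getDefaultOptions('./promisedRequirementJobStore')
        A = Job.wrapFn(f)
        B = A.addChildFn(g, cores=PromisedRequirement(A.rv()))
        C = B.addChildFn(h, cores=PromisedRequirement(lambda x: 2 * x, B.rv()))
        Job.Runner.startToil(A, options)
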
2 changes: 1 addition & 1 deletion src/toil/test/__init__.py
@@ -350,7 +350,7 @@ def make_tests(generalMethod, targetClass=None, **kwargs):
function is the result of a unique combination of parameters applied to the generalMethod. Each of the
parameters has a corresponding string that will be used to name the method. These generated functions
are named in the scheme:
test_[generalMethodName]___[firstParamaterName]_[someValueName]__[secondParamaterName]_...
test_[generalMethodName]___[firstParamaterName]_[someValueName]__[secondParamaterName]_...
The arguments following the generalMethodName should be a series of one or more dictionaries of the form
{str : type, ...} where the key represents the name of the value. The names will be used to represent the
2 changes: 1 addition & 1 deletion src/toil/test/batchSystems/batchSystemTest.py
@@ -726,7 +726,7 @@ def count(delta, file_path):
"""
Increments counter file and returns the max number of times the file
has been modified. Counter data must be in the form:
concurrent tasks, max concurrent tasks (counter should be initialized to 0,0)
concurrent tasks, max concurrent tasks (counter should be initialized to 0,0)
:param int delta: increment value
:param str file_path: path to shared counter file
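
For clarity, a tiny sketch of the counter-file layout the docstring describes; it illustrates the 'concurrent tasks,max concurrent tasks' format only and is not the actual count() helper.

    def readCounter(file_path):
        # The file holds 'concurrent tasks,max concurrent tasks', e.g. '0,0'.
        with open(file_path) as f:
            concurrent, maxConcurrent = (int(x) for x in f.read().split(','))
        return concurrent, maxConcurrent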