[Docs] Fix TOCs and update QNN derived primitives #862


Merged
7 commits merged on Dec 2, 2024
2 changes: 2 additions & 0 deletions .pylintdict
@@ -123,6 +123,7 @@ dp
dt
eda
edaspy
egger
eigen
eigenphase
@@ -576,6 +577,7 @@ vatan
vec
vectorized
veeravalli
vicente
vicentini
vigo
ville
6 changes: 6 additions & 0 deletions docs/apidocs/qiskit_machine_learning.gradients.rst
@@ -0,0 +1,6 @@
.. _qiskit-machine-learning-gradients:

.. automodule:: qiskit_machine_learning.gradients
:no-members:
:no-inherited-members:
:no-special-members:
6 changes: 6 additions & 0 deletions docs/apidocs/qiskit_machine_learning.optimizers.rst
@@ -0,0 +1,6 @@
.. _qiskit-machine-learning-optimizers:

.. automodule:: qiskit_machine_learning.optimizers
:no-members:
:no-inherited-members:
:no-special-members:
6 changes: 6 additions & 0 deletions docs/apidocs/qiskit_machine_learning.state_fidelities.rst
@@ -0,0 +1,6 @@
.. _qiskit-machine-learning-state_fidelities:

.. automodule:: qiskit_machine_learning.state_fidelities
:no-members:
:no-inherited-members:
:no-special-members:
3 changes: 3 additions & 0 deletions qiskit_machine_learning/__init__.py
@@ -37,8 +37,11 @@
circuit.library
connectors
datasets
gradients
kernels
neural_networks
optimizers
state_fidelities
utils
"""
23 changes: 11 additions & 12 deletions qiskit_machine_learning/algorithms/__init__.py
@@ -53,18 +53,8 @@
PegasosQSVC
QSVC
NeuralNetworkClassifier
VQC
Inference
+++++++++++
Algorithms for inference.
.. autosummary::
:toctree: ../stubs/
:nosignatures:
QBayesian
NeuralNetworkClassifier
Regressors
++++++++++
@@ -75,9 +65,18 @@
:nosignatures:
QSVR
NeuralNetworkRegressor
VQR
NeuralNetworkRegressor
Inference
+++++++++++
Algorithms for inference.
.. autosummary::
:toctree: ../stubs/
:nosignatures:
QBayesian
"""
from .trainable_model import TrainableModel
from .serializable_model import SerializableModelMixin
6 changes: 3 additions & 3 deletions qiskit_machine_learning/algorithms/trainable_model.py
@@ -32,7 +32,7 @@


class TrainableModel(SerializableModelMixin):
"""Base class for ML model that defines a scikit-learn like interface for Estimators."""
"""Base class for ML model that defines a scikit-learn-like interface for `Estimator` instances."""

# pylint: disable=too-many-positional-arguments
def __init__(
@@ -46,10 +46,10 @@ def __init__(
):
"""
Args:
neural_network: An instance of an quantum neural network. If the neural network has a
neural_network: An instance of a quantum neural network. If the neural network has a
one-dimensional output, i.e., `neural_network.output_shape=(1,)`, then it is
expected to return values in [-1, +1] and it can only be used for binary
classification. If the output is multi-dimensional, it is assumed that the result
classification. If the output is multidimensional, it is assumed that the result
is a probability distribution, i.e., that the entries are non-negative and sum up
to one. Then there are two options, either one-hot encoding or not. In case of
one-hot encoding, each probability vector resulting a neural network is considered
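For reference, a minimal sketch of the binary-classification convention described in this docstring: a QNN with one-dimensional output in [-1, +1] consumed by `NeuralNetworkClassifier`. The circuit, data, and optimizer choice are illustrative placeholders, not part of this PR.

```python
import numpy as np
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit_machine_learning.algorithms import NeuralNetworkClassifier
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.optimizers import COBYLA

x, w = Parameter("x"), Parameter("w")
qc = QuantumCircuit(1)
qc.ry(x, 0)
qc.rx(w, 0)

# One-dimensional output in [-1, +1] -> binary classification with labels {-1, +1}.
qnn = EstimatorQNN(circuit=qc, input_params=[x], weight_params=[w])
classifier = NeuralNetworkClassifier(qnn, optimizer=COBYLA(maxiter=30))

X = np.array([[0.1], [0.9], [0.2], [0.8]])
y = np.array([-1, 1, -1, 1])
classifier.fit(X, y)
print(classifier.predict(X))
```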
6 changes: 3 additions & 3 deletions qiskit_machine_learning/circuit/library/__init__.py
@@ -1,6 +1,6 @@
# This code is part of a Qiskit project.
#
# (C) Copyright IBM 2020, 2023.
# (C) Copyright IBM 2020, 2024.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
@@ -19,7 +19,7 @@
.. currentmodule:: qiskit_machine_learning.circuit.library
Feature Maps
Feature maps
------------
.. autosummary::
@@ -29,7 +29,7 @@
RawFeatureVector
Helper Circuits
Helper circuits
---------------
.. autosummary::
4 changes: 2 additions & 2 deletions qiskit_machine_learning/connectors/__init__.py
@@ -1,6 +1,6 @@
# This code is part of a Qiskit project.
#
# (C) Copyright IBM 2021, 2023.
# (C) Copyright IBM 2021, 2024.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
@@ -14,7 +14,7 @@
Connectors (:mod:`qiskit_machine_learning.connectors`)
======================================================
Connectors from Qiskit Machine Learning to other frameworks.
"Connector" tools to couple Qiskit Machine Learning to other frameworks.
.. currentmodule:: qiskit_machine_learning.connectors
4 changes: 2 additions & 2 deletions qiskit_machine_learning/datasets/__init__.py
@@ -1,6 +1,6 @@
# This code is part of a Qiskit project.
#
# (C) Copyright IBM 2019, 2023.
# (C) Copyright IBM 2019, 2024.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
@@ -14,7 +14,7 @@
Datasets (:mod:`qiskit_machine_learning.datasets`)
==================================================
A set of sample datasets suitable for machine learning problems
A set of sample datasets to test machine learning algorithms.
.. currentmodule:: qiskit_machine_learning.datasets
13 changes: 7 additions & 6 deletions qiskit_machine_learning/gradients/__init__.py
@@ -10,10 +10,11 @@
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.

"""
r"""
Gradients (:mod:`qiskit_machine_learning.gradients`)
==============================================
Algorithms to calculate the gradient of a quantum circuit.
====================================================
Algorithms to calculate the gradient of a cost landscape to optimize a given objective function.
.. currentmodule:: qiskit_machine_learning.gradients
@@ -29,7 +30,7 @@
EstimatorGradientResult
SamplerGradientResult
Linear Combination of Unitaries
Linear combination of unitaries
-------------------------------
.. autosummary::
@@ -39,7 +40,7 @@
LinCombEstimatorGradient
LinCombSamplerGradient
Parameter Shift Rules
Parameter-shift rules
---------------------
.. autosummary::
@@ -49,7 +50,7 @@
ParamShiftEstimatorGradient
ParamShiftSamplerGradient
Simultaneous Perturbation Stochastic Approximation
Simultaneous perturbation stochastic approximation
--------------------------------------------------
.. autosummary::
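A hedged usage sketch of the parameter-shift gradient listed in this module docstring; the single-qubit circuit, observable, and evaluation point are placeholders, and the reference `Estimator` primitive is assumed.

```python
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp
from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient

theta = Parameter("theta")
qc = QuantumCircuit(1)
qc.ry(theta, 0)

# Gradient of <Z> with respect to theta, evaluated at theta = 0.3.
gradient = ParamShiftEstimatorGradient(Estimator())
job = gradient.run([qc], [SparsePauliOp("Z")], parameter_values=[[0.3]])
print(job.result().gradients)
```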
8 changes: 4 additions & 4 deletions qiskit_machine_learning/neural_networks/__init__.py
@@ -1,6 +1,6 @@
# This code is part of a Qiskit project.
#
# (C) Copyright IBM 2019, 2023.
# (C) Copyright IBM 2019, 2024.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
@@ -36,7 +36,7 @@
NeuralNetwork
Neural Networks
Neural networks
---------------
.. autosummary::
@@ -46,8 +46,8 @@
EstimatorQNN
SamplerQNN
Neural Network Metrics
----------------------
Metrics for neural networks
---------------------------
.. autosummary::
:toctree: ../stubs/
25 changes: 17 additions & 8 deletions qiskit_machine_learning/neural_networks/estimator_qnn.py
@@ -120,16 +120,25 @@ def __init__(
):
r"""
Args:
estimator: The estimator used to compute neural network's results.
If ``None``, a default instance of the reference estimator,
:class:`~qiskit.primitives.Estimator`, will be used.
circuit: The quantum circuit to represent the neural network. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is passed, the
`input_params` and `weight_params` do not have to be provided, because these two
``input_params`` and ``weight_params`` do not have to be provided, because these two
properties are taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit`.
estimator: The estimator used to compute neural network's results.
If ``None``, a default instance of the reference estimator,
:class:`~qiskit.primitives.Estimator`, will be used.
.. warning::
The assignment ``estimator=None`` defaults to using
:class:`~qiskit.primitives.Estimator`, which points to a deprecated estimator V1
(as of Qiskit 1.2). ``EstimatorQNN`` will adopt Estimator V2 as default no later than
Qiskit Machine Learning 0.9.
observables: The observables for outputs of the neural network. If ``None``,
use the default :math:`Z^{\otimes num\_qubits}` observable.
use the default :math:`Z^{\otimes n}` observable, where :math:`n`
is the number of qubits.
input_params: The parameters that correspond to the input data of the network.
If ``None``, the input data is not bound to any parameters.
If a :class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
@@ -139,9 +148,10 @@ def __init__(
If ``None``, the weights are not bound to any parameters.
If a :class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`weight_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` weight_parameters.
`weight_parameters` associated with
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit`.
gradient: The estimator gradient to be used for the backward pass.
If None, a default instance of the estimator gradient,
If ``None``, a default instance of the estimator gradient,
:class:`~qiskit_machine_learning.gradients.ParamShiftEstimatorGradient`, will be used.
input_gradients: Determines whether to compute gradients with respect to input data.
Note that this parameter is ``False`` by default, and must be explicitly set to
@@ -152,7 +162,6 @@ def __init__(
Defaults to ``None``, as some primitives do not need transpiled circuits.
Raises:
QiskitMachineLearningError: Invalid parameter values.
QiskitMachineLearningError: Gradient is required if
"""
if estimator is None:
estimator = Estimator()
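As a point of reference for the reordered docstring above, a minimal `EstimatorQNN` construction sketch; the circuit and parameter values are illustrative, and leaving `observables` unset falls back to the default :math:`Z^{\otimes n}` observable described there.

```python
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit_machine_learning.neural_networks import EstimatorQNN

x = Parameter("x")   # input parameter
w = Parameter("w")   # trainable weight
qc = QuantumCircuit(1)
qc.ry(x, 0)
qc.rx(w, 0)

# observables=None -> default Z observable on all qubits;
# estimator=None -> reference Estimator (V1, see the warning above).
qnn = EstimatorQNN(circuit=qc, input_params=[x], weight_params=[w])
print(qnn.forward([0.1], [0.2]))   # one input sample, one weight value
```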
77 changes: 45 additions & 32 deletions qiskit_machine_learning/neural_networks/sampler_qnn.py
@@ -143,39 +143,52 @@ def __init__(
input_gradients: bool = False,
pass_manager: BasePassManager | None = None,
):
"""
Args: sampler: The sampler primitive used to compute the neural network's results. If
``None`` is given, a default instance of the reference sampler defined by
:class:`~qiskit.primitives.Sampler` will be used. circuit: The parametrized quantum
circuit that generates the samples of this network. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is passed,
the `input_params` and `weight_params` do not have to be provided, because these two
properties are taken from the :class:`~qiskit_machine_learning.circuit.library.QNNCircuit
`. input_params: The parameters of the circuit corresponding to the input. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`input_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` input_parameters.
weight_params: The parameters of the circuit corresponding to the trainable weights. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`weight_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` weight_parameters. sparse:
Returns whether the output is sparse or not. interpret: A callable that maps the measured
integer to another unsigned integer or tuple of unsigned integers. These are used as new
indices for the (potentially sparse) output array. If no interpret function is passed,
then an identity function will be used by this neural network. output_shape: The output
shape of the custom interpretation. For SamplerV1, it is ignored if no custom interpret
method is provided where the shape is taken to be ``2^circuit.num_qubits``. gradient: An
optional sampler gradient to be used for the backward pass. If ``None`` is given,
a default instance of
:class:`~qiskit_machine_learning.gradients.ParamShiftSamplerGradient` will be used.
input_gradients: Determines whether to compute gradients with respect to input data. Note
that this parameter is ``False`` by default, and must be explicitly set to ``True`` for a
proper gradient computation when using
:class:`~qiskit_machine_learning.connectors.TorchConnector`.
pass_manager: The pass manager to transpile the circuits, if necessary.
Defaults to ``None``, as some primitives do not need transpiled circuits.
r"""
Args:
circuit: The parametrized quantum
circuit that generates the samples of this network. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is passed,
the `input_params` and `weight_params` do not have to be provided, because these two
properties are taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit`.
sampler: The sampler primitive used to compute the neural network's results. If
``None`` is given, a default instance of the reference sampler defined by
:class:`~qiskit.primitives.Sampler` will be used.
.. warning::
The assignment ``sampler=None`` defaults to using
:class:`~qiskit.primitives.Sampler`, which points to a deprecated Sampler V1
(as of Qiskit 1.2). ``SamplerQNN`` will adopt Sampler V2 as default no later than
Qiskit Machine Learning 0.9.
input_params: The parameters of the circuit corresponding to the input. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`input_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` input_parameters.
weight_params: The parameters of the circuit corresponding to the trainable weights. If a
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`weight_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` ``weight_parameters``.
sparse: Returns whether the output is sparse or not.
interpret: A callable that maps the measured integer to another unsigned integer or tuple
of unsigned integers. These are used as new indices for the (potentially sparse)
output array. If no interpret function is passed, then an identity function will be
used by this neural network.
output_shape: The output shape of the custom interpretation. For SamplerV1, it is ignored
if no custom interpret method is provided where the shape is taken to be
``2^circuit.num_qubits``.
gradient: An optional sampler gradient to be used for the backward pass. If ``None`` is
given, a default instance of
:class:`~qiskit_machine_learning.gradients.ParamShiftSamplerGradient` will be used.
input_gradients: Determines whether to compute gradients with respect to input data. Note
that this parameter is ``False`` by default, and must be explicitly set to ``True``
for a proper gradient computation when using
:class:`~qiskit_machine_learning.connectors.TorchConnector`.
pass_manager: The pass manager to transpile the circuits, if necessary.
Defaults to ``None``, as some primitives do not need transpiled circuits.
Raises:
QiskitMachineLearningError: Invalid parameter values.
QiskitMachineLearningError: Invalid parameter values.
"""
# set primitive, provide default
if sampler is None:
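A minimal `SamplerQNN` sketch matching the re-flowed docstring above; the two-qubit circuit and the parity `interpret` function are placeholders, and leaving `sampler` unset falls back to the reference Sampler flagged in the warning.

```python
from qiskit.circuit import ParameterVector, QuantumCircuit
from qiskit_machine_learning.neural_networks import SamplerQNN

inputs = ParameterVector("x", 2)
weights = ParameterVector("w", 2)
qc = QuantumCircuit(2)
qc.ry(inputs[0], 0)
qc.ry(inputs[1], 1)
qc.cx(0, 1)
qc.ry(weights[0], 0)
qc.ry(weights[1], 1)
qc.measure_all()

def parity(bitstring: int) -> int:
    """Map each sampled integer to its bit parity (two output classes)."""
    return bin(bitstring).count("1") % 2

qnn = SamplerQNN(
    circuit=qc,
    input_params=inputs,
    weight_params=weights,
    interpret=parity,
    output_shape=2,   # needed here because a custom interpret is given
)
print(qnn.forward([0.1, 0.2], [0.3, 0.4]))
```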
17 changes: 8 additions & 9 deletions qiskit_machine_learning/optimizers/__init__.py
@@ -10,13 +10,12 @@
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.

"""
r"""
Optimizers (:mod:`qiskit_machine_learning.optimizers`)
================================================
Classical Optimizers.
======================================================
This package contains a variety of classical optimizers and were designed for use by
qiskit_algorithm's quantum variational algorithms, such as :class:`~qiskit_machine_learning.VQE`.
Contains a variety of classical optimizers designed for
Qiskit Algorithm's quantum variational algorithms.
Logically, these optimizers can be divided into two categories:
`Local Optimizers`_
@@ -29,7 +28,7 @@
.. currentmodule:: qiskit_machine_learning.optimizers
Optimizer Base Classes
Optimizer base classes
----------------------
.. autosummary::
@@ -40,7 +39,7 @@
Optimizer
Minimizer
Steppable Optimization
Steppable optimization
----------------------
.. autosummary::
@@ -58,7 +57,7 @@
OptimizerState
Local Optimizers
Local optimizers
----------------
.. autosummary::
@@ -92,7 +91,7 @@
https://github.com/qiskit-community/qiskit-algorithms/issues/84.
Global Optimizers
Global optimizers
-----------------
The global optimizers here all use `NLOpt <https://nlopt.readthedocs.io/en/latest/>`_ for their
core function and can only be used if the optional dependent ``NLOpt`` package is installed.
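A minimal sketch of the common interface referenced in this module docstring: each optimizer exposes a `minimize` method that takes a callable and an initial point and returns an `OptimizerResult`. `COBYLA` and the toy objective below are illustrative choices only.

```python
import numpy as np
from qiskit_machine_learning.optimizers import COBYLA

def objective(x):
    # Simple quadratic bowl with its minimum at (1.0, -0.5).
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

optimizer = COBYLA(maxiter=100)
result = optimizer.minimize(fun=objective, x0=np.array([0.0, 0.0]))
print(result.x, result.fun, result.nfev)
```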
13 changes: 1 addition & 12 deletions qiskit_machine_learning/optimizers/optimizer_utils/__init__.py
@@ -9,18 +9,7 @@
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Utils for optimizers
Optimizer Utils (:mod:`qiskit_machine_learning.optimizers.optimizer_utils`)
=====================================================================
.. autosummary::
:toctree: ../stubs/
:nosignatures:
LearningRate
"""
""" Supplementary tools for optimizers. """

from .learning_rate import LearningRate

32 changes: 16 additions & 16 deletions qiskit_machine_learning/optimizers/spsa.py
Original file line number Diff line number Diff line change
@@ -40,13 +40,13 @@
class SPSA(Optimizer):
"""Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer.
SPSA [1] is an gradient descent method for optimizing systems with multiple unknown parameters.
SPSA [1] is a gradient descent method for optimizing systems with multiple unknown parameters.
As an optimization method, it is appropriately suited to large-scale population models,
adaptive modeling, and simulation optimization.
.. seealso::
Many examples are presented at the `SPSA Web site <http://www.jhuapl.edu/SPSA>`__.
Many examples are presented at the `SPSA website <http://www.jhuapl.edu/SPSA>`__.
The main feature of SPSA is the stochastic gradient approximation, which requires only two
measurements of the objective function, regardless of the dimension of the optimization
@@ -76,7 +76,7 @@ class SPSA(Optimizer):
.. note::
This component has some function that is normally random. If you want to reproduce behavior
then you should set the random number generator seed in the algorithm_globals
then you should set the random number generator seed in the ``algorithm_globals``
(``qiskit_machine_learning.utils.algorithm_globals.random_seed = seed``).
@@ -105,15 +105,15 @@ def loss(x):
spsa = SPSA(maxiter=300)
result = spsa.minimize(loss, x0=initial_point)
To use the Hessian information, i.e. 2-SPSA, you can add `second_order=True` to the
initializer of the `SPSA` class, the rest of the code remains the same.
To use the Hessian information, i.e. 2-SPSA, you can add ``second_order=True`` to the
initializer of the ``SPSA`` class, the rest of the code remains the same.
.. code-block:: python
two_spsa = SPSA(maxiter=300, second_order=True)
result = two_spsa.minimize(loss, x0=initial_point)
The `termination_checker` can be used to implement a custom termination criterion.
The ``termination_checker`` can be used to implement a custom termination criterion.
.. code-block:: python
@@ -214,31 +214,31 @@ def __init__(
second_order: If True, use 2-SPSA instead of SPSA. In 2-SPSA, the Hessian is estimated
additionally to the gradient, and the gradient is preconditioned with the inverse
of the Hessian to improve convergence.
regularization: To ensure the preconditioner is symmetric and positive definite, the
regularization: To ensure the pre-conditioner is symmetric and positive definite, the
identity times a small coefficient is added to it. This generator yields that
coefficient.
hessian_delay: Start multiplying the gradient with the inverse Hessian only after a
certain number of iterations. The Hessian is still evaluated and therefore this
argument can be useful to first get a stable average over the last iterations before
using it as preconditioner.
using it as pre-conditioner.
lse_solver: The method to solve for the inverse of the Hessian. Per default an
exact LSE solver is used, but can e.g. be overwritten by a minimization routine.
initial_hessian: The initial guess for the Hessian. By default the identity matrix
initial_hessian: The initial guess for the Hessian. By default, the identity matrix
is used.
callback: A callback function passed information in each iteration step. The
information is, in this order: the number of function evaluations, the parameters,
the function value, the stepsize, whether the step was accepted.
the function value, the step-size, whether the step was accepted.
termination_checker: A callback function executed at the end of each iteration step. The
arguments are, in this order: the parameters, the function value, the number
of function evaluations, the stepsize, whether the step was accepted. If the callback
of function evaluations, the step-size, whether the step was accepted. If the callback
returns True, the optimization is terminated.
To prevent additional evaluations of the objective method, if the objective has not yet
been evaluated, the objective is estimated by taking the mean of the objective
evaluations used in the estimate of the gradient.
Raises:
ValueError: If ``learning_rate`` or ``perturbation`` is an array with less elements
ValueError: If ``learning_rate`` or ``perturbation`` is an array with fewer elements
than the number of iterations.
@@ -255,7 +255,7 @@ def __init__(
for attr, name in zip([learning_rate, perturbation], ["learning_rate", "perturbation"]):
if isinstance(attr, (list, np.ndarray)):
if len(attr) < maxiter:
raise ValueError(f"Length of {name} is smaller than maxiter ({maxiter}).")
raise ValueError(f"Length of {name} is smaller than 'maxiter' ({maxiter}).")

self.learning_rate = learning_rate
self.perturbation = perturbation
@@ -306,7 +306,7 @@ def calibrate(
loss: The loss function.
initial_point: The initial guess of the iteration.
c: The initial perturbation magnitude.
stability_constant: The value of `A`.
stability_constant: The value of :math:`A`.
target_magnitude: The target magnitude for the first update step, defaults to
:math:`2\pi / 10`.
alpha: The exponent of the learning rate power series.
@@ -628,10 +628,10 @@ def minimize(
if self.termination_checker(
self._nfev, x_next, fx_check, np.linalg.norm(update), True
):
logger.info("terminated optimization at {k}/{self.maxiter} iterations")
logger.info("Terminated optimization at %s/%s iterations.", k, self.maxiter)
break

logger.info("SPSA: Finished in %s", time() - start)
logger.info("SPSA: Finished in %s.", time() - start)

if self.last_avg > 1:
x = np.mean(np.asarray(last_steps), axis=0)
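To complement the `termination_checker` wording touched above, a hedged sketch of a custom termination callback: the callable receives, in order, the number of function evaluations, the parameters, the function value, the step size, and whether the step was accepted (matching the call site shown in the hunk above). The objective and window length are placeholders.

```python
import numpy as np
from qiskit_machine_learning.optimizers import SPSA

def objective(x):
    return float(np.linalg.norm(x))

class TerminationChecker:
    """Stop once the last `window` function values no longer decrease."""

    def __init__(self, window: int):
        self.window = window
        self.values = []

    def __call__(self, nfev, parameters, value, stepsize, accepted) -> bool:
        self.values.append(value)
        if len(self.values) > self.window:
            recent = self.values[-self.window:]
            slope = np.polyfit(range(self.window), recent, 1)[0]
            return slope >= 0   # no improvement over the window -> terminate
        return False

spsa = SPSA(maxiter=200, termination_checker=TerminationChecker(10))
result = spsa.minimize(objective, x0=np.array([0.5, 0.5]))
print(result.x, result.nfev)
```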
10 changes: 6 additions & 4 deletions qiskit_machine_learning/state_fidelities/__init__.py
@@ -9,14 +9,16 @@
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""

r"""
State Fidelities (:mod:`qiskit_machine_learning.state_fidelities`)
============================================================
Algorithms that compute the fidelity of pairs of quantum states.
==================================================================
Algorithms that compute the fidelity of two given quantum states.
.. currentmodule:: qiskit_machine_learning.state_fidelities
State Fidelities
State fidelities
----------------
.. autosummary::