
Commit 0129f92

Update readme (#14)

* Update README.md
* Update README.md: adjust for recent changes to cfg options.
* Create tutorials
* Add tutorials folder and move GQE Usage into there, along with the blog example from @mawolf2023
* Move to examples folder.
* Rename co2_18qubits.py to gqe_co2_18q.py
* Update gqe_co2_18q.py

1 parent b41c8ed commit 0129f92

File tree

- README.md
- examples/python/README.md
- examples/python/gqe_co2_18q.py

3 files changed: +103 -1 lines changed

Diff for: README.md (-1 line)
````diff
@@ -38,4 +38,3 @@ ninja
 export PYTHONPATH=$HOME/.cudaq:$PWD/python/cudaqlib
 ctest
 ```
-
````

Diff for: examples/python/README.md (+44 lines)
# Welcome to the CUDA-Q Libraries repository

## GQE Usage

GQE usage: `gqe(cost, pool, config=None, **kwargs)` can take a `config` object and/or additional `**kwargs`.

The `config` object is of type `ConfigDict`, imported from `ml_collections`.

Example usage:

```python
from ml_collections import ConfigDict

cfg = ConfigDict()
cfg.seed = 3047
```

The available config options are provided in the table below:

| **Parameter** | **Default Value** | **Description** |
|------------------------|---------------------|-------------------|
| `cfg.num_samples` | `5` | Number of circuits to generate during each epoch/batch |
| `cfg.max_iters` | `100` | Number of epochs to run |
| `cfg.ngates` | `20` | Number of gates that make up each generated circuit |
| `cfg.seed` | `3047` | Random seed |
| `cfg.lr` | `5e-7` | Learning rate used by the optimizer |
| `cfg.energy_offset` | `0.0` | Offset added to the expectation value of the circuit (energy) for numerical stability, see [K. Nakaji et al. (2024)](https://arxiv.org/abs/2401.09253), Sec. 3 |
| `cfg.grad_norm_clip` | `1.0` | `max_norm` for clipping gradients, see [Lightning docs](https://lightning.ai/docs/fabric/stable/api/fabric_methods.html#clip-gradients) |
| `cfg.temperature` | `5.0` | Starting inverse temperature $\beta$ as described in [K. Nakaji et al. (2024)](https://arxiv.org/abs/2401.09253), Sec. 2.2 |
| `cfg.del_temperature` | `0.05` | Temperature increase after each epoch |
| `cfg.resid_pdrop` | `0.0` | Dropout probability for all fully connected layers in the embeddings, encoder, and pooler, see [GPT2Config](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/configuration_gpt2.py) |
| `cfg.embd_pdrop` | `0.0` | Dropout ratio for the embeddings, see [GPT2Config](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/configuration_gpt2.py) |
| `cfg.attn_pdrop` | `0.0` | Dropout ratio for the attention, see [GPT2Config](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/configuration_gpt2.py) |
| `cfg.small` | `False` | If `True`, uses a small transformer (6 hidden layers and 6 attention heads, as opposed to the default transformer of 12 of each) |
| `cfg.save_dir` | `"./output/"` | Path to save files |
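
As a slightly fuller sketch, the snippet below sets several of these options and hands the config to `gqe`. It assumes `cost` and `pool` have already been constructed (for example, as in the `gqe_co2_18q.py` script later in this diff).

```python
from ml_collections import ConfigDict

import cudaqlib

cfg = ConfigDict()
cfg.max_iters = 50          # run 50 epochs instead of the default 100
cfg.ngates = 20             # gates per generated circuit
cfg.lr = 5e-7               # optimizer learning rate
cfg.save_dir = "./output/"  # where output files are written

# `cost` and `pool` are assumed to be defined as in the example script.
minE, optimPoolOps = cudaqlib.gqe(cost, pool, config=cfg)
```
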
The `**kwargs` accepts the following arguments:

| **arg** | **Description** |
|------------------------|-------------------|
| `model` | Pass in an already constructed transformer |
| `optimizer` | Pass in an already constructed optimizer |
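
For example, here is a hypothetical sketch of passing in a custom model. The GPT-2 flavor is an assumption inferred from the `GPT2Config` references in the table above, not a documented requirement.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Assumption: gqe accepts a GPT-2 style transformer, as the
# GPT2Config-based dropout options above suggest. Six hidden layers and
# six attention heads mirror what `cfg.small = True` would select.
model = GPT2LMHeadModel(GPT2Config(n_layer=6, n_head=6))

minE, optimPoolOps = cudaqlib.gqe(cost, pool, model=model)
```
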
In addition, `**kwargs` can be used to override any of the default config values if you don't pass in a full `config` object, e.g.:

| **arg** | **Description** |
|------------------------|-------------------|
| `max_iters` | Overrides `cfg.max_iters`, the total number of epochs to run |
| `energy_offset` | Overrides `cfg.energy_offset`, the offset added to the expectation value |
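
For instance, the CO2 example below overrides both of these without constructing a `ConfigDict`:

```python
minE, optimPoolOps = cudaqlib.gqe(cost, pool, max_iters=10, energy_offset=184.)
```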

Diff for: examples/python/gqe_co2_18q.py (+59 lines)
```python
import cudaq
import cudaqlib

# Define the molecule
distance = 1.1621
geometry = [('O', (0., 0., 0.)), ('C', (0., 0., distance)),
            ('O', (0., 0., 2 * distance))]
molecule = cudaqlib.operators.create_molecule(geometry,
                                              'sto-3g',
                                              0,
                                              0,
                                              MP2=True,
                                              nele_cas=10,
                                              norb_cas=9)

# Get the system Hamiltonian
hamiltonian = molecule.hamiltonian

# Get the number of qubits
numQubits = molecule.hamiltonian.get_qubit_count()

# Create the operator pool
pool = cudaqlib.gse.get_operator_pool(
    'uccsd',
    num_qubits=numQubits,
    num_electrons=10,
    operator_coeffs=[
        0.003125, -0.003125, 0.00625, -0.00625, 0.0125, -0.0125, 0.025,
        -0.025, 0.05, -0.05, 0.1, -0.1
    ])


# Define the Hartree-Fock initial state
@cudaq.kernel
def init(q: cudaq.qview):
    for i in range(10):
        x(q[i])


# Define the GQE cost function
def cost(sampledPoolOperations: list):
    """
    Cost should take the sampled pool operations and return the
    associated cost. For this chemistry example, we take UCCSD pool
    operations and return the cudaq observe result.
    """
    # Convert the operator pool elements to cudaq.pauli_words
    asWords = [
        cudaq.pauli_word(op.to_string(False)) for op in sampledPoolOperations
    ]

    # Get the pool coefficients as their own list
    operatorCoeffs = [
        op.get_coefficient().real for op in sampledPoolOperations
    ]

    @cudaq.kernel
    def kernel(numQubits: int, coeffs: list[float],
               words: list[cudaq.pauli_word]):
        q = cudaq.qvector(numQubits)
        init(q)
        for i, word in enumerate(words):
            exp_pauli(coeffs[i], q, word)

    return cudaq.observe(kernel, molecule.hamiltonian, numQubits,
                         operatorCoeffs, asWords).expectation()


# Run GQE, then report the lowest energy found and the selected pool operations
minE, optimPoolOps = cudaqlib.gqe(cost, pool, max_iters=10, energy_offset=184.)
print(f'Ground Energy = {minE}')
print('Ansatz Ops')
for idx in optimPoolOps:
    print(pool[idx].get_coefficient().real, pool[idx].to_string(False))
```
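
With CUDA-Q and `cudaqlib` on your `PYTHONPATH` (see the build steps in the top-level README above), the script can be run directly, e.g. `python examples/python/gqe_co2_18q.py`.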
