
Commit 7d4fb7f

Authored by dmdunla, brian-kelley, ntjohnson1, Dunlavy, and DeepBlockDeepak
14 testing implement tests for full coverage (#128)
* Merge latest updates (#124)
* Update nvecs to use tenmat.
* Full implementation of collapse. Required implementation of tensor.from_tensor_type for tenmat objects. Updated tensor tests. (#32)
* Update __init__.py Bump version.
* Create CHANGELOG.md Changelog update
* Update CHANGELOG.md Consistent formatting
* Update CHANGELOG.md Correction
* Create ci-tests.yml
* Update README.md Adding coverage statistics from coveralls.io
* Create requirements.txt
* 33 use standard license (#34)
* Use standard, correctly formatted LICENSE
* Delete LICENSE
* Create LICENSE
* Update and rename ci-tests.yml to regression-tests.yml
* Update README.md
* Fix bug in tensor.mttkrp that only showed up when ndims > 3. (#36)
* Update __init__.py Bump version
* Bump version
* Adding files to support pypi dist creation and uploading
* Fix PyPi installs. Bump version.
* Fixing np.reshape usage. Adding more tests for tensor.ttv. (#38)
* Fixing issues with np.reshape; requires order='F' to align with Matlab functionality. (#39) Closes #30.
* Bump version.
* Adding tensor.ttm. Adding use case in tenmat to support ttm testing. (#40) Closes #27
* Bump version
* Format CHANGELOG
* Update CHANGELOG.md
* pypi publishing action on release
* Allowing rdims or cdims to be empty array. (#43) Closes #42
* Adding tensor.ttt implementation. (#44) Closes #28
* Bump version
* Implement ktensor.score and associated tests.
* Changes to supporting pyttb data classes and associated tests to enable ktensor.score.
* Bump version.
* Compatibility with numpy 1.24.x (#49) Close #48
* Replace "numpy.float" with equivalent "float"; numpy.float was deprecated in 1.20 and removed in 1.24
* sptensor.ttv: support 'vector' being a plain list (rather than just numpy.ndarray). Backwards compatible - an ndarray argument still works. This is because in newer numpy, it's not allowed to do np.array(list) where the elements of list are ndarrays of different shapes.
* Make ktensor.innerprod call ttv with 'vector' as a plain list (instead of numpy.ndarray, because newer versions don't allow ragged arrays)
* tensor.ttv: avoid ragged numpy arrays
* Fix two unit test failures due to numpy related changes
* More numpy updates: numpy.int is removed - use int instead; don't try to construct ragged/inhomogeneous numpy arrays in tests - use plain lists of vectors instead
* Fix typo in assert message
* Let ttb.tt_dimscheck catch empty input error. In the three ttv methods, ttb.tt_dimscheck checks that the 'vector' argument is not an empty list/ndarray. Revert previous changes that checked for this before calling tt_dimscheck.
* Bump version
* TENSOR: Fix slices ref when return value isn't scalar or vector. #41 (#50) Closes #41
* Ttensor implementation (#51)
* TENSOR: Fix slices ref when return value isn't scalar or vector. #41
* TTENSOR: Add tensor creation (partial support of core tensor types) and display
* SPTENSOR: Add numpy scalar type for multiplication filter.
* TTENSOR: Double, full, isequal, mtimes, ndims, size, uminus, uplus, and partial innerprod.
* TTENSOR: TTV (finishes innerprod), mttkrp, and norm
* TTENSOR: TTM, permute and minor cleanup.
* TTENSOR: Reconstruct
* TTENSOR: Nvecs
* SPTENSOR: Fix argument mismatch for ttm (modes s.b. dims); fix ttm for rectangular matrices; make error message consistent with tensor
* TENSOR: Fix error message
* TTENSOR: Improve test coverage and corresponding bug fixes discovered.
* Test coverage (#52)
* SPTENSOR: Fix argument mismatch for ttm (modes s.b. dims); fix ttm for rectangular matrices; make error message consistent with tensor
* TENSOR: Fix error message
* SPTENSOR: Improve test coverage, replace prints, and some doc string fixes.
* PYTTB_UTILS: Improve test coverage
* TENMAT: Remove impossible condition. Shape is a property, and the property handles the (0,) shape condition, so ndims should never see it.
* TENSOR: Improve test coverage. One line left, but the logic of setitem is unclear without MATLAB validation of behavior.
* CP_APR: Add tests for sptensor, and corresponding bug fixes to improve test coverage.
---------
Co-authored-by: Danny Dunlavy <[email protected]>
* Bump version
* TUCKER_ALS: Add tucker_als to validate ttensor implementation. (#53)
* Bump version of actions (#55) actions/setup-python@v4 to avoid deprecation warnings
* Tensor docs plus Linting and Typing and Black oh my (#54)
* TENSOR: Apply black and enforce it
* TENSOR: Add isort and pylint. Fix to pass, then enforce
* TENSOR: Variety of linked fixes: add mypy type checking; update infrastructure for validating package; fix doc tests and add more examples
* DOCTEST: Add doctest automatically to regression; fix existing failures
* DOCTEST: Fix non-uniform array
* DOCTEST: Fix precision errors in example
* AUTOMATION: Add test directory, otherwise only doctests run
* TENSOR: Fix bad rebase from numpy fix
* Auto formatting (#60)
* COVERAGE: Fix some coverage regressions from pylint PR
* ISORT: Run isort on source and tests
* BLACK: Run black on source and tests
* BLACK: Run black on source and tests
* FORMATTING: Add tests and verification for autoformatting
* FORMATTING: Add black/isort to root to simplify
* Add preliminary contributor guide instructions. Closes #59
* TUCKER_ALS: TTM with negative values is broken in ttensor (#62) (#66): replace usage in tucker_als; update test for tucker_als to ensure result matches expectation; add early error handling in ttensor ttm for negative dims
* Hosvd (#67)
* HOSVD: Preliminary outline of core functionality
* HOSVD: Fix numeric bug - was slicing incorrectly; update test to check convergence
* HOSVD: Finish output and test coverage
* TENSOR: Prune numbers real. Real and mypy don't play nice (python/mypy#3186); this allows partial typing support of HOSVD
* Add test that matches TTB for MATLAB output of HOSVD (#79) This closes #78
* Bump version (#81) Closes #80
* Lint pyttb_utils and lint/type sptensor (#77)
* PYTTB_UTILS: Fix and enforce pylint
* PYTTB_UTILS: Pull out utility only used internally in sptensor
* SPTENSOR: Fix and enforce pylint
* SPTENSOR: Initial pass at typing support
* SPTENSOR: Complete initial typing coverage
* SPTENSOR: Fix test coverage from typing changes.
* PYLINT: Update test to lint files in parallel to improve dev experience.
* HOSVD: Negative signs can be permuted for equivalent decomposition (#82)
* Pre commit (#83)
* Setup and pyproject are redundant. Remove and resolve install issue
* Try adding pre-commit hooks
* Update Makefile for simplicity and add notes to contributor guide.
* Make pre-commit optional opt-in
* Make regression tests use simplified dependencies so we track fewer places.
* Using dynamic version in pyproject.toml to reduce places where version is set. (#86)
* Adding shell=True to subprocess.run() calls (#87)
* Adding Nick to authors (#89)
* Release prep (#90)
* Fix author for PyPI. Bump to dev version.
* Exclude dims (#91)
* Explicit exclude_dims: updated tt_dimscheck; update all uses of tt_dimscheck and propagate interface; add test coverage for exclude_dims changes
* Tucker_als: Fix workaround that motivated exclude_dims
* Bump version
* Spelling
* Tensor generator helpers (#93)
* TENONES: Add initial tenones support
* TENZEROS: Add initial tenzeros support
* TENDIAG: Add initial tendiag support
* SPTENDIAG: Add initial sptendiag support
* Link in autodocumentation for recently added code (#98): TTENSOR, HOSVD, TUCKER_ALS, tensor generators
* Remove warning for nvecs (#99): make it a debug-level log for now; remove test enforcement
* Rand generators (#100)
* Non-functional change: fix numpy deprecation warning, logic should be equivalent
* Tenrand initial implementation
* Sptenrand initial implementation
* Complete pass on ktensor docs. (#101)
* Bump version
* Bump version
* Trying to fix coveralls
* Trying coveralls github action
* Fixing arrange and normalize. (#103)
* Fixing arrange and normalize.
* Merge main (#104)
* Trying to fix coveralls
* Trying coveralls github action
* Rename contributor guide for github magic (#106)
* Rename contributor guide for github magic
* Update reference to contributor guide from README
* Fixed the mean and stdev typo for cp_als (#117)
* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity (#118)
* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity
* Formatted changes with isort and black.
* Updated all `tensor`-named parameters to `input_tensor`, including in docs (#120)
* Tensor growth (#109)
* Tensor.__setitem__: Break into methods (non-functional change to make logic flow clearer)
* Tensor.__setitem__: Fix some types to resolve edge cases
* Sptensor.__setitem__: Break into methods (non-functional change to make flow clearer)
* Sptensor.__setitem__: Catch additional edge cases in sptensor indexing
* Tensor.__setitem__: Catch subtensor additional dim growth
* Tensor indexing (#116)
* Tensor.__setitem__/__getitem__: Fix linear index. Before, a numpy array was required; now works on value/slice/Iterable
* Tensor.__getitem__: Fix subscripts usage, consistent with setitem now. Update usages (primarily in sptensor)
* Sptensor.__setitem__/__getitem__: Fix subscripts usage, consistent with tensor and MATLAB now. Update test usage
* sptensor: Add coverage for improved indexing capability
* tensor: Add coverage for improved indexing capability
---------
Co-authored-by: brian-kelley <[email protected]>
Co-authored-by: ntjohnson1 <[email protected]>
Co-authored-by: Dunlavy <[email protected]>
Co-authored-by: DeepBlockDeepak <[email protected]>
* Adding tests and data for import_data, export_data, sptensor, ktensor. Small changes in code that was unreachable.
* Updating formatting with black
* More updates for coverage.
* Black formatting updates
* Update regression-tests.yml Adding verbose to black and isort calls
* Black updated locally to align with CI testing
* Update regression-tests.yml
---------
Co-authored-by: brian-kelley <[email protected]>
Co-authored-by: ntjohnson1 <[email protected]>
Co-authored-by: Dunlavy <[email protected]>
Co-authored-by: DeepBlockDeepak <[email protected]>
1 parent: d70a102 · commit: 7d4fb7f

9 files changed: +154, -59 lines changed

pyttb/cp_apr.py

Lines changed: 42 additions & 37 deletions
@@ -253,7 +253,7 @@ def tt_cp_apr_mu(
      kktModeViolations = np.zeros((N,))

      if printitn > 0:
-         print("\nCP_APR:\n")
+         print("CP_APR:")

      # Start the wall clock timer.
      start = time.time()

@@ -304,7 +304,7 @@ def tt_cp_apr_mu(
      # Print status
      if printinneritn != 0 and divmod(i, printinneritn)[1] == 0:
          print(
-             "\t\tMode = {}, Inner Iter = {}, KKT violation = {}\n".format(
+             "\t\tMode = {}, Inner Iter = {}, KKT violation = {}".format(
                  n, i, kktModeViolations[n]
              )
          )

@@ -325,11 +325,11 @@ def tt_cp_apr_mu(
          # Check for convergence
          if isConverged:
              if printitn > 0:
-                 print("Exiting because all subproblems reached KKT tol.\n")
+                 print("Exiting because all subproblems reached KKT tol.")
              break
          if nTimes[iter] > stoptime:
              if printitn > 0:
-                 print("Exiting because time limit exceeded.\n")
+                 print("Exiting because time limit exceeded.")
              break

      t_stop = time.time() - start

@@ -345,12 +345,12 @@ def tt_cp_apr_mu(
          normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
      )
      fit = 1 - (normresidual / normTensor)  # fraction explained by model
-     print("===========================================\n")
-     print(" Final log-likelihood = {} \n".format(obj))
-     print(" Final least squares fit = {} \n".format(fit))
-     print(" Final KKT violation = {}\n".format(kktViolations[iter]))
-     print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
-     print(" Total execution time = {} secs\n".format(t_stop))
+     print("===========================================")
+     print(" Final log-likelihood = {}".format(obj))
+     print(" Final least squares fit = {}".format(fit))
+     print(" Final KKT violation = {}".format(kktViolations[iter]))
+     print(" Total inner iterations = {}".format(sum(nInnerIters)))
+     print(" Total execution time = {} secs".format(t_stop))

      output = {}
      output["params"] = (

@@ -472,7 +472,7 @@ def tt_cp_apr_pdnr(
      times = np.zeros((maxiters, 1))

      if printitn > 0:
-         print("\nCP_PDNR (alternating Poisson regression using damped Newton)\n")
+         print("CP_PDNR (alternating Poisson regression using damped Newton)")

      dispLineWarn = printinneritn > 0

@@ -493,7 +493,7 @@ def tt_cp_apr_pdnr(
      sparseIx.append(row_indices)

      if printitn > 0:
-         print("done\n")
+         print("done")

      e_vec = np.ones((1, rank))

@@ -578,13 +578,16 @@ def tt_cp_apr_pdnr(
              kktModeViolations[n] = kkt_violation

              if printinneritn > 0 and np.mod(i, printinneritn) == 0:
-                 print("\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i))
+                 print(
+                     "\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i),
+                     end="",
+                 )

                  if i == 0:
-                     print(", RowKKT = {}\n".format(kkt_violation))
+                     print(", RowKKT = {}".format(kkt_violation))
                  else:
                      print(
-                         ", RowKKT = {}, RowObj = {}\n".format(
+                         ", RowKKT = {}, RowObj = {}".format(
                              kkt_violation, -f_new
                          )
                      )

@@ -667,7 +670,7 @@ def tt_cp_apr_pdnr(
          if printitn > 0 and np.mod(iter, printitn) == 0:
              fnVals[iter] = -tt_loglikelihood(input_tensor, M)
              print(
-                 "{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}\n".format(
+                 "{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}".format(
                      iter,
                      nInnerIters[iter],
                      kktViolations[iter],

@@ -684,7 +687,7 @@ def tt_cp_apr_pdnr(
          if isConverged and inexact and rowsubprobStopTol <= stoptol:
              break
          if times[iter] > stoptime:
-             print("EXiting because time limit exceeded\n")
+             print("EXiting because time limit exceeded")
              break

      t_stop = time.time() - start

@@ -700,12 +703,12 @@ def tt_cp_apr_pdnr(
          normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
      )
      fit = 1 - (normresidual / normTensor)  # fraction explained by model
-     print("===========================================\n")
-     print(" Final log-likelihood = {} \n".format(obj))
-     print(" Final least squares fit = {} \n".format(fit))
-     print(" Final KKT violation = {}\n".format(kktViolations[iter]))
-     print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
-     print(" Total execution time = {} secs\n".format(t_stop))
+     print("===========================================")
+     print(" Final log-likelihood = {}".format(obj))
+     print(" Final least squares fit = {}".format(fit))
+     print(" Final KKT violation = {}".format(kktViolations[iter]))
+     print(" Total inner iterations = {}".format(sum(nInnerIters)))
+     print(" Total execution time = {} secs".format(t_stop))

      output = {}
      output["params"] = (

@@ -840,7 +843,7 @@ def tt_cp_apr_pqnr(
      times = np.zeros((maxiters, 1))

      if printitn > 0:
-         print("\nCP_PQNR (alternating Poisson regression using quasi-Newton)\n")
+         print("CP_PQNR (alternating Poisson regression using quasi-Newton)")

      dispLineWarn = printinneritn > 0

@@ -861,7 +864,7 @@ def tt_cp_apr_pqnr(
      sparseIx.append(row_indices)

      if printitn > 0:
-         print("done\n")
+         print("done")

      # Main loop: iterate until convergence or a max threshold is reached
      for iter in range(maxiters):

@@ -958,20 +961,22 @@ def tt_cp_apr_pqnr(

              # We now use \| KKT \|_{inf}:
              kkt_violation = np.max(np.abs(np.minimum(m_row, gradM)))
-             # print("Intermediate Printing m_row: {}\n and gradM{}".format(m_row, gradM))

              # Report largest row subproblem initial violation
              if i == 0 and kkt_violation > kktModeViolations[n]:
                  kktModeViolations[n] = kkt_violation

              if printinneritn > 0 and np.mod(i, printinneritn) == 0:
-                 print("\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i))
+                 print(
+                     "\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i),
+                     end="",
+                 )

                  if i == 0:
-                     print(", RowKKT = {}\n".format(kkt_violation))
+                     print(", RowKKT = {}".format(kkt_violation))
                  else:
                      print(
-                         ", RowKKT = {}, RowObj = {}\n".format(
+                         ", RowKKT = {}, RowObj = {}".format(
                              kkt_violation, -f_new
                          )
                      )

@@ -1075,7 +1080,7 @@ def tt_cp_apr_pqnr(
          if printitn > 0 and np.mod(iter, printitn) == 0:
              fnVals[iter] = -tt_loglikelihood(input_tensor, M)
              print(
-                 "{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}\n".format(
+                 "{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}".format(
                      iter, nInnerIters[iter], kktViolations[iter], fnVals[iter], num_zero
                  )
              )

@@ -1086,7 +1091,7 @@ def tt_cp_apr_pqnr(
          if isConverged:
              break
          if times[iter] > stoptime:
-             print("Exiting because time limit exceeded\n")
+             print("Exiting because time limit exceeded")
              break

      t_stop = time.time() - start

@@ -1102,12 +1107,12 @@ def tt_cp_apr_pqnr(
          normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
      )
      fit = 1 - (normresidual / normTensor)  # fraction explained by model
-     print("===========================================\n")
-     print(" Final log-likelihood = {} \n".format(obj))
-     print(" Final least squares fit = {} \n".format(fit))
-     print(" Final KKT violation = {}\n".format(kktViolations[iter]))
-     print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
-     print(" Total execution time = {} secs\n".format(t_stop))
+     print("===========================================")
+     print(" Final log-likelihood = {}".format(obj))
+     print(" Final least squares fit = {}".format(fit))
+     print(" Final KKT violation = {}".format(kktViolations[iter]))
+     print(" Total inner iterations = {}".format(sum(nInnerIters)))
+     print(" Total execution time = {} secs".format(t_stop))

      output = {}
      output["params"] = (
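
The change above is mechanical but worth spelling out: print() already appends a newline, so the embedded "\n" characters were producing extra blank lines, and where one status line is assembled from two calls the first call now passes end="" to suppress its newline. A minimal standalone sketch of the pattern (the values below are made up for illustration, not taken from cp_apr.py):

# Minimal sketch of the print pattern adopted above (illustrative values only).
kkt_violation = 1.2e-3  # hypothetical number standing in for the real diagnostic

# Before, an embedded "\n" plus print()'s own newline produced a blank line;
# dropping it lets print() terminate the line exactly once.
print("Final KKT violation = {}".format(kkt_violation))

# When one status line is built from two calls, the first suppresses its newline.
print("Mode = {}, Row = {}, InnerIt = {}".format(0, 3, 1), end="")
print(", RowKKT = {}".format(kkt_violation))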

pyttb/export_data.py

Lines changed: 3 additions & 3 deletions
@@ -15,6 +15,9 @@ def export_data(data, filename, fmt_data=None, fmt_weights=None):
      """
      Export tensor-related data to a file.
      """
+     if not isinstance(data, (ttb.tensor, ttb.sptensor, ttb.ktensor, np.ndarray)):
+         assert False, f"Invalid data type for export: {type(data)}"
+
      # open file
      fp = open(filename, "w")

@@ -54,9 +57,6 @@ def export_data(data, filename, fmt_data=None, fmt_weights=None):
          export_size(fp, data.shape)
          export_array(fp, data, fmt_data)

-     else:
-         assert False, "Invalid data type for export"
-

  def export_size(fp, shape):
      # Export the size of something to a file
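
The net effect of this diff is that export_data now rejects unsupported types before it opens the output file, instead of discovering the problem in a trailing else branch after the file handle already exists. A small standalone sketch of the same guard-clause idea (export_example is a hypothetical function, not the pyttb implementation):

# Hypothetical sketch of up-front validation; not pyttb's export_data.
import numpy as np

def export_example(data, filename):
    # Fail fast, before any file is created for bad input.
    if not isinstance(data, (np.ndarray, list)):
        assert False, f"Invalid data type for export: {type(data)}"
    with open(filename, "w") as fp:
        fp.write(str(data))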

pyttb/import_data.py

Lines changed: 5 additions & 3 deletions
@@ -24,23 +24,27 @@ def import_data(filename):
      data_type = import_type(fp)

      if data_type not in ["tensor", "sptensor", "matrix", "ktensor"]:
+         fp.close()
          assert False, f"Invalid data type found: {data_type}"

      if data_type == "tensor":
          shape = import_shape(fp)
          data = import_array(fp, np.prod(shape))
+         fp.close()
          return ttb.tensor().from_data(data, shape)

      elif data_type == "sptensor":
          shape = import_shape(fp)
          nz = import_nnz(fp)
          subs, vals = import_sparse_array(fp, len(shape), nz)
+         fp.close()
          return ttb.sptensor().from_data(subs, vals, shape)

      elif data_type == "matrix":
          shape = import_shape(fp)
          mat = import_array(fp, np.prod(shape))
          mat = np.reshape(mat, np.array(shape))
+         fp.close()
          return mat

      elif data_type == "ktensor":

@@ -54,11 +58,9 @@ def import_data(filename):
          fac = import_array(fp, np.prod(fac_shape))
          fac = np.reshape(fac, np.array(fac_shape))
          factor_matrices.append(fac)
+     fp.close()
      return ttb.ktensor().from_data(weights, factor_matrices)

-     # Close file
-     fp.close()
-

  def import_type(fp):
      # Import IO data type
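
Because import_data returns from inside each branch, the old trailing fp.close() was unreachable; the diff adds an explicit close on every exit path. A context manager would give the same guarantee without per-branch calls; the sketch below only illustrates that alternative under that assumption and is not what the commit does:

# Hypothetical reader showing the context-manager alternative; pyttb's
# import_data keeps explicit fp.close() calls as shown in the diff above.
def read_type(filename):
    with open(filename, "r") as fp:  # closed automatically on every return or raise
        data_type = fp.readline().strip()
        if data_type not in ["tensor", "sptensor", "matrix", "ktensor"]:
            raise ValueError(f"Invalid data type found: {data_type}")
        return data_type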

pyttb/sptensor.py

Lines changed: 3 additions & 13 deletions
@@ -671,14 +671,11 @@ def logical_and(self, B: Union[float, sptensor, ttb.tensor]) -> sptensor:
      if not self.shape == B.shape:
          assert False, "Must be tensors of the same shape"

-     def is_length_2(x):
-         return len(x) == 2
-
      C = sptensor.from_aggregator(
          np.vstack((self.subs, B.subs)),
          np.vstack((self.vals, B.vals)),
          self.shape,
-         is_length_2,
+         lambda x: len(x) == 2,
      )

      return C

@@ -735,15 +732,11 @@ def logical_or(
          assert False, "Logical Or requires tensors of the same size"

      if isinstance(B, ttb.sptensor):
-
-         def is_length_ge_1(x):
-             return len(x) >= 1
-
          return sptensor.from_aggregator(
              np.vstack((self.subs, B.subs)),
              np.ones((self.subs.shape[0] + B.subs.shape[0], 1)),
              self.shape,
-             is_length_ge_1,
+             lambda x: len(x) >= 1,
          )

      assert False, "Sptensor Logical Or argument must be scalar or sptensor"

@@ -780,12 +773,9 @@ def logical_xor(
      if self.shape != other.shape:
          assert False, "Logical XOR requires tensors of the same size"

-     def length1(x):
-         return len(x) == 1
-
      subs = np.vstack((self.subs, other.subs))
      return ttb.sptensor.from_aggregator(
-         subs, np.ones((len(subs), 1)), self.shape, length1
+         subs, np.ones((len(subs), 1)), self.shape, lambda x: len(x) == 1
      )

      assert False, "The argument must be an sptensor, tensor or scalar"

tests/data/invalid_dims.tns

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+ matrix
+ 2
+ 4 2 1
+ 1.0000000000000000e+00
+ 5.0000000000000000e+00
+ 2.0000000000000000e+00
+ 6.0000000000000000e+00
+ 3.0000000000000000e+00
+ 7.0000000000000000e+00
+ 4.0000000000000000e+00
+ 8.0000000000000000e+00

tests/data/invalid_type.tns

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+ list
+ 2
+ 4 2
+ 1.0000000000000000e+00
+ 5.0000000000000000e+00
+ 2.0000000000000000e+00
+ 6.0000000000000000e+00
+ 3.0000000000000000e+00
+ 7.0000000000000000e+00
+ 4.0000000000000000e+00
+ 8.0000000000000000e+00
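
Both fixtures follow the .tns layout that import_data parses (a type keyword, the number of dimensions, a shape line, then one value per line), but invalid_dims.tns declares 2 dimensions with a 3-entry shape line and invalid_type.tns uses the unsupported type "list", so each should trip an assertion during import. A hedged sketch of how such a fixture is typically exercised (the test name, and the assumption that import_data is exposed as ttb.import_data, are mine, not copied from the test suite):

# Hypothetical test sketch; not taken from the repository's test files.
import pytest
import pyttb as ttb

def test_import_invalid_type():
    # The unsupported "list" keyword should hit the data-type assertion.
    with pytest.raises(AssertionError):
        ttb.import_data("tests/data/invalid_type.tns")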

tests/test_cp_apr.py

Lines changed: 5 additions & 3 deletions
@@ -148,7 +148,7 @@ def test_cpapr_mu(capsys):
      ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
      tensorInstance = ktensorInstance.full()
      np.random.seed(123)
-     M, _, _ = ttb.cp_apr(tensorInstance, 2)
+     M, _, _ = ttb.cp_apr(tensorInstance, 2, printinneritn=1)
      # Consume the cp_apr diagnostic printing
      capsys.readouterr()
      assert np.isclose(M.full().data, ktensorInstance.full().data).all()

@@ -175,7 +175,7 @@ def test_cpapr_pdnr(capsys):
      ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
      tensorInstance = ktensorInstance.full()
      np.random.seed(123)
-     M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pdnr")
+     M, _, _ = ttb.cp_apr(
+         tensorInstance, 2, algorithm="pdnr", printinneritn=1, inexact=False
+     )
      capsys.readouterr()
      assert np.isclose(M.full().data, ktensorInstance.full().data, rtol=1e-04).all()

@@ -221,7 +223,7 @@ def test_cpapr_pqnr(capsys):
      ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
      tensorInstance = ktensorInstance.full()
      np.random.seed(123)
-     M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pqnr")
+     M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pqnr", printinneritn=1)
      capsys.readouterr()
      assert np.isclose(M.full().data, ktensorInstance.full().data, rtol=1e-01).all()
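
Passing printinneritn=1 (and, for pdnr, inexact=False) makes cp_apr emit a status line on every inner iteration, which is exactly what exercises the reworked print paths in cp_apr.py above; capsys.readouterr() then consumes that output so the final assertions stay quiet. A small hedged sketch of the same capsys pattern (the ones tensor here is illustrative, not the ktensor fixture these tests build):

# Illustrative capsys sketch; not one of the tests in test_cp_apr.py.
import pyttb as ttb

def test_cp_apr_prints_inner_iterations(capsys):
    T = ttb.tenones((2, 2, 2))  # assumed nonnegative input, using the tenones helper from #93
    M, _, _ = ttb.cp_apr(T, 2, printinneritn=1)
    out = capsys.readouterr().out  # consume the diagnostic printing
    assert "CP_APR" in out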
