
TF1 InvalidArgumentError: Rank of input must be no greater than rank of output shape #202

Open
Hrovatin opened this issue Jul 3, 2021 · 0 comments


I tried to run diffxpy with the tf1 backend and a normal (Gaussian) noise model, but I get the error below.

```python
import diffxpy.api as de
from anndata import AnnData
import pandas as pd
import numpy as np

# Mock data
adata = AnnData(
    np.random.randn(100, 10),
    obs=pd.DataFrame(np.random.randn(100, 1), columns=['a'])
)

# DE test
result = de.test.continuous_1d(
    data=adata,
    continuous='a',
    factor_loc_totest='a',
    formula_loc='~1+a',
    formula_scale='~1',
    df=3,
    test='wald',
    sample_description=adata.obs,
    size_factors=abs(adata.X.sum(axis=1)),
    backend='tf1',
    noise_model='norm'
)
```
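For reference, the `size_factors` expression in the call above already produces a flat 1-D vector (one entry per cell), so the rank-3 tensor in the error appears to arise inside batchglm rather than from the input shapes. A minimal NumPy check, using a random stand-in for `adata.X`:

```python
import numpy as np

# Stand-in for adata.X above (dense, 100 cells x 10 genes)
X = np.random.randn(100, 10)

# Same expression as the size_factors argument in the call above
size_factors = np.abs(X.sum(axis=1))
print(size_factors.shape)  # (100,) — a flat 1-D vector, one entry per cell
```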

WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

/home/icb/karin.hrovatin/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/batchglm/utils/linalg.py:107: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  params

WARNING:tensorflow:Entity <bound method ReducableTensorsGLMALL.assemble_tensors of <batchglm.train.tf1.glm_norm.reducible_tensors.ReducibleTensors object at 0x7fcfc54937f0>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: LIVE_VARS_IN
(The same AutoGraph warning is emitted several more times for further ReducibleTensors objects.)
WARNING:tensorflow:From /home/icb/karin.hrovatin/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/batchglm/train/tf1/base_glm/estimator_graph.py:907: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1349       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350                                       target_list, run_metadata)
   1351 

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1442                                             fetch_list, target_list,
-> 1443                                             run_metadata)
   1444 

InvalidArgumentError: {{function_node __inference_Dataset_map_fetch_fn_372}} Rank of input (3) must be no greater than rank of output shape (2).
	 [[{{node BroadcastTo}}]]
	 [[full_data/reducible_tensors_eval_ll_jac/ReduceDataset]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-4-16f8030636fd> in <module>
     10             size_factors=abs(adata.X.sum(axis=1)),
     11             backend='tf1',
---> 12             noise_model='norm'
     13     )

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/diffxpy/testing/tests.py in continuous_1d(data, continuous, factor_loc_totest, formula_loc, formula_scale, df, spline_basis, as_numeric, test, init_a, init_b, gene_names, sample_description, constraints_loc, constraints_scale, noise_model, size_factors, batch_size, backend, train_args, training_strategy, quick_scale, dtype, **kwargs)
   2308             quick_scale=quick_scale,
   2309             dtype=dtype,
-> 2310             **kwargs
   2311         )
   2312         de_test = DifferentialExpressionTestWaldCont(

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/diffxpy/testing/tests.py in wald(data, factor_loc_totest, coef_to_test, formula_loc, formula_scale, as_numeric, init_a, init_b, gene_names, sample_description, dmat_loc, dmat_scale, constraints_loc, constraints_scale, noise_model, size_factors, batch_size, backend, train_args, training_strategy, quick_scale, dtype, **kwargs)
    738         quick_scale=quick_scale,
    739         dtype=dtype,
--> 740         **kwargs,
    741     )
    742 

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/diffxpy/testing/tests.py in _fit(noise_model, data, design_loc, design_scale, design_loc_names, design_scale_names, constraints_loc, constraints_scale, init_model, init_a, init_b, gene_names, size_factors, batch_size, backend, training_strategy, quick_scale, train_args, close_session, dtype)
    244     estim.train_sequence(
    245         training_strategy=training_strategy,
--> 246         **train_args
    247     )
    248 

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/batchglm/models/base/estimator.py in train_sequence(self, training_strategy, **kwargs)
    122                         (x, str(d[x]), str(kwargs[x]))
    123                     )
--> 124             self.train(**d, **kwargs)
    125             logger.debug("Training sequence #%d complete", idx + 1)
    126 

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/batchglm/train/tf1/base_glm_all/estimator.py in train(self, learning_rate, convergence_criteria, stopping_criteria, train_loc, train_scale, use_batching, optim_algo, *args, **kwargs)
    315                 require_fim=require_fim,
    316                 is_batched=use_batching,
--> 317                 **kwargs
    318             )
    319 

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/batchglm/train/tf1/base/estimator.py in _train(self, learning_rate, feed_dict, convergence_criteria, stopping_criteria, train_op, trustregion_mode, require_hessian, require_fim, is_batched, *args, **kwargs)
    158                  self.model.model_vars.convergence_update),
    159                 feed_dict={self.model.model_vars.convergence_status:
--> 160                                np.repeat(False, repeats=self.model.model_vars.converged.shape[0])
    161                            }
    162             )

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    954     try:
    955       result = self._run(None, fetches, feed_dict, options_ptr,
--> 956                          run_metadata_ptr)
    957       if run_metadata:
    958         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1178     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1179       results = self._do_run(handle, final_targets, final_fetches,
-> 1180                              feed_dict_tensor, options, run_metadata)
   1181     else:
   1182       results = []

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1357     if handle is None:
   1358       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1359                            run_metadata)
   1360     else:
   1361       return self._do_call(_prun_fn, handle, feeds, fetches)

~/miniconda3/envs/diffxpy_tf1/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1382                     '\nsession_config.graph_options.rewrite_options.'
   1383                     'disable_meta_optimizer = True')
-> 1384       raise type(e)(node_def, op, message)
   1385 
   1386   def _extend_graph(self):

InvalidArgumentError:  Rank of input (3) must be no greater than rank of output shape (2).
	 [[{{node BroadcastTo}}]]
	 [[full_data/reducible_tensors_eval_ll_jac/ReduceDataset]]
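For context, the failing `BroadcastTo` op enforces the general broadcasting rule that an input's rank may not exceed the rank of the target shape. The same constraint can be reproduced with NumPy's `broadcast_to` (an illustration of the rule only, not of the diffxpy/batchglm internals):

```python
import numpy as np

# Broadcasting may add leading dimensions, but can never drop them:
ok = np.broadcast_to(np.ones((10,)), (5, 10))   # rank 1 -> rank 2: allowed
print(ok.shape)  # (5, 10)

try:
    # rank 3 -> rank 2: disallowed, analogous to "Rank of input (3) must be
    # no greater than rank of output shape (2)"
    np.broadcast_to(np.ones((5, 10, 3)), (10, 3))
except ValueError as exc:
    print("raised ValueError:", exc)
```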
