Description
I will be referencing the following documentation:
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array-write
as well as the following source code:
src/graph.rs @ line 1195: the get_attr_metadata function and the get_attr_* functions that follow it
src/ops/ops_impl @ line 123054: the TensorArrayWriteV3 struct and its implementations
According to the C++ API, TensorArray is expected to return two outputs: handle and flow.
These Output types are required by tensor_array_write (tensor_array_write_v3 in my case).
However, going through the source in graph.rs, I can only read attributes back as native types (e.g. &str and f32) to recover these outputs, and TensorArrayV3's constructor only returns one Output type.
tensor_array_write_v3() has all its generics set as Into<Output>, which is correct, but I cannot retrieve these outputs from the TensorArrayV3::build() constructor, which only returns a single Output.
I have tried passing this Output into tensor_array_write_v3 by cloning my TensorArray operation for both the handle and flow parameters, hoping TensorFlow would associate the handle and flow attributes as Output types, but this throws an internal error:
thread 'layers::test_concat_split' panicked at 'called `Result::unwrap()` on an `Err` value: {inner:0x20a622a30e0, InvalidArgument: Input 'flow_in' passed resource expected float while building NodeDef 'TensorArrayWriteV3' using Op<name=TensorArrayWriteV3; signature=handle:resource, index:int32, value:T, flow_in:float -> flow_out:float; attr=T:type; is_stateful=true>}', src\layers.rs:176:58
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test layers::test_concat_split ... FAILED
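If I read the C++ signature in the error message correctly, TensorArrayV3 has two outputs, handle:resource at index 0 and flow:float at index 1, and an Output is essentially an (operation, index) pair. The error would then mean that passing the cloned op for flow_in supplies output index 0 (the resource handle) where the float flow token was expected. A minimal stand-in sketch of that indexing idea (hypothetical types, not the real tensorflow crate):

```rust
// Stand-in types modelling "an Output is an (operation, index) pair".
#[derive(Clone, Debug)]
struct Operation {
    name: String,
    output_types: Vec<&'static str>, // dtype of each output, by index
}

#[derive(Clone, Debug)]
struct Output {
    operation: Operation,
    index: usize,
}

impl Output {
    fn dtype(&self) -> &'static str {
        self.operation.output_types[self.index]
    }
}

fn main() {
    // TensorArrayV3 C++ signature: -> handle:resource, flow:float
    let ta = Operation {
        name: "TensorArrayV3".to_string(),
        output_types: vec!["resource", "float"],
    };
    // Output 0 is the resource handle; output 1 is the float flow token.
    let handle = Output { operation: ta.clone(), index: 0 };
    let flow = Output { operation: ta, index: 1 };
    println!("{} {}", handle.dtype(), flow.dtype()); // resource float
}
```

So if build() only hands back the index-0 Output, what seems to be missing is a way to address output index 1 of the same operation.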
For posterity, this is the function being developed and tested:
/// Merge layers take in a vector of layers and concatenate them into a single layer.
fn merge<O1: Into<Output>>(
    inputs: Vec<O1>,
    scope: &mut Scope,
) -> Result<(Output, Operation), Status> {
    // Create an ops::constant for len inputs.
    let size = ops::constant(inputs.len() as i32, scope)?;
    let axis_zero = ops::constant(0, scope)?;
    // Create a TensorArray to hold inputs; don't dynamically size it so we
    // can statically optimize memory allocation shape a little.
    let input_tensor: TensorArrayV3 = TensorArrayV3::new()
        .dynamic_size(false)
        .dtype(DataType::Float);
    let input_tensor_op = input_tensor.build(size, scope)?;
    // Now insert each entry in inputs into the input_tensor.
    for (i, input) in inputs.into_iter().enumerate() {
        // Create a TensorArrayWrite op for each input.
        let index_constant = ops::constant(&[i as i32], scope)?;
        let writer: Output = ops::tensor_array_write_v3(
            // input_tensor_op.clone().get_attr_string("handle")?.into(),
            input_tensor_op.clone(),
            index_constant,
            input,
            // TODO: TensorArray doesn't return flow; hopefully the flow
            // attribute of this op will be read when XLA builds the graph
            // if I pass the TensorArray here.
            // input_tensor_op.clone().get_attr_float("flow")?.into(),
            input_tensor_op.clone(),
            scope,
        )?
        .into();
    }
    let group_op = ops::concat::<Operation, Operation>(axis_zero, input_tensor_op, scope)?;
    Ok((group_op.clone().into(), group_op.clone()))
}
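A related point, assuming the semantics match the C++ docs: tensor_array_write_v3 returns flow_out, and each write's flow_in is meant to be the previous write's flow_out (starting from the TensorArrayV3 flow output), which serializes the writes. A stand-in sketch of that threading (hypothetical helper, not the real crate API):

```rust
// Stand-in for TensorArrayWriteV3: consumes a flow token, yields a new one.
fn tensor_array_write(flow_in: f32, index: i32, value: f32) -> f32 {
    // The real op writes `value` at `index` and returns flow_out; here the
    // token is just passed through to illustrate the chaining.
    let _ = (index, value);
    flow_in
}

fn main() {
    let values = [1.0_f32, 2.0, 3.0];
    // Initial flow comes from TensorArrayV3 (its float output, index 1).
    let mut flow = 0.0_f32;
    for (i, v) in values.iter().enumerate() {
        // Thread each flow_out into the next write's flow_in.
        flow = tensor_array_write(flow, i as i32, *v);
    }
    // `flow` would then be the flow_in for subsequent reads/concat.
    println!("final flow token: {flow}");
}
```

In the merge() above, each loop iteration instead feeds the original TensorArray output back in, so the writes never see each other's flow tokens even once the dtype problem is solved.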