Hello everyone, I'm new to this field and I need help using hls4ml with the VivadoAccelerator backend (I'm using Vivado 2019.2). After building my CNN following tutorial part 6, I tried to generate the bitfile as in tutorial part 7, but I get stuck at impl_1: the process hangs and the bitfile is never generated. When I read the log file I found this:
EXPORT IP COMPLETED IN 0h0m23s *****
INFO: [HLS 200-112] Total elapsed time: 1634.17 seconds; peak allocated memory: 2.233 GB.
INFO: [Common 17-206] Exiting vivado_hls at Sat Dec 2 23:28:50 2023...
Vivado synthesis report not found.
Cosim report not found.
Timing report not found.
I also see this:
[Sun Dec 10 22:37:01 2023] Launched impl_1...
Run output will be captured here: /home/abdo/PycharmProjects/lenet5/qmodel/model_hls4ml/myproject_vivado_accelerator/project_1.runs/impl_1/runme.log
launch_runs: Time (s): cpu = 00:00:21 ; elapsed = 00:00:23 . Memory (MB): peak = 2100.492 ; gain = 219.098 ; free physical = 915 ; free virtual = 3406
# wait_on_run -timeout 360 impl_1
[Sun Dec 10 22:37:01 2023] Waiting for impl_1 to finish (timeout in 360 minutes)...
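While impl_1 is running, I follow the log file it points to, to see where it stalls (a minimal sketch, nothing hls4ml-specific: it just tails the runme.log path from the launch_runs message above with the standard Linux tail tool):

import subprocess

# Follow Vivado's implementation log while impl_1 runs (Ctrl-C to stop).
# The path is the one printed by launch_runs above.
subprocess.run([
    'tail', '-f',
    '/home/abdo/PycharmProjects/lenet5/qmodel/model_hls4ml/'
    'myproject_vivado_accelerator/project_1.runs/impl_1/runme.log',
])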
This is the configuration I use:
import hls4ml
import tensorflow as tf
from tensorflow.keras.models import load_model
from qkeras.utils import _add_supported_quantized_objects

# Load the model, registering the QKeras custom objects
co = {}
_add_supported_quantized_objects(co)
model = load_model('LeNet5_MNIST_model_n.h5', custom_objects=co)

# Convert the model to HLS using hls4ml
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Model']['ReuseFactor'] = 1
config['Model']['Strategy'] = 'Resource'
config['Model']['Precision'] = 'ap_fixed<16,6>'
# print("-----------------------------------")
# plotting.print_dict(config)
# print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='model_hls4ml',
    backend='VivadoAccelerator',
    board='pynq-z2',
    io_type='io_stream',
)
# hls_model.compile()
hls_model.build(csim=False, export=True, bitfile=True)
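To check which stages actually produced output, the reports can be read back with hls4ml's report helper, which is also, as far as I can tell, where the "... report not found" messages above come from (a minimal sketch; I'm assuming the directory matches the output_dir passed to convert_from_keras_model):

import hls4ml

# Print the C synthesis / Vivado synthesis / cosim / timing reports
# found under the hls4ml output directory, if they exist.
hls4ml.report.read_vivado_report('model_hls4ml')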
And this is the code I used to build the model:
from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D, Dense, Flatten, Input, MaxPooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l1

# filters_per_conv_layer and neurons_per_dense_layer are lists defined earlier
input_shape = (28, 28, 1)
x = x_in = Input(input_shape)
for i, f in enumerate(filters_per_conv_layer):
    print('Adding convolutional block {} with N={} filters'.format(i, f))
    x = Conv2D(
        int(f),
        kernel_size=(3, 3),
        strides=(1, 1),
        kernel_initializer='lecun_uniform',
        kernel_regularizer=l1(0.0001),
        use_bias=False,
        name='conv_{}'.format(i),
    )(x)
    x = BatchNormalization(name='bn_conv_{}'.format(i))(x)
    x = Activation('relu', name='conv_act_%i' % i)(x)
    x = MaxPooling2D(pool_size=(2, 2), name='pool_{}'.format(i))(x)
x = Flatten()(x)
for i, n in enumerate(neurons_per_dense_layer):
    print('Adding dense block {} with N={} neurons'.format(i, n))
    x = Dense(n, kernel_initializer='lecun_uniform', kernel_regularizer=l1(0.0001), name='dense_%i' % i, use_bias=False)(x)
    x = BatchNormalization(name='bn_dense_{}'.format(i))(x)
    x = Activation('relu', name='dense_act_%i' % i)(x)
x = Dense(10, name='output_dense')(x)
x_out = Activation('softmax', name='output_softmax')(x)
model = Model(inputs=[x_in], outputs=[x_out], name='LeNet5_MNIST')

# Print model summary
model.summary()

This gives the following parameter summary:
+---------------------+------------+---------+--------+
| Layer | Parameters | Weights | Biases |
+---------------------+------------+---------+--------+
| input_1 | 0 | 0 | 0 |
| fused_convbn_0 | 88 | 80 | 8 |
| pool_0 | 0 | 0 | 0 |
| fused_convbn_1 | 1184 | 1168 | 16 |
| pool_1 | 0 | 0 | 0 |
| fused_convbn_2 | 1488 | 1472 | 16 |
| pool_2 | 0 | 0 | 0 |
| flatten | 0 | 0 | 0 |
| dense_0 | 3468 | 3456 | 12 |
| bn_dense_0 | 24 | 0 | 24 |
| dense_act_0 | 0 | 0 | 0 |
| dense_1 | 624 | 576 | 48 |
| bn_dense_1 | 96 | 0 | 96 |
| dense_act_1 | 0 | 0 | 0 |
| output_dense | 114 | 96 | 18 |
+---------------------+------------+---------+--------+
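After training, I save the model under the filename that the conversion script loads (a hypothetical minimal sketch of that step, which is not part of the code above; the filename matches the load_model call in the conversion script):

# Hypothetical: persist the trained Keras model so the conversion script
# can load it with load_model('LeNet5_MNIST_model_n.h5', ...).
model.save('LeNet5_MNIST_model_n.h5')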