
Commit 5d0bdb5

jmitrevs and vloncar authored
make auto the default for layer config (#1016)
* make auto the default for layers * add max_precision, not currently used * add maximum precision in standard precision inference * minimal handling of other types in infer_precision (e.g. for binary) * add more checks for max precision * fix the incorrect setting of reuse factors * update tests to pass backend to config_from_* * fix parameters syntax error introduced in pytest commit * add basic type inference for embedding * add placeholder precision inference for rnn * fix syntax error in test_qkeras * fix up test_trace * don't pass auto in test_attributes * update documentation * update documentation (2) * move some optimizers before infering precision type * move up the channnels_last_converter * put missing precision_merge logic in infer_preicion and delete, reorder optimizers * add type inference to catapult --------- Co-authored-by: Vladimir <[email protected]>
1 parent d63033b · commit 5d0bdb5

29 files changed: +363 −189 lines

docs/api/configuration.rst

Lines changed: 35 additions & 3 deletions
```diff
@@ -9,15 +9,18 @@ We currently support two ways of setting hls4ml's model configuration. This page

 .. contents:: \

+The Python API approach is recommended for most users as there are more utilities to help create the configuration dictionaries.

 **NOTE:**

 *
   One important part of ``hls4ml`` to remember is that the user is responsible for the format of the inputs. There is no automatic formatting or normalization, so this must be done in the training.

-*
+..
+  *
   For developers, you might also want to check out this section: `Detailed configuration in converted hls codes <#detailed-configuration-in-converted-hls-codes>`_.
+  *Broken link*

 ----

@@ -31,11 +34,26 @@ Using hls4ml, you can quickly generate a simple configuration dictionary from a

    import hls4ml
    config = hls4ml.utils.config_from_keras_model(model, granularity='model')

-For more advanced and detailed configuration, you can also set them through the created dictionary. For example, to change the reuse factor:
+This Python dictionary can be edited as needed. A more advanced configuration can be generated, for example, with:
+
+.. code-block:: python
+
+   import hls4ml
+   config = hls4ml.utils.config_from_keras_model(
+       model,
+       granularity='name',
+       default_precision='fixed<16,6>',
+       backend='Vitis')
+
+This will include per-layer configuration based on the model. Including the backend is recommended because some configuration options depend on the backend. Note that precisions at the
+higher granularities usually default to 'auto', which means that ``hls4ml`` will try to set them automatically. Higher granularity settings take precedence
+over model-level settings. See :py:class:`~hls4ml.utils.config.config_from_keras_model` for more information on the various options.
+
+One can override specific values before using the configuration:

 .. code-block:: python

-   config['Model']['ReuseFactor'] = 2
+   config['LayerName']['fc1']['ReuseFactor'] = 2

 Or to set the precision of a specific layer's weight:

@@ -45,6 +63,20 @@ Or to set the precision of a specific layer's weight:

 To better understand how the configuration hierarchy works, refer to the next section for more details.

+Finally, one uses the configuration to create an hls model:
+
+.. code-block:: python
+
+   hls_model = hls4ml.converters.convert_from_keras_model(
+       model,
+       hls_config=config,
+       output_dir="my_project_dir",
+       io_type='io_stream',
+       backend='Vitis'
+   )
+
+See :py:class:`~hls4ml.converters.convert_from_keras_model` for more information on the various options.

 ----

 2. YAML Configuration file
```
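The layered precedence described in the documentation above (per-layer settings override the model-level defaults, with 'auto' deferring to automatic handling) can be sketched with plain dictionaries. This is an illustrative sketch only: `resolve_setting` and its fallback behaviour are hypothetical names of mine, not hls4ml's actual lookup code, and in real hls4ml an 'auto' precision is filled in by the precision-inference optimizer rather than a simple fallback to the model default.

```python
# Hypothetical sketch of the config hierarchy: LayerName entries take
# precedence over Model-level defaults; 'auto' falls back to the default
# (in hls4ml itself, 'auto' triggers precision inference instead).

def resolve_setting(config, layer_name, key):
    """Return the most specific value for `key`, preferring LayerName over Model."""
    value = config.get('LayerName', {}).get(layer_name, {}).get(key)
    if value is None or value == 'auto':
        value = config['Model'][key]
    return value

config = {
    'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1},
    'LayerName': {
        'fc1': {'Precision': 'auto', 'ReuseFactor': 2},
    },
}

print(resolve_setting(config, 'fc1', 'ReuseFactor'))  # 2: layer-level override wins
print(resolve_setting(config, 'fc1', 'Precision'))    # fixed<16,6>: 'auto' falls back
```

The design point this illustrates is why `config['LayerName']['fc1']['ReuseFactor'] = 2` in the docs above changes only `fc1` while the rest of the model keeps the model-level value.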

docs/setup.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -57,7 +57,7 @@ To run FPGA synthesis, installation of following tools is required:

 * Xilinx Vivado HLS 2018.2 to 2020.1 for synthesis for Xilinx FPGAs

-* Vitis HLS 2022.1 or newer is required for synthesis for Xilinx FPGAs using the experimental ``Vitis`` backend.
+* Vitis HLS 2022.2 or newer is required for synthesis for Xilinx FPGAs using the ``Vitis`` backend.

 * Intel Quartus 20.1 to 21.4 for the synthesis for Intel FPGAs
```

docs/status.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -81,7 +81,7 @@ Other feature notes:

 * ``hls4ml`` is tested on Linux, and supports

   * Vivado HLS versions 2018.2 to 2020.1
   * Intel HLS versions 20.1 to 21.4
-  * Vitis HLS versions 2020.2 to 2022.2 (experimentally)
+  * Vitis HLS versions 2022.2 to 2024.1

 * Windows and macOS are not supported
 * BDT support has moved to the `Conifer <https://github.com/thesps/conifer>`__ package
```
8787

hls4ml/backends/catapult/catapult_backend.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -110,6 +110,7 @@ def _register_flows(self):
             'catapult:inplace_stream_flatten',
             'catapult:skip_softmax',
             'catapult:fix_softmax_table_size',
+            'infer_precision_types',
         ]
         optimization_flow = register_flow('optimize', optimization_passes, requires=[init_flow], backend=self.name)
```

hls4ml/model/optimizer/__init__.py

Lines changed: 6 additions & 9 deletions
```diff
@@ -33,9 +33,8 @@
 register_flow(
     'convert',
     [
-        'seperable_to_depthwise_and_conv',  # has to be before precision inference
-        'infer_precision_types',
         'channels_last_converter',
+        'seperable_to_depthwise_and_conv',
         'remove_transpose_before_flatten',
         'remove_nop_transpose',
         'remove_single_channel_transpose',
@@ -45,19 +44,17 @@
         'qkeras_factorize_alpha',
         'extract_ternary_threshold',
         'fuse_consecutive_batch_normalization',
+        'fuse_batch_normalization',
         'replace_multidimensional_dense_with_conv',
         'enforce_proxy_model_embedded_config',
+        'eliminate_linear_activation',
+        # many of the above optimizers need to be done before this
+        'infer_precision_types',
     ],
 )  # TODO Maybe not all QKeras optimizers belong here?

 register_flow(
     'optimize',
-    [
-        'eliminate_linear_activation',
-        'fuse_consecutive_batch_normalization',
-        'fuse_batch_normalization',
-        'infer_precision_types',
-        'set_precision_concat',
-    ],
+    [],
     requires=['convert'],
 )
```
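The reordering in this file, moving `infer_precision_types` to the end of the 'convert' flow, can be illustrated with a toy pass pipeline. All names below are hypothetical stand-ins, not hls4ml's optimizer framework; the point is only that precision inference must run on the graph that the structural rewrites produce, so it is ordered last.

```python
# Toy sketch (hypothetical, not hls4ml's API) of ordered optimizer passes.
# Structural rewrites run first; a type-inference pass runs on the result.

def fuse_linear(graph):
    # Structural rewrite: drop no-op 'linear' activations,
    # analogous to 'eliminate_linear_activation'.
    return [op for op in graph if op != 'linear']

def infer_precision(graph):
    # Annotate the remaining ops with an inferred type,
    # analogous to 'infer_precision_types'.
    return [(op, 'fixed<16,6>') for op in graph]

def run_flow(graph, passes):
    # Apply each pass in registration order, like a flow's pass list.
    for p in passes:
        graph = p(graph)
    return graph

graph = ['dense', 'linear', 'relu']
# Precision inference last, mirroring the reordering in this commit:
result = run_flow(graph, [fuse_linear, infer_precision])
print(result)  # [('dense', 'fixed<16,6>'), ('relu', 'fixed<16,6>')]
```

Running inference first would have typed the 'linear' node and then thrown the annotation away when the node was fused, which is the kind of wasted or inconsistent work the new ordering avoids.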
