Vitis Accelerator IP Flow #1134
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open: steltze wants to merge 107 commits into fastmachinelearning:main from steltze:vitis_accelerator_ip_flow
Changes from all commits (107 commits):
312832f Initial commit (steltze)
d2b5a15 Set change the backend (steltze)
02659dd Change the accelerator config script (steltze)
56296b6 Set the vitis accelerator template (steltze)
7dd0173 Set vitis accelerator writer (steltze)
6f181b8 Fix writes init (steltze)
bd2e52e Include separable convolution resource implementation (steltze)
b795240 Separate depthwise resource strategy to 3 cases (steltze)
eeb04d4 Complete vitis accelerator wrapper for io_stream case (steltze)
7e47c85 Fix call to wrong backend writer (steltze)
5a2a38f Fix vitis accelerator writer (steltze)
99f9429 Fix include in axi wrapper header file writer (steltze)
b9609dc Change python-cpp bridge writer (steltze)
4f69c16 Fix tlast handling in axis wrapper writer (steltze)
014a7b2 Extend convert_data to handle stream type, use that for the bridge (steltze)
723073e Add zcu102 to the supported boards json (steltze)
290896b Fix some c synthesis warnings (steltze)
c9dfcf2 Group more tests per YAML to reduce the number of envs created (vloncar)
d3b8e20 Support negative_slope in quantized_relu (vloncar)
b32984f [pre-commit.ci] auto fixes from pre-commit hooks (pre-commit-ci[bot])
98273a0 Fix activation check in profiling (vloncar)
1640c4b Stage initial set of changes for the Catapult backend (#956) (dgburnette)
2a71a83 [pre-commit.ci] pre-commit autoupdate (pre-commit-ci[bot])
6ac964c fix unwanted tested file change in #956 (calad0i)
ec95e01 Fix SR backend synth missing variables (bo3z)
5de1bf5 Test for SR backend config (vloncar)
a6fec36 Upsampling support for PyTorch models (vloncar)
1b72b19 Split Catapult types into separate file (vloncar)
28521d0 Split Quartus types into separate file (vloncar)
a44707d Split Vivado types into separate file (vloncar)
cefab60 Increase precision of Softsign test (vloncar)
440901b Use quantized input in binary CNN test (vloncar)
c351a02 Add UnspecifiedPrecisionType (vloncar)
4d9d35a Rudimentary optimizer to infer 'auto' precision (vloncar)
32ae9b6 Auto precision test (vloncar)
932b01e Sepconv fixes (vloncar)
6a65fed update precision propagation for signed, select im2col for quartus pa… (jmitrevs)
41b7e98 Make inferring no_bias a configurable option of the optimizer (vloncar)
24253e1 updates to infering precision from qonnx branch (jmitrevs)
6ee8189 remove count, become more selective on when True is returned (jmitrevs)
b5add0c fix pooling precision (calad0i)
665c904 remove typing (calad0i)
b366d24 Fix avg pooling op check (vloncar)
f0ca865 Optimizer to remove expensive Transpose that serves as Flatten (vloncar)
1e416b5 Generalize removal of Transpose after flatten so it works on 1D as well (vloncar)
2a5d8de Remove transpose of input if n_chan=1 (vloncar)
3969523 SepConv1d/2d for io_parallel w/ Latency strategy (vloncar)
52252ca Cosmetic parameter config fixes (vloncar)
be56b93 Tests for SepConv io_parallel (vloncar)
b0085a1 [pre-commit.ci] pre-commit autoupdate (pre-commit-ci[bot])
44bc8f3 Update pytest docker image to 0.5.4 (jmitrevs)
a7826e0 bump to 0.5.5 (jmitrevs)
41ab6af fix pre-commit warning (jmitrevs)
c0f8d9f change writing of obsolete ".h5" to ".keras" files (jmitrevs)
bcfd685 Fix extension test for Keras v3 (vloncar)
8c09595 Support ParallelizationFactor in SepConv1D/2D (vloncar)
11819ac updated pytest docker image (jmitrevs)
39d9232 Don't test io_parallel for Catapult test and reduce the size of test … (vloncar)
68a83d6 Add explicit DepthwiseConv tests and simpligy SepConv tests (vloncar)
8a9d556 [pre-commit.ci] pre-commit autoupdate (pre-commit-ci[bot])
ad86387 Initial commit (steltze)
4ea329b Stage initial set of changes for the Catapult backend (#956) (dgburnette)
992b9b7 Rudimentary optimizer to infer 'auto' precision (vloncar)
8174465 Sepconv fixes (vloncar)
84ff2c6 Optimizer to remove expensive Transpose that serves as Flatten (vloncar)
518796d Remove transpose of input if n_chan=1 (vloncar)
238e35c Optimizer to remove expensive Transpose that serves as Flatten (vloncar)
c10dd82 Remove transpose of input if n_chan=1 (vloncar)
d6fe369 fix up automatic precision inferrence (jmitrevs)
7290a29 starting towards being able to split seperable (jmitrevs)
13fcf0a complete implementation of seperable -> dw + pw, untested (jmitrevs)
92e7222 make conv_same_pad also trigger on depthwise, varius bug fixes (jmitrevs)
f12a7ea add parsing of depth multiplier for 1D depthwise conv (jmitrevs)
4d24e4e Merge remote-tracking branch 'upstream/main' into vitis_accelerator_i…
e2d270e Finish resolving conficts with main
fa6bd66 Supress removing tar for now (steltze)
b42210d Fix csynth and cosim (steltze)
1303bba Fix tcl script to find cosim report (steltze)
8d3a1f2 Correct PYNQ Z2 vivado tcl script, bitstream generated (steltze)
a8e0497 Clean pynq tcl script (steltze)
48686d3 Fix compatibility of nnet helper functions with vitis axis (steltze)
bae450b Setup vivado tcl script for zcu102 (steltze)
dde9124 Rename backend to VitisAcceleratorIPFLow to prevent conflicts with ke… (steltze)
663181f Fix compatiblity between axi stream and io parallel (steltze)
e32f4d0 Update pynq driver for zcu102 (steltze)
c52ec75 Run pre-commit (steltze)
9d9e645 Remove unused file (steltze)
80697c0 Remove unused xclbin generator (steltze)
f467829 Clean backends init (steltze)
4c74550 Fix backend import sequence (steltze)
542b950 Start cleaning up code (steltze)
c78aec2 Start integrating FIFO depth optimizer (steltze)
62b5c27 Fix FIFO depth optimizer (steltze)
d5f2192 Run precommit (steltze)
34b0929 Merge branch 'main' into vitis_accelerator_ip_flow (steltze)
14b413e Update build_prj.tcl (steltze)
800423f Merge branch 'main' into vitis_accelerator_ip_flow (steltze)
9f1c8b3 Address pr comments and merge main (steltze)
4763692 Include tests without fifo optimization and checks for bitstream gene… (steltze)
e66ad40 Run precommit and remove unused override testbench (steltze)
f51be88 Fix qonnx test (steltze)
85c233c Fix keras fifo optimization test (steltze)
b91b641 Fix test documentation (steltze)
5bc54d3 Fix vivado project path in the build tcl for zcu102 (steltze)
0a0d7d1 Skip all tests (steltze)
da4f8b5 Merge branch 'main' into vitis_accelerator_ip_flow (steltze)
e55c52e Link backend fifo optimization options (steltze)
hls4ml/backends/vitis_accelerator_ip_flow/passes/fifo_depth_optimization.py (new file, 221 additions)
import json
import os

from hls4ml.model.optimizer.optimizer import ConfigurableOptimizerPass, ModelOptimizerPass


def initialize_large_fifos(model, profiling_fifo_depth):
    """Set all FIFO depths equal to a large value so that they can be profiled.

    Args:
        model (ModelGraph): The model to which FIFO depth optimization is applied.
        profiling_fifo_depth (int): A large positive integer; it must be larger than the max expected depth of
            the FIFOs.

    Returns:
        Dict[str, int]: A dictionary containing FIFO names as keys and their initial depths as values is returned for
        comparison with the optimized depths.
    """

    # filter all the output variables and keep only the internal FIFOs, excluding output objects that are not FIFOs and
    # the input and output FIFOs, as they can't be profiled and are implementation dependent, i.e. AXI Stream, AXI Master
    # or connected to another IP
    vars_to_profile = {
        output_variable_name: output_variable
        for output_variable_name, output_variable in model.output_vars.items()
        if ("VivadoStreamVariable" in str(type(output_variable)))
        and output_variable != model.get_output_variables()[0]
        and output_variable != model.get_input_variables()[0]
    }

    # initialize all the FIFOs to `profiling_fifo_depth` so that they will be automatically implemented in BRAMs and so
    # they will be profiled. Alternatively, "config_dataflow -override_user_fifo_depth profiling_fifo_depth" can be
    # used inside build_prj.tcl to override all FIFO depths with the specified value
    initial_fifo_depths = {}
    for output_variable in vars_to_profile.values():
        if output_variable.pragma:
            initial_fifo_depths[output_variable.name] = int(output_variable.pragma[1])
            output_variable.pragma = (output_variable.pragma[0], profiling_fifo_depth)

    inp = model.get_input_variables()[0]
    initial_fifo_depths['in_local'] = int(inp.pragma[1])
    inp.pragma = (inp.pragma[0], profiling_fifo_depth)

    outp = model.get_output_variables()[0]
    initial_fifo_depths['out_local'] = int(outp.pragma[1])
    outp.pragma = (outp.pragma[0], profiling_fifo_depth)
    return initial_fifo_depths


def execute_cosim_to_profile_fifos(model):
    """Execute a cosimulation with a test bench that calls the top function (the Vitis IP) at least twice,
    to properly profile the max FIFO depths. The function temporarily replaces the initial test bench
    with one suitable for the optimization; after the optimizer pass, the original test bench is restored.

    Args:
        model (ModelGraph): The model to which FIFO depth optimization is applied.
    """
    model.write()

    model.build(
        reset=False,
        csim=False,
        synth=True,
        cosim=True,
        validation=False,
        export=False,
        vsynth=False,
        fifo_opt=True,
    )

    return


def get_vitis_optimized_fifo_depths(model):
    """Parse the files generated by the cosimulation to retrieve the optimized depths for the FIFOs.
    Note: only the FIFOs between the layers are profiled!

    Args:
        model (ModelGraph): The model to which FIFO depth optimization is applied.

    Returns:
        Dict[str, int]: A dictionary that contains the FIFO names as keys and the optimized depths as values.
    """
    # channel.zip is generated after the cosimulation and contains the chan_status*.csv files;
    # in the chan_status*.csv files the max depth achieved during cosimulation can be found at the last (4th) line
    path_to_zip_file = (
        model.config.get_output_dir()
        + "/"
        + model.config.get_project_name()
        + "_prj"
        + "/solution1/.autopilot/db/channel_depth_info/"
    )

    os.system(f"unzip -q -o {path_to_zip_file}channel.zip -d {path_to_zip_file}")

    # the channel_info.csv file contains the mapping of each FIFO name (i.e. layer4_out_U) to the respective
    # chan_status*.csv file
    names_file_path = (
        model.config.get_output_dir()
        + "/"
        + model.config.get_project_name()
        + "_prj"
        + "/solution1/.autopilot/db/channel_info.csv"
    )

    csv_fifo_depth_files = {}
    with open(names_file_path) as names_file:
        for line in names_file:
            layer_name = line.split(",")[1]
            csv_file_name = line.split(",")[3][:-1]
            csv_fifo_depth_files[layer_name] = csv_file_name

    optimized_fifo_depths = {}
    for layer_name, file_name in csv_fifo_depth_files.items():
        with open(path_to_zip_file + file_name) as chan_status_file:
            lines = chan_status_file.readlines()
            optimized_fifo_depths[layer_name[:-2]] = int(
                lines[-1]
            )  # remove "_U" from the layer name string and keep the last line of the file that contains the max depth

    return optimized_fifo_depths


def generate_depths_file(model, initial_fifo_depths, optimized_fifo_depths):
    """Generate a JSON file with the names of the FIFOs, the initial depths set by hls4ml and their optimized depths,
    for post-processing. The JSON file is not used by the rest of the pipeline; it is only produced for the user.

    Args:
        model (ModelGraph): The model to which FIFO depth optimization is applied.
        initial_fifo_depths (Dict[str, int]): A dictionary that contains the FIFO names as keys and the initial
            depths as values.
        optimized_fifo_depths (Dict[str, int]): A dictionary that contains the FIFO names as keys and the optimized
            depths as values.
    """
    depths = {}
    for fifo_name in initial_fifo_depths.keys():
        depths[fifo_name] = {}
        depths[fifo_name]['initial'] = initial_fifo_depths[fifo_name]
        depths[fifo_name]['optimized'] = optimized_fifo_depths[fifo_name]

    with open(model.config.get_output_dir() + "/fifo_depths.json", "w") as f:
        json.dump(depths, f, indent=4)


def set_optimized_fifo_depths(model, optimized_fifo_depths):
    """Set the new optimized FIFO depths.

    Args:
        model (ModelGraph): The model to which FIFO depth optimization is applied.
        optimized_fifo_depths (Dict[str, int]): A dictionary that contains the FIFO names as keys and the optimized
            depths as values.
    """

    # iterate through the layer output FIFOs
    for output_variable in model.output_vars.values():
        if (
            ("VivadoStreamVariable" in str(type(output_variable)))
            or (output_variable.name == 'in_local')
            or (output_variable.name == 'out_local')
        ):
            if output_variable.pragma:

                if output_variable.name not in optimized_fifo_depths.keys():
                    continue

                filtered_depth = optimized_fifo_depths[output_variable.name]
                output_variable.pragma = (output_variable.pragma[0], filtered_depth)

    inp = model.get_input_variables()[0]
    inp.pragma = (inp.pragma[0], optimized_fifo_depths['in_local'])

    outp = model.get_output_variables()[0]
    outp.pragma = (outp.pragma[0], optimized_fifo_depths['out_local'])
    return


class FifoDepthOptimization(ConfigurableOptimizerPass, ModelOptimizerPass):
    def __init__(self):
        pass

    def transform(self, model):
        """Perform FIFO depth optimization between the FIFOs of all layers to reduce resource utilization, as the
        initial FIFOs set by hls4ml might be larger than required. At the end of the optimization the FIFOs will
        have the largest depths achieved during cosimulation without causing any deadlocks between the layers
        (producer-consumer), and thus no additional delays between the layers. In some cases, this optimization
        might lead to bigger FIFOs than initially set by the hls4ml tool in order to prevent deadlocks.

        Args:
            model (ModelGraph): The model to which FIFO depth optimization is applied.

        Raises:
            ValueError: If the FIFO depth for profiling provided by the user is not a positive integer.
            RuntimeError: If the IO type is not set to "io_stream".

        Returns:
            bool: The execution state of the optimizer pass.
        """

        # consider replacing the default of 100_000 either with a very large value, larger than any total
        # BRAM storage space, or with a value obtained via Vitis 2023.2 C simulation
        profiling_fifo_depth = getattr(self, "profiling_fifo_depth", 100_000)

        if not isinstance(profiling_fifo_depth, int) or profiling_fifo_depth <= 0:
            raise ValueError("The FIFO depth for profiling (profiling_fifo_depth variable) must be a positive integer.")

        # the optimization is only supported for io_stream
        if not (model.config.get_config_value("IOType") == "io_stream"):
            raise RuntimeError("To use this optimization you have to set `IOType` field to `io_stream` in the HLS config.")

        initial_fifo_depths = initialize_large_fifos(model, profiling_fifo_depth)

        execute_cosim_to_profile_fifos(model)

        optimized_fifo_depths = get_vitis_optimized_fifo_depths(model)

        generate_depths_file(model, initial_fifo_depths, optimized_fifo_depths)

        set_optimized_fifo_depths(model, optimized_fifo_depths)

        print("[hls4ml] - FIFO optimization completed")
        return False
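
Since fifo_depths.json is produced purely for the user, a short post-processing sketch follows. It is a minimal example assuming only the dictionary layout written by generate_depths_file above; the project directory name is a placeholder.

import json

# Placeholder path: use the actual output_dir of the converted model.
with open('hls4ml_prj/fifo_depths.json') as f:
    depths = json.load(f)

# Each entry maps a FIFO name to the depth initially set by hls4ml and the depth observed
# during cosimulation, e.g. "layer4_out": {"initial": 2, "optimized": 10} (illustrative values).
for fifo_name, d in sorted(depths.items()):
    print(f"{fifo_name}: initial={d['initial']}, optimized={d['optimized']}")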
hls4ml/backends/vitis_accelerator_ip_flow/supported_boards.json (new file, 14 additions)
{
    "pynq-z2": {
        "part": "xc7z020clg400-1",
        "tcl_scripts": {"axi_lite": "axi_lite_design.tcl", "axi_stream": "axi_stream_design.tcl"},
        "python_drivers": {"axi_stream": "axi_stream_driver.py"},
        "c_drivers": {}
    },
    "zcu102": {
        "part": "xczu9eg-ffvb1156-2-e",
        "tcl_scripts": {"axi_stream": "axi_stream_design.tcl"},
        "python_drivers": {"axi_stream": "axi_stream_driver.py"},
        "c_drivers": {}
    }
}
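
As an illustration of how a board entry can be consumed, here is a hypothetical lookup helper; the function name and its placement are assumptions for the sketch, not code from this PR.

import json
from pathlib import Path

# Assumption: the JSON sits in the same package directory as this hypothetical helper.
BOARDS_FILE = Path(__file__).parent / 'supported_boards.json'


def lookup_board(board_name: str) -> dict:
    """Return the part number and per-interface scripts/drivers for a supported board."""
    with open(BOARDS_FILE) as f:
        boards = json.load(f)
    if board_name not in boards:
        raise KeyError(f"Board '{board_name}' is not listed in supported_boards.json")
    return boards[board_name]


# lookup_board('zcu102')['part'] -> 'xczu9eg-ffvb1156-2-e'
# lookup_board('zcu102')['tcl_scripts']['axi_stream'] -> 'axi_stream_design.tcl'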
hls4ml/backends/vitis_accelerator_ip_flow/vitis_accelerator_ip_flow_backend.py (new file, 117 additions)
import os

from hls4ml.backends import VitisBackend, VivadoBackend
from hls4ml.model.flow import register_flow
from hls4ml.report import parse_vivado_report


class VitisAcceleratorIPFlowBackend(VitisBackend):
    def __init__(self):
        super(VivadoBackend, self).__init__(name='VitisAcceleratorIPFlow')
        self._register_layer_attributes()
        self._register_flows()

    def build(
        self,
        model,
        reset=False,
        csim=True,
        synth=True,
        cosim=False,
        validation=False,
        export=False,
        vsynth=False,
        fifo_opt=False,
        bitfile=False,
    ):
        # run the VitisBackend build
        super().build(
            model,
            reset=reset,
            csim=csim,
            synth=synth,
            cosim=cosim,
            validation=validation,
            export=export,
            vsynth=vsynth,
            fifo_opt=fifo_opt,
        )

        # now make a bitfile
        if bitfile:
            curr_dir = os.getcwd()
            os.chdir(model.config.get_output_dir())
            try:
                os.system('vivado -mode batch -source design.tcl')  # check if this is accepted as a command
            except Exception:
                print("Something went wrong, check the Vivado logs")
            os.chdir(curr_dir)

        return parse_vivado_report(model.config.get_output_dir())

    def create_initial_config(
        self,
        board='pynq-z2',
        part=None,
        clock_period=5,
        clock_uncertainty='12.5%',
        io_type='io_parallel',
        interface='axi_stream',
        driver='python',
        input_type='float',
        output_type='float',
    ):
        '''
        Create initial accelerator config with default parameters.

        Args:
            board: one of the keys defined in supported_boards.json
            clock_period: clock period passed to the HLS project
            io_type: io_parallel or io_stream
            interface: `axi_stream`: generate hardware designs and drivers that use AXI Stream channels.
                `axi_master`: generate hardware designs and drivers that use AXI Master channels.
                `axi_lite`: generate hardware designs and drivers that use AXI Lite channels. (Don't use it
                to exchange large amounts of data.)
            driver: `python`: generates the Python driver to use the accelerator in the PYNQ stack.
                `c`: generates the C driver to use the accelerator bare-metal.
            input_type: the wrapper input precision. Can be `float` or an `ap_type`. Note: VivadoAcceleratorBackend
                will round the number of bits used to the next power-of-2 value.
            output_type: the wrapper output precision. Can be `float` or an `ap_type`. Note:
                VivadoAcceleratorBackend will round the number of bits used to the next power-of-2 value.
            platform: development target platform

        Returns:
            populated config
        '''
        board = board if board is not None else 'pynq-z2'
        config = super().create_initial_config(part, clock_period, clock_uncertainty, io_type)
        config['AcceleratorConfig'] = {}
        config['AcceleratorConfig']['Board'] = board
        config['AcceleratorConfig']['Interface'] = interface  # axi_stream, axi_master, axi_lite
        config['AcceleratorConfig']['Driver'] = driver
        config['AcceleratorConfig']['Precision'] = {}
        config['AcceleratorConfig']['Precision']['Input'] = {}
        config['AcceleratorConfig']['Precision']['Output'] = {}
        config['AcceleratorConfig']['Precision']['Input'] = input_type  # float, double or ap_fixed<a,b>
        config['AcceleratorConfig']['Precision']['Output'] = output_type  # float, double or ap_fixed<a,b>

        return config

    def get_default_flow(self):
        return self._default_flow

    def get_writer_flow(self):
        return self._writer_flow

    def _register_flows(self):
        vitis_ip = 'vitis:ip'
        writer_passes = ['make_stamp', 'vitisacceleratoripflow:write_hls']
        self._writer_flow = register_flow('write', writer_passes, requires=['vitis:ip'], backend=self.name)
        self._default_flow = vitis_ip

        # Register the FIFO depth optimization flow, which is different from the one for Vivado
        fifo_depth_opt_passes = [
            'vitisacceleratoripflow:fifo_depth_optimization'
        ] + writer_passes  # After optimization, a new project will be written

        register_flow('fifo_depth_optimization', fifo_depth_opt_passes, requires=['vitis:ip'], backend=self.name)
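
To tie the backend together, a hedged end-to-end sketch follows. It assumes that board-level keyword arguments (board, interface, driver) are forwarded from hls4ml.converters.convert_from_keras_model to create_initial_config, and that the FIFO depth optimization flow can be requested through the config's Flows list, mirroring the existing VivadoAccelerator backend; the toy model and directory name are placeholders, not part of this PR.

import hls4ml
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Toy model, only to keep the sketch self-contained.
model = Sequential([Input(shape=(8,)), Dense(16, activation='relu'), Dense(4, activation='softmax')])

config = hls4ml.utils.config_from_keras_model(model, granularity='model')
# Assumption: optional flows are requested via the 'Flows' key, as for the Vivado FIFO optimization.
config['Flows'] = ['vitisacceleratoripflow:fifo_depth_optimization']

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_stream',  # required by the FIFO depth optimization pass
    backend='VitisAcceleratorIPFlow',
    board='zcu102',  # one of the keys in supported_boards.json
    interface='axi_stream',
    driver='python',
    output_dir='hls4ml_prj_zcu102',
)

# synth=True runs Vitis HLS; bitfile=True then invokes 'vivado -mode batch -source design.tcl'
# (see build() above) to assemble the block design and generate the bitstream.
report = hls_model.build(csim=False, synth=True, export=True, bitfile=True)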
Review comment (on the __init__ super() call): Why calling super(VivadoBackend, self) and not super(VitisBackend, self)?

Reply: it does not work cause VitisBackend already sets the name. I could find a workaround

Reply: I think this is a result of the strange inheritance structure that we have. We will try to rationalize it in the future, so hopefully this can be updated then. But for now, given the mess our inheritance structure is, I think whatever works is fine.
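
For readers puzzled by the super(VivadoBackend, self) call discussed above, here is a standalone sketch of the mechanism with toy stand-in classes (not the real hls4ml hierarchy): super(X, self) starts the MRO lookup after X, so the call skips the intermediate __init__ methods that hard-code a backend name and reaches the base initializer that still accepts one.

class FPGABackend:  # stand-in for the base class that actually stores the name
    def __init__(self, name):
        self.name = name


class VivadoBackend(FPGABackend):
    def __init__(self):
        super().__init__(name='Vivado')  # hard-codes its own name


class VitisBackend(VivadoBackend):
    def __init__(self):
        # skip VivadoBackend.__init__ (which would force name='Vivado') and call FPGABackend directly
        super(VivadoBackend, self).__init__(name='Vitis')


class VitisAcceleratorIPFlowBackend(VitisBackend):
    def __init__(self):
        # MRO: VitisAcceleratorIPFlowBackend -> VitisBackend -> VivadoBackend -> FPGABackend
        # super(VivadoBackend, self) resumes the lookup *after* VivadoBackend, so FPGABackend.__init__
        # runs with the custom name; super(VitisBackend, self).__init__() would instead run
        # VivadoBackend.__init__ and set the name to 'Vivado'.
        super(VivadoBackend, self).__init__(name='VitisAcceleratorIPFlow')


print(VitisAcceleratorIPFlowBackend().name)  # -> 'VitisAcceleratorIPFlow'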