fastmachinelearning / hls4ml

Machine learning on FPGAs using HLS
https://fastmachinelearning.org/hls4ml
Apache License 2.0
1.24k stars · 402 forks

Quartus multi out with stream fix #908

Closed calad0i closed 9 months ago

calad0i commented 11 months ago

# Description

Quartus writer writes uncompilable programs when there are multiple outputs with io_stream. This PR fixes that.
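For context, the failure mode can be pinned down without running the toolchain: it appears only when a network has more than one output variable and is converted with `io_type='io_stream'` on the Quartus backend. A minimal, hedged sketch of the triggering combination (a plain record and a hypothetical helper, not hls4ml's API):

```python
# Hedged sketch: the settings whose combination produced uncompilable
# Quartus HLS output. The real conversion goes through hls4ml's
# convert_from_keras_model; this only makes the failing case explicit.
failing_case = {
    'backend': 'Quartus',
    'io_type': 'io_stream',  # streaming interface
    'n_outputs': 2,          # any count > 1 hit the broken writer path
}

def is_affected(case):
    """Return True if a conversion matches the pre-fix failure mode."""
    return (
        case['backend'] == 'Quartus'
        and case['io_type'] == 'io_stream'
        and case['n_outputs'] > 1
    )

assert is_affected(failing_case)
```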

# Type of change

# Tests

test/pytest/test_multiout_network.py

# Test Configuration

# Checklist

jmitrevs commented 10 months ago

Does this work correctly if the multiple outputs have a different type? I am worried that they get forced to result_t, though I didn't check.

jmitrevs commented 10 months ago

It is these lines that make me concerned about the output types when there are multiple outputs: https://github.com/fastmachinelearning/hls4ml/blob/main/hls4ml/model/graph.py#L472-L477 https://github.com/fastmachinelearning/hls4ml/blob/main/hls4ml/model/graph.py#L610-L613
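To make the concern concrete, here is a hedged, pure-Python sketch (not hls4ml's actual code) of what seeding every output from a single `result_t` alias would do to per-output precisions:

```python
# Requested per-output precisions, as in a LayerName config.
requested = {'dense': 'fixed<35,5>', 'dense_1': 'fixed<40,5>'}

# If the writer seeds all outputs from one shared result_t (the worry
# about the linked graph.py lines), the first output's type wins:
result_t = requested['dense']
forced = {name: result_t for name in requested}

assert forced['dense_1'] == 'fixed<35,5>'  # the fixed<40,5> request is lost
```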

vloncar commented 10 months ago

@jmitrevs Those lines were also problematic in our attempts to split the writing into multiple IPs. There's no practical reason for this feature to exist; it is merely there for human readability, dating from very ancient times, and in my (limited) tests everything still works if those lines are removed.

jmitrevs commented 10 months ago

I just tried:

```python
import numpy as np

from hls4ml.converters import convert_from_keras_model
from hls4ml.utils import config_from_keras_model

# model, data, backend and io_type are pytest fixtures/parameters;
# test_root_path is a pathlib.Path defined at module level.
def test_multi_clone(model, data, backend: str, io_type: str):
    output_dir = str(test_root_path / f'hls4mlprj_multiout_network_{backend}_{io_type}')
    hls_config = config_from_keras_model(model, "name", default_precision='fixed<32,5>')
    hls_config["LayerName"]["dense_linear"]["Precision"]["result"] = 'fixed<35,5>'
    hls_config["LayerName"]["dense_1_linear"]["Precision"]["result"] = 'fixed<40,5>'
    hls_config["LayerName"]["dense"]["Precision"]["result"] = 'fixed<35,5>'
    hls_config["LayerName"]["dense_1"]["Precision"]["result"] = 'fixed<40,5>'
    model_hls = convert_from_keras_model(
        model, backend=backend, output_dir=output_dir, hls_config=hls_config, io_type=io_type
    )
    model_hls.compile()
    r_hls = model_hls.predict(data)
    r_keras = [x.numpy() for x in model(data)]

    assert np.allclose(r_hls[0], r_keras[0], atol=1e-5, rtol=0)
    assert np.allclose(r_hls[1], r_keras[1], atol=1e-5, rtol=0)
```

and both outputs are result_t of fixed<35,5>. I could have made a mistake in the test code (and I am not sure why I had to set the linear layers' result precision like that, either), but the behavior doesn't seem quite right.

calad0i commented 10 months ago

I changed the two places in graph.py, and now the output variables no longer overwrite each other. It does not appear to affect the other tests in my runs. That said, maybe this should go into a separate PR.
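A hedged sketch of the intended behavior after the graph.py change (pure Python, not hls4ml's code): each output variable keeps the precision requested for its layer rather than being overwritten by a shared type.

```python
requested = {'dense': 'fixed<35,5>', 'dense_1': 'fixed<40,5>'}

# With the forced result_t seeding removed, each output variable simply
# keeps its own requested type.
resolved = {name: prec for name, prec in requested.items()}

assert resolved['dense'] == 'fixed<35,5>'
assert resolved['dense_1'] == 'fixed<40,5>'
```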

jmitrevs commented 10 months ago

There is still some manual seeding left, if I looked at the code correctly.

jmitrevs commented 9 months ago

@calad0i, can you remove the remaining seed setting?

jmitrevs commented 9 months ago

Generally, other than the test_root_path issue, I think this is good. Once that is fixed we can merge it.