Could you build with `scons --d` and run it in gdb to find out which attribute is causing the problem? Each attribute has a field called `name_`; you can probe that, or you can try to find out which piece of code is asking for that attribute.
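A minimal sketch of that debugging session (assuming a local timeloop checkout and the input files used elsewhere in this thread):

```sh
# build timeloop with debug symbols, then reproduce the crash under gdb
scons --d
gdb --args timeloop-model arch.yaml map.yaml problem.yaml sparse_opt.yaml -o output/model/
# inside gdb:
#   run            -- reproduce the assertion failure
#   backtrace      -- locate the model::Attribute<T>::Get() frame
#   frame <N>      -- select that frame (N taken from the backtrace)
#   print name_    -- the name_ field identifies the offending attribute
```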
Thanks for your responsiveness.
It seems that scons and gdb are not accessible from the Docker. Do you mean that this syntax:

```yaml
class: storage
subclass: cache
```

is the right one? (I got the output error from my previous message with this syntax.) In that case, I'll have to look for an error in the attributes (finding another way than gdb).
However, I also tried the following syntax and got an error with Accelergy. In arch.yaml:

```yaml
class: cache
subclass: cache # same error with or without this line (which seems consistent)
```
Output:

```
execute:/usr/local/bin/accelergy arch.yaml map.yaml problem.yaml sparse_opt.yaml --oprefix timeloop-model. -o output/model// > timeloop-model.accelergy.log 2>&1
Failed to run Accelergy. Did you install Accelergy or specify ACCELERGYPATH correctly? Or check accelergy.log to see what went wrong
```
End of the timeloop-model.accelergy.log file:

```
============================================================
Accelergy has encountered an error and crashed. Error below:
============================================================
|| Traceback (most recent call last):
|| File "/usr/local/lib/python3.8/dist-packages/accelergy/accelergy_console.py", line 164, in main
|| run()
|| File "/usr/local/lib/python3.8/dist-packages/accelergy/accelergy_console.py", line 133, in run
|| ert_gen = EnergyReferenceTableGenerator({'parser_version': accelergy_version,
|| File "/usr/local/lib/python3.8/dist-packages/accelergy/ERT_generator.py", line 56, in __init__
|| self.generate_pc_ERT(pc)
|| File "/usr/local/lib/python3.8/dist-packages/accelergy/ERT_generator.py", line 72, in generate_pc_ERT
|| estimated_energy, estimator_name = self.eval_primitive_action_energy(estimation_plug_in_interface)
|| File "/usr/local/lib/python3.8/dist-packages/accelergy/ERT_generator.py", line 154, in eval_primitive_action_energy
|| energy = round(best_estimator.estimate_energy(estimator_plug_in_interface), self.precision)
|| File "/home/workspace/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/mcpat_wrapper.py", line 87, in estimate_energy
|| energy, area = self.query_mcpat(component)
|| File "/home/workspace/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/mcpat_wrapper.py", line 178, in query_mcpat
|| subprocess.call(exec_list, stdout=file)
|| File "/usr/lib/python3.8/subprocess.py", line 340, in call
|| with Popen(*popenargs, **kwargs) as p:
|| File "/usr/lib/python3.8/subprocess.py", line 858, in __init__
|| self._execute_child(args, executable, preexec_fn, close_fds,
|| File "/usr/lib/python3.8/subprocess.py", line 1585, in _execute_child
|| and os.path.dirname(executable)
|| File "/usr/lib/python3.8/posixpath.py", line 152, in dirname
|| p = os.fspath(p)
|| TypeError: expected str, bytes or os.PathLike object, not NoneType
============================================================
Stack with local variables (most recent call last):
============================================================
Frame 2
============================================================
| /usr/lib/python3.8/subprocess.py:858
| TypeError: expected str, bytes or os.PathLike object, not NoneType
| Local var self = <subprocess.Popen object at 0x7f1759c6e790>
| Local var args = [None, '-infile', '/home/workspace/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/properties-cache-read_hit.xml', '-print_level', '5']
| Local var executable = None
| Local var stderr = None
| Local var preexec_fn = None
| Local var close_fds = True
| Local var shell = False
| Local var cwd = None
| Local var env = None
| Local var startupinfo = None
| Local var creationflags = 0
| Local var pass_fds = ()
| Local var encoding = None
| Local var errors = None
| Local var p2cread = -1
| Local var p2cwrite = -1
| 855: self.stderr = io.TextIOWrapper(self.stderr,
| 856: encoding=encoding, errors=errors)
| 857:
| ERROR >> 858:         self._execute_child(args, executable, preexec_fn, close_fds,
| 859: pass_fds, cwd, env,
| 860: startupinfo, creationflags, shell,
| 861: p2cread, p2cwrite,
============================================================
Frame 1
============================================================
| /usr/lib/python3.8/subprocess.py:1585
| TypeError: expected str, bytes or os.PathLike object, not NoneType
| Local var args = [None, '-infile', '/home/workspace/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/properties-cache-read_hit.xml', '-print_level', '5']
| Local var preexec_fn = None
| Local var close_fds = True
| Local var pass_fds = ()
| Local var cwd = None
| Local var env = None
| Local var executable = None
| 1582: sys.audit("subprocess.Popen", executable, args, cwd, env)
| 1583:
| 1584:     if (_USE_POSIX_SPAWN
| ERROR >> 1585: and os.path.dirname(executable)
| 1586: and preexec_fn is None
| 1587: and not close_fds
| 1588: and not pass_fds
============================================================
Frame 0
============================================================
| /usr/lib/python3.8/posixpath.py:152
| TypeError: expected str, bytes or os.PathLike object, not NoneType
| Local var p = None
| 149:
| 150: def dirname(p):
| 151: """Returns the directory component of a pathname"""
| ERROR >> 152: p = os.fspath(p)
| 153: sep = _get_sep(p)
| 154: i = p.rfind(sep) + 1
| 155: head = p[:i]
============================================================
```
(The `executable` variable should not be set to `None`.)
Does this last syntax make sense? (It seems that I'm a little bit lost!)
Best regards.
The first way of specifying it is correct:

```yaml
class: storage    # this is what timeloop cares about, it is just a storage unit
subclass: cache   # this is what accelergy cares about, it is a storage unit that has an underlying cache implementation
```
It might be helpful to isolate which tool is having trouble here.
Did you see the ERT/ART produced as output with the format above?
- If so, that means some internal attributes are incorrectly specified for Timeloop to understand. Then we need to dig further into which attribute is causing the problem.
- If not, that means Accelergy had trouble understanding the design, either not recognizing the plug-in or not recognizing the cache component.
Another way to isolate the problem is to run accelergy by itself, e.g. as sketched below:
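(A sketch of that standalone run; the file names follow the ones used elsewhere in this thread.)

```sh
# run Accelergy by itself, then check whether the ERT/ART were generated
accelergy arch.yaml -o output/model/
ls output/model/   # look for the generated ERT/ART files
```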
The ERT/ART files have not been produced with either the timeloop-model or the accelergy command, so Accelergy is involved.
When I run accelergy (`accelergy arch.yaml map.yaml problem.yaml sparse_opt.yaml -o output/model/`), I get this output:
```
Info: generating outputs according to the following specified output flags...
Please use the -f flag to update the preference (default to all output files)
{'ERT': 1, 'ERT_summary': 1, 'ART': 1, 'ART_summary': 1, 'energy_estimation': 1, 'flattened_arch': 1}
Info: config file located: /home/workspace/.config/accelergy/accelergy_config.yaml
config file content:
{'version': 0.3, 'compound_components': [], 'estimator_plug_ins': ['/usr/local/share/accelergy/estimation_plug_ins', '/home/workspace/.local/share/accelergy/estimation_plug_ins'], 'primitive_components': ['/usr/local/share/accelergy/primitive_component_libs', '/home/workspace/.local/share/accelergy/primitive_component_libs'], 'table_plug_ins': {'roots': ['/usr/local/share/accelergy/estimation_plug_ins/accelergy-table-based-plug-ins/set_of_table_templates']}}
Warn: Cannot recognize the top key "mapping" in file map.yaml
Warn: Cannot recognize the top key "problem" in file problem.yaml
Warn: Cannot recognize the top key "sparse_optimizations" in file sparse_opt.yaml
Info: Parsing file arch.yaml for architecture info
Info: Found non-numeric expression 45nm. Available bindings: {'technology': '45nm'}
WARN: Failed to evaluate "45nm". Setting system.technology="45nm". Available bindings: {'technology': '45nm'}
Info: Found non-numeric expression hp. Available bindings: {'clockrate': 200, 'datawidth': 64, 'device_type': 'hp', 'cache_type': 'dcache', 'size': 32768, 'associativity': 4, 'data_latency': 15, 'block_size': 256, 'mshr_size': 64, 'write_buffer_size': 256, 'tag_size': 9, 'technology': '45nm'}
WARN: Failed to evaluate "hp". Setting variables.device_type="hp". Available bindings: {'name': 'LLC', 'class': 'storage', 'subclass': 'cache', 'attributes': {'clockrate': 200, 'datawidth': 64, 'device_type': 'hp', 'cache_type': 'dcache', 'size': 32768, 'associativity': 4, 'data_latency': 15, 'block_size': 256, 'mshr_size': 64, 'write_buffer_size': 256, 'tag_size': 9, 'technology': '45nm'}}
Info: Found non-numeric expression dcache. Available bindings: {'clockrate': 200, 'datawidth': 64, 'device_type': 'hp', 'cache_type': 'dcache', 'size': 32768, 'associativity': 4, 'data_latency': 15, 'block_size': 256, 'mshr_size': 64, 'write_buffer_size': 256, 'tag_size': 9, 'technology': '45nm'}
WARN: Failed to evaluate "dcache". Setting variables.cache_type="dcache". Available bindings: {'name': 'LLC', 'class': 'storage', 'subclass': 'cache', 'attributes': {'clockrate': 200, 'datawidth': 64, 'device_type': 'hp', 'cache_type': 'dcache', 'size': 32768, 'associativity': 4, 'data_latency': 15, 'block_size': 256, 'mshr_size': 64, 'write_buffer_size': 256, 'tag_size': 9, 'technology': '45nm'}}
Info: Found non-numeric expression LPDDR4. Available bindings: {'type': 'LPDDR4', 'width': 64, 'block_size': 1, 'datawidth': 64, 'read_bandwidth': 1, 'write_bandwidth': 1, 'technology': '45nm'}
WARN: Failed to evaluate "LPDDR4". Setting variables.type="LPDDR4". Available bindings: {'name': 'MainMemory', 'class': 'DRAM', 'attributes': {'type': 'LPDDR4', 'width': 64, 'block_size': 1, 'datawidth': 64, 'read_bandwidth': 1, 'write_bandwidth': 1, 'technology': '45nm'}}
Info: primitive component file parsed: /usr/local/share/accelergy/primitive_component_libs/primitive_component.lib.yaml
Info: primitive component file parsed: /usr/local/share/accelergy/primitive_component_libs/pim_primitive_component.lib.yaml
Info: primitive component file parsed: /usr/local/share/accelergy/primitive_component_libs/soc_primitives.lib.yaml
Warn: No compound component classes specified, architecture can only contain primitive components
Info: Found non-numeric expression 5ns. Available bindings: {'depth': 32, 'width': 64, 'block_size': 1, 'datawidth': 64, 'read_bandwidth': 1, 'write_bandwidth': 1, 'multiple_buffering': 1, 'n_ports': 2, 'cluster_size': 1, 'technology': '45nm', 'latency': '5ns'}
WARN: Failed to evaluate "5ns". Setting system.PEandCache.PE.Buffer.latency="5ns". Available bindings: {'depth': 32, 'width': 64, 'block_size': 1, 'datawidth': 64, 'read_bandwidth': 1, 'write_bandwidth': 1, 'multiple_buffering': 1, 'n_ports': 2, 'cluster_size': 1, 'technology': '45nm', 'latency': '5ns'}
Info: estimator plug-in identified by: /usr/local/share/accelergy/estimation_plug_ins/accelergy-cacti-plug-in/cacti.estimator.yaml
Info: estimator plug-in identified by: /usr/local/share/accelergy/estimation_plug_ins/dummy_tables/dummy.estimator.yaml
Info: estimator plug-in identified by: /usr/local/share/accelergy/estimation_plug_ins/accelergy-aladdin-plug-in/aladdin.estimator.yaml
Info: estimator plug-in identified by: /usr/local/share/accelergy/estimation_plug_ins/accelergy-table-based-plug-ins/table.estimator.yaml
table-based-plug-ins Identifies a set of tables named: test_tables
Info: estimator plug-in identified by: /home/workspace/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/mcpat.estimator.yaml
```

Accelergy then crashes with exactly the same traceback and local-variable dump as in my previous message (`TypeError: expected str, bytes or os.PathLike object, not NoneType`, with `executable = None` in the `subprocess.Popen` frames).
It seems that the McPAT plug-in is recognized but is involved in the error.
To summarize, the right syntax is:

```yaml
class: storage
subclass: cache
```
With this syntax, when I run timeloop-model, I get the following error:

```
timeloop-model: include/model/attribute.hpp:62: T model::Attribute<T>::Get() const [with T = long unsigned int]: Assertion `specified_' failed.
Aborted (core dumped)
```

And no output files (nothing in output/model/ and no timeloop-model.accelergy.log file).
When I run accelergy, I get the output of the previous message. Still no output files (nothing in output/model/ and no timeloop-model.accelergy.log file).
With these two tests, I have either the attribute issue or the cache recognition issue... :(
Thanks again for your responsiveness. It is really appreciated.
On the Timeloop side the issue is that `size` isn't a recognized attribute. You can set the size using the `sizeKB`, `entries`, or (`width` and `depth`) attributes. We should have better error messages, sorry about that.
I made some tests:
First, replacing `size: 32768` by `entries: 32768`, and then by `sizeKB: 32768`.
I got this error:

```
execute:/usr/local/bin/accelergy arch.yaml map.yaml problem.yaml sparse_opt.yaml --oprefix timeloop-model. -o output/model// > timeloop-model.accelergy.log 2>&1
Failed to run Accelergy. Did you install Accelergy or specify ACCELERGYPATH correctly? Or check accelergy.log to see what went wrong
```

with this line at the end of the timeloop-model.accelergy.log file:

```
ERROR: attributes size for compound class cache must be specified in architecture description
```
Secondly, I tried `width: 32768` and got this:

```
timeloop-model: include/model/attribute.hpp:62: T model::Attribute<T>::Get() const [with T = long unsigned int]: Assertion `specified_' failed.
Aborted (core dumped)
```
Thirdly, I tried adding the depth (1024 * 32 = 32768, 1024 being the size of a cache line):

```yaml
width: 1024
depth: 32
```

Of course I got this:

```
ERROR: data storage width: 1024 block_size: 256 word_bits: 64
timeloop-model: src/model/buffer.cpp:257: static model::BufferLevel::Specs model::BufferLevel::ParseSpecs(config::CompoundConfigNode, uint32_t, bool): Assertion `width % (word_bits * block_size) == 0' failed.
Aborted (core dumped)
```
So I tried just formally adding the depth:

```yaml
width: 32768
depth: 1
```

I got the first error again, with the same line in the timeloop-model.accelergy.log file:

```
ERROR: attributes size for compound class cache must be specified in architecture description
```
And finally, I tried adding size (it seems that Accelergy wants it):

```yaml
size: 32768
width: 32768
depth: 1
```

And I got the same error, with a timeloop-model.accelergy.log file equivalent to the one I gave two messages above.
To summarize: the size attribute seems to be required by Accelergy; the width attribute matters for the assertion `width % (word_bits * block_size) == 0` to hold, even though I never got this error when width was not specified; and just adding size alongside the other attributes does not change the problem.
--"We should have better error messages, sorry about that." The other side of the coin! :)
Thanks for the detailed logs.
One quick solution to this conflict is to run Accelergy with the correct Accelergy attributes first to generate the ERT/ART. Then you can update the architecture to have the attributes Timeloop needs, feeding the generated ERT/ART files into the Timeloop runs. In this case, Timeloop will not invoke Accelergy, since it sees that the ERT/ART are already generated. Note that for a single set of hardware the ERT/ART only need to be generated once, so you can keep using the Timeloop-compatible spec for different workloads.
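A sketch of that two-step flow (the split spec names `arch.accelergy.yaml` and `arch.timeloop.yaml` are hypothetical, and the ERT/ART output file names are assumptions based on the output flags shown in the log above):

```sh
# 1) generate the energy/area reference tables with the Accelergy-friendly attributes
accelergy arch.accelergy.yaml -o ert_art/
# 2) run Timeloop with the Timeloop-friendly attributes, feeding the generated
#    tables back in so Timeloop does not re-invoke Accelergy
timeloop-model arch.timeloop.yaml map.yaml problem.yaml sparse_opt.yaml \
    ert_art/ERT.yaml ert_art/ART.yaml -o output/model/
```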
I get the idea. It could resolve part of the issue, but accelergy still does not work (the ERT/ART files are not generated):
When I use the size attribute (the one requested by the accelergy logs in every case) and run the accelergy command on that architecture, I still get the error above (https://github.com/NVlabs/timeloop/issues/183#issuecomment-1377073525) -- no line saying that an attribute is missing (like `ERROR: attributes size for compound class cache must be specified in architecture description`), but a failure during execution (the `executable` variable is apparently set to `None` when it should not be).
With the McPAT plug-in recognized and a syntax that seems correct, could it be that the problem is outside my control?
We will need to go into the plug-in a little -- the plug-in itself is not complex, it serves as a caller to the McPAT tool, which does the heavy lifting. From the log, it seems that the McPAT plug-in fails when it calls McPAT: https://github.com/Accelergy-Project/accelergy-mcpat-plug-in/blob/master/mcpat_wrapper.py#L178
To isolate the problem, can you try to specify the attributes as shown in the example provided by the McPAT plug-in? The spec here is what has been used to test the plug-in: https://github.com/Accelergy-Project/accelergy-mcpat-plug-in/blob/master/test/mcpat_wrapper_test.py#L93
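In the meantime, note that the local-variable dumps above show `subprocess` being asked to execute `args[0] = None`, i.e. the wrapper resolved no path to the McPAT binary. A quick sanity check inside the container (a sketch; the binary name and install locations are assumptions):

```sh
which mcpat || echo "no mcpat binary on the PATH"
ls ~/.local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in/
```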
Your last comment made me realize that I had assumed the McPAT tool was installed along with the McPAT plug-in (my bad). That was surely the source of most of my issues.
So I tried to install it in the Docker, but I can't build it! (The make package is not installed in the Docker and I don't have the rights to install it myself.) Perhaps I should change my approach and use an administrator machine instead of the Docker, but that is quite complicated to set up in my laboratory.
However, I found this line in the Dockerfile (line 10) of the accelergy-timeloop-infrastructure repository:

```
&& apt-get install -y --no-install-recommends make \
```

Is it normal that I can't access the make command?
Moreover, I thought I could modify the Dockerfile to add the McPAT installation and create a new image. Any recommendations or rules for this idea? (I don't want to create problems!) For example, I guess I will have to change some of the following lines (a new name for a new image, etc.):
```dockerfile
# Labels
LABEL org.label-schema.schema-version="1.0"
LABEL org.label-schema.build-date=$BUILD_DATE
LABEL org.label-schema.name="Accelergy-Project/accelergy-timeloop-infrastructure"
LABEL org.label-schema.description="Infrastructure setup for Timeloop/Accelergy tools"
LABEL org.label-schema.url="http://accelergy.mit.edu/"
LABEL org.label-schema.vcs-url="https://github.com/Accelergy-Project/accelergy-timeloop-infrastructure"
LABEL org.label-schema.vcs-ref=$VCS_REF
LABEL org.label-schema.vendor="Wu"
LABEL org.label-schema.version=$BUILD_VERSION
LABEL org.label-schema.docker.cmd="docker run -it --rm -v ~/workspace:/home/workspace timeloopaccelergy/accelergy-timeloop-infrastructure"
```
I think building a new Docker image would be the easier approach, since you would like to add more tools to the environment. I wouldn't worry about the labels at first when testing whether the build is successful.
The more important changes are adding the plug-in and the McPAT tool. The McPAT tool needs to be built first and added to the Docker image; the process is similar to setting up the cacti tool (which was necessary for the cacti plug-in). It would be helpful to understand the setup for cacti here: https://github.com/Accelergy-Project/accelergy-timeloop-infrastructure/blob/4a958cf1311609765179c1f5500ae19d44f0432f/Dockerfile#L20
And you can do the same for McPAT.
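A rough sketch of what those additions could look like, mirroring the cacti steps linked above (the McPAT repository URL, build steps, and install locations are assumptions to adapt):

```sh
# build the McPAT binary and put it on the PATH
git clone https://github.com/HewlettPackard/mcpat.git
make -C mcpat && cp mcpat/mcpat /usr/local/bin/
# place the plug-in under one of the estimator roots Accelergy already scans
# (see the estimator_plug_ins entries in accelergy_config.yaml quoted earlier)
git clone https://github.com/Accelergy-Project/accelergy-mcpat-plug-in.git \
    /usr/local/share/accelergy/estimation_plug_ins/accelergy-mcpat-plug-in
```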
Hello,
After a few days working on the new Docker image, I finally have the McPAT tool and plug-in, and with the size attribute the accelergy command works well (the ERT and ART are generated). But when I use these files with the timeloop commands (model and mapper) and the width and depth attributes, I get a new kind of error:

```
execute:/usr/local/bin/accelergy arch.yaml problem.yaml map.yaml sparse_opt.yaml --oprefix timeloop-model. -o output/model// > timeloop-model.accelergy.log 2>&1
timeloop-model: src/model/buffer.cpp:1275: model::EvalStatus model::BufferLevel::ComputeScalarAccesses(const CompoundDataMovementInfo&, const CompoundMask&, double, bool): Assertion `(specs_.instances.Get() % specs_.cluster_size.Get()) == 0' failed.
Aborted (core dumped)
```
(The `cluster_size` of my buffer level is set to 1 in my case, so the assertion should succeed?)
Note that an architecture without a cache component works correctly on this new image (model and mapper), which is a good achievement, thank you very much for that!
Is this error due to another installation/syntax problem, or is it just a question of sizing attributes like `cluster_size`?
Moreover, I got this error several times because of some of my terrible mappings:

```
ERROR: couldn't map level MainMemory: runtime partition needed but not supported (NOT IMPLEMENTED YET)
```

I can easily avoid it by modifying the map file, but I'm not sure I have understood its meaning, and I was just wondering what exactly you are planning to implement regarding this error?
Thank you for all your help. I hope to share with you the Docker McPAT additions and some cache architecture examples (once it works!).
Yes, if `cluster_size` is set to 1 then the assertion should succeed. Could you use gdb to figure out what the numerator and denominator are and why the assertion is failing?
It seems that the `cluster_size` variable is set to 2, even when I specify `cluster_size: 1` in the cache description (I'm sure it concerns the cache, since I checked the other attribute values).
The `instances` variable is set to 1 (which corresponds to my architecture).
The attribute key name is `cluster-size`, not `cluster_size`. We apologize for the inconsistencies between underscores and hyphens in our attribute key names.
Still, that doesn't explain why the attribute value is set to 2, because the default should be 1. But in the meantime, please try changing the key to `cluster-size` and see if that fixes the issue.
I changed the underscore to a hyphen and it worked!
I also did some tests and got this error: `ERROR: block_size * word_bits * cluster_size != storage width`, which explains why the cluster-size attribute was not set to 1: it was being set to satisfy this constraint.
So with some adjustments to my attribute values, I finally have a functional architecture with a cache component!
Here is a part of my code (it's an example), for whoever it might help:

```yaml
- name: LLC
  class: storage
  subclass: cache
  attributes:
    device_type: hp
    cache_type: dcache
    clockrate: 200 # MHz
    data_latency: 15
    datawidth: 64 # double
    size: 32768 # total size = width * depth = 2^s
    width: 256 # width (line size) = datawidth * block_size * cluster-size
    depth: 128
    block_size: 4
    cluster-size: 1 # instances_number % cluster-size == 0
    associativity: 4 # 2^n
    tag_size: 9 # 22 - s + n
    mshr_size: 4 # maximum outstanding requests
    write_buffer_size: 256
    n_rd_ports: 1
    n_wr_ports: 1
    n_rdwr_ports: 1
    n_banks: 1
```
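As a closing check, here are the sizing constraints this thread uncovered, verified against the numbers in the example above (a sketch; the relations come from the error messages quoted earlier, taking `word_bits` to be `datawidth`):

```sh
datawidth=64; block_size=4; cluster_size=1
width=256; depth=128; size=32768
echo $(( width == datawidth * block_size * cluster_size ))  # 1: line-width constraint holds
echo $(( size == width * depth ))                           # 1: total size is consistent
echo $(( width % (datawidth * block_size) ))                # 0: Timeloop's width assertion passes
```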
Hello,
I just opened a pull request on accelergy-timeloop-infrastructure to add the McPAT plug-in. I also published a public image of the new infrastructure on Docker Hub, for those who might be interested: https://hub.docker.com/r/vi270662/accelergy-timeloop-infrastructure-with-mcpat
Regards
Hello,
I'm coming back to you because of my latest research and use of Sparseloop.
If I understand correctly, it is currently not possible to obtain cache miss and hit counts from Sparseloop (Accelergy recognizes these two actions, but only for energy estimates).
Is there a way to integrate misses and hits into Sparseloop easily through your interface, or do we have to do it ourselves?
Best regards.
I want to make sure I understand the question.
Just as a reminder, Timeloop/Sparseloop models explicitly-managed memories that exploit reuse through tensor tiling. Such a model can serve as a reasonable proxy for a perfectly-managed cache for statically analyzable workloads, but it cannot precisely reproduce the behavior of an LRU-like stateful replacement algorithm.
Are you asking for a simple translation from the metrics that are emitted today (reads, fills, updates) to cache hits/misses for a cache modeled in this (i.e., perfectly-managed) way? If so, then yes it's a straightforward translation (fills == misses).
Or are you asking for a stateful LRU-like model? If so, then no, Timeloop/Sparseloop does not do that today. In principle you could feed the results of NestAnalysis into a new stateful uarch model. That may be an interesting and useful extension, but it's not in our plans at the moment.
I was asking about the second point. It was just to make sure I hadn't missed anything, and your answer matches what I understood. If I have to develop something in this direction for caches as part of my thesis, I will keep you informed.
Thank you very much for your responsiveness.
Hello,
I'm a French PhD student working on the design of a memory sub-system for sparse data accesses. I have just begun my study of the state of the art and found Sparseloop, which is a really useful platform for me to classify existing sparse accelerators and simulate my future design. I have already worked on it for a few weeks.
I used the Docker image from the repository https://github.com/Accelergy-Project/accelergy-timeloop-infrastructure and successfully ran Sparseloop.
For my work, I would like to use the cache component that can be simulated with the McPAT plug-in (https://github.com/Accelergy-Project/accelergy-mcpat-plug-in). Since this plug-in is not installed in the Docker image, I installed it locally and verified that it is recognized by checking the timeloop-model.accelergy.log file. Then I implemented a simple architecture to test the cache component, but I am facing various errors.
I suspect it might be a syntax issue, since I did not find any complete examples using the cache component. For example, I don't know whether I have to treat the cache as a subclass of the storage class (it didn't work). The architecture I am testing is composed of (in this order) a main memory, a cache (last level), a register (buffer), and a MAC (quite simple). It worked fine without the cache. I am sending you the files of my architecture and the .log file with the latest error I got (using the filesender link below), along with the console I/O:
Input:

```
timeloop-model arch.yaml map.yaml sparse_opt.yaml problem.yaml -o output/model/
```

Output: https://filesender.renater.fr/?s=download&token=b3184b49-a79e-46d1-8558-97a65053a8ca (the link expires on January 21, 2023; no output files have been created in the output/model/ directory)
Note that, because of access rules in the Docker, I had to install the McPAT plug-in in my workspace and then add its path to a file in the .config directory, which I had to create before pulling the first Docker image in order to have the rights to modify it. The plug-in is recognized by Accelergy, so I guess this is not the source of the problem, but I'm mentioning it just in case.
I was wondering if you have some examples that use the cache component that you could share, or whether you might have a solution to this problem?
Best regards.