trilinos / Trilinos

Primary repository for the Trilinos Project
https://trilinos.org/

Trilinos: Add support for scalable, advanced architecture performance testing #5619

Closed jjellio closed 2 years ago

jjellio commented 4 years ago

Performance tracking seems to come up frequently, with various groups all hand-rolling approaches. Trilinos as a product would benefit from an extensible testing framework that supports performance testing. Since we utilize CMake, CTest, and CDash in many places (e.g., projects beyond Trilinos have adopted them), a way forward for performance tracking would be to leverage workflows that projects are already familiar with.

At the heart of this issue are a few basic requirements:

  1. The ability for product owners to describe performance tests
  2. The ability to run the performance test in a suitable manner (e.g., multi-node, with proper process binding, thread settings, and device utilization)
  3. The ability to save seemingly arbitrary data
  4. The ability for diverse product teams to access and visualize that data how they see fit

The prior list seems a bit lofty, but item 2 is actively being tackled in issue #5598, and item 3 is semi-solved - CDash is used daily by teams to assess whether builds completed and unit tests succeeded. Where CDash is lacking is 1) exposing app-specific data; 2) providing robust reporting tools, which is effectively item 4; and 3) gathering data that is suitable for performance analysis.

To that end, I think leveraging the tools and people already in place is a scalable way forward to enable diverse performance tracking.

To explore this, I wrote some Python scripts that slurp CDash builds and runs. From that data, I can populate a database to track all sorts of things. Since CTest effectively stores the output of each run, I can then parse and retain application-specific timings (e.g., associate Teuchos Timers with the data already present in CDash). With that data, I then set up a Docker Grafana instance to visualize the data.

The above process worked. I can visualize Teuchos timer information for specific apps (all unit tests) over time for the past 8 months.

Incidentally, Irina (@ikalash) has a dashboard where she is using CTest to drive performance testing for Mini-EM, and she and Jerry are mining the CDash for some performance data. So it seems the approach is semi-attractive (not perfect, but it meshes with workflows that we are accustomed to).

If we can make defining a performance test as tractable as defining a unit test, then customers such as ATDM, which build and run on production machines, can use this approach to define and run performance tests. This lets apps define the tests and ultimately view the results. As a support-type person, this puts me in the position to focus on providing better performance tracking.

Where the above process is falling short is in two areas:

  1. We cannot define a 'performance' test.
     * Performance testing is only partially about running the binary; of equal importance is the metadata that allows you to classify and interpret the results you get. CTest supports the idea of 'properties', but trying to add those to a test after defining the test didn't work - and isn't an acceptable workflow.
     * This issue argues for an enhancement to Tribits to allow for arbitrary properties to be passed and set on a test.
     * Trilinos (technically Tribits) provides the functionality to provide a 'Performance' test category, and Chris and Ross have worked out getting that to work. (PR #5583)
  2. We cannot use CTest to run these tests in a performance-oriented way.
     * As-is, using CTest to evaluate scalability is a no-go. Irina has made it work by grabbing allocations equal to the max node size needed - on larger (more production-focused) machines that type of utilization would hurt your queue priority.

Thoughts on addressing some of the issues:

For CTest/Tribits, I would like to be able to define a performance test with arbitrary metadata. E.g.,

add_test(NAME <name>
         [CONFIGURATIONS [Debug|Release|...]]
         [WORKING_DIRECTORY dir]
         # command is just the binary + args (what goes in front of this is generated by a suitable module)
         COMMAND <command> [arg1 [arg2 ...]]
         # allow arbitrary properties to be attached to the test (e.g., could call set_properties)
         TEST_PROPERTIES num_nodes=2;cores_per_proc=4;mesh_name=coolTest_01_l1;problem_name=FIND_WALDO_IN_BOX_OF_EM_PULSES;key1=value1;.....
)

The properties attached to a test may be used by CMake/CTest to run the job. That is, this type of metadata subsumes the idea of PROCESSORS or explicit arguments. Instead, a list of key/values can be passed.

Ultimately, I want these properties to get posted to CDash. Currently, 'Processors' is added as a 'Measurement', and if I look at the JSON for 'test', I can see test->measurement-> {name : Processors, value : 4, type : double/numeric }. I need this metadata later when I look at the test and try to present information. I do not want to have to write custom parsing code in my scripts to decode a command line. Plus, not all of the information needed is present on the command line. For example, SPARC's command line may look like srun ... sparc -i sparc.inp. I could parse the nodes, procs, and cores from srun, but clearly I know nothing of which mesh or science the performance test exercised.

The idea of arbitrary key/values allows the apps to annotate runs with app-specific info, which can then be used later to generate app-specific queries, e.g., how does the app scale as Waldo is distributed across nodes?

I can surely write more, or provide examples. I believe I have listed two actionable items, and hopefully presented a compelling case for how this approach can lead to sustainable tracking over time.

@ikalash @csiefer2 @jhux2 @bartlettroscoe @kddevin

I am missing some people to mention (Jerry, David Poliakoff, and Paul).

ikalash commented 4 years ago

Tagging Jerry: @jewatkins.

jhux2 commented 4 years ago

@pwxy @DavidPoliakoff

jewatkins commented 4 years ago

I agree - if we can construct something around CMake, CTest, and CDash, I think that would be ideal. Adding some more relevant discussion points based on our email thread:

Data storage:

Post processing/Visualization (i.e. constructing timelines, scalability plots):

DavidPoliakoff commented 4 years ago

I'll post the comparison to what LLNL has done; @jjellio has heard this before. This isn't an "all (or any) of what's being said here is wrong" post - in fact I really like the ease of the CTest/CDash based model - but just to provide a point of comparison from another site that's looking at the same problems.

At LLNL, the philosophy being pushed for this and other performance concerns was having a tool which is tightly integrated into the application (a "ubiquitous tool"). Such a tool

  1. Is always in some way on, it's never/almost never ifdef'd out
  2. Is controlled by the application as it runs through an API (./my_hydro_app --track-performance would configure the tool to do what we're talking about here)
  3. Would be told what it needs to know by the application, metadata like "problem size" should be declared through an API

At LLNL, that tool is called Caliper, and when configured to do performance tracking over time, the combination is called Spot (ironically, Spot came out of some prototyping done in an internship at Sandia Livermore). That workflow is live in the major hydro apps, next-gen ATDM code, and at least one utility library; it's really spreading like wildfire. We had it live when the Spectre/Meltdown thing happened and had some really awesome (also terrifying) graphs of what happened to the performance of our codes as patches were applied (I was asked to stop sending people emails with phrases like "the building is on fire"). At this point, LLNL has some really top-notch people doing web development to make nice visualizations for a lot of the problems you care about.

Where it's weaker than the CDash solution (in my opinion) is that the data storage solution is a lot iffier, we were spitting out Caliper-specific files to disk and then using a Caliper utility to slurp those up. When I left we were looking at moving to SQL (my hope) or HDF5 (the project lead's hope, for valid reasons I happen to disagree with), but I'm not sure where we're going, I have a conversation active with my old boss to see what's changed. Being an "SQL is love, SQL is life" kind of guy, I really like what James is doing in terms of keeping data in a database.

What James and I are looking at is using CDash to populate all the data Spot would need to operate so that people can see "oh, here's what it looks like in the end," and go from there. I'll also be shilling Caliper for other purposes as well (it's a really great swiss-army-knife tool), so we might wind up converging somewhere in the middle.

jewatkins commented 4 years ago

caliper seems like it might tie nicely with kokkos-tools. Everything of importance is probably wrapped in some kokkos hook at this point.

jjellio commented 4 years ago

I think what David points out can be broken into pieces: 1) Data generation. 2) Data presentation

Data generation is via Caliper. I think it is fair to pair Caliper with Kokkos Profiling (KokkosP), and in a more basic sense, with Teuchos. Suppose that CTest drives the process; so long as the output of whatever runs is posted to stdout, there is nothing that would stop a tool from parsing it. It just means that for performance runs, we may pass a flag like '--enable-performance-tracking'. In many ways, the way Caliper is integrated into their code is very similar to how Teuchos timers are pervasive in Trilinos codes.

As for data presentation: if we can aggregate data in a sane way (SQL works fine), then any visualization tool can pull from it.

For now, I think we should KISS (Keep It Simple, Stupid!). If we can demonstrate that we can parse Teuchos timers in a scalable, automated fashion, then we will secure credibility for something more ambitious - e.g., parsing KokkosP space-time output, or adding Trilinos functionality to dump memory high-water marks, peak power, aggregate energy, flops per watt, etc.

I think if we can embrace a workflow that Sandians feel comfortable with, this will encourage product owners to use the tool. Irina has something functional now. If we can just learn from the issues I've run into (listed here), and the insights from Jerry and Irina, I think we can get something off the ground rather fast. (I am already working with Irina's data in my prototype).

bartlettroscoe commented 4 years ago

of equal importance is the metadata that allows you to classify and interpret the results you get.

Seems like getting ctest/cdash to support arbitrary key/value pairs should be doable. We can scope this out and add this to the FY20 Kitware contract (still being scoped out right now).

But if you are willing to read the test STDOUT to get the Teuchos Timers, could you not just read these properties printed at the top of the test output? For example, when using TRIBITS_ADD_ADVANCED_TEST(), you get a header for the test like the one shown here:

Advanced Test: PanzerDofMgr_scaling_test

Selected Test/CTest Properties:
  CATEGORIES = BASIC
  PROCESSORS = 4
  TIMEOUT    = DEFAULT

Note that "CATEGORIES" is not a CTest property. It is only defined and understood by TriBITS. Why not just extend TRIBITS_ADD_ADVANCED_TEST() to allow the user to define any arbitrary set of key/value pairs and then print them to STDOUT like this and then pull them down from CDash?

We cannot use CTest to run these tests in performance-oriented way

We can define a "test" anyway we want with ctest/tribits. Each test could do its own unique batch allocation and then wait blocking for it to run and complete. Ctest really already supports the idea of batch systems with the TIMEOUT_AFTER_MATCH test property. You just need to submit the job in wait mode. For example, you can just run a ctest -j100000 and ctest submit all of the batch jobs at once and then just wait for them to run in the queue and complete. Then using TIMEOUT_AFTER_MATCH allows for ctest to report accurate wall-clock times.

bartlettroscoe commented 4 years ago

Irina reports cdash as not being reliable.

Can someone provide some evidence for that? Our usage of CDash for ATDM shows it is pretty robust.

jjellio commented 4 years ago

@bartlettroscoe

Even if I chose to parse the advanced output, it still is not sufficient. (I want app devs to annotate their tests with whatever metadata they want - meaning I wouldn't know all of the key/values to pull out, unless it dumps something structured like JSON, where I could grab the whole thing.)

We can do this. I think Tribits can do it.

I figured out how to add arbitrary fields, but I have to do it manually: If I edit a CTest file (for example):

# editing inside a build directory!
./packages/kokkos/core/unit_test/CTestTestfile.cmake

I can add:

set_tests_properties(KokkosCore_UnitTest_Serial_MPI_1 PROPERTIES MEASUREMENT "num_nodes=1")
set_tests_properties(KokkosCore_UnitTest_Serial_MPI_1 PROPERTIES MEASUREMENT "mesh_name=HiFire1_L0")
set_tests_properties(KokkosCore_UnitTest_Serial_MPI_1 PROPERTIES MEASUREMENT "procs_per_node=4")

Now, if I run ctest -D NightlyTest -R KokkosCore_UnitTest_Serial_MPI_1

Then look at Testing/…/Test.xml

I can see these ‘measurements’ added

    <NamedMeasurement type="text/string" name="mesh_name">
            <Value> HiFire1_L0</Value>
    </NamedMeasurement>
    <NamedMeasurement type="text/string" name="num_nodes">
            <Value>1</Value>
    </NamedMeasurement>
    <NamedMeasurement type="text/string" name="procs_per_node">
            <Value>4</Value>
    </NamedMeasurement>
    <Measurement>

(edited to fix white space)

This at least shows I can coerce these annotations into the XML, which is probably exactly what CDash slurps.

Could Tribits' add_test be amended to take a list of key/value pairs and have them 'set as properties'? Btw, if I look at the CTest file, it has the test definition, followed by a line with the PROCESSORS property set... so somehow these property calls are being written to CTest files. I don't think you 'set them' inside a CMakeLists.txt file... I think they need to be written so CTest picks them up.
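
One possible shape for that amendment, sketched as a plain CMake helper rather than an actual TriBITS change (the function name is made up, not an existing API). Properties set at configure time do get written into the generated CTestTestfile.cmake (that is how PROCESSORS ends up there), but repeated MEASUREMENT values would likely clobber each other at that stage, so this sketch instead writes the set_tests_properties() calls into a small fragment that ctest includes when it reads the test file - the same context as the manual edit above:

# Hypothetical helper, not an existing TriBITS API.
function(attach_test_metadata TEST_NAME)
  set(_frag "${CMAKE_CURRENT_BINARY_DIR}/${TEST_NAME}_metadata.cmake")
  file(WRITE "${_frag}" "")
  foreach(KV IN LISTS ARGN)
    # Each key=value becomes a MEASUREMENT, mirroring the manual edit above,
    # and shows up as a <NamedMeasurement> in Test.xml / on CDash.
    file(APPEND "${_frag}"
      "set_tests_properties(${TEST_NAME} PROPERTIES MEASUREMENT \"${KV}\")\n")
  endforeach()
  # TEST_INCLUDE_FILES is an existing directory property; ctest includes the
  # listed files when it processes the tests in this directory.
  set_property(DIRECTORY APPEND PROPERTY TEST_INCLUDE_FILES "${_frag}")
endfunction()

attach_test_metadata(KokkosCore_UnitTest_Serial_MPI_1
  "num_nodes=1" "mesh_name=HiFire1_L0" "procs_per_node=4")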

jjellio commented 4 years ago

Ross, I pulled the Mutrino CDash for Haswell, and have unit test data going back to Dec 2018.

It would be nice if the CDash maintainers could keep the data as long as possible.

jjellio commented 4 years ago

Btw, my prior use of MEASUREMENT is a bit of a hack. It would be nice if CTest/CDash could make it more natural to attach key/value pairs to a test. (They aren't measurements in my case, but it looks like I can stash the data there, and from a user perspective, they would just pass a list of Key/values to Tribits' add_test)

DavidPoliakoff commented 4 years ago

@jewatkins : yep, I certainly thought so ;) https://github.com/kokkos/kokkos-tools/pull/62

ikalash commented 4 years ago

@bartlettroscoe : regarding CDash being unreliable - perhaps "unreliable" isn't the best word. My understanding is that the CDash data lives somewhere on a server, and we have a set of licenses for various CDash sites. A given license may have a quota as to how much data can be stored under it. I think some of the CDash sites are configured to throw away data that is N days, months, or years old. This is the case for the external CDash site that Albany uses, for example (https://my.cdash.org/viewSubProjects.php?project=Albany) - if you go back to May under "Calendar", the dashboard is blank even though we had tests running and posting there. We've also had cases where the disk for that same external site crashes and we lose all the data currently stored (John Perseo looked into this for us some time ago when it happened - you could ask him about the details), as well as the data not getting uploaded due to a quota (that we pay for) being exceeded - I think this is probably why the data gets purged after some time now (to avoid exceeding the quota). @gahansen may be able to comment more on this, being the one who set up the CDash sites for Albany and having dealt with some of these issues after I pointed them out.

It could be that the Trilinos CDash site supports more than the Albany one, and the issues haven't been encountered.

One thing to keep in mind is if we start adding performance testing for a lot of applications, there might be a lot more output to store on CDash than for unit tests, which may cause quotas to be exceeded faster than they would otherwise.

bartlettroscoe commented 4 years ago

When we talk about pulling data off of CDash, we are talking about the CDash API that returns JSON data, right? You can see where we do that with the TriBITS cdash_analyze_and_report.py tool:

bartlettroscoe commented 4 years ago

@ikalash said:

My understanding is that the CDash data lives somewhere on a server, and we have a set of licenses for various CDash sites. A given license may have a quota as to how much data can be stored under it.

We have our own CDash sites at SNL and we can accommodate a huge amount of data. CDash is free, so there is no reason not to host your own CDash site.

bartlettroscoe commented 4 years ago

It would be nice if the CDash maintainers could keep the data as long as possible.

I talked with Kitware about phased data retention strategies where some types of data would be kept for a long time (even years) while less critical but large data (like detailed STDOUT output) could be purged as you go back in time.

ikalash commented 4 years ago

@bartlettroscoe : I think the rationale for having an external site was to allow non-Sandians to see the output from some of the Albany nightlies, and also to push to the site (e.g. we have collaborators at RPI who had nightlies at some point). I feel like we did have an issue with our internal CDash site going down and losing data at some point too (we have an internal Albany CDash site as well) - @gahansen would be able to say more about that. In general the internal site is a lot more stable than the external one, I agree.

bartlettroscoe commented 4 years ago

I think the rationale for having an external site was to allow non-Sandians to see the output from some of the Albany nightlies, and also to push to the site (e.g. we have collaborators at RPI who had nightlies at some point).

@ikalash, our sites testing.sandia.gov/cdash and testing-dev.sandia.gov/cdash are 100% open to be read and pushed to. Try going to:

on your phone for example.

bartlettroscoe commented 4 years ago

Just to clarify, I see CDash as a convenient way to store and extract various types of data about our testing. But long-term refined/filtered performance data should likely go into another system where it will be very fast to query and can be stored for many years.

ikalash commented 4 years ago

@bartlettroscoe: yes I know. This is why I don't know why Glen set up the external one. There must have been a reason at the time.

jjellio commented 4 years ago

CDash is not intended as the data repository. CTest and CDash are the interface to the various data. Why CTest? Because workflows already use it. Why CDash? Because it encapsulates, transmits, and stores data in a predictable fashion.

My tool pulls from CDash, takes what I want, organizes it into SQL tables, and stores it on another server. The CDash data as provided is not rich enough to do application-specific scalability or performance tracking. My tool reads the 'CDash->test->output', which is effectively stdout, and from that it looks for the last occurrence of a Teuchos Timer table (or stacked timer tree).

David Poliakoff points out that this could be extended at a later time to parse other things (perhaps KokkosP or Caliper).

The visualization of the data parsed into the database is then a separate issue. I've stood something up that uses Grafana, but Spot from LLNL or any other web-based visualization tool could probably work. (I chose Grafana because it supports using SQL queries to filter data)

To make everything work, there are missing pieces.

  1. Runs lack app-specific annotations (mesh name, problem type, science performed...) - the kind of things you need to distinguish identically parallelized runs from each other.
  2. Runs lack sufficient runtime parallel-decomposition info. Knowing how many MPI processes there were is insufficient. You need info such as node counts, devices used per process, threads per process, and cores per process (at minimum).
  3. CTest cannot drive performance runs as-is. It does not annotate data sufficiently, and more importantly, with large runs you want the 'test' decoupled from the 'build'. A 1024-node job may sit in a queue for days. A logical approach is installation testing. And if installation testing is used, will it append all needed info, e.g., what is the Git SHA used to build? Compilers, etc.?

Assuming the pieces above fall into place, then various app teams can express a 'performance test', install the app, and have CTest run it correctly for meaningful tracking. (then sometime later, they do another build, and the tests repeat.)

When a test is run is not the key information; it is what version of the code ran. To this end, installation needs to ensure that a CTest + CDash submission can happen, and that suitable data is associated with a run.

bartlettroscoe commented 4 years ago

If it takes days to run the performance tests, then you would just have to run once a week or in a loop where you pull, build, invoke tests (blocking until they all run), submit to CDash, and then start again (however long that takes)

You don't need to install things in order to know the version info on CDash. You can get that directly from CDash, even for multi-repo projects like SPARC (as it sends the SPARCRepoVersion.txt file to CDash). I can demonstrate.

In any case, it looks like we are about out of Kitware funding for FY19 so this will have to wait until FY20 (unless we get a small bump up in funding for FY19 to start talking about this). But it is good to document this stuff and get this into the backlog for the FY20 contract.

jjellio commented 4 years ago

It doesn't take days to run a performance test; it can take days to get an allocation to run the 20-minute performance test.

No one in their right mind is going to queue Build + Run on Trinity with 1024 nodes for 12 hours, only to build the app for 11 hours (serially) and then spend 20 minutes doing actual 1024-node runs.

The prior point is why I want CTest to support the idea of being run outside of a build. If I do that, then how would CDash behave? There is no part of CTest that requires it be embedded inside a build. It seems like so long as you can write CTest test files, you can execute ctest on that directory.

Anyway - I think we are talking past each other in many areas. It would probably be easier for me show you how I have this setup.

bartlettroscoe commented 4 years ago

CTest can run outside of a build. You can create a dummy cmake project and just call add_test() to run anything you want. You can do that with TriBITS too. You can have a dummy package with nothing but tests that run (but don't build anything).
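
A minimal sketch of such a test-only project (the project name, test name, binary path, and arguments are all made up). Nothing is compiled; each test just wraps an already-installed executable, and ctest can still be pointed at it in dashboard mode (e.g., ctest -D Experimental) when a CTestConfig.cmake is present:

# CMakeLists.txt for a hypothetical build-nothing, test-only project
cmake_minimum_required(VERSION 3.12)
project(TrilinosPerfTests LANGUAGES NONE)  # no compilers needed, nothing is built
include(CTest)                             # enables testing and the dashboard steps

# Wrap an already-installed binary (path and arguments are assumptions).
add_test(NAME MiniEM_perf_strong_scaling
         COMMAND /path/to/install/bin/mini_em_driver --some-performance-case)
set_tests_properties(MiniEM_perf_strong_scaling PROPERTIES
  PROCESSORS 64
  LABELS "PERFORMANCE")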

jjellio commented 4 years ago

What about CDash though? Could you post to a dashboard, if you didn't make a build? (Looking at the JSON, it seems their stuff relies on associating a 'test' with a 'build' <-- Which is great!)

Let's make a concrete example:

Recast into this issue. The bolded spots are this issue.

The workflow is actually not that different from what we have now... which is my point. If we can enable this workflow to provide performance testing, then we get an extensible way to drive tracking that allows multiple customers to buy in.

Details such as what data we parse are not listed, because those are things that performance-tracking people could work out with the apps. The value proposition is that apps define their performance tests, our existing workflows run/aggregate that data, and a small group of people work to maintain the glue that holds it together.

jjellio commented 4 years ago

And it's worth pointing out that the arbitrary key/value data seems doable without any significant changes. We can attach them as 'measurements', and hopefully CDash will store them.

The harder problem is how to define tests that need various numbers of nodes, and have CTest know how to run them.

bartlettroscoe commented 4 years ago

The harder problem is how to define tests that need various numbers of nodes, and have CTest know how to run them.

We should be able to define ctest tests by asking ctest to run a script which submits a blocking batch job that defines all of this info. Since we can write the script that ctest runs any way we want, we should be able to define a system that does that without any changes to CTest. We can even provide TriBITS support for that if it can be implemented generically (with plugin support for the various backend batch systems).
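
An illustrative sketch of that idea (neither the function nor its arguments exist in TriBITS, and only a SLURM-shaped backend is shown; a real version would dispatch on the batch system): the wrapper generates a small driver script that submits the job in wait (blocking) mode and then forwards the job's output so it lands in the ctest-captured STDOUT.

# Hypothetical wrapper, not existing TriBITS code; SLURM only for brevity.
function(add_batch_perf_test NAME NUM_NODES PROCS_PER_NODE EXE)
  set(_out    "${CMAKE_CURRENT_BINARY_DIR}/${NAME}.slurm.out")
  set(_script "${CMAKE_CURRENT_BINARY_DIR}/${NAME}_submit.sh")
  file(WRITE "${_script}"
"#!/bin/bash
# Submit in blocking mode; sbatch --wait returns when the job completes.
sbatch --wait --nodes=${NUM_NODES} --ntasks-per-node=${PROCS_PER_NODE} \\
       --output=${_out} --wrap \"srun ${EXE}\"
rc=\$?
# Forward the job output so timer tables end up in the test's STDOUT.
cat ${_out}
exit \$rc
")
  add_test(NAME ${NAME} COMMAND bash "${_script}")
  set_tests_properties(${NAME} PROPERTIES TIMEOUT 172800)  # survive the queue wait
endfunction()

add_batch_perf_test(SPARC_perf_128node 128 4 "/path/to/sparc -i sparc.inp")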

jjellio commented 4 years ago

Ross, the metadata I mention here should be used to define that script. My point is that the metadata is useful both for running and for tracking the run.

If the script generates the run lines/batch submissions, then users aren't doing that (good). And it ensures they are done consistently. Logging the values into 'measurements' is just a nice way to have them annotated in the CDash JSON.

The 'properties' fields on CTest tests are rather restrictive. You can't add arbitrary properties. But you can add 'measurements'.

If those properties were a bit more flexible, then we could add properties such as 'processors', which means something to Kitware and Tribits, but also things like 'cores_per_proc' or 'mesh_name' - metadata that would be useful for analysts later on.

I think this is 100% doable w/ CTest. I don't think it would be robust to ask users to know this level of detail, nor to expect them to do a bunch of 'extra' things.

The 'extra' things I want to ask of them are adding some metadata: things like 'num_nodes', 'num_mpi_procs', 'cores_per_proc', 'devices_per_proc', 'threads_per_process', 'thread_binding' (think OMP_PLACES, core or thread, etc...).

If CTest can support these types of 'properties', extending the existing hard-coded, specific key/values, then we could build more advanced systems using Kitware's product.

If the techniques are thought-out properly, we can end up with some fundamental tools that we can use to compose the type of things I suggest here. (such as generating clever batch scripts, and launching jobs in a performance-oriented way)

This is all very exciting. I can see how this can evolve into a system that product owners at SNL will actually use, and it will distribute the load of implementing/using the features to the people best suited: apps defining their performance tests and annotating them, system-type people like me supporting the tools that make the data accessible, and then apps able to consume and visualize the data independently of people like me.

bartlettroscoe commented 4 years ago

Things like 'num_nodes', 'num_mpi_procs', 'cores_per_proc', 'devices_per_proc', 'threads_per_process', 'thread_binding' (think OMP_PLACES, core or thread, etc...)

We can provide that info for each system as part of the ATDM Trilinos configuration system. There are just a handful of systems we care about, so that is easy.

The only thing that should vary test-by-test is 'threads_per_process'. That is why in #2422, I proposed a NUM_THREADS_PER_PROC <numThreadsPerProc> argument for TRIBITS_ADD[_ADVANCED]_TEST() (which is meaningless for GPUs but okay).

More details to work out but if you are willing to read and parse the STDOUT from CDash (which you have to do to get Teuchos timer info), I think we can do all of this without any changes to CTest or CDash (but using "Measurements" is okay too).

jjellio commented 4 years ago

'threads_per_process' is one variable; num_nodes, num_gpus, and cores_per_process are others.

It would make a whole lotta sense to have CTest accept these kinds of properties (which are rather arbitrary) - which is why my request for an interface to annotate a job with key/values makes sense.

Moreover,

CTest should expose the ability to list ENV variables the user wants exported.

E.g.,

add_test(...
...
ENV "key=val;key=val...."
)

That is, let the user do whatever they want. Batch systems and mpirun support defining ENVs on the command line. CTest can simply export the list of key/values inside the magic wrapper script.

And, if that info is annotated as a 'property', then it is easy to parse from the JSON.
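
Worth noting: plain CTest already has an ENVIRONMENT test property that covers the exporting half of this (the test name and values below are just illustrative):

# ENVIRONMENT is an existing CTest test property; the test name and the
# variable values are assumptions for illustration.
set_tests_properties(SPARC_perf_8node PROPERTIES
  ENVIRONMENT "OMP_NUM_THREADS=8;OMP_PLACES=cores;OMP_PROC_BIND=spread")

This sets the variables in the environment of the test command itself (e.g., the srun/mpirun invocation); the launcher still has to forward them to the compute nodes, and nothing here records them as queryable metadata - which is the gap being described above.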

jjellio commented 4 years ago

It would also be advantageous if these changes happened at the CTest/CMake level (e.g., by Kitware). That avoids a dependence on Tribits (which isn't really needed).

bartlettroscoe commented 4 years ago

That avoids a dependence on Tribits (which isn't really needed)

But something has to write the batch driver scripts for the particular system (e.g. SLURM, LSF, etc.). CMake/CTest is not going to do that for you.

jhux2 commented 4 years ago

It would also be advantageous if these changes happened at the CTest/CMake level (e.g., by Kitware). That avoids a dependence on Tribits (which isn't really needed)

There are developers on this thread who work on projects that don’t use Tribits. I think that is why @jjellio is advocating for ctest to support this.


bartlettroscoe commented 4 years ago

FYI: I had a long conversation with @jjellio about this yesterday. From that conversation, it does not look like any changes need to be made to CTest or CDash to support what is needed. I think we can do all of this in TriBITS (or a reusable CMake module in TriBITS that can be used without full adoption of TriBITS).

jhux2 commented 4 years ago

or a reusable CMake module in TriBITS that can be used without full adoption of TriBITS

@bartlettroscoe Thanks! This is important for projects that use cmake but not TriBITS.

bartlettroscoe commented 4 years ago

@jjellio,

Responding to your email below following up from our meeting from 8/14/2019 ...

Yes, please invite me to the Sept. 10 meeting and let me know if you have any questions or any ideas about how TriBITS, CTest, and/or CDash can be used (or modified) to support this work. (I am adding a note on this to the FY20 work. We will do what we can with the funding we have.)


From: Elliott, James John
Sent: Thursday, August 15, 2019 3:02 PM
To: Bartlett, Roscoe A
Subject: Followup from discussion on performance tracking

Ross, thanks for the chat. I got several useful bits out of it. Below are some action items I came up with.

Action Items:

1) James: use Tribits set properties to add a measurement and see if that works (will do with Chris Siefert's test setup)

2) James: make sure Ross is on the September 10th meeting.

3) Ross: consider the general idea - flaws/weaknesses/suggestions. Open-ended item; I'd like to keep you in the loop as this progresses, and ultimately present something at TUG.

4) Ross: think about where/when Kitware would be helpful - e.g., LDAP Dashboards.

5) James: look at the 'notes' JSON. Think about how to add URLs for version comparisons and how/which data to store.

github-actions[bot] commented 2 years ago

This issue has had no activity for 365 days and is marked for closure. It will be closed after an additional 30 days of inactivity. If you would like to keep this issue open please add a comment and/or remove the MARKED_FOR_CLOSURE label. If this issue should be kept open even with no activity beyond the time limits you can add the label DO_NOT_AUTOCLOSE. If it is ok for this issue to be closed, feel free to go ahead and close it. Please do not add any comments or change any labels or otherwise touch this issue unless your intention is to reset the inactivity counter for an additional year.

github-actions[bot] commented 2 years ago

This issue was closed due to inactivity for 395 days.