These are notes from the Task and Activity Group planning meeting held at MICRO 45 in Vancouver, British Columbia, Canada. Please see the CSA workshop web site for more information: http://csa.cs.pitt.edu

12/02/12, Vancouver, BC
30+ attendees (split roughly 50/50 between new and previous participants).

The goal was to review the earlier CSA workshop and SC BOF session, and then to start the discussion for two Task and Activity Groups (TAGs): community building and infrastructure (integrating the tools).

** Community Building TAG Notes **

Several questions/issues were raised, including:

(1) Attendees felt there was value in building the community and in establishing standards for tools and experimental results:
* interest in evaluating which tools are most appropriate for a particular type of study;
* desire for an infrastructure where algorithms can be compared;
* need to determine how to make infrastructure components reusable -- best practices for APIs are needed;
* need to determine standards on transparency of simulation results, and similarly for reproducibility of simulations;
* need to distinguish between experimental (non-production) code and production code that could be integrated into the tool source tree.

(2) An important issue is how to provide incentives for meeting standards. The group felt that trying to push the community by force will not work; rather, "if you build it they will come": the system will be attractive and used (and useful) if the quality of the repository and tools is high. An alternative view held that the effort must get conference program committees and journal steering committees to buy in: code must be made available if a paper is accepted (after a certain period of time), or submitted along with the paper, which builds the prior art into the system.

(3) The repository needs to provide an easy way to access the code behind the results published in papers. There was concern that a researcher might lose a competitive edge by publishing in a repository. That is, if a researcher has a simulator but doesn't share it, the researcher can easily extend it and compare against it; if the simulator is shared, that competitive advantage is lost.

(4) Checking experiments into the repository, distinguishing between experimental code and production code, and privacy issues were brought up again. These are clearly important (critical?) aspects. On the other hand, there was some interest in "policing" the community on standards, which brings its own difficulties: determining which simulators are the "best" to use for comparisons, which techniques are "fundamental" for comparisons, which level of simulation accuracy is appropriate for a particular experiment, etc.

** Infrastructure TAG Notes **

The infrastructure, i.e., the interfaces among tools, should be completed in stages: don't attempt to bite off everything at once. We need a mechanism to post patches common to many users, experiments, and workloads (a point reinforced by multiple people); a minimal sketch of what such a patch record could look like is shown below.
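To make the patch-posting idea concrete, the following is a minimal sketch in Python of a shared patch record and registry. The PatchRecord/PatchRegistry names, the field layout, and the example values are assumptions for illustration only, not a design the group agreed on.

    # Minimal sketch (illustrative only): a shared "patch record" a repository
    # could store so that patches common to many users, experiments, and
    # workloads are discoverable. Field names are assumptions, not a standard.
    from dataclasses import dataclass, field
    from hashlib import sha256
    from typing import List


    @dataclass
    class PatchRecord:
        patch_id: str                   # hypothetical identifier for the patch
        simulator: str                  # tool the patch applies to
        simulator_version: str          # version the patch was tested against
        description: str                # what the patch changes and why
        workloads: List[str] = field(default_factory=list)  # affected workloads
        diff_sha256: str = ""           # checksum of the diff for integrity


    class PatchRegistry:
        """In-memory stand-in for a repository-side patch index."""

        def __init__(self) -> None:
            self._records: List[PatchRecord] = []

        def register(self, record: PatchRecord, diff_text: str) -> None:
            # Record the checksum so users can verify the diff they apply.
            record.diff_sha256 = sha256(diff_text.encode()).hexdigest()
            self._records.append(record)

        def find(self, simulator: str, version: str) -> List[PatchRecord]:
            return [r for r in self._records
                    if r.simulator == simulator and r.simulator_version == version]


    if __name__ == "__main__":
        registry = PatchRegistry()
        registry.register(
            PatchRecord("example-patch-001", "example-sim", "1.2",
                        "Illustrative fix shared across several experiments",
                        workloads=["workload-A", "workload-B"]),
            diff_text="--- a/cache.cc\n+++ b/cache.cc\n",
        )
        print(registry.find("example-sim", "1.2"))

The point of the sketch is only that a patch is tied to a specific simulator version and set of workloads, and carries a checksum so others can verify what they apply.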
The questions/issues raised in the Infrastructure discussion include:

(1) What about systems that require licenses -- how do we deal with those?

(2) Type and quality of simulators/tools:
* classes of simulators should be enumerated, and fidelity of simulation should be a metric (e.g., cycle-accurate to functional, with various classes in between; distinguish between modules operating at different fidelities);
* validation of a good simulation methodology is important (benchmarking a simulator against a trusted model);
* full interoperability of all simulators is not achievable.

(3) How does an open source model impact the infrastructure?
* Industry simulators may not be open; how do we handle those?
* Separate the simulator and the model?
* Should publications be required to use an open simulation model? (Again, policy issues arise.)
* Force industry papers to use open models? Compare the proprietary simulator with an open simulator to ensure fidelity is transferred?

(4) What can the repository do to help the infrastructure?
* provide details on experiments; otherwise one can't figure out how to compare with prior art;
* reduce the barrier to entry for simulation work;
* provide rewards for people who build tools (rewards which are not currently sufficient, although some disagreed);
* provide standard models that match published art, allowing oversight of simulators;
* provide a standard set of tools (do standardized tools limit academic freedom, or provide solid ground to build on?).

(5) How does validation fit in?
* Suggestion to validate standard simulators against actual designs (or silicon). Industry suggests this; academics ask whether it is truly necessary.
* Correlating results to "gold standards" raises the question of what level of agreement is appropriate.
* Suggestion: create a TAG to decide what is good or bad for gold standards and validation (maybe this is to be solved down the line).

** Action items identified at the meeting **

1. Build rev0 of a repository.
2. Provide an initial seed of tools in this repository.
3. Continue the discussion, but also take tangible steps forward (the repository). Aim for a repository release that works (of what? a virtual machine?), standardize releases, ensure rebuildability, and address legal issues. A minimal sketch of a release manifest that supports rebuildability checks follows.
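To make the "standardize releases, rebuildability" action item concrete, the following is a minimal sketch in Python of a release manifest with checksum verification. The manifest format and the helper functions are assumptions for illustration only, not a decided release process.

    # Minimal sketch (illustrative only) of a release manifest for "rev0" of
    # the repository: it records what a release contains and the checksums
    # needed to verify that a user can rebuild the same artifacts.
    import hashlib
    import json
    from pathlib import Path


    def file_sha256(path: Path) -> str:
        """Checksum a file so a rebuilt artifact can be compared to the release."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()


    def write_manifest(release_dir: Path, release_name: str, out_file: Path) -> None:
        """Record every file in the release tree along with its checksum."""
        manifest = {
            "release": release_name,
            "files": {str(p.relative_to(release_dir)): file_sha256(p)
                      for p in sorted(release_dir.rglob("*")) if p.is_file()},
        }
        out_file.write_text(json.dumps(manifest, indent=2))


    def verify_rebuild(rebuild_dir: Path, manifest_file: Path) -> bool:
        """Return True if a local rebuild matches the released checksums."""
        manifest = json.loads(manifest_file.read_text())
        return all(
            (rebuild_dir / name).is_file()
            and file_sha256(rebuild_dir / name) == digest
            for name, digest in manifest["files"].items()
        )

A packaging script could call write_manifest() when cutting rev0, and users could call verify_rebuild() after building from source, which directly exercises the rebuildability requirement named in the action items.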