2012 Standards Discussion
Revision as of 18:32, 25 July 2012
- The following is the agenda of work posted under business for canSAS-2012. Please add comments and expand on details here:
- Purpose and goals: Intercomparison of data measured on the same sample with different instruments and different techniques (SAXS, SANS, light scattering etc.) can prove valuable in a number of ways. In particular it aids understanding of details of the experimental methods and it can help assess reliability. In a similar way, looking at results of data reduction or analysis generated with different software can provide valuable information about performance and verification of methodology. Specifically these activities should:
- Provide Quality Assurance/Quality Control,
- Improve (reduce) uncertainties of SAS measurements in general,
- Help each facility continuously improve performance and quality of data.
- We will discuss what types of tests are interesting/important:
- Beam intensity standards - there are several different ways to quantify this
- Standards to test resolution
- Absolute intensity calibrations,
- Materials for Q calibration (an illustrative sketch follows this list),
- etc.
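As a concrete illustration of the Q-calibration item above (a minimal sketch only: the choice of silver behenate, its d-spacing of about 58.38 Å and the "measured" peak positions are assumptions made for this example, not values agreed by the group), the expected peak positions q_n = 2πn/d can be compared with measured ones to check or correct the Q axis of an instrument:

```python
import numpy as np

# Illustrative only: silver behenate is one commonly used Q-calibration
# material, with a lamellar d-spacing of about 58.38 Angstrom.
D_SPACING = 58.38  # Angstrom

def expected_peaks(d_spacing, orders=range(1, 4)):
    """Expected peak positions q_n = 2*pi*n/d for the first few orders."""
    return np.array([2.0 * np.pi * n / d_spacing for n in orders])

def q_scale_factor(measured_q, d_spacing=D_SPACING):
    """Least-squares scale factor mapping measured peak positions onto the
    expected ones; a value far from 1 signals a Q-calibration error."""
    expected = expected_peaks(d_spacing, orders=range(1, len(measured_q) + 1))
    return float(np.dot(expected, measured_q) / np.dot(measured_q, measured_q))

# Hypothetical measured peak positions (1/Angstrom) from an instrument whose
# Q axis is slightly mis-calibrated.
measured = np.array([0.1089, 0.2176, 0.3264])
print("expected:", expected_peaks(D_SPACING))     # ~0.1076, 0.2152, 0.3229
print("scale factor:", q_scale_factor(measured))  # ~0.988, i.e. measured q ~1.2% high
```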
- Standards are not just measurements:
- Software comparison - do we derive the same results from different computer programs?
- Analysis methods may be similar or different (e.g. modelling versus transforms versus calculation of invariants; an illustrative invariant calculation follows this list)
- Different procedures use different approximations - are these documented?
- Approximations, rather than the most elaborate calculations, may sometimes be more useful - under what circumstances?
- How do analysis programs interpret data? What do they assume if data (such as uncertainty or resolution) is missing?
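To make the point about invariants and undocumented approximations concrete, here is a minimal sketch (assuming simple trapezoidal integration of a made-up model curve; real analysis programs differ in how they extrapolate beyond the measured Q range, which is exactly the kind of approximation that should be documented). The Porod invariant is Q* = ∫ q² I(q) dq, and two programs that truncate or extrapolate the integral differently will not return the same number:

```python
import numpy as np

def porod_invariant(q, intensity):
    """Trapezoidal estimate of the invariant Q* = integral of q^2 I(q) dq
    over the measured range only (no low-q or high-q extrapolation)."""
    return np.trapz(intensity * q**2, q)

# Hypothetical example: a Guinier-like model curve plus a flat background,
# used only to show how the truncated integral depends on the Q range used.
q = np.linspace(1e-3, 0.5, 2000)                          # 1/Angstrom
intensity = 100.0 * np.exp(-(q * 50.0)**2 / 3.0) + 0.01   # model I(q) + background

full = porod_invariant(q, intensity)
truncated = porod_invariant(q[q < 0.2], intensity[q < 0.2])
print(f"invariant over full measured range: {full:.4g}")
print(f"invariant truncated at q = 0.2:     {truncated:.4g}")
# The difference between these two numbers is one of the "approximations"
# that should be documented when comparing software.
```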
- Some other related issues:
- Inelastic scattering,
- Multiple scattering,
- Wavelength contamination,
- Grazing incidence scattering - standards,
- Detector efficiencies at different wavelengths,
- Limits in signal to noise - how weak a signal can be reliably extracted (a counting-statistics sketch follows this list),
- etc.
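As a rough illustration of the signal-to-noise item (a sketch using pure Poisson counting statistics only; the count rates are invented for the example, and real detection limits also involve systematic effects such as background-subtraction errors), the significance of a weak excess over background grows only with the square root of the counting time:

```python
import math

def significance(signal_rate, background_rate, time_s):
    """Detection significance (in sigma) of an excess count rate above a
    background, assuming pure Poisson statistics: the net counts are S*t and
    their uncertainty is sqrt((S+B)*t + B*t) when the background is measured
    for an equal time."""
    net = signal_rate * time_s
    sigma = math.sqrt((signal_rate + background_rate) * time_s
                      + background_rate * time_s)
    return net / sigma

# Hypothetical numbers: an excess of 0.05 counts/s over a 2 counts/s background.
for t in (60, 600, 6000):
    print(f"t = {t:5d} s -> {significance(0.05, 2.0, t):.2f} sigma")
# Roughly 0.2, 0.6 and 1.9 sigma: even 100 minutes of counting does not give
# a clear detection of this particular excess.
```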
- Outcomes needed are:
- A written plan to sustain long term effort in this area
- This should describe how to seed, co-ordinate and publicise “ad-hoc” projects,
- Assess how frequently such exercises can be undertaken,
- Define good ways to disseminate/share results. This will include “advertising” projects and using them as input for other activities.
- We should aim to define a list of 2 or 3 projects for work in the near term. This should include a plan of action and participants for each.
- We should have a plan for presentation at SAS 2012. (This might just be an announcement of the plan, to see who wants to participate.)
- ARR suggests: we might discuss how people will be able to meet the guidelines for publication of SAS data from biological macromolecules described in the article by Jacques et al. There is an accompanying editorial. Are there ideas for modifications to these guidelines? (D. A. Jacques, J. M. Guss, D. I. Svergun and J. Trewhella, 'Publication guidelines for structural modelling of small-angle scattering data from biomolecules in solution', Acta Cryst. (2012), D68, 620-626. doi:10.1107/S0907444912012073)
- ARR: What have we learnt and what more can we gain from the Glassy Carbon Round Robin and polystyrene Latex Round Robin exercises?
PDB: I think these have shown that the current agreement is, as expected, within 10 or 20% in most cases (which is all the technique really claims to be good to if you read the old papers). I think the real opportunity now is to see if we can go beyond that and figure out how to get agreement regularly at the 5% level. That probably means the community will have to understand a lot of the subtler issues that have been swept under the rug to date; the work ranges from instrument hardware improvements to analysis software improvements -- my 2c worth :-)