AnalysisGroupD: Difference between revisions
From canSAS
Latest revision as of 15:38, 14 June 2017
NOTES FOR GROUP D
Tim Snow Leading
Modelling 2D data
- Everyone agreed that modelling 2D data has the potential to deliver more information than analysing 1D data; however, routine analysis of 2D data is not current practice
- Fitting 2D data requires careful thought: if downsampling is required, fine features could be missed, but conversely, fitting a full Pilatus 2M image will take a long time
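To illustrate the downsampling trade-off above, here is a minimal sketch (the function name and the 4x4 bin factor are illustrative choices, not anything agreed by the group) of block-binning a detector image before a 2D fit:

```python
import numpy as np

def bin_2d(image, factor=4):
    """Downsample a detector image by summing square blocks of pixels.

    Binning reduces the number of points passed to a 2D fit, at the
    cost of resolution: features narrower than the bin size can be
    smeared out or lost entirely.
    """
    h, w = image.shape
    # Trim edges so the image divides evenly into factor x factor blocks
    h, w = h - h % factor, w - w % factor
    trimmed = image[:h, :w]
    return trimmed.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

# A Pilatus 2M frame is 1679 x 1475 pixels (~2.5 million points);
# 4x4 binning cuts that to ~154,000 points for fitting.
frame = np.ones((1679, 1475))
binned = bin_2d(frame, factor=4)
print(binned.shape)  # (419, 368)
```

Summing (rather than averaging) the blocks preserves total counts, which keeps Poisson error estimates straightforward; whether binning is acceptable at all depends on the sharpest feature in the pattern.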
Simulations and evolving algorithms
- Projects coupling simulation with data fitting exist, although these are typically in the earlier stages of development (e.g. SASSIE)
- The next generation of algorithms is anticipated to revolve around either optimised simulation fitting or machine learning
- Both approaches require a computing resource beyond a single desktop/laptop
- Collaborations with institutional or national HPC facilities are likely to be required
- Additionally, links to commercial HPC/machine learning could be used for data fitting
Automation
- Automation of data acquisition requires care, as many variables change dynamically
  - Automatic alignment of some sample environments is challenging for human operators, let alone algorithms
  - Deducing when 'good' data has been obtained is tricky, especially if the sample is liable to beam damage
  - Determining an experimental endpoint could prove tricky too
- Automation of static scans is a good starting point
- Automatic data reduction should follow on from DAQ; fitting, however, will likely require human intervention
- Automatic creation and archival of sample meta-data would prove highly useful for machine learning
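One way the endpoint question above could be approached is a counting-statistics threshold. The sketch below is purely hypothetical (the function name, regions, and 1% target are assumptions, and it ignores beam-damage monitoring entirely): stop acquiring once the Poisson relative error in every region of interest is small enough.

```python
import numpy as np

def counts_sufficient(counts, target_rel_error=0.01):
    """Hypothetical endpoint check: report whether Poisson counting
    statistics have reached a target relative error in every region
    of interest.

    For Poisson counts N the relative error is 1/sqrt(N), so a 1%
    target needs at least 10,000 counts per region.
    """
    counts = np.asarray(counts, dtype=float)
    rel_error = 1.0 / np.sqrt(np.maximum(counts, 1.0))
    return bool(np.all(rel_error <= target_rel_error))

# e.g. integrated counts in three q-bands of a running acquisition
print(counts_sufficient([25_000, 12_000, 8_000]))   # False: 8,000 counts -> ~1.1% error
print(counts_sufficient([25_000, 12_000, 11_000]))  # True: all regions at or below 1%
```

A real implementation would also need checks that the sample has not degraded during the scan, which is the harder half of the problem noted above.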
Collaborative funding strategies
- Scientific funding is, as ever, extremely tight (and under increasingly close scrutiny)
- Combining efforts and collaborating is likely to give governments and funding bodies good reason to fund projects, as facilities and institutions can point to shared (i.e. free(ish)) resources and get 'more bang for your buck'
- Finding common areas would be a positive step
  - Data analysis
  - Data fitting
  - Identification of data quality
  - Ways to automate different SAS measurements/techniques
  - Code development
  - Code maintenance/archiving
  - Experimental meta-data
  - Computing resource sharing (e.g. HPC)
- Events such as canSAS are a good way to start such links
- Attending conferences, nationally or internationally, is a good way to form links with institutions, companies and funding bodies as well as other researchers