Assessing learning in informal science programs

Assessing informal (out-of-school) science programs

Informal science programs can be found in a multitude of places, including but not limited to universities, museums, parks, zoos, aquariums, schoolyards, and farms.

Learning is a personal, cumulative process that unfolds over time. The goal of informal science programs is rarely to teach something specific to the masses, but rather to adapt the content to each individual, helping to expand their knowledge and ignite their curiosity.

The process required to achieve learning in informal, free-choice environments was summarized by the National Research Council’s (NRC) Committee on Learning Science in Informal Environments (Bell et al. 2009) in the following six strands:

Help learners to

  1. develop an interest in science;
  2. understand science knowledge;
  3. engage in scientific reasoning;
  4. develop an appreciation of science;
  5. participate in scientific activities; and
  6. identify as someone who knows about, uses, and perhaps even contributes to science.

Thorough assessment of informal learning programs requires longitudinal studies to capture the cumulative nature of learning (Rennie 2007). This requirement, combined with the open-ended objectives and short duration of most programs, makes informal learning programs difficult to assess.

Forms of assessment

Formative vs. Summative (Cromack and Savenye 2007)

Formative assessment of learning is an ongoing process. The goal is to improve a class (program, etc.) as it is happening. For example, in-class clickers or mid-quarter feedback surveys can be used to assess how well students are grasping concepts and enjoying the experience, and the results can then be used to improve the class.

Summative assessment of learning, on the other hand, happens after the fact. The goal is to assess how successfully a class (program, etc.) was taught. For example, final grades can be used to determine how much students have learned. While summative assessments cannot be used to improve classes as they are happening, the results of these assessments can be used to improve future classes.

Formative assessments are very valuable tools for improving teaching practices. However, they also require a lot of work. Teaching classes dynamically requires teachers to continually modify their lesson plans in response to student feedback and performance.

Direct vs. Indirect (Cromack and Savenye 2007)

Direct assessments measure observable evidence of learning, such as grades and class performance. Indirect assessments, on the other hand, rely on self-reflection by students, for example, reporting changes in attitude, behavior, or confidence.

Formative and summative assessments can be either direct or indirect.

Considering both the effort required of teachers and the benefits gained by students, the most reasonable approach to assessment is a balance of all of these types.

Assessment methods for science programs

A stake and tape measure mark a plot in a tree study at Jasper Ridge Biological Preserve.

Currently, some common methods used to assess informal science programs include (Bell et al. 2009):

  • self-reported data (indirect, formative or summative);
  • follow-up emails or phone calls (indirect, summative); and
  • comparison of pre- and post-program knowledge (or behavior) surveys (direct, summative).

A wide range of comparison surveys can be implemented for any given study. Several examples, ranging from most to least difficult to implement, include (Friedman et al. 2008):

  • Randomized controlled trial: this method compares pre- and post-lesson test results for two groups of participants. One group is given the program materials, which, if the program is actually effective, should enable them to answer the test questions; the other ‘control’ group is not offered those materials. Although generally effective at assessing programs, this method is expensive and can be taxing for participants.
  • Randomized post-only design: this technique is similar to the randomized controlled trial, except it involves only a post-lesson test. As a result, it is less expensive and less taxing for participants. However, this method cannot account for differences in participants’ baseline knowledge.
  • Comparisons without control groups: comparisons such as pre-post tests can still offer some insight into a program’s effectiveness without a control group (see the sketch following this list). These comparisons can be drawn from surveys, direct questions, card-sorting tasks, concept maps, etc.
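
To make the arithmetic behind these comparisons concrete, here is a minimal Python sketch of a pre-post gain calculation for a hypothetical randomized controlled trial. All scores and group sizes are invented for illustration; a real evaluation would also test whether the difference between groups is statistically significant.

    # Hypothetical (pre, post) test scores out of 100 for two randomized groups.
    # The numbers are illustrative only, not data from any real study.
    treatment = [(55, 78), (62, 80), (48, 70), (60, 74)]  # received program materials
    control = [(57, 60), (60, 63), (50, 49), (58, 61)]    # not offered the materials

    def mean_gain(pairs):
        """Average post-minus-pre score change across participants."""
        return sum(post - pre for pre, post in pairs) / len(pairs)

    print(f"treatment gain: {mean_gain(treatment):+.1f} points")
    print(f"control gain:   {mean_gain(control):+.1f} points")
    # If the program is effective, the treatment gain should clearly exceed the
    # control gain. A post-only design would compare only the post scores, which
    # is why it cannot correct for differences in baseline knowledge.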

When performing assessments, especially in informal environments, it is important to consider the diversity of the audience – assessments should be inclusive. These programs often attract people of all ages, ethnicities, and cultural backgrounds, some with learning disabilities, some coming with no prior science knowledge, and others coming with science PhDs. This makes informal programs particularly hard to assess, but it also makes them especially exciting.

Methods like self-reported data, follow-up questions, and comparison analyses are effective because they can document learning over time, illustrating that different people learn different things and often providing evidence that learning continues beyond the institutional visit.

However, it is important to note that the above methods, when used on their own, are subject to bias because they involve direct participant contact. To improve the assessment, it's best to include some form of non-interventionist assessment, such as (Bell et al. 2009):

  • discourse analysis – analysis of visitors’ conversations during a museum visit (indirect, formative or summative); and
  • tracking studies (indirect, summative).

Here's an example of a tracking study: Ross and Gillespie (2009) carried out a study to assess the learning of conservation behaviors at a modern immersive exhibit at the Lincoln Park Zoo. Under the assumption that learning occurs at ‘successful’ exhibits – those that engage audiences and hold their attention for some time – they tracked visitors to measure engagement. Data were collected on 338 visitors over a 9-week period in the fall of 2003. Visitors were considered ‘engaged’ if they faced an exhibit and remained stopped within 6 feet of it for more than 2 seconds. An exhibit’s ‘holding power’ was calculated as the mean time that visitors were scored as ‘engaged’ with it.
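
As a toy illustration of that ‘holding power’ metric, the Python sketch below scores hypothetical visitor observations against the study’s 6-foot and 2-second thresholds and averages the engaged times. The data structure and numbers are invented; they are not from the actual study.

    # Toy sketch of the 'holding power' metric: the mean time (in seconds) that
    # visitors were scored as 'engaged' at an exhibit. Observations here are
    # hypothetical; only the 6 ft / 2 s thresholds follow Ross and Gillespie (2009).
    ENGAGE_DISTANCE_FT = 6.0  # visitor must stop within 6 feet of the exhibit...
    ENGAGE_MIN_SECONDS = 2.0  # ...and remain there for more than 2 seconds

    # Each record: (distance from exhibit in feet, time stopped in seconds)
    observations = [(3.5, 14.0), (5.0, 1.5), (8.0, 30.0), (2.0, 45.0), (4.5, 2.5)]

    engaged_times = [secs for dist, secs in observations
                     if dist <= ENGAGE_DISTANCE_FT and secs > ENGAGE_MIN_SECONDS]

    holding_power = sum(engaged_times) / len(engaged_times) if engaged_times else 0.0
    print(f"engaged: {len(engaged_times)} of {len(observations)} visitors")
    print(f"holding power: {holding_power:.1f} s")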

In addition, a quality assessment requires preserving the context of learning. The noticeable presence of a researcher or of any recording equipment can alter the learner’s experience (Rennie 2007) and thus should be avoided where possible.

Regular assessment is vital if programs are to remain successful.

Have you ever assessed, or wanted to assess, a class or program to improve it for the future? Please share your thoughts on assessment methodologies below.


Mandy McLean is a graduate student in Environmental Earth System Science.

Assessment References

Bell, Philip, Bruce Lewenstein, Andrew W. Shouse, and Michael A. Feder. 2009. Learning Science in Informal Environments: People, Places, and Pursuits. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12190.

Cromack, Jamie, and Wilhelmina Savenye. 2007. “Learning About Learning in Computational Science and Science, Technology, Engineering and Mathematics (STEM) Education.” Microsoft Research White Paper. http://research.microsoft.com/en-us/collaboration/papers/learning_about_learning.pdf.

Friedman, Alan J., Patricia B. Campbell, Lynn D. Dierking, Barbara N. Flagg, and Randi Korn. 2008. “Framework for Evaluating Impacts of Informal Science Education Projects.” Washington, DC: National Science Foundation. http://caise.insci.org/uploads/docs/Eval_Framework.pdf.

Rennie, Léonie J. 2007. “Learning Science Outside of School.” In Handbook of Research on Science Education, ed. S. K. Abell and N. G. Lederman, 125–167. Mahwah, NJ: Lawrence Erlbaum Associates.

Ross, Stephen R., and Katie L. Gillespie. 2009. “Influences on Visitor Behavior at a Modern Immersive Zoo Exhibit.” Zoo Biology 28 (5): 462–72. doi:10.1002/zoo.20220. http://www.ncbi.nlm.nih.gov/pubmed/19821504.