Photo: Kurt Hickman / Stanford News Service
Informal science programs can be found in a multitude of places, including but not limited to universities, museums, parks, zoos, aquariums, schoolyards, and farms.
Learning is a personal, cumulative process that unfolds over time. The goal of informal science programs is rarely to teach something specific to the masses, but rather to adapt the content to each individual, helping to expand their knowledge and ignite their curiosity.
The process required to achieve learning in informal, free-choice environments was summarized by the National Research Council's (NRC) Committee on Learning Science in Informal Environments (Bell et al. 2009) in the following six steps:
Help learners to
1. experience excitement, interest, and motivation to learn about phenomena in the natural and physical world;
2. generate, understand, remember, and use concepts, explanations, arguments, models, and facts related to science;
3. manipulate, test, explore, predict, question, observe, and make sense of the natural and physical world;
4. reflect on science as a way of knowing, on scientific processes and institutions, and on their own process of learning;
5. participate in scientific activities and learning practices with others, using scientific language and tools;
6. think of themselves as science learners and develop an identity as someone who knows about, uses, and sometimes contributes to science.
Thorough assessment of informal learning programs requires longitudinal studies to capture the cumulative nature of learning (Rennie, 2007). This fact, combined with their open-ended learning objectives and short-term nature, makes informal learning programs difficult to assess.
Formative vs. Summative (Cromack and Savenye 2007)
Formative assessment of learning is an ongoing process. The goal is to improve a class (program, etc.) as it is happening. For example, in-class clickers or mid-quarter feedback surveys can be used to assess how well students are grasping concepts and enjoying the experience, and the results can then be used to improve the class.
Summative assessment of learning, on the other hand, happens after the fact. The goal is to assess how successfully a class (program, etc.) was taught. For example, final grades can be used to determine how much students have learned. While summative assessments cannot be used to improve classes as they are happening, the results of these assessments can be used to improve future classes.
Formative assessments are very valuable tools for improving teaching practices. However, they also require a lot of work. Teaching classes dynamically requires teachers to continually modify their lesson plans in response to student feedback and performance.
Direct vs. Indirect (Cromack and Savenye 2007)
Direct assessments measure observable evidence of learning, such as grades and class performance. Indirect assessments, on the other hand, rely on self-reflection by students, for example, reporting changes in attitude, behavior, or confidence.
Formative and summative assessments can be either direct or indirect.
Considering both the effort required of teachers and the benefits derived by students, the most reasonable approach to assessment involves a balance of all of these types of assessment.
Some common methods currently used to assess informal science programs include (Bell et al. 2009):
A wide range of comparison surveys can be implemented for any given study. Several examples, ranging from most to least difficult to implement, include (Friedman et al. 2008):
When performing assessments, especially in informal environments, it is important to consider the diversity of the audience – assessments should be inclusive. These programs often attract people of all ages, ethnicities, and cultural backgrounds, some with learning disabilities, some coming with no prior science knowledge, and others coming with science PhDs. This makes informal programs particularly hard to assess, but it also makes them especially exciting.
Methods like self-reported data, follow-up questions, and comparison analyses are effective because they are able to document learning over time, illustrating that people learn different things, and often providing evidence that learning continues beyond the institutional visit.
However, it is important to note that the above methods, when used on their own, are subject to bias because they involve direct participant contact. To improve the assessment, it is best to include some form of non-interventionist assessment, such as (Bell et al. 2009):
Here's an example of a tracking study: Ross and Gillespie (2009) carried out a study to assess the learning of conservation behaviors from a modern immersive exhibit at the Lincoln Park Zoo. Under the assumption that learning occurs at 'successful' exhibits, where a 'successful' exhibit is one that engages audiences and maintains their attention for some time, they tracked visitors to measure engagement. Data were collected on 338 visitors over a 9-week period in the fall of 2003. Visitors were considered 'engaged' if they faced an exhibit and remained stopped within 6 feet of it for more than 2 seconds. The 'holding power' of an exhibit was calculated as the mean time that visitors were scored as 'engaged' with that particular exhibit.
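To make the 'holding power' measure concrete, here is a minimal sketch of the calculation, not the authors' actual analysis code. It assumes hypothetical per-visitor records of each stop's distance from the exhibit, duration, and whether the visitor faced the exhibit; the data structures and names are illustrative only.

```python
from dataclasses import dataclass

# Engagement criteria from the study: stopped within 6 feet,
# facing the exhibit, for more than 2 seconds.
ENGAGEMENT_RADIUS_FT = 6.0
MIN_STOP_SECONDS = 2.0

@dataclass
class Stop:
    """One stop by a tracked visitor (hypothetical record format)."""
    distance_ft: float      # distance from the exhibit while stopped
    duration_s: float       # how long the visitor remained stopped
    facing_exhibit: bool    # whether the visitor faced the exhibit

def engaged_time(stops: list[Stop]) -> float:
    """Total time one visitor was scored as 'engaged' with the exhibit."""
    return sum(
        s.duration_s
        for s in stops
        if s.facing_exhibit
        and s.distance_ft <= ENGAGEMENT_RADIUS_FT
        and s.duration_s > MIN_STOP_SECONDS
    )

def holding_power(visitors: list[list[Stop]]) -> float:
    """Mean engaged time across all tracked visitors."""
    times = [engaged_time(stops) for stops in visitors]
    return sum(times) / len(times) if times else 0.0

# Illustrative data: three tracked visitors.
visitors = [
    [Stop(4.0, 30.0, True), Stop(10.0, 60.0, True)],  # one qualifying stop: 30 s
    [Stop(5.5, 12.0, True)],                          # qualifying stop: 12 s
    [Stop(3.0, 1.5, True)],                           # too brief: 0 s
]
print(holding_power(visitors))  # mean of 30, 12, and 0 seconds -> 14.0
```

Note that all three criteria must hold simultaneously for a stop to count toward engagement; a long stop out of range, or a brief stop in range, contributes nothing.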
In addition, a quality assessment requires preserving the context of learning. The noticeable presence of a researcher or any type of recording equipment can alter the learner's experience (Rennie, 2007) and thus should be avoided if possible.
A regular assessment of programs is vital in order for them to remain successful.
Have you ever assessed, or wanted to assess, a class or program to improve it for the future? Please share your thoughts on assessment methodologies below.
Mandy McLean is a graduate student in Environmental Earth System Science.
Bell, Philip, Bruce Lewenstein, Andrew W. Shouse, and Michael A. Feder. 2009. Learning Science in Informal Environments: People, Places, and Pursuits. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12190.
Cromack, Jamie, and Wilhelmina Savenye. 2007. “Learning About Learning in Computational Science and Science, Technology, Engineering and Mathematics (STEM) Education.” Microsoft Research: A White Paper. http://research.microsoft.com/en-us/collaboration/papers/learning_about_learning.pdf.
Friedman, Alan J, Patricia B Campbell, Lynn D Dierking, Barbara N Flagg, and Randi Korn. 2008. “Framework for Evaluating Impacts of Informal Science Education Projects”. Washington, DC: National Science Foundation. http://caise.insci.org/uploads/docs/Eval_Framework.pdf.
Rennie, Léonie J. 2007. “Learning Science Outside of School.” In Handbook of Research on Science Education, ed. S. K. Abell and N. G. Lederman, 125-167. Mahwah, NJ: Lawrence Erlbaum Associates.
Ross, Stephen R, and Katie L Gillespie. 2009. “Influences on Visitor Behavior at a Modern Immersive Zoo Exhibit.” Zoo Biology 28 (5) (September): 462–72. doi:10.1002/zoo.20220. http://www.ncbi.nlm.nih.gov/pubmed/19821504.