
P. Crescenzi, A. Malizia, M.C. Verri, P. Diaz, I. Aedo. On two collateral effects of using algorithm visualizations. British Journal of Educational Technology, ISSN 0007-1013, 42 (2011), pp. E145-E147. DOI: 10.1111/j.1467-8535.2011.01220.x

On two collateral effects of using algorithm visualizations

Crescenzi, Pierluigi; Verri, Maria Cecilia
2011

Abstract

Algorithm visualization (AV) is a computer science education technology introduced to facilitate the teaching and learning of the design, behavior, and analysis of algorithms and data structures. In an effort to better understand the role of visualization and engagement in computer science education, an engagement taxonomy has been proposed (Naps et al., 2003), and several studies have been conducted to evaluate the educational efficacy of AVs and to quantify the differences between the effects of using AVs at the different levels of the taxonomy (see, for example, Grissom, McNally & Naps, 2003). Along this line of research, more recent studies have evaluated the joint effects of collaborative learning and AVs at different levels of the engagement taxonomy (Laakso, Myller & Korhonen, 2009). The results presented in this paper were obtained while continuing this latter kind of analysis.

In particular, the experiment described in this paper was performed to evaluate the difference in the efficacy of using AVs in individual and collaborative learning situations. To this end, first-year students of an undergraduate program in computer science were asked, at the end of a course on algorithms and data structures, to participate in an experiment aimed at validating the following hypothesis: the efficacy of using AVs is greater in an individual learning environment than in a collaborative one. The rationale behind this hypothesis is that the usefulness of AVs might be partly compensated for by the collaboration between students, whereas this is not the case in individual learning. The results obtained do not support this hypothesis (although no statistically significant result was obtained concerning the hypothesis in its entirety). However, two unexpected results were observed, which are the main contributions of this paper.

The first result is that, independently of the learning environment, the students who had access to AVs performed worse than the other students on theoretical questions concerning the visualized algorithm. This result is statistically significant. Hence, it can be considered a first collateral effect of using AVs: by focusing their attention on the execution of the algorithm, students might not give sufficient importance to the theory behind the algorithm itself. A similar result was reported by Montero, Díaz & Aedo (2010), where visualization appeared to be of little help in understanding abstract concepts of object-oriented programming, such as classes or structural relationships, but was very effective in fostering a better comprehension of more concrete concepts such as objects and instances, or methods and invocations.

The second result concerns the way students answered one of the three questions included in the experiment test. Indeed, 21 of the 24 students who used AVs answered the question by pictorially showing the final result of the algorithm execution, while only five of the 21 students who did not use AVs felt the need to show this graphical information. This can be considered a second collateral effect of using AVs: having become used to “seeing” the execution of the algorithm, students might feel the need to show the result of this execution when answering a question (even when this is not explicitly required by the question itself).
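As a purely illustrative aside on the second observation, the following short Python sketch shows how the reported counts (21 of 24 AV students versus 5 of 21 non-AV students who drew the final result) could be compared with Fisher's exact test; the choice of test and the SciPy dependency are assumptions made for the sake of the example and are not taken from the paper.

# Illustrative sketch only: compares the reported 2x2 counts
# (21/24 AV students vs. 5/21 non-AV students who drew the final
# result of the algorithm execution) with Fisher's exact test.
# The actual statistical analysis used in the paper may differ.
from scipy.stats import fisher_exact

#                drew the result   did not draw it
contingency = [[21,                3],   # students who used AVs (24 total)
               [ 5,               16]]   # students who did not use AVs (21 total)

odds_ratio, p_value = fisher_exact(contingency, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")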
Year: 2011
Volume: 42
First page: E145
Last page: E147
Authors: P. Crescenzi; A. Malizia; M.C. Verri; P. Diaz; I. Aedo
Files in this record:

File: Crescenzi_et_al-2011-British_Journal_of_Educational_Technology.pdf
Access: open access
Type: Publisher's PDF (Version of record)
Licence: All rights reserved
Size: 46.29 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/582706
Citations
  • PMC: N/A
  • Scopus: 2
  • Web of Science (ISI): 2