Abstract: This introductory workshop particularly addresses students new to the analysis of verbal data. We will discuss and practice analyzing processes of (computer-supported) collaborative learning on the basis of verbal data. We will start from basic considerations of sampling and organizing verbal data. Moreover, we will discuss the unit of analysis and problematize the segmentation and granularity of segments, from thick descriptions and propositional analyses to analyses of sequences of arguments and discourse structures. We will discuss and practice developing and applying coding schemes, including walking through the quality criteria for such a qualitative-quantitative approach to analyzing verbal data. Finally, we will discuss recent and potential future developments in analyzing the talk of collaborative learners, such as the automation of verbal data analysis, measuring convergence in talk, and analyzing learning in communities.
|Prof. Dr. Armin Weinberger (Universität des Saarlandes, Germany)|
Bio: I am head of the Department of Educational Technology and Knowledge Management at Saarland University. Formerly, I was Associate Professor at the Department of Instructional Technology, University of Twente, the Netherlands. I was a lecturer and research fellow at the University of Munich, the University of Tübingen, and the Knowledge Media Research Center, Germany. My main research interests are educational technology, technology-enhanced learning, computer-supported collaboration scripts, argumentative knowledge construction, cross-cultural education, and methodological issues in small group learning. I have 10 years of experience in designing, implementing, and investigating computer-supported collaboration scripts. I have developed a framework for analyzing argumentative knowledge construction and methods to analyze knowledge convergence and divergence in online discourse.
Abstract: Because much argumentation data consist of counts or rubric scores reflecting an ordinal scale of measurement, generalized linear models (GLMs), such as Poisson or ordinal regression, and related models that adjust for skew, are often applicable to argumentation data. These techniques also potentially provide more statistical power, which is important given the modest levels of reliability often associated with argumentation coding. GLM techniques can now be implemented in SPSS and most other statistical software. This workshop will give participants a conceptual overview of GLM techniques, a discussion of my research studies on argumentation that used these techniques, and step-by-step instructions on how to use these techniques in SPSS. If there is time, I will also give a brief overview of Generalized Estimating Equations, which can be used to control for statistical dependencies among observations.
|Dr. E. Michael Nussbaum, Department of Educational Psychology and Higher Education, University of Nevada, Las Vegas (USA)|
Bio: Dr. E. Michael Nussbaum is the Director of the Learning and Technology Program at the University of Nevada, Las Vegas, and a professor in the Department of Educational Psychology and Higher Education. He holds a Ph.D. in Educational Psychology from Stanford University (1997). He has researched issues in argumentation and education for the last 20 years, specifically those related to science and socio-scientific issues such as climate change. (He recently also led development of an NSF-funded educational computer game on climate change called Losing the Lake.) In addition to these substantive interests, he has written on methodological and statistical issues related to argumentation, and recently published a textbook called Categorical and Nonparametric Data Analysis: Choosing the Best Statistical Technique (2015; Taylor and Francis).
Abstract: The key insight communicated through this tutorial is that if we can understand the connection between socio-psychological processes and language by means of the social signals encoded in them, we can structure computational models of language interactions more effectively. This tutorial will be composed of a theoretical component and a hands-on component. In the theoretical component, I will give an overview of work related to the connection between discourse and learning. In the hands-on component, I will offer instruction in the use of LightSIDE, a freely downloadable tool for facilitating the application of machine learning to natural language data, which provides a convenient GUI environment that enables novice users of text classification technology to easily run text extraction and classification experiments. In addition, LightSIDE serves as a vehicle for the dissemination of new techniques for the effective application of machine learning to text mining, including novel feature extraction techniques.
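To give a flavor of what a text extraction and classification experiment involves, the sketch below uses Python's scikit-learn rather than LightSIDE itself (an illustrative stand-in, not the tutorial's tool), with a tiny hypothetical set of coded discussion utterances:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled utterances (hypothetical data, purely for illustration):
# the labels mimic a simple on-task / off-task discourse coding scheme.
texts = [
    "I think the evidence supports the claim",
    "because the data show a clear trend",
    "what did you do last weekend",
    "let's talk about the movie instead",
]
labels = ["on_task", "on_task", "off_task", "off_task"]

# Feature extraction (unigram + bigram counts) followed by a classifier,
# mirroring the extract-then-classify workflow of tools like LightSIDE.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the evidence shows a trend"]))
```

In practice one would train on hundreds of human-coded segments and evaluate with cross-validation; the point here is only the two-stage pipeline of turning text into features and then learning a coding model from them.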
|Dr. Carolyn Rosé, School of Computer Science at Carnegie-Mellon University (Pittsburgh, USA)|
Bio: Dr. Carolyn Rosé is an Associate Professor of Language Technologies and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University. Her research program is focused on better understanding the social and pragmatic nature of conversation, and using this understanding to build computational systems that can improve the efficacy of conversation between people, and between people and computers. In order to pursue these goals, she invokes approaches from computational discourse analysis and text mining, conversational agents, and computer-supported collaborative learning. She serves on the executive committee of the Pittsburgh Science of Learning Center and is co-leader of its Social and Communicative Factors of Learning research thrust. She also serves as President-Elect of the International Society of the Learning Sciences. She serves as Associate Editor of the International Journal of Computer Supported Collaborative Learning and the IEEE Transactions on Learning Technologies.
Abstract: How do we assess individuals' capacity, as well as disposition, to think scientifically? The content we ask them to think about can be wide-ranging, certainly, but so are the skills. They range from the most fundamental, coordinating claims and evidence and making inductive inferences, to distinguishing evidence from explanation, distinguishing anecdotal from statistical evidence, coordinating multiple causes, and engaging in scientific argumentation to support, challenge, and weigh claims. Detecting patterns in large sets of behavioral data can be informative. But here we advocate, and illustrate, close investigation of the reasoning underlying individuals' judgments as an essential supplement.
|Dr. Deanna Kuhn, Teachers College, Columbia University (New York, USA)|
Bio: Deanna Kuhn is professor of psychology and education at Teachers College, Columbia University. She holds a Ph.D. in developmental psychology from the University of California, Berkeley. She has been editor of the journal Cognitive Development, editor of the journal Human Development, and co-editor of the last two editions of the Cognition, Perception and Language volume of the Handbook of Child Psychology. She has published widely in psychology and education, in journals ranging from Psychological Review to Harvard Educational Review. She is the author of four major books -- The development of scientific thinking skills, The skills of argument, Education for thinking, and, most recently, Argue with me: Argument as a path to developing students' thinking and writing. Her research is devoted to identifying and determining how best to nurture the intellectual skills that will prepare young people for lifelong learning, work, and citizenship.
Abstract: The workshop "Using eye tracking to capture cognitive processing" is an introductory workshop. It begins with an overview of the history and types of eye trackers and of important measures, and then introduces various fields of application. The goal of the workshop is to establish a basic understanding of eye tracking methodology and, building on this, to enable the learners to develop operationalizations of cognitive processes that are relevant in their own research. Examples of current eye tracking research in the learning sciences will be presented. The participants will have the opportunity to get some hands-on experience with a head-mounted eye tracker and the corresponding analysis software.
|PD Dr. Christof Wecker, Ludwig-Maximilians-Universität München (Germany)|
|Dr. Markus Bolzer, Ludwig-Maximilians-Universität München (Germany)|
Bio PD Dr. Christof Wecker: Christof Wecker is an assistant professor at the Chair of Education and Educational Psychology at the Ludwig-Maximilians-Universität München. His research focuses on instructional activities such as using computer-based slide presentations or fading solution steps during the demonstration of skills, on the role of argumentation for knowledge acquisition, on methodological foundations of use-inspired research and on the prerequisites of evidence-based practice in education. The methodological repertoire employed ranges from meta-analyses to experimental studies in field settings as well as in the laboratory, including eye tracking. He teaches courses in research methods and in the field of learning and instruction in degree programs in Education, Psychology, and the Learning Sciences as well as in teacher training programs.
Bio Dr. Markus Bolzer: Markus Bolzer (b. 1981) earned his M.Sc. in Educational Sciences in 2010 from the University of Regensburg. Since then he has been working at the Chair of Education and Educational Psychology of Prof. Dr. Frank Fischer at the Ludwig-Maximilians-Universität in Munich, where he obtained his Doctorate in Educational Sciences and Psychology in 2014 (summa cum laude). His research focuses on the application of eye tracking to investigate mindful cognitive processing of written peer feedback in academic writing and revision. As part of his post-doctoral position with the chair, he is currently involved in a BMBF-funded research project on inquiry learning in cooperation with the Medical Clinic of the University of Munich. He is a freelance SPSS statistics consultant and an active member of EARLI and ISLS.
Abstract: Measuring successful scientific reasoning, we will argue, might profit from investigating agents’ appreciation of social and normative aspects of action, knowledge, and belief in social interactions.
Normativity is typically understood as involving some kind of “oughtness” and generality: Agents ought to perform certain acts or refrain from doing so, not just in particular situations, but in analogous contexts, too. How can we measure an understanding of normativity and what is its importance for understanding epistemology? In this workshop, we will look at key psychological foundations of normativity – most notably collective intentionality (e.g., the ability to engage in shared intentional activities) – and methods from developmental psychology used to assess children’s understanding of normativity. We will focus on children’s spontaneous verbal and non-verbal reactions to norm violations in social interactions and explore interrelations between normativity and epistemology.
|Dr. Marco Schmidt, Max Planck Institute for Evolutionary Anthropology (Leipzig, Germany)|
Bio: Marco Schmidt is a Postdoctoral Fellow at the Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany. He studies the developmental origins of human normativity and its conceptual prerequisites, with a particular focus on mechanisms of norm acquisition, norm enforcement, and norm creation. Furthermore, he investigates the interrelations between moral, epistemic, social-cognitive, and prosocial development. His work employs a range of different empirical methods, such as interactive behavioral tasks and looking-time paradigms. Schmidt received his Ph.D. in psychology from the University of Leipzig.