
Sharon Oviatt – Enhancing Creative Intelligence with Multimodal Technologies


Sharon Oviatt (Incaa Designs)


Time: 11.00am – 12.30pm, Tue 8 March 2016

Location: CB08.02.006

In this talk, I will discuss parallel research advances toward developing new digital tools for stimulating and also assessing human cognition, creativity, and learning. During the last decade, multimodal interfaces based on combined new media have eclipsed keyboard-based graphical interfaces as the dominant worldwide computer interface. In addition to supporting mobility, they enable more natural and expressively powerful input to computers (touch, speech, writing), and also offer major performance advantages. In recent research, we asked the novel question:

Can a computer input tool per se have an impact on human cognition?

That is, if the same person completes the same task and we only change the computer input tool they use (e.g., keyboard versus digital pen), will this alone affect their ability to think and reason effectively? If so, how large will the impact be, and why will it occur? Recent findings reveal that more expressively powerful input tools can substantially stimulate cognition, including correct problem solving, accurate inferential reasoning, and creative ideation. I’ll discuss one study in which using a digital pen improved people’s fluent production of science ideas by 38%, compared with a keyboard. Similar examples of facilitation generalize across different types of thinking and reasoning, content domains, user populations and ability levels, computer hardware, and evaluation metrics. I’ll summarize how and why this occurs when people use more expressively powerful interfaces that are capable of conveying multiple modalities and representations.

In parallel research, we asked:

When using these more expressively powerful interfaces, can analysis of communication patterns provide an accurate window on human cognition?

Multimodal learning analytics is an emerging field that analyzes students’ communication patterns during speech, writing, nonverbal movement, and combined multimodal communication. Compared with clickstream analysis, these richer analytic techniques have successfully predicted students’ level of domain expertise, with accuracies over 90%. In one study, analysis of signal-level features during dynamic writing with a digital pen yielded 92% correct classification of students by their expertise level, with no content analysis required. These analyses were conducted using the Math Data Corpus, in which small groups of students collaborated to solve mathematics problems that varied in difficulty. From the perspective of design innovation and commercialization, these results have demonstrated to industry that signal-level writing features can be processed automatically on pen systems that exist today. The race is now on to prototype innovative educational applications, coupled with richly informative analytics, to develop unique corporate products.
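To illustrate the kind of analysis involved, below is a minimal, hypothetical sketch (in Python) of how signal-level pen features such as stroke duration, ink length, and pause time might be summarized per writing session and fed to an off-the-shelf classifier of expertise level. The feature set, synthetic data, and classifier are illustrative assumptions only; they are not the pipeline, feature set, or Math Data Corpus used in the study.

# Hypothetical sketch: predicting expertise level from signal-level pen features.
# All feature names, parameters, and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def session_features(strokes):
    # Summarize one writing session given per-stroke rows of
    # (duration_s, ink_length_mm, pause_before_s).
    strokes = np.asarray(strokes)
    return np.array([
        strokes[:, 0].mean(),   # mean stroke duration
        strokes[:, 1].sum(),    # total ink length
        strokes[:, 2].mean(),   # mean pause between strokes
        float(len(strokes)),    # number of strokes
    ])

def synthetic_session(expert):
    # Generate fake stroke data for one session (illustration only).
    n = int(rng.integers(40, 80))
    duration = rng.normal(0.25 if expert else 0.40, 0.05, n)
    ink_len = rng.normal(12.0 if expert else 9.0, 2.0, n)
    pause = rng.normal(0.6 if expert else 1.1, 0.2, n)
    return np.column_stack([duration, ink_len, pause])

labels = [1] * 50 + [0] * 50                      # 1 = expert, 0 = novice
X = np.array([session_features(synthetic_session(lab)) for lab in labels])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())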

Professor Sharon Oviatt is internationally known for her multidisciplinary work on multimodal and mobile interfaces, human-centered interfaces, educational interfaces, and learning analytics. She has been the recipient of the inaugural ACM-ICMI Sustained Accomplishment Award, the National Science Foundation Special Creativity Award, and the ACM-SIGCHI CHI Academy award. She has published over 160 scientific articles in a wide range of venues, and is an Associate Editor of the main journals and edited book collections in the field of human-centered interfaces. Her recent books include The Design of Future Educational Interfaces (2013, Routledge) and The Paradigm Shift to Multimodality in Contemporary Computer Interfaces (2015, Morgan & Claypool). She is currently editing The Handbook of Multimodal-Multisensor Interfaces (forthcoming in 2017, ACM Books). Related to today’s talk, Sharon was a founder of the ACM international conference series on Multimodal Interfaces (ICMI), and also of its satellite series of Data-Driven Grand Challenge Workshops on Multimodal Learning Analytics.
