

Externally Funded Research Projects

This is an evolving list of selected, externally funded research projects led by faculty in the Human-Centered Computing Department. Potential collaborators, faculty, and students at all levels are encouraged to contact the investigators to learn more about the projects and explore opportunities for research collaboration.

NSF CAREER: Conversational User Interfaces to Support Older Adults’ Social Wellness

Funding Source: National Science Foundation (2022)
Principal Investigator: Aqueasha Martin-Hammond

Maintaining social wellness through a sustained network of in-person relationships is essential to preserving health. Yet, as individuals age, remaining socially active can become challenging. For example, many older adults relocate for retirement, experience changes in mobility, or begin losing family and friends, which may weaken the fabric of their social network. Because of this, older adults are at risk of experiencing feelings of loneliness and social isolation, which may lead to declines in mental and physical health. This project will investigate the potential of emerging conversational user interfaces such as voice assistants and chatbots to help older adults stay connected with the world around them, thus improving their social wellness. By engaging older adults in the design process and adapting state-of-the-art conversational design methods, this project will develop empirically validated novel strategies for personalized conversational interfaces that leverage real-world social connections and motivate older adults to engage in social activities with other people. The project will also yield new curricula, experiential learning opportunities, community engagement initiatives, and tools to aid reflective thinking about the impact of artificially intelligent technologies on an increasingly aging society.

This project leverages social-behavioral theories to identify and examine a new family of conversational interaction approaches for supporting older adults in maintaining and building social relationships with others. To do so, the team will employ participatory methods that leverage close community collaborations with senior organizations and older adults. The research will include three main aims. First, the team will generate an empirical account of the needs of older adults to maintain social wellness in an interconnected world, along with situations that pose risks of isolation. Second, the team will explore conversational design principles to support collaborative two-way dialogues that encourage engagement in social wellness activities, with particular attention to people’s concerns about privacy and aspects of the situation where they are using the system. Finally, the team will develop and evaluate a novel platform implementing tested conversational strategies to motivate older adults’ participation in socially meaningful activities. This project will advance knowledge of the uses, challenges, and real-world barriers of designing and deploying conversational user interfaces to support older adults’ social wellness, contributing to human-centered artificial intelligence, aging, and conversational design. The research is expected to reveal new approaches that can benefit older adults’ social wellness while decreasing the risk of social isolation.

Learn more about this project

Designing Autonomy-Preserving Interactions in Intelligent Assistants for Older Adults

Funding Source: National Science Foundation (2021)
Principal Investigator: Aqueasha Martin-Hammond

The growing availability of intelligent assistants (IAs) such as Siri and Alexa has prompted exploration of how such devices can assist older adults with daily care tasks. Yet, the potential convenience of IAs notwithstanding, current approaches are limited in part because they fail to provide users with enough sense of control over their data and interactions.

The lack of agency is particularly problematic for carrying out health information tasks; older adults find it challenging both to understand the data returned by the IA and to determine who has access to the critical personal and health data they share. This project will advance our understanding of how to design IAs that positively influence older adults’ sense of autonomy when using them to carry out health information tasks at home. To this end, the research team will leverage existing partnerships with senior organizations and informal caregivers in the Indianapolis community to ascertain specifics about their existing interactions and challenges with IAs; these findings will then be used to establish and evaluate novel design principles that can support more transparent conversational IA interactions for health information management.

The work will involve two main thrusts. The first will identify the important values that shape the help-seeking beliefs of older adults in existing informal caregiver relationships and settings; based on those findings, the team will engage older adults to collaboratively design new IA interaction strategies aligned with their needs and expectations during health information tasks. In the second thrust, system prototyping and user-centered evaluation activities will identify the potential and limitations of the resulting design strategies in addressing human autonomy needs and IA acceptance among older adults. Examples include a heightened focus on providing alternative information sources and answers in response to users’ questions, and providing pathways to access privacy settings and personal data-sharing preferences during interactions with the IA.
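To make those design directions concrete, here is a minimal sketch of what an autonomy-preserving IA reply might carry. The helper, field names, and sample sources are illustrative assumptions, not the project's actual system:

```python
# Hypothetical sketch of an autonomy-preserving IA reply. Field names,
# the helper function, and the sample sources are illustrative assumptions.
def build_response(question, primary_answer, alternatives):
    """Assemble a reply that keeps the user in control: it names its
    source, offers alternative sources, and exposes a privacy pathway."""
    return {
        "question": question,
        "answer": primary_answer["text"],
        "source": primary_answer["source"],  # say where the answer came from
        "alternatives": [a["source"] for a in alternatives],  # other places to check
        "privacy_prompt": "Say 'privacy settings' to review what data was shared.",
    }

reply = build_response(
    "Can I take ibuprofen with my blood pressure medicine?",
    {"text": "Ask your pharmacist; some combinations need review.",
     "source": "general medication guidance"},
    [{"source": "your pharmacist"}, {"source": "a drug-interaction reference"}],
)
print(reply["alternatives"])  # → ['your pharmacist', 'a drug-interaction reference']
```

The design choice the sketch highlights is that the reply is structured data rather than a single spoken string, so the interface can always surface alternatives and the privacy pathway alongside the answer.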

Learn more about this project

NSF CAREER: Family Resilience Technologies: Augmenting Caregiving Coordination Systems for Health Crisis Response

Funding Source: National Science Foundation (2021)
Principal Investigator: Andrew Miller

This research explores how social technologies can support and augment the ability of family caregivers to effectively communicate and coordinate during hospitalization, and develops sociotechnical principles that inform the design of next-generation caregiving coordination technologies for family resilience during a health crisis. During such a crisis, families must adjust and adapt, taking on new roles and staying connected while they coordinate care and support each other. A major protective factor is the family’s resilience, or ability to withstand and rebound from disruptive life challenges, emerging strengthened and more resourceful. Systems that enable computer-supported cooperative work hold promise to support and augment family resilience, but they are designed using assumptions that may not hold true in the case of a family health crisis.

Through a combination of rigorous qualitative investigations, family-driven design, and hospital-based technology deployments, this research will address three specific aims: (1) Modeling needs and design opportunities for family resilience technologies; (2) Collaboratively designing and prototyping next-generation caregiving coordination technologies that augment the ability of families to coordinate care; and (3) Developing, deploying, and evaluating a suite of caregiving coordination tools for families of seriously ill children that embodies family resilience principles, helps family members cope with evolving needs, and enables a variety of support practices. This work will be grounded in the case of pediatric cancer, the leading cause of disease-related deaths in youth, but also has significant implications for family caregiving in diverse contexts.

The research involves the participation of graduate students in an emerging and promising area of research, and will support interdisciplinary collaboration between the medical and computing communities.

Learn more about this project

CRII: III: Capturing Dynamism in Causal Relationships: A New Paradigm for Relationship Extraction from Text

Funding Source: National Science Foundation (2020)
Principal Investigator: Sunandan Chakraborty

Text mining has made important advances in methods to convert vast and unstructured text data into knowledge. However, the current paradigm of relationship extraction has one major limitation: it models snapshots of information but fails to capture the fundamentally dialogic and dynamic nature of knowledge: conflicting findings, inconsistent discoveries, refutations, contradictions, reinforcements, and confirmations, all changing over time. This project aims to capture such fundamental dynamics of knowledge, specifically focusing on causal relationships. The objective of this project is to identify cues of causal knowledge in text data, quantify the strength of the causal relationship, and model its dynamics over changing conditions. Ultimately, the project aims to model a more holistic view of the knowledge extracted from text. As text data is extensively used by researchers and practitioners in domains of national importance, including medicine and health, economics, public policy, and journalism, the results of this project seek to provide the foundation to offer practitioners new ways to understand the evolving nature of the causal relationships present in large text datasets. Specifically, the novel approaches developed in the project will be applied to explore public health data to determine how changing climatic, political, and economic conditions may affect the mental and physical health of the population in different geographic areas.

The project activities include the development of a novel model of causal relationship extraction that leverages a unified deep learning framework combining both semantic and syntactic cues. This approach will utilize the key syntactic features of a sentence, represented by the grammatical relationships between nouns, verbs, and other parts of speech, through graphical or tree-like models. This work will determine whether the sentence features a structure that signals causality. Moreover, the sequential component of the model will utilize the semantics and identify the influence of certain words in the sentence to characterize the nature of the causal relationship expressed in the text. This task will capture the strength of the relationship (e.g., using cues like “extremely likely” or “definitely”), any supporting or opposing evidence (e.g., “will lead to” or “does not lead to”), and conditional cues (e.g., “in the presence of”). Quantifying such qualitative properties will lead to the second innovation of this project: causal distance. Causal distance is a time-variant metric that will denote the magnitude of causality between two entities as well as capture the dynamism of the relationship by adjusting over time with changing conditions or new evidence. Collectively, the advances pursued in this project will further enhance our understanding of the novel computational approaches needed to unearth and reason about cues of causal relationships embedded in large text datasets.
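The kinds of cues described above can be illustrated with a toy rule-based matcher. This sketch is only a stand-in for intuition (the project itself proposes a deep learning model), and the cue lexicons below are hypothetical:

```python
# Toy lexicons: hypothetical stand-ins for the cue types named in the text.
# Longer (negated) cues are listed first so they match before substrings.
CAUSAL_CUES = ["does not lead to", "will lead to", "leads to", "causes", "results in"]
STRENGTH_CUES = {"definitely": 1.0, "extremely likely": 0.9, "possibly": 0.3}
NEGATION_CUES = ["does not lead to", "does not cause"]

def analyze_causal_sentence(sentence):
    """Detect a causal cue, estimate its strength, and flag negation
    and conditional context ("in the presence of")."""
    s = sentence.lower()
    cue = next((c for c in CAUSAL_CUES if c in s), None)
    strength = next((v for k, v in STRENGTH_CUES.items() if k in s), 0.5)
    return {"causal": cue is not None, "cue": cue, "strength": strength,
            "negated": any(n in s for n in NEGATION_CUES),
            "conditional": "in the presence of" in s}

print(analyze_causal_sentence("Smoking definitely causes lung disease"))
```

The quantities this toy version extracts (strength, polarity, conditions) are exactly the ones the proposed causal-distance metric would track over time.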

Learn more about this project

CAREER: Towards Trustworthy Analytics

Funding Source: National Science Foundation (2020)
Principal Investigator: Khairi Reda

Tools that create visualizations of data are increasingly important for discovery and decision-making in a range of domains, from science and engineering to commerce. Data analysts use these tools to rapidly slice and dice their data, often inspecting a large number of visualizations in the process. Though useful for exploration, these visualizations can also expose random data fluctuations, which could be mistaken for real patterns. If analysts are not careful in interpreting these apparent patterns, they could inadvertently make false discoveries or incorrect decisions.
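A small simulation (illustrative only, not part of the project) shows how inspecting many views of purely random data surfaces apparently strong patterns:

```python
import random
import statistics

random.seed(0)
# Purely random data: 20 "variables" with 50 samples each, no real structure.
data = [[random.gauss(0, 1) for _ in range(50)] for _ in range(20)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / ((len(x) - 1) * statistics.stdev(x) * statistics.stdev(y))

# Inspect every pairwise "view", as an analyst slicing and dicing might,
# and collect the pairs that look meaningfully correlated (|r| > 0.25).
strong = [(i, j) for i in range(20) for j in range(i + 1, 20)
          if abs(corr(data[i], data[j])) > 0.25]
print(len(strong))  # some pairs look "interesting" despite being pure noise
```

Because 190 pairwise views are examined, a handful of them will show sizable correlations by chance alone; this multiple-comparisons effect is the kind of risk the project's reliability forecasts aim to communicate.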

The goal of this research is to reduce the risk from spurious patterns arising in interactive data analyses. The project comprises three stages: (1) developing techniques for capturing analyst beliefs, expectations, and intentions as they conduct visual analysis; (2) using this data to develop algorithms that forecast the reliability of emerging visualizations; and (3) evaluating strategies for communicating the risk of false patterns.

The resulting techniques will be validated and incorporated in tools for detecting RNA modifications from noisy sequencing data, in collaboration with bioinformatics researchers. The expected impact of this project is to aid analysts in assessing the reliability of insights, while guarding against visualizations that seem convincing but that are likely to be misleading. This in turn could broaden the adoption of visual analytics tools, increase the confidence in conclusions, and potentially reduce the incidence of false discovery.

As part of this research, the team will develop interactive educational materials for training students in reliable data-driven inference. These learning modules will be disseminated in a format that allows customization by data science instructors for inclusion into existing curricula. Lastly, the project will provide opportunities for graduate research training and incorporate K-12 outreach activities that introduce young learners to data science.

Learn more about the project

Manipulating Text in Screenless Environments

Funding Source: National Science Foundation (2019)
Principal Investigator: Davide Bolchini 

Keyboards on mobile devices display characters visually, presenting challenges for people who are blind, as well as for all users in situations where it is inconvenient or unsafe to hold a phone in order to communicate. Existing approaches to mobile accessibility for eyes-free text entry do not fully solve this problem, because they remain screen-centric; manipulating text generally requires interacting with visual keypads that, at best, read keys aloud when touched.

A major unsolved challenge is how to manipulate text aurally and silently, in a way that frees users from a visual display. The goal of this project is to expand our understanding of text interaction from a screen-centric paradigm to screenless, aural environments. To this end, the work will establish and evaluate principles for aural text manipulation that do not rely on a reference screen, but rather operate entirely over the auditory channel.

The project will proceed in two phases. The first will identify strategies for the auditory arrangement of the character set that operates over time rather than within the visual-spatial constraints of the screen. The conceptual advances will inform the foundation for a new class of auditory keyboards untethered to a visual display.
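One way to see why the temporal arrangement of the character set matters: in a serial auditory stream, the order in which characters are announced determines how long a user waits, on average, to reach the one they want. The sketch below compares an alphabetical stream with a frequency-ordered one; the frequency values are approximate, and the whole example is an illustration, not the project's design:

```python
# Approximate English letter frequencies (percent); illustrative values.
FREQ = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
        's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
        'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
        'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
        'q': 0.10, 'z': 0.07}

def expected_position(order):
    """Frequency-weighted average position of the target letter in a
    serial stream that announces one character per time slot."""
    total = sum(FREQ.values())
    return sum((i + 1) * FREQ[ch] for i, ch in enumerate(order)) / total

alphabetical = sorted(FREQ)
by_frequency = sorted(FREQ, key=FREQ.get, reverse=True)

# A frequency-ordered stream reaches the wanted letter sooner on average.
print(expected_position(alphabetical), expected_position(by_frequency))
```

This is the temporal analogue of keyboard layout design: with no screen, "where" a character sits becomes "when" it is spoken, so the ordering of the stream directly shapes entry speed.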

In the second phase researchers will focus on conducting iterative and comparative evaluation studies with sighted and blind participants on system prototypes of auditory keyboards, to examine how people can aurally manipulate text in screenless contexts.

This research is significant because it is expected to contribute novel strategies to augment the ability of blind people to perform text entry by circumventing direct interaction with screen-based devices. By breaking free from the constraints of mobile visual displays, project outcomes will have broader applicability to support aural text manipulation for sighted users in situations where it is inconvenient or unsafe to stay glued to the screen while typing, or where text manipulation needs to be discreet, silent, and concealed from view.

The project will directly engage people who are blind and visually impaired from three partner organizations in Indiana, as well as students at Indiana University from underrepresented groups, to co-create and evaluate new options to interact with text while bypassing the screen.

Learn more about the project

Building Research Capacity by Technological Interventions in Support of Mixed-Ability Workplaces

Funding Source: National Science Foundation (2019)
Principal Investigator: Francesco Cafaro

People with cognitive and physical disabilities encounter significant difficulties when entering the workforce and consistently report lower employment and pay than people without disabilities. Even after securing employment, people with disabilities often face misunderstandings or negative attitudes from their co-workers. This impacts their ability to be productive and to learn the skills required for their profession.

This project is situated in mixed-ability workplaces (i.e., settings in which workers without disabilities and workers with different types of disabilities collaborate). The researchers will explore challenges that emerge during training sessions and everyday work. They then will investigate how theories that connect cognition with physical action can be used to design novel technologies to augment the level of accessibility of mixed-ability workplaces and the success of all workers.

The goal of this planning project is to build research capacity for generating fundamental, convergent research in support of mixed-ability workplaces—and ultimately to boost the diversity of the U.S. workforce.

Learn more about the project

 

Building Mutual Expertise for Physical Accessibility in Workplaces

Funding Source: National Science Foundation (2019)
Principal Investigator: Erin Brady

People with disabilities are less likely than people without disabilities to be hired for jobs or remain employed. Part of the reason for this disparity is that employees with disabilities encounter accessibility problems that make workplaces inaccessible to them. Many of these problems are solvable: for example, a blind employee cannot view a computer screen, but can install screen reading software to read digital text aloud.

However, not all employers are aware of accessibility problems that exist in their workplaces, and they may not know how to implement specific accommodations that would help employees with disabilities. The overall goal of this project is to identify and evaluate principles for socio-technical systems that facilitate knowledge exchange to solve accessibility problems in the workplace.

This investigation will identify existing practices and attitudes of people with disabilities and employers around problem-solving for accessibility, through individual interviews with disabled employees and job-seekers, and co-workers and employers without disabilities.

Then researchers will model intellectual gaps in accessibility problem-solving among individuals with varying levels of knowledge about accessibility.  Next, researchers will identify strategies for overcoming those gaps, by facilitating a series of small collective access groups where workplace stakeholders—with and without disabilities—collaborate to brainstorm about solutions to accessibility problems.

The researchers will generate and validate principles for mutual knowledge exchange in socio-technical systems by designing lightweight technology probes to evaluate strategies for knowledge exchange around accessibility. This research will contribute to our understanding of how disability impacts experiences in workplaces, and how technologies can be designed to improve these experiences by building mutual expertise.

Learn more about the project

 

Parent-2-Parent: Supporting Dyadic Caregiving Coordination in the Hospital

Funding Source: National Science Foundation (2019)
Principal Investigator: Andrew Miller

When a child is hospitalized, existing support networks spring into action. In this unanticipated, stressful situation, parents must quickly absorb complicated clinical information and take on new caregiving tasks, without abandoning other responsibilities. Furthermore, within-family communication and coping strategies of the parents, or dyad, are significant predictors of post-hospitalization health outcomes.

Human-Computer Interaction and Computer-Supported Cooperative Work researchers are actively studying how technology can support coordination between children and their care team, and between parents and children. However, little research has focused on how technology can support and augment the parenting dyad during hospitalization. Current technologies are not optimized for these contexts, and may even compound family stress if not carefully designed and implemented.

The goal of this project is to establish fundamental design strategies for social computing that enable effective dyadic caregiving coordination in unanticipated, stressful situations, such as the hospitalization of one’s child, and show how those technologies could support decision-making and information sharing amid high degrees of uncertainty. The project will identify and demonstrate how social technologies can support and augment the ability of parenting dyads to effectively communicate and coordinate during hospitalization. Through a combination of rigorous qualitative investigations, parent-driven design sessions, and hospital-based pilot technology deployments, the investigator will address two specific aims:

  1. Characterize current communication and coordination practices within parenting dyads
  2. Envision, demonstrate, and verify dyadic caregiving support technologies

This work will be grounded in the case of pediatric cancer, the leading cause of disease-related deaths in youth. However, the proposed work has implications for dyadic caregiving in diverse healthcare contexts.

Learn more about the project

Aiding Reasoning about Correlation and Causation

Funding Source: National Science Foundation (2019)
Principal Investigator: Francesco Cafaro

People are increasingly exposed to data and datasets in everyday life, in domains from health and science to news and policy. This raises important questions about how to help non-specialists make sense of data—in particular, around understanding how to think about correlation and causation. These concepts can be slippery; confounding correlation with causation may lead people to assume causality when there is none, but correlations do often provide precious hints to causation. This project investigates how theories of cognition that emphasize the relationship between thinking and physical action can be used to design full-body and tangible ways to interact with data-based museum installations that prime people to think in ways that improve their understanding of causation and correlation. The goal is to develop bridges between theories of embodied cognition, the design of data visualizations, human-data interactions and long-term learning effects about science, technology, engineering, and mathematics concepts discussed in the installations. The project will be deployed in real contexts, having direct potential impacts on visitors’ understanding, and will be used to inform educational curricula in ubiquitous computing and design for informal learning.

Learn more about this project

Concept-Driven Visual Analysis

Funding Source: National Science Foundation
Principal Investigator: Khairi Reda

Most visualization tools have been designed to support open-ended exploration of patterns in data; though useful for some tasks, this exploratory model does not fit well when analysts have existing models or hypotheses. This project aims to support a “concept-driven” analysis style in which analysts can share their existing conceptual models with the system, which uses those models to generate visualizations that allow the analyst to explore places where the models and data disagree and develop revised models that reconcile those discrepancies. To do this, the research team will design a number of prototype techniques for communicating conceptual models, algorithms for selecting visualizations and data features that best match those models, and interfaces that highlight discrepancies and provide tools for analysts to dig into the data around them. If successful, these concept-driven analyses will provide better ways for scientists and other analysts with existing models to leverage data while reducing the risk of confirmation biases, in which people choose analyses that don’t show where their existing models are wrong. The project will also enable the research team to learn more about the ways people come to form and express expectations about data. Lastly, the project will provide opportunities for graduate research training as well as tools to support K-12 outreach workshops that introduce younger students to data science.
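The core idea of surfacing model-data disagreements can be sketched in a few lines. The data, the analyst's model, and all names below are hypothetical illustrations, not the project's system:

```python
# Hypothetical sketch: the analyst states an expected relationship
# ("y grows roughly as 2*x"), and the system surfaces the data points
# that deviate most from it. Data, model, and names are illustrative.
data = [(1, 2.1), (2, 4.0), (3, 9.5), (4, 8.2), (5, 10.1), (6, 3.0)]

def discrepancies(points, model, top_k=2):
    """Rank points by how far they fall from the stated conceptual model."""
    scored = [((x, y), abs(y - model(x))) for x, y in points]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [pt for pt, _ in scored[:top_k]]

expected = lambda x: 2 * x  # the analyst's conceptual model
print(discrepancies(data, expected))  # → [(6, 3.0), (3, 9.5)]
```

The point of starting from the analyst's stated expectation, rather than from the raw data, is that the system can then direct attention to exactly the places where the model fails, instead of wherever an open-ended exploration happens to wander.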

Learn more about the project

Navigation Principles for Entirely Auditory Keyboards

Funding Source: Google Faculty Research Award (2018)
Principal Investigator: Davide Bolchini

The screen-centric mode of mobile text entry enables people who are blind and visually impaired (BVI) to embrace a mobile-first world, but poses two major limits. First, it forces BVI users to keep their phone out at all times (increasing the risk that the device will fall or be stolen). Second, it requires fine-motor, two-handed manipulation that makes it inconvenient to type while one hand is busy holding a cane or a guide dog. To combat these shortcomings, we are exploring alternatives to touch typing by investigating principles for navigating entirely auditory keyboards based on rapid streams of text-to-speech characters controllable by in-air gestures. Through a series of design and evaluation studies with BVI users, we will examine the limits and potential of screenless typing to augment their ability to perform text entry. Our work contributes to untethering BVI users from an overly restrictive interaction mode and enabling mobile text entry that fully leverages their auditory cognitive bandwidth. This project also leverages the collaboration of our faculty with Bosma Enterprises and the Indiana School for the Blind and Visually Impaired (ISBVI).

Learn more about the project

Designing Collaborative and Transparent Work Information Systems

Funding Source: National Science Foundation
Principal Investigator: Lynn Dombrowski

Research advances have enabled innovations in collaborative work information systems that can keep track of employees and contractors in domains such as ride sharing. For instance, after drivers log in to begin a work session, ride-sharing systems keep track of cars, customers, and ride locations. Such systems enable accurate payments to drivers and create work histories that are shared among managers, drivers, and clients. In many other work domains, such as home care, delivery, farm work, and child care, transparent collaborative information systems do not exist, leaving work environments open to inaccurate compensation, conflict over work requirements or behavior, or even exploitation. This project will examine the needs of workers, employers, and managers for collaborative and shared reporting, and opportunities for innovative technology to create such systems. The project will lead to a fundamental understanding of collaboration in work information for traditional and new forms of work that currently lack accurate and transparent measures of the work hours, effort, or performance on which compensation is based.

This project requires fundamental research and application development in three potential key intervention areas: (1) technologies to enable worker education concerning worker and employer rights and responsibilities; (2) systems that could collect shared work data for workers, employers, and managers while protecting individual privacy and confidentiality; and (3) empirical evaluations to assess effectiveness as well as understand potential risks or undesirable indirect consequences of work-related data collection. Researchers will prototype and test these systems mainly in work environments such as farm work, custodial work, and restaurant service. The project will lead to a better understanding of the potential for technology to help create better jobs for workers, and aid managers and employers in creating responsive, transparent, and equitable business and work environments.

Learn more about the grant

CHIPS: Computer High: Informatics Project for Success

Funding Source: Indianapolis Public Schools (2015)
Principal Investigator: Steve Mannheimer

The CHIPS program delivers a series of hands-on/brains-on, after-school learning sessions for Harshman Middle School students. The CHIPS team is composed of eight IPS teachers, four instructors from the School of Informatics and Computing (SoIC) at IUPUI, and three SoIC students. CHIPS sessions last two hours and are held once a week (currently on Mondays) at Harshman. In this first year of the CHIPS project, the SoIC teachers have led the way, designing and delivering weekly learning sessions, with IPS teachers serving in a support role. In the CHIPS project, students learn basic concepts in computer science and informatics. Informatics may be easily understood as applied computer science.

Help for Cancer Caregivers

Funding Source: Cancer Caregivers Action Network (2014)
Principal Investigator: Skip Comer

Help for Cancer Caregivers is an interactive, web-based support tool for caregivers of cancer patients.  Through this project, the interactive tool has undergone design and database structure revisions.  IU was primarily responsible for syncing the tracking feature to the new Near Space design with HFCC 2.0.

From Critique to Collaboration: A Fundamental Rethinking of Computerized Clinical Alerts

Funding Source: National Science Foundation (2013)
Principal Investigator: Davide Bolchini

The safe prescribing of patient medications via computerized physician order entry (CPOE) routinely relies on drug safety alerts. The most common type of such alerts, drug-drug interaction (DDI) warnings, are a basic form of clinical decision support, but their effectiveness remains surprisingly low: up to 96% of such warnings are ignored by physicians on a daily basis. Non-compliance with DDI warnings leads to an increased risk of prescribing unsafe medications, which may cause serious health complications and even death. The goal of the project is to uncover, demonstrate, and evaluate novel principles for effective alert design that are based on what physicians consider important when taking advice from peers in the context of their clinical activities. First, through a series of formative studies in clinical settings, we are investigating principles that accompany trusted physician-to-physician advice regarding appropriate medication prescribing. Second, we are ideating and demonstrating novel designs for computerized drug safety guidance that elicit physician trust and a sense of collaboration. Finally, through a series of comparative evaluation studies, we are evaluating the impact of the proposed strategies on physician compliance and experience. Learn more

Augmenting Screen-Reader Navigation by Linkless Dialogues

Funding Source: Google Faculty Research Award (2013)
Principal Investigator: Davide Bolchini

In this Google-funded research project, we are exploring advanced web navigation paradigms for blind and visually impaired users that enable them to directly access important sections of a page or site through a simple vocabulary of shortcut commands. To accomplish this goal, we are conducting formative studies with blind and visually impaired users to characterize the navigation problems they face when using screen readers to browse large-scale, information-intensive web applications. We are then using this knowledge to introduce, prototype, and empirically evaluate novel approaches to screen-reader navigation that speed up information finding and exploratory browsing, both at the page level and at the site level. Learn more

Audeme Games and Aural Semantics

Funding Source: Google Faculty Research Award (2011)
Principal Investigator: Steve Mannheimer

Google LogoOur project will expand understanding of the semantic and cognitive value of non-speech sound, an underutilized platform for cognition and semantic meaning. Based on our years of research with the Indiana School for the Blind and Visually Impaired (ISBVI), we believe that very short (2-5 sec.) collages of sound effects (SFX) and music (called “audemes”) function as powerful mnemonic prompts that improve student recall of complex educational content, even after 5 months. This power leverages the intuitive human ability to recognize (i.e., to remember) the world as a “soundscape” of aural signs, e.g., the whistling teakettle or the chug-chugging of a large engine. Our one-year project will establish the effectiveness of audemes and audeme sequences in enhancing K-12 textbooks. The project will begin by using natural language processing to distill a glossary of semantically powerful words and phrases from existing K-12 textbooks.
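The glossary-distillation step described above could take many forms; as one illustrative sketch (not the project's actual pipeline), a TF-IDF-style ranking over textbook passages surfaces terms that are distinctive to each passage and therefore candidates for audeme pairing. The passages and function names below are hypothetical examples, using only the Python standard library.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_glossary(documents, top_n=3):
    """Rank each document's terms by TF-IDF and return the top
    candidates per document -- a crude stand-in for distilling
    semantically powerful words from textbook passages."""
    tokenized = [tokenize(d) for d in documents]
    n_docs = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    glossaries = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Term frequency weighted by inverse document frequency:
        # common-everywhere words (high df) score near zero.
        scores = {
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        }
        top = sorted(scores, key=scores.get, reverse=True)[:top_n]
        glossaries.append(top)
    return glossaries

# Toy "textbook" passages (hypothetical examples).
passages = [
    "The water cycle moves water through evaporation and condensation.",
    "Plants use photosynthesis to turn sunlight into chemical energy.",
    "Evaporation increases when sunlight heats the water surface.",
]
for glossary in tfidf_glossary(passages):
    print(glossary)
```

A production pipeline would add stop-word filtering, stemming, and multi-word phrase detection, but the core idea is the same: score terms by how strongly they characterize a passage relative to the rest of the corpus.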

Audemes, Metaphors and Aural Games: A Pathways Project to Make STEM Engaging for the Blind and Visually Impaired

Funding Source: National Science Foundation (2011)
Principal Investigator: Steve Mannheimer

NSF LogoThe project investigates the design, development, and dissemination of metaphoric aural sound symbols (audemes), an audeme dictionary, and riddle-based audeme games to teach scientific concepts to 75-100 students who are blind and visually impaired (BVI). The project addresses several research questions: How do audemes and audeme sequences function as metaphors for STEM concepts? Which audeme game structures and strategies work best to engage BVI students? How do audemes and audeme games impact STEM education? The audeme-to-concept-to-audeme dictionary will be built from state standards and state-approved science textbooks, with input from teachers and students. The team will also examine secondary words associated with the science concepts by mining textbooks, identifying tertiary concepts, and establishing a preliminary dictionary of audemes. A team of education experts, students, and professionals will design the audemes through multiple iterations. Learn more

Online Modules for Family Voices

Funding Source: Family Voices, Inc. (2011)
Principal Investigator: Joe Defazio

In this project, Dr. Joe Defazio partnered with Dr. Stephen Viewhig to create a series of online training modules for Family Voices. Dr. Defazio and two graduate students from the Media Arts and Science program worked with Dr. Viewhig and Family Voices to produce, test, and launch the Family Leadership online resource portal. The team built the web portal following the “Functional and Content Specifications” document provided by Family Voices of Indiana and produced two training videos based on content received from the organization. The team also recruited stakeholders from Family Voices to participate in usability tests during the design cycle.

Navigating the Aural Web

Funding Source: National Science Foundation (2010)
Principal Investigator: Davide Bolchini

NSF LogoThis project aims to establish novel design strategies for the aural navigation of complex web information architectures, where users exclusively or primarily listen to, rather than look at, content and navigational prompts. We are iteratively creating and refining aural design solutions for listening-based back and history navigation and for advanced aural browsing in large collections. We are then evaluating the impact of these navigation strategies on the user experience. We are exploring the potential and limits of aural navigation paradigms to enhance the effectiveness of web navigation by performing a series of evaluation studies involving visually impaired participants using screen readers (at the Indiana School for the Blind in Indianapolis) and sighted participants using mobile devices. Learn more

Creatures Classified! An exploration of cataloging creatures across the galaxy

Funding Source: UC Irvine (HASTAC.MacArthur Foundation) (2010)
Principal Investigator: Mathew Powers

Students train to become intergalactic speciologists, beginning their training on an Earth-type planet. Their starting mission is to find and classify 20 species of animals from a variety of environments, identifying each by kingdom, phylum, and so on. Armed with a science field journal, players navigate terrain, avoid hazards and threatening creatures, and collect the data needed to complete the mission. Players classify species based on the evolutionary and physical characteristics of each creature. As players progress through their training, they complete missions on different planets, with increasingly complex creatures to identify, classify, and interact with; players visit eight unique planets in total. Each planet features informational buoys that arm players with information about how fragile balanced ecosystems are and how to preserve and protect their own environment.

The Acoustic User Interface for the Blind

Funding Source: Nina Mason Pulliam Charitable Trust (2007)
Principal Investigator: Steve Mannheimer

Nina Mason Trust LogoProf. Steve Mannheimer and his project team developed an acoustic user interface to enable students at the Indiana School for the Blind and Visually Impaired to better access, navigate, and manage educational content delivered through the Internet or personal computers. The interface was conceptualized and evaluated through a long-term collaboration with the students and staff of the ISBVI on computer workstations equipped with touch screens. User testing and evaluation data gained in the process were used to substantiate scholarly publications and future grant applications to other public and private agencies.