Research

Biologically inspired computer vision

A hierarchical neuronal network
Computer vision researchers often try to enable artificial systems to accomplish tasks that seem effortless for living beings. But the foundations of the most popular and effective computer vision techniques lie in the domains of mathematics, statistics, and probability. This project aims to develop a novel architecture for supporting biologically inspired vision. Much of the complexity of biological vision systems comes from their vastly interconnected neurophysiological structures, and simulating that on computers at a suitable level of abstraction, while maintaining biological plausibility, is our primary challenge. [viz]
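
To make the idea of a hierarchy concrete, the sketch below (NumPy only, not our actual architecture) shows two neuron-like stages: a layer of units with small oriented receptive fields, followed by a layer that pools over them for local position invariance. The kernels, layer sizes, and pooling scheme are illustrative placeholders.

    import numpy as np

    def simple_layer(image, kernels):
        """Local filtering: each unit responds to a small patch of its input,
        loosely analogous to simple cells with oriented receptive fields."""
        h, w = image.shape
        k = kernels.shape[1]
        out = np.zeros((len(kernels), h - k + 1, w - k + 1))
        for i, kern in enumerate(kernels):
            for y in range(h - k + 1):
                for x in range(w - k + 1):
                    out[i, y, x] = np.sum(image[y:y + k, x:x + k] * kern)
        return np.maximum(out, 0)          # half-wave rectification

    def complex_layer(maps, pool=2):
        """Max pooling over local neighborhoods: a rough stand-in for complex
        cells that gain position invariance by pooling over simple cells."""
        c, h, w = maps.shape
        return (maps[:, :h - h % pool, :w - w % pool]
                .reshape(c, h // pool, pool, w // pool, pool).max(axis=(2, 4)))

    # Two oriented edge detectors as toy receptive fields.
    kernels = np.array([[[1, 0, -1]] * 3,                       # vertical edge
                        [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]])  # horizontal edge

    image = np.random.rand(16, 16)                   # stand-in input image
    s1 = simple_layer(image, kernels)                # feature extraction
    c1 = complex_layer(s1)                           # local invariance
    print(s1.shape, c1.shape)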

CAPTIVE: A Cube with Augmented Physical Tools

Cube seen from the user's point of view with a model of the human heart augmented inside
High-dimensional information has been of interest in visualization research for decades. Interaction techniques for performing different tasks with such information, such as selecting, exploring, and filtering, provide a rich avenue for research. We present CAPTIVE, a desktop see-through augmented reality system designed to enable tangible bimanual interaction with three-dimensional data. We use a wireframe cube with colored corners for tracking, thus allowing the interior volume to be used as a constrained space for physical exploration. [viz]
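
As an illustration of how such a tracked prop can drive the augmentation, the sketch below (using OpenCV, and not our actual tracking pipeline) recovers the cube's pose from the image positions of whichever colored corners were detected in a frame; the corner colors serve as corner identities. The edge length, inputs, and function name are placeholder assumptions.

    import numpy as np
    import cv2

    # Corner coordinates of the cube in its own frame; the edge length is a
    # placeholder value, not the dimensions of the actual prop.
    EDGE = 0.10
    CUBE_CORNERS = np.array([[x, y, z] for x in (0, EDGE)
                                       for y in (0, EDGE)
                                       for z in (0, EDGE)], dtype=np.float32)

    def estimate_cube_pose(image_points, corner_ids, camera_matrix, dist_coeffs):
        """Recover the cube's rotation and translation relative to the camera.

        image_points : (N, 2) float32 pixel positions of detected corners
        corner_ids   : indices into CUBE_CORNERS; the color of each detected
                       corner tells us which physical corner it is
        """
        if len(image_points) < 4:            # need at least four correspondences
            return None
        object_points = CUBE_CORNERS[corner_ids]
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_EPNP)
        return (rvec, tvec) if ok else None  # axis-angle rotation, translation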

Low-level analytics models of cognition

A stylized view of a concentration game board used in one of the studies
Cognitive state and situational context affect which microstrategies are used to complete a given task. We use this causal link as a basis for gauging mental state by observing low-level motor movements as reflected in input device usage (mouse, keyboard, etc.). Spatio-temporal patterns in these streams of data help us identify different usage conditions (e.g., playing for speed vs. accuracy in a casual game), and contrast normal behavior with dishonest and deceptive behavior. Cognitive models are then used to explain the strategies in use that might produce the observed patterns. [viz]
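
The kind of low-level signal we work with can be summarized as in the sketch below, which condenses a stream of timestamped mouse samples into a handful of spatio-temporal features (speed, pauses, path straightness). The specific features and thresholds here are illustrative, not the ones used in our studies.

    import numpy as np

    def mouse_features(events):
        """Summarize a stream of (timestamp, x, y) mouse samples into a few
        spatio-temporal features: speed, pauses, and path straightness."""
        t, x, y = np.asarray(events, dtype=float).T
        dt = np.diff(t)
        dist = np.hypot(np.diff(x), np.diff(y))
        speed = dist / np.where(dt > 0, dt, np.nan)
        path_len = dist.sum()
        direct = np.hypot(x[-1] - x[0], y[-1] - y[0])
        return {
            "mean_speed": np.nanmean(speed),
            "speed_var": np.nanvar(speed),
            "pause_ratio": np.mean(dt > 0.25),   # fraction of long gaps
            "straightness": direct / path_len if path_len else 1.0,
        }

    # Toy trace: (seconds, x, y).  Real traces come from logged input events.
    trace = [(0.00, 10, 10), (0.05, 14, 12), (0.10, 25, 20),
             (0.45, 26, 21), (0.50, 40, 35)]
    print(mouse_features(trace))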

Accessible flowcharts

A flowchart labeled with detected text regions
Diagrammatic representations are an effective way of highlighting non-linear structures that exist within processes and hierarchies. Unfortunately, such representations are almost completely inaccessible to persons with vision impairment (PWVIs). Most diagrams published on the web are in rasterized image formats; those in vector formats are optimized for display purposes, including layout and visual style information, making it difficult for a PWVI to extract semantically relevant information from them. Printed books are another source of rasterized diagrams. Modern OCR products may be able to extract text content from diagrams, but recovering structural information is still non-trivial. This project focuses on parsing diagrams to generate a structured representation of the content, and then presenting it using novel multi-modal interaction techniques (currently we limit our scope to flowcharts).
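
A minimal sketch of the kind of structured representation we are after, and of one way it could be linearized for a screen reader, is shown below. The node fields, traversal order, and example flowchart are hypothetical placeholders, not our parsing output format.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One flowchart element after parsing: its detected shape, its text,
        and its outgoing edges as (label, target Node) pairs."""
        kind: str
        text: str
        edges: list = field(default_factory=list)

    def describe(node, depth=0, visited=None):
        """Linearize the graph into lines a screen reader could speak.
        This is a naive depth-first walk; cycles are cut at revisited nodes."""
        visited = visited or set()
        if id(node) in visited:
            return [f"{'  ' * depth}(back to: {node.text})"]
        visited.add(id(node))
        lines = [f"{'  ' * depth}{node.kind}: {node.text}"]
        for label, target in node.edges:
            lines.append(f"{'  ' * depth}  if {label}:" if label else
                         f"{'  ' * depth}  then:")
            lines += describe(target, depth + 2, visited)
        return lines

    # Toy flowchart: start -> decision -> two branches, one looping back.
    stop  = Node("terminator", "Stop")
    retry = Node("process", "Fix errors")
    check = Node("decision", "Tests pass?", [("yes", stop), ("no", retry)])
    start = Node("terminator", "Start", [("", check)])
    retry.edges.append(("", check))

    for line in describe(start):
        print(line)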

CAVIAR: Computer-vision Assisted Vibrotactile Interface for Accessible Reaching

Overview diagram of the CAVIAR system
Computers can make the everyday world more accessible to PWVIs in many different ways. One such possibility, explored in this project, is to help users reach objects within their peripersonal space. Using computer vision for detecting and recognizing objects, and a vibrotactile device for guidance, we are able to demonstrate an end-to-end system that can be used in day-to-day settings. We use an Android smartphone for all our processing and a custom-built wristband with vibration motors that connects wirelessly to the phone. The technology components used are themselves widely available, and the algorithms well established, but orchestrating them into a coherent system has been a challenge for us. The computing capability required on mobile devices for such an application to become practical is expected to be available in the very near future, if it is not already. Refinement in design can help make such a system minimally invasive, yet highly useful.
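
The guidance step can be sketched roughly as below: each frame, the offset between the detected hand and the target object is mapped to one of the wristband's motors and an intensity. The motor names, deadzone, and scaling are placeholder assumptions, not the parameters of our prototype.

    def guidance_command(hand, target, deadzone=20):
        """Map the pixel offset between the detected hand and the target
        object to a vibration motor and an intensity in [0, 1].
        Coordinates are in the camera frame; image y grows downwards."""
        dx = target[0] - hand[0]
        dy = target[1] - hand[1]
        if abs(dx) <= deadzone and abs(dy) <= deadzone:
            return "all", 1.0                 # strong pulse: target reached
        if abs(dx) > abs(dy):                 # dominant horizontal error
            motor, err = ("right" if dx > 0 else "left"), abs(dx)
        else:                                 # dominant vertical error
            motor, err = ("up" if dy < 0 else "down"), abs(dy)
        # Intensity grows with distance, capped to the motor's usable range.
        return motor, min(1.0, err / 200.0)

    # Each frame: the object detector gives the target box center, the hand
    # tracker gives the hand center; the command is sent to the wristband.
    print(guidance_command(hand=(320, 240), target=(450, 250)))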

Pointing at smart objects outdoors

A test subject pointing with a custom measurement glove on
With the increasing use of geo-spatial information for locating persons and objects, new applications for cellphones and other mobile devices are emerging. One class of such applications falls under the concept of augmented reality (AR). AR applications often require users to point a device, and present relevant information depending on the direction of pointing. While pointing is a common human action, its dynamic profile and other properties make it both suitable and difficult for mobile applications to use as input. In this project, we study pointing characteristics using a custom-built measurement glove and an Android smartphone, and demonstrate how these results can be used to design a system of smart objects that respond in some fashion when pointed at.
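
A simplified version of the underlying computation is sketched below: the bearing from the user's GPS position to each smart object is compared against the device's compass azimuth, and an object is selected if it lies within an angular tolerance that stands in for the measured spread of natural pointing gestures. The coordinates, tolerance, and object names are made-up examples.

    import math

    def bearing(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2),
        in degrees clockwise from true north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360

    def pointed_object(user, azimuth, objects, tolerance=15):
        """Return the object whose bearing from the user is closest to the
        device azimuth, if within `tolerance` degrees."""
        best, best_err = None, tolerance
        for name, (lat, lon) in objects.items():
            err = abs((bearing(*user, lat, lon) - azimuth + 180) % 360 - 180)
            if err <= best_err:
                best, best_err = name, err
        return best

    objects = {"fountain": (40.7130, -74.0055), "statue": (40.7125, -74.0065)}
    print(pointed_object(user=(40.7128, -74.0060), azimuth=60, objects=objects))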