Author

Farhan Faisal

Date of Award

12-2003

Level of Access Assigned by Author

Open-Access Thesis

Degree Name

Master of Science (MS)

Department

Spatial Information Science and Engineering

Advisor

Max J. Egenhofer

Second Committee Member

Anthony Stefanidis

Third Committee Member

Silvia Nittel

Abstract

People typically communicate by pointing, talking, sketching, writing, and typing. Pointing can be used to visualize or exchange information about an object when there is no other mutually understood way of communicating. Despite its proven expressiveness, however, pointing has not yet become a frequently used modality for interacting with computer systems. With the rapid move toward the adoption of mobile technologies, geographic information systems (GISs) have a particular need for advanced forms of interaction that enable users to query the geographic world directly. To enable a pointing-based query system on a handheld device, a number of fundamental technical challenges must be overcome. For such a system to materialize, the device's knowledge base must store models that can serve as surrogates of real-world objects. The computations that match a pointing gesture to such a model, however, assume that (1) the pointing direction coincides with the line-of-sight and (2) the observations of location and direction are precise enough that a computational model will determine the same object the user points at. Neither assumption holds in practice. This thesis, therefore, develops an efficient error compensation model to reduce the discrepancy between the eye's line-of-sight and the pointer's direction. The model is based on a coordinate system centered at the neck and distances measured from neck to eye, neck to shoulder, and shoulder to handheld pointer, together with the pointing direction. An experiment was conducted with a gyro-enhanced sensor and three subjects who pointed at marked targets in a given room. It showed that the error compensation algorithm significantly reduces pointing errors when the arm is outstretched.
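The abstract only names the quantities the compensation model uses, not its formulation. The following minimal sketch illustrates one plausible geometric reading of that description: in a neck-centered coordinate system, the intended object is assumed to lie on the pointer's ray at some range, and the compensated direction is taken from the eye toward that point. The function name, the body offsets, and the assumed target range are all illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def unit(v):
    """Return v scaled to unit length."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def compensate_pointing(pointer_dir, neck_to_eye, neck_to_shoulder,
                        shoulder_to_pointer, target_range=10.0):
    """Estimate the eye's line-of-sight from a hand-held pointer reading.

    All offsets are expressed in a coordinate system centered at the neck.
    pointer_dir         -- direction reported by the pointer's orientation sensor
    neck_to_eye         -- offset from neck to eye (m)
    neck_to_shoulder    -- offset from neck to shoulder (m)
    shoulder_to_pointer -- offset from shoulder to the hand-held pointer (m)
    target_range        -- assumed distance to the target along the pointer ray (m)
    """
    pointer_dir = unit(pointer_dir)
    eye = np.asarray(neck_to_eye, dtype=float)
    pointer = np.asarray(neck_to_shoulder, dtype=float) + \
              np.asarray(shoulder_to_pointer, dtype=float)

    # Assume the intended object lies on the pointer ray at target_range,
    # then re-aim the ray from the eye toward that point.
    target = pointer + target_range * pointer_dir
    return unit(target - eye)

# Example with rough body measurements (meters), arm outstretched forward.
los = compensate_pointing(
    pointer_dir=[1.0, 0.0, 0.0],           # sensor reports "straight ahead"
    neck_to_eye=[0.0, 0.0, 0.15],          # eye ~15 cm above the neck
    neck_to_shoulder=[0.0, 0.20, 0.0],     # shoulder ~20 cm to the side
    shoulder_to_pointer=[0.60, 0.0, 0.0],  # pointer ~60 cm along the arm
)
print(los)  # compensated direction differs slightly from the raw sensor reading
```

The offset between the eye and the pointer makes the raw sensor direction and the line-of-sight diverge most for nearby targets, which is why a range assumption (or an estimate of it) enters the correction.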