Abstract
Much existing work in text-to-scene generation focuses on generating static scenes, which leaves aside entire word classes such as motion verbs. This thesis introduces a system for generating animated visualizations of motion events by integrating dynamic semantics into a formal model of events, resulting in a simulation of an event described in natural language. Visualization, herein defined as a dynamic three-dimensional simulation and rendering that satisfies the constraints of an associated minimal model, provides a framework for evaluating the properties of spatial predicates in real time, but requires the specification of values and parameters that may be left underspecified in the model. Thus, there remains the question of determining what, if any, the “best” values of those parameters are. This research explores a method of using a three-dimensional simulation and visualization interface to determine prototypical values for underspecified parameters of motion predicates, built on a game-engine-based platform that allows the development of semantically grounded reasoning components in areas at the intersection of theoretical reasoning and AI.