Daniel Bauer’s research in Natural Language Processing aims to develop systems that can accurately interpret natural language in multimodal environments and in linguistic discourse. Bauer’s focus is on the semantics of natural language and on how to efficiently translate between surface text, syntax, and semantics. His work uses deep, formal representations of language meaning. Such representations can be helpful in applications ranging from machine translation to natural language user interfaces, and will eventually lead to more intelligent language processing systems.
Bauer’s research concentrates on syntactic and semantic parsing. His work combines machine learning techniques with linguistically inspired formal models of the relationship between text, syntax, and semantics. It focuses on learning such models automatically, as well as on efficient algorithms for inferring good semantic interpretations. One particular model uses hyperedge replacement graph grammars to construct meaning representations. Bauer is also interested in language that is “grounded” in other modalities, such as formal knowledge bases, 3D scenes, vision, and robotics. He investigates how the relationship between language and the objects or concepts it refers to can be modeled and used in NLP systems.