Sessions take place in auditorium 10 unless otherwise indicated.

Chair: Shuly Wintner

09:30-10:30
Johan Bos, U Edinburgh, UK
Invited talk: Generating speech recognition grammars with compositional semantics from unification grammars

In this talk I will introduce a method to compile unification grammars into speech recognition packages. I will focus on how to transfer semantic operations (such as functional application), and in particular discuss the difficulties that arise for left-recursive productions. Used in practical applications, the resulting speech grammars will associate general, domain-independent meaning representations with recognised strings, making a subsequent traditional parsing step redundant. I will conclude by presenting some promising practical results.
11:00-11:30
Mike Daniels, Ohio State U, USA
Detmar Meurers, Ohio State U, USA
Improving the efficiency of parsing with discontinuous constituents

We discuss a generalization of Earley's algorithm to grammars licensing discontinuous constituents of the kind proposed by the so-called linearization approaches in Head-Driven Phrase Structure Grammar. We show that one can replace the standard indexing on the string position by bitmasks that act as constraints over possible coverage bitvectors. This improves the efficiency of edge access and reduces the number of edges by constraining prediction to those grammar rules which are compatible with known linearization properties. The resulting parsing algorithm does not have to process the right-hand side categories in the order in which they cover the string, and so one can obtain a head-driven strategy simply by reordering the right-hand side categories of the rules. The resulting strategy generalizes head-driven parsing in that it also permits the ordering of non-head categories.
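The core idea of coverage bitvectors can be sketched in a few lines. This is a toy illustration under my own assumptions, not the authors' implementation: each edge records which word positions it covers as a bitvector, edges combine only when their coverage is disjoint (so constituents need not be adjacent), and a rule's bitmask constraint filters out incompatible edges before they are used.

```python
# Toy sketch: coverage bitvectors over a 6-word string, with bitmask
# constraints filtering which edges a rule may use (assumed example,
# not the Daniels/Meurers implementation).

def bits(*positions):
    """Build a coverage bitvector from a set of word positions."""
    v = 0
    for p in positions:
        v |= 1 << p
    return v

def compatible(cov_a, cov_b):
    """Two edges may combine only if their coverage is disjoint --
    discontinuous constituents need not cover adjacent positions."""
    return cov_a & cov_b == 0

def satisfies(cov, required_mask, forbidden_mask):
    """Check an edge's coverage against a rule's bitmask constraint:
    it must include all required positions and no forbidden ones."""
    return cov & required_mask == required_mask and cov & forbidden_mask == 0

# An edge covering words 0 and 3 (a discontinuous constituent) ...
edge = bits(0, 3)
# ... can combine with an edge covering words 1 and 2:
print(compatible(edge, bits(1, 2)))       # True
# ... but not with one that also covers word 3:
print(compatible(edge, bits(3, 4)))       # False
# A constraint requiring word 0 and forbidding word 5 accepts this edge:
print(satisfies(edge, bits(0), bits(5)))  # True
```

Because compatibility is a couple of machine-word operations, such checks are cheap enough to run on every candidate edge during prediction.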
11:30-12:00
Katrin Erk, Saarland U, Germany
Geert-Jan M. Kruijff, Saarland U, Germany
A constraint-programming approach to parsing with resource-sensitive categorial grammar

Parsing with resource-sensitive categorial grammars is an NP-complete problem. The traditional approach to parsing with such grammars is based on generate-and-test and cannot avoid this high worst-case complexity. This paper proposes an alternative approach, based on constraint programming: given a grammar, constraints formulated on an abstract interpretation of the grammar's logical structure are used to prune the search space during parsing. The approach is provably sound and complete. Calculations of its complexity show significant potential improvements in efficiency.
12:00-12:30
Chris Fox, U Essex, UK
Shalom Lappin, King's College London, UK
Carl Pollard, Ohio State U, USA
First-order, Curry-typed logic for natural language semantics

The paper presents Property Theory with Curry Typing (PTCT), in which the language of terms and well-formed formulae are joined by a language of types. In addition to supporting fine-grained intensionality, the basic theory is essentially first-order, so that implementations using the theory can apply standard first-order theorem-proving techniques. Some extensions to the type theory are discussed, including the possibility of adding type polymorphism.
14:00-14:30
Barbara Gawronska, U Skövde, Sweden
Employing cognitive notions in multilingual summarization of news reports

The paper presents an approach to automatic text understanding inspired by speech act theory and cognitive semantics, especially by the notion of `mental spaces' (Fauconnier 1985), and by Pustejovsky's (1991a, 1991b, 1995) notion of `qualia' and his definition of formal and telic hyponymy. This approach is employed in an experimental system for understanding of news reports and multilingual generation of news summaries. The system, implemented in Prolog and Delphi, aims at analyses of English news reports in the domain of world news (military conflicts, terrorist attacks, natural disasters) and generation of summaries in Swedish, Danish, and Polish. The paper focuses on the understanding component and on the possibility of using WordNet as the main lexical knowledge resource for English. An appropriate semantic analysis and a successful summarization of English input texts require some modifications of the hyper-/hyponymy and holo-/meronymy relations that are encoded in WordNet. A combination of a cognitive analysis of certain lexical and phrasal categories (speech act phrases, epistemic phrases, prepositions) with qualia-based re-formulations of WordNet hierarchies is proposed and tested.
14:30-15:00
Marilisa Amoia, Saarland U, Germany
Claire Gardent, LORIA Nancy, France
Stephan Thater, Saarland U, Germany
Using set constraints to generate distinguishing descriptions

Algorithms such as (van Deemter 2000), which generate distinguishing descriptions for sets of individuals using positive, negative and disjunctive properties, do not always generate a minimal description. In this paper, we show that such an approach is cognitively inappropriate in that the descriptions produced might be unnecessarily long and ambiguous and/or epistemically redundant. We then present an alternative, constraint-based algorithm which does produce minimal descriptions and compare its performance with the incremental algorithm.
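The notion of a minimal distinguishing description can be made concrete with a small example. The domain, properties, and brute-force search below are my own assumptions for illustration, not the constraint-based algorithm of the paper: a description is a set of properties whose extension is exactly the target set, and the smallest such set is a minimal description.

```python
# Toy sketch (assumed example): a minimal distinguishing description is
# the smallest property set whose extension equals the target set.
from itertools import combinations

DOMAIN = {
    "e1": {"dog", "small", "black"},
    "e2": {"dog", "small", "white"},
    "e3": {"cat", "small", "black"},
}

def extension(props):
    """All entities that have every property in props."""
    return {e for e, ps in DOMAIN.items() if props <= ps}

def minimal_description(target):
    """Smallest property set whose extension is exactly the target,
    found by brute force over property subsets of increasing size."""
    all_props = sorted(set().union(*DOMAIN.values()))
    for k in range(len(all_props) + 1):
        for props in combinations(all_props, k):
            if extension(set(props)) == target:
                return set(props)
    return None  # target is not describable in this domain

print(minimal_description({"e1", "e2"}))  # {'dog'}
print(minimal_description({"e3"}))        # {'cat'}
```

An incremental, greedy strategy may instead accumulate properties one at a time (e.g. picking "small" first, which rules nothing out here), which is how non-minimal descriptions arise.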
15:00-15:30
Balder ten Cate, U Amsterdam, The Netherlands
Chung-chieh Shan, Harvard U, USA
Question answering: From partitions to Prolog

We implement Groenendijk and Stokhof's partition semantics of questions in a simple question answering algorithm. The algorithm is sound, complete, and based on tableau theorem proving. The algorithm relies on a syntactic characterization of answerhood: any answer to a question is equivalent to some formula built up only from instances of the question. We prove this characterization by translating the logic of interrogation to classical predicate logic and applying Craig's interpolation theorem.
16:00-16:30
Henning Christiansen, U Roskilde, Denmark
Abductive language interpretation as bottom-up deduction

A translation of abductive language interpretation problems into a deductive form is proposed and shown to be correct. No meta-level overhead is involved in the resulting formulas, which can be evaluated by bottom-up deduction, e.g., by considering them as Constraint Handling Rules. The problem statement may involve background theories with integrity constraints, and minimal contexts are produced that can explain a given discourse.
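The ingredients of such an abduction problem can be illustrated with a toy propositional example. The rules, abducibles, and integrity constraint below are my own assumptions, not the paper's translation or its CHR encoding: an explanation is a minimal set of abducible assumptions that, together with the background theory, derives the observation without violating an integrity constraint.

```python
# Toy sketch (assumed example): minimal abductive explanations for an
# observation, given a propositional Horn background theory and one
# integrity constraint.
from itertools import combinations

RULES = {               # head: list of alternative rule bodies
    "wet_grass": [("rain",), ("sprinkler",)],
    "rain": [("clouds",)],
}
ABDUCIBLES = {"clouds", "sprinkler"}
INTEGRITY = [{"clouds", "sprinkler"}]   # may not be assumed together

def derivable(fact, assumptions):
    """Is fact derivable from the rules plus the assumed abducibles?"""
    if fact in assumptions:
        return True
    return any(all(derivable(b, assumptions) for b in body)
               for body in RULES.get(fact, []))

def explanations(observation):
    """Enumerate minimal, integrity-respecting explanations, smallest first."""
    found = []
    for k in range(len(ABDUCIBLES) + 1):
        for subset in combinations(sorted(ABDUCIBLES), k):
            s = set(subset)
            if any(ic <= s for ic in INTEGRITY):
                continue  # violates an integrity constraint
            if derivable(observation, s) and not any(f <= s for f in found):
                found.append(s)  # keep only subset-minimal explanations
    return found

print(explanations("wet_grass"))  # [{'clouds'}, {'sprinkler'}]
```

The paper's point is that this kind of search need not be run at the meta-level: the translated formulas can be evaluated directly by bottom-up deduction.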
Chair: Shuly Wintner
Room: Restaurant TBA