Recent Advances in Parsing Technology

Author: H. Bunt
Publisher: Springer Science & Business Media
Total Pages: 436
Release: 1996-08-31
Genre: Computers
ISBN: 079234152X

In Marcus (1980), deterministic parsers were introduced. These are parsers which satisfy the conditions of Marcus's determinism hypothesis, i.e., they are strongly deterministic in the sense that they do not simulate non-determinism in any way. In later work (Marcus et al. 1983) these parsers were modified to construct descriptions of trees rather than the trees themselves. The resulting D-theory parsers, by working with these descriptions, are capable of capturing a certain amount of ambiguity in the structures they build. In this context, it is not clear what it means for a parser to meet the conditions of the determinism hypothesis. The object of this work is to clarify this and other issues pertaining to D-theory parsers and to provide a framework within which these issues can be examined formally. Thus we have a very narrow scope. We make no arguments about the linguistic issues D-theory parsers are meant to address, their relation to other parsing formalisms, or the notion of determinism in general. Rather, we focus on issues internal to D-theory parsers themselves.

New Developments in Parsing Technology

Author: H. Bunt
Publisher: Springer Science & Business Media
Total Pages: 408
Release: 2006-01-27
Genre: Computers
ISBN: 1402022956

Parsing can be defined as the decomposition of complex structures into their constituent parts, and parsing technology as the methods, the tools, and the software to parse automatically. Parsing is a central area of research in the automatic processing of human language. Parsers are being used in many application areas, for example question answering, extraction of information from text, speech recognition and understanding, and machine translation. New developments in parsing technology are thus widely applicable. This book contains contributions from many of today's leading researchers in the area of natural language parsing technology. The contributors describe their most recent work and a diverse range of techniques and results. This collection provides an excellent picture of the current state of affairs in this area. This volume is the third in a series of such collections, and its breadth of coverage should make it suitable both as an overview of the current state of the field for graduate students, and as a reference for established researchers.

Parsing Techniques

Author: Dick Grune
Publisher: Springer Science & Business Media
Total Pages: 677
Release: 2007-10-29
Genre: Computers
ISBN: 0387689540

This second edition of Grune and Jacobs’ brilliant work presents new developments and discoveries that have been made in the field. Parsing, also referred to as syntax analysis, has been and continues to be an essential part of computer science and linguistics. Parsing techniques have grown considerably in importance, both in computer science, where advanced compilers often use general CF parsers, and in computational linguistics, where such parsers are the only option. They are used in a variety of software products including Web browsers, interpreters in computer devices, and data compression programs; and they are used extensively in linguistics.
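
The claim that general CF parsers are in everyday use can be made concrete with a small tabular recognizer. The sketch below is not taken from Grune and Jacobs; it is a minimal CYK recognizer in Python, assuming a grammar already in Chomsky normal form, and the function name and grammar encoding are our own.

    # Minimal CYK recognizer for a context-free grammar in Chomsky normal form.
    # binary_rules maps (B, C) -> set of A for rules A -> B C;
    # terminal_rules maps a word -> set of A for rules A -> word.
    def cyk_recognize(words, binary_rules, terminal_rules, start="S"):
        n = len(words)
        # table[i][j] holds the nonterminals that derive words[i:j]
        table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            table[i][i + 1] = set(terminal_rules.get(w, ()))
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):            # split point
                    for b in table[i][k]:
                        for c in table[k][j]:
                            table[i][j] |= binary_rules.get((b, c), set())
        return start in table[0][n]

    # Toy grammar: S -> NP VP, VP -> V NP, NP -> "we" | "parsers", V -> "use"
    binary = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
    terminal = {"we": {"NP"}, "parsers": {"NP"}, "use": {"V"}}
    print(cyk_recognize("we use parsers".split(), binary, terminal))   # True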

Dependency Parsing

Author: Sandra Kübler
Publisher: Morgan & Claypool Publishers
Total Pages: 128
Release: 2009
Genre: Computers
ISBN: 1598295969

Dependency-based methods for syntactic parsing have become increasingly popular in natural language processing in recent years. This book gives a thorough introduction to the methods that are most widely used today. After an introduction to dependency grammar and dependency parsing, followed by a formal characterization of the dependency parsing problem, the book surveys the three major classes of parsing models that are in current use: transition-based, graph-based, and grammar-based models. It continues with a chapter on evaluation and one on the comparison of different methods, and it closes with a few words on current trends and future prospects of dependency parsing. The book presupposes a knowledge of basic concepts in linguistics and computer science, as well as some knowledge of parsing methods for constituency-based representations. Table of Contents: Introduction / Dependency Parsing / Transition-Based Parsing / Graph-Based Parsing / Grammar-Based Parsing / Evaluation / Comparison / Final Thoughts
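
Of the three classes of models surveyed, the transition-based one is the simplest to illustrate in a few lines. The sketch below is not code from the book: it runs the arc-standard transition system (SHIFT, LEFT-ARC, RIGHT-ARC) in Python, driven by a static oracle over a gold projective tree, so it demonstrates the transitions themselves rather than a trained parser; the function name and list-based representation are our own.

    # Arc-standard transition system (SHIFT, LEFT-ARC, RIGHT-ARC) driven by a
    # static oracle over a gold, projective dependency tree.
    # gold_heads[i - 1] is the head of token i; 0 denotes the artificial root.
    def arc_standard_oracle_parse(gold_heads):
        n = len(gold_heads)
        stack, buffer, arcs = [0], list(range(1, n + 1)), []
        gold_deps = {i: sum(1 for h in gold_heads if h == i) for i in range(n + 1)}
        built_deps = {i: 0 for i in range(n + 1)}
        while buffer or len(stack) > 1:
            if len(stack) >= 2:
                s1, s0 = stack[-2], stack[-1]
                # LEFT-ARC: s1 depends on s0 and has already collected its dependents
                if s1 != 0 and gold_heads[s1 - 1] == s0 and built_deps[s1] == gold_deps[s1]:
                    arcs.append((s0, s1))
                    built_deps[s0] += 1
                    stack.pop(-2)
                    continue
                # RIGHT-ARC: s0 depends on s1 and has already collected its dependents
                if gold_heads[s0 - 1] == s1 and built_deps[s0] == gold_deps[s0]:
                    arcs.append((s1, s0))
                    built_deps[s1] += 1
                    stack.pop()
                    continue
            stack.append(buffer.pop(0))              # SHIFT (requires a projective tree)
        return arcs

    # "economic news had little effect" with gold heads 2, 3, 0, 5, 3:
    print(arc_standard_oracle_parse([2, 3, 0, 5, 3]))
    # [(2, 1), (3, 2), (5, 4), (3, 5), (0, 3)]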

Trajectories through Knowledge Space

Author: Lawrence A. Bookman
Publisher: Springer Science & Business Media
Total Pages: 284
Release: 2012-12-06
Genre: Computers
ISBN: 1461527805

As any history student will tell you, all events must be understood within their political and sociological context. Yet science provides an interesting counterpoint to this idea, since scientific ideas stand on their own merit, and require no reference to the time and place of their conception beyond perhaps a simple citation. Even so, the historical context of a scientific discovery casts a special light on that discovery - a light that motivates the work and explains its significance against a backdrop of related ideas. The book that you hold in your hands is unusually adept at presenting technical ideas in the context of their time. On one level, Larry Bookman has produced a manuscript to satisfy the requirements of a PhD program. If that was all he did, my preface would praise the originality of his ideas and attempt to summarize their significance. But this book is much more than an accomplished dissertation about some aspect of natural language - it is also a skillfully crafted tour through a vast body of computational, linguistic, neurophysiological, and psychological research.

Natural Language Processing: The PLNLP Approach

Author: Karen Jensen
Publisher: Springer Science & Business Media
Total Pages: 326
Release: 2012-12-06
Genre: Computers
ISBN: 1461531705

Natural language is easy for people and hard for machines. For two generations, the tantalizing goal has been to get computers to handle human languages in ways that will be compelling and useful to people. Obstacles are many and legendary. Natural Language Processing: The PLNLP Approach describes one group's decade of research in pursuit of that goal. A very broad coverage NLP system, including a programming language (PLNLP), development tools, and analysis and synthesis components, was developed and incorporated into a variety of well-known practical applications, ranging from text critiquing (CRITIQUE) to machine translation (e.g. SHALT). This book represents the first published collection of papers describing the system and how it has been used. Twenty-six authors from nine countries contributed to this volume. Natural language analysis, in the PLNLP approach, is done in six stages that move smoothly from syntax through semantics into discourse. The initial syntactic sketch is provided by an Augmented Phrase Structure Grammar (APSG) that uses exclusively binary rules and aims to produce some reasonable analysis for any input string. Its 'approximate' analysis passes to the reassignment component, which takes the default syntactic attachments and adjusts them, using semantic information obtained by parsing definitions and example sentences from machine-readable dictionaries. This technique is an example of one facet of the PLNLP approach: the use of natural language itself as a knowledge representation language -- an innovation that permits a wide variety of online text materials to be exploited as sources of semantic information. The next stage computes the intrasentential argument structure and resolves all references, both NP- and VP-anaphora, that can be treated at this point in the processing. Subsequently, additional components, currently not so well developed as the earlier ones, handle the further disambiguation of word senses, the normalization of paraphrases, and the construction of a paragraph (discourse) model by joining sentential semantic graphs. Natural Language Processing: The PLNLP Approach acquaints the reader with the theory and application of a working, real-world, domain-free NLP system, and attempts to bridge the gap between computational and theoretical models of linguistic structure. It provides a valuable resource for students, teachers, and researchers in the areas of computational linguistics, natural language processing, artificial intelligence, and information science.
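
The six-stage organization described above can be pictured as a pipeline in which each stage enriches a shared analysis record. The Python sketch below is purely an architectural illustration with stub stages; every function and field name is invented here and nothing in it comes from PLNLP itself.

    # Architectural sketch of the six analysis stages as a pipeline over a
    # shared record; the stage bodies are invented stubs, not real components.
    from dataclasses import dataclass

    @dataclass
    class Analysis:
        text: str
        syntax: object = None        # initial syntactic sketch (binary rules)
        attachments: object = None   # reassigned attachments
        arguments: object = None     # argument structure and resolved references
        senses: object = None        # disambiguated word senses
        paraphrase: object = None    # normalized paraphrase
        discourse: object = None     # paragraph-level semantic graph

    def syntactic_sketch(a):         a.syntax = ("sketch", a.text);                   return a
    def reassignment(a):             a.attachments = ("adjusted", a.syntax);          return a
    def argument_structure(a):       a.arguments = ("args+refs", a.attachments);      return a
    def sense_disambiguation(a):     a.senses = ("senses", a.arguments);              return a
    def paraphrase_normalization(a): a.paraphrase = ("normalized", a.senses);         return a
    def discourse_model(a):          a.discourse = ("paragraph graph", a.paraphrase); return a

    STAGES = [syntactic_sketch, reassignment, argument_structure,
              sense_disambiguation, paraphrase_normalization, discourse_model]

    def analyze(text):
        analysis = Analysis(text)
        for stage in STAGES:         # each stage enriches the shared record
            analysis = stage(analysis)
        return analysis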

Predicative Forms in Natural Language and in Lexical Knowledge Bases

Author: P. Saint-Dizier
Publisher: Springer Science & Business Media
Total Pages: 381
Release: 2013-03-09
Genre: Language Arts & Disciplines
ISBN: 9401727465

This volume is a selection of papers presented at a workshop entitled Predicative Forms in Natural Language and in Lexical Knowledge Bases organized in Toulouse in August 1996. A predicate is a named relation that exists among one or more arguments. In natural language, predicates are realized as verbs, prepositions, nouns and adjectives, to cite the most frequent ones. Research on the identification, organization, and semantic representation of predicates in artificial intelligence and in language processing is a very active research field. The emergence of new paradigms in theoretical language processing, the definition of new problems and the important evolution of applications have, in fact, stimulated much interest and debate on the role and nature of predicates in natural language. From a broad theoretical perspective, the notion of predicate is central to research on the syntax-semantics interface, the generative lexicon, the definition of ontology-based semantic representations, and the formation of verb semantic classes. From a computational perspective, the notion of predicate plays a central role in a number of applications including the design of lexical knowledge bases, the development of automatic indexing systems for the extraction of structured semantic representations, and the creation of interlingual forms in machine translation.
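
The working definition given above, a predicate as a named relation among one or more arguments that may be realized by different parts of speech, is easy to mirror in a small data structure. The Python sketch below is only an illustration of that definition; the class and role names are our own and do not come from the workshop papers.

    # A predicate as a named relation over one or more arguments, together with
    # the surface category (verb, noun, adjective, preposition) that realizes it.
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class Predicate:
        name: str                  # e.g. "give"
        category: str              # surface realization: "verb", "noun", ...
        arguments: Dict[str, str]  # semantic role -> filler

    # The same relation realized by the verb "gave" and by the noun "gift".
    gave = Predicate("give", "verb",
                     {"agent": "Mary", "theme": "a book", "recipient": "John"})
    gift = Predicate("give", "noun",
                     {"agent": "Mary", "theme": "a book", "recipient": "John"})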

Text- and Speech-Triggered Information Access

Author: Steve Renals
Publisher: Springer
Total Pages: 203
Release: 2003-09-09
Genre: Education
ISBN: 3540451153

This book presents revised versions of the lectures given at the 8th ELSNET European Summer School on Language and Speech Communication held on the Island of Chios, Greece, in summer 2000. Besides an introductory survey, the book presents lectures on data analysis for multimedia libraries, pronunciation modeling for large vocabulary speech recognition, statistical language modeling, very large scale information retrieval, reduction of information variation in text, and a concluding chapter on open questions in research for linguistics in information access. The book gives newcomers to language and speech communication a clear overview of the main technologies and problems in the area. Researchers and professionals active in the area will appreciate the book as a concise review of the technologies used in text- and speech-triggered information access.

Taking Scope

Author: Mark Steedman
Publisher: MIT Press
Total Pages: 325
Release: 2011-11-23
Genre: Language Arts & Disciplines
ISBN: 026230063X

A novel view of the syntax and semantics of quantifier scope that argues for a “combinatory” theory of natural language syntax. In Taking Scope, Mark Steedman considers the syntax and semantics of quantifier scope in interaction with negation, polarity, coordination, and pronominal binding, among other constructions. The semantics is “surface compositional,” in that there is a direct correspondence between syntactic types and operations of composition and types and compositions at the level of logical form. In that sense, the semantics is in the “natural logic” tradition of Aristotle, Leibniz, Frege, Russell, and others who sought to define a psychologically real logic directly reflecting natural language grammar. The book reunites the generative-transformational tradition initiated by Chomsky—which views the formal syntactic component as entirely autonomous—with the older, strongly lexicalist, construction-based tradition, which has sought to define a more linguistically transparent theory of meaning representation. Steedman offers a logical formalism that relates directly to the surface form of language and to the process of inference and proof that it must support. Such a natural logic, although formal by definition, should be allowed to grow organically from attested language phenomena rather than be axiomatized a priori in terms of any standard logic. Steedman also considers the application of natural semantic interpretations to practical natural language processing tasks, emphasizing throughout the elimination of traditional quantifiers from semantic formalism in favor of devices such as Skolem terms and structure-sharing among representations in processing.
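
The final point, trading traditional existential quantifiers for Skolem terms, can be illustrated with the standard logical move: an existentially bound variable is replaced by a term whose arguments are the universally bound variables it depends on. The Python sketch below uses a toy tuple-based representation of our own devising and is not Steedman's formalism.

    # "Every farmer owns a donkey":
    #   with a quantifier:   forall x. farmer(x) -> exists y. donkey(y) & own(x, y)
    #   with a Skolem term:  forall x. farmer(x) -> donkey(sk1(x)) & own(x, sk1(x))
    # Logical forms are nested tuples; skolemize() removes an existential by
    # replacing its variable with a Skolem term over the enclosing universals.
    def skolemize(formula, universals=(), counter=[0]):
        if not isinstance(formula, tuple):
            return formula
        op = formula[0]
        if op == "forall":
            _, var, body = formula
            return ("forall", var, skolemize(body, universals + (var,), counter))
        if op == "exists":
            _, var, body = formula
            counter[0] += 1
            term = ("sk%d" % counter[0],) + universals   # Skolem function of the universals
            return skolemize(substitute(body, var, term), universals, counter)
        return tuple(skolemize(part, universals, counter) for part in formula)

    def substitute(formula, var, term):
        if formula == var:
            return term
        if isinstance(formula, tuple):
            return tuple(substitute(part, var, term) for part in formula)
        return formula

    lf = ("forall", "x", ("->", ("farmer", "x"),
                          ("exists", "y", ("&", ("donkey", "y"), ("own", "x", "y")))))
    print(skolemize(lf))
    # ('forall', 'x', ('->', ('farmer', 'x'),
    #                  ('&', ('donkey', ('sk1', 'x')), ('own', 'x', ('sk1', 'x')))))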