preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected. The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, or happening in strict sequence. Rather, it posits that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.
, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known.
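The delayed fix-up described above can be sketched as follows; the instruction format and helper names are invented for illustration and are not taken from any particular compiler.

```python
# Minimal sketch of a one-pass fix-up ("backpatching") scheme for forward
# GOTOs. A forward GOTO leaves a hole in the emitted code; the hole is
# patched once the target label's address becomes known.

def assemble(statements):
    code = []       # emitted instructions, indexed by address
    labels = {}     # label name -> address, filled in as labels are seen
    fixups = []     # (address of incomplete jump, target label)

    for stmt in statements:
        if stmt.startswith("label "):
            labels[stmt.split()[1]] = len(code)
        elif stmt.startswith("goto "):
            target = stmt.split()[1]
            if target in labels:                 # backward GOTO: address known
                code.append(("JUMP", labels[target]))
            else:                                # forward GOTO: leave a hole
                fixups.append((len(code), target))
                code.append(("JUMP", None))
        else:
            code.append(("OP", stmt))

    # Apply the delayed fix-ups once every label's address is known.
    for addr, target in fixups:
        code[addr] = ("JUMP", labels[target])
    return code
```

A forward jump such as `["goto end", "x", "label end", "y"]` is emitted with a placeholder and patched at the end, whereas a backward jump needs no fix-up at all.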
) amongst a potentially unlimited range of possibilities, of which only some are germane to the particular case. So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail but in another language might appear as "Man dog bites" with a reliance on the larger context to distinguish between those two possibilities, if indeed that difference were of concern. It is difficult to prepare formal rules to describe informal behaviour even though it is clear that some rules are being followed.
Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often with its most typical meaning belonging to a different part of speech. For example, in the sentence "The horse raced past the barn fell", "raced" is initially interpreted as a past-tense verb, but in this sentence it functions as part of an adjective phrase. Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader.
language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the
gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood.
The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound.
or analysis, which is working out the implications of the expression just validated and taking the appropriate action. In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars
Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal
Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence, and therefore presents a challenge in identifying syntactic relationship (i.e. "The boy saw the lady with the telescope", in which the
A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (i.e. "The rat the cat the man hit chased ran into the trap".) Sentences with two, or in the most extreme cases three, center embeddings are challenging
Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a
which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be
LR parsers typically have only a few actions after seeing each token. They are shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies) or conflict (does not know whether to shift or reduce).
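As a rough illustration of these actions (not a table-driven LR parser, which would derive each action from a generated parse table), a toy shift-reduce loop for the grammar E → E "+" E | NUM might look like this:

```python
# Toy shift-reduce loop for the grammar  E -> E "+" E | NUM.
# A real LR parser drives shift/reduce decisions from a generated table;
# here the action choice is hard-coded purely for illustration.

def parse(tokens):
    stack, pos = [], 0
    while True:
        if stack[-3:] == ["E", "+", "E"]:
            stack[-3:] = ["E"]            # reduce: pop E + E, form an E
        elif pos < len(tokens):
            tok = tokens[pos]
            pos += 1
            if tok.isdigit():
                stack.append("E")         # shift NUM, then reduce NUM -> E
            elif tok == "+":
                stack.append("+")         # shift the operator
            else:
                return "error"            # no known rule applies
        else:
            # end of input: accept iff everything reduced to a single E
            return "accept" if stack == ["E"] else "error"
```

For example, `parse(["1", "+", "2", "+", "3"])` accepts, while `parse(["1", "+"])` ends with an incomplete stack and reports an error.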
Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is
Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because the human working memory has limitations, so does the
when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc." This term is especially common when
Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not
Alternatively, the parser or grammar needs to have extra logic to delay reduction and reduce only when it is absolutely sure which rule to reduce first. This method is used in LR parsers. This correctly parses the expression but with many more states and increased stack
or other hierarchical structure, giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate
, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when
, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax drawn by inferences made from each word in the sentence (known as
Sandra H. Vos, Thomas C. Gunter, Herbert
Schriefers & Angela D. Friederici (2001) Syntactic parsing and working memory: The effects of syntactic complexity, reading span, and concurrent load, Language and Cognitive Processes, 16:1, 65-103, DOI:
Parsing was formerly central to the teaching of grammar throughout the
English-speaking world, and widely regarded as basic to the use and understanding of written language. However, the general teaching of such techniques is no longer current.
, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as
in polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a given
The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use:
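The example code itself is not reproduced here; as a stand-in, the same point can be illustrated in Python, whose parser likewise accepts a use of a variable that is never assigned, leaving the violation for a later semantic pass (the checker below is a deliberately naive sketch, not a real analyzer):

```python
# Stand-in illustration (not the article's C-like example): parsing succeeds
# because the grammar is satisfied, and catching the use-before-assignment
# is left to a separate, deliberately naive semantic pass.
import ast

source = "y = x + 3"        # syntactically valid, but x is never assigned
tree = ast.parse(source)    # parsing succeeds: the grammar is satisfied

assigned, errors = set(), []
# (ast.walk is breadth-first, not source order; adequate for one statement)
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Load) and node.id not in assigned:
            errors.append(f"'{node.id}' used before assignment")
        elif isinstance(node.ctx, ast.Store):
            assigned.add(node.id)

print(errors)               # ["'x' used before assignment"]
```

This mirrors the relaxed-parser strategy mentioned earlier: the grammar accepts a superset of valid programs, and the unwanted constructs are filtered out afterwards.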
Lopopolo, Alessandro, van den Bosch, Antal, Petersson, Karl-Magnus, and Roel M. Willems; Distinguishing
Syntactic Operations in the Brain: Dependency and Phrase-Structure Parsing. Neurobiology of Language 2021; 2 (1): 152–175. doi:
Now stack has E + E and still the input is *. It has two choices now, either to shift based on rule2 or reduction based on rule1. Since * has higher precedence than + based on rule4, we shift * onto stack in anticipation of
Most programming languages (except for a few such as APL and
Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is
by expanding all alternative right-hand-sides of grammar rules. This is known as the primordial soup approach. Very similar to sentence diagramming, primordial soup breaks down the constituencies of sentences.
of training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts.
. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left-recursion and may require exponential time and space complexity while parsing
A parser can start with the input and attempt to rewrite it to the start symbol. Intuitively, the parser attempts to locate the most basic elements, then the elements containing these, and so on.
It eliminates many duplicate states and eases the burden of an extra stack. A C language non-lookahead parser will have around 10,000 states. A lookahead parser will have around 300 states.
is another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the Penn
. Note that Rule4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax. However, not all such rules can be translated into syntax.
function of sentence parsing. This is evidenced by several different types of syntactically complex sentences that pose potential issues for mental parsing of sentences.
A simple parser implementation reads the entire input file, performs an intermediate computation or translation, and then writes the entire output file, such as in-memory
, some kind of context-free approximation to the grammar is used to perform a first pass. Algorithms which use context-free grammars often rely on some variant of the
Karlsson, F. (2010). Working Memory
Constraints on Multiple Center-Embedding. Proceedings of the Annual Meeting of the Cognitive Science Society, 32. Retrieved from
The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or
." Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference. Association for Computational Linguistics, 2000.
of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways:
and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser.
Atlas, J. D. (1997). On the modularity of sentence processing: semantical generality and the language of thought. Language and
Conceptualization, 213–214.
The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a
systems, written texts in human languages are parsed by computer programs. Human sentences are not easily parsed by programs, as there is substantial
The parser needs to have more logic to backtrack and retry whenever a rule is violated or not complete. A similar method is followed in LL parsers.
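A minimal sketch of this backtrack-and-retry behaviour, for an invented two-alternative rule S → "a" "b" | "a" "c":

```python
# Toy backtracking parser for the rules  S -> "a" "b" | "a" "c".
# With no lookahead, the parser tries the first alternative; when a rule
# fails partway, it rewinds to the saved input position and retries the next.

def parse_S(tokens, pos):
    for alternative in (["a", "b"], ["a", "c"]):
        p = pos                      # remember where this attempt started
        for expected in alternative:
            if p < len(tokens) and tokens[p] == expected:
                p += 1
            else:
                break                # rule violated: backtrack to `pos`
        else:
            return p                 # the whole alternative matched
    return None                      # no alternative applies
```

On the input `["a", "c"]` the first alternative consumes "a", fails on "b", and the parser retries from the start with the second alternative, which succeeds.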
Reduce stack item 1 to simple
Expression on input + based on rule3. The lookahead is +, so we are on path to E +, so we can reduce the stack to E.
) as soon as the parser detects relevant tokens in the input stream. A push parser may skip parts of the input that are irrelevant (an example is
with an explanation of the form, function, and syntactic relationship of each part. This is determined in large part from study of the language's
." Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Association for Computational Linguistics, 2003.
Lookahead establishes the maximum incoming tokens that a parser can use to decide which rule it should use. Lookahead is especially relevant to
, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters
because fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes – see
aims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy is
the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a
parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as
It helps the parser take the correct action in case of conflicts, for example resolving the dangling-else ambiguity when an if statement is followed by an else clause.
The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic.
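A toy version of the two levels might look like this; the token set and the accepted syntax are simplified assumptions for illustration only.

```python
# Sketch of the two grammar levels: a lexical level turning characters into
# tokens, and a syntactic level checking the token sequence.
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\S))")   # integers or single symbols

def lex(text):
    """Lexical level: character stream -> list of (kind, text) tokens."""
    return [("NUM", num) if num else ("SYM", sym)
            for num, sym in TOKEN_RE.findall(text)]

def parse(tokens):
    """Syntactic level: accept NUM (SYM NUM)* -- e.g. '12 + 34 * 5'."""
    if not tokens or len(tokens) % 2 == 0:
        return False
    return all(kind == ("NUM" if i % 2 == 0 else "SYM")
               for i, (kind, _) in enumerate(tokens))
```

Here `lex("12 + 34 * 5")` yields NUM/SYM tokens that `parse` accepts, while `lex("12 + *")` produces a token sequence the syntactic level rejects.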
Ferreira, F., & Clifton, C. (1986). The independence of syntactic processing. Journal of Memory and
Language, 25(3), 348–368.
Natural language parser for Italian, open source, developed in Common Lisp by Leonardo Lesmo, University of Torino, Italy.
Pritchett, B. L. (1988). Garden Path
Phenomena and the Grammatical Basis of Language Processing. Language, 64(3), 539–576.
, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1).
." Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2014.
, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan which accommodate
, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of
in which the parser proposes some large number of analyses, and a more complex system selects the best option. In
Top-down parsing can be viewed as an attempt to find left-most derivations of an input-stream by searching for
) that, as the text of the file is edited by a user, does not need to completely re-parse the entire file.
." Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 2014.
Reduce stack item 2 to Expression on input * based on rule3. The lookahead * expects only E before it.
, which creates tokens from the sequence of input characters; alternatively, these can be combined in
Jurafsky, Daniel (1996). "A Probabilistic Model of Lexical and Syntactic Access and Disambiguation".
. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a
, Amsterdam, the Netherlands. Originally published by Ellis Horwood, Chichester, England, 1990;
10th International Symposium on Practical Aspects of Declarative Languages (PADL), ACM-SIGPLAN
Aho, A.V., Sethi, R. and Ullman, J.D. (1986) "Compilers: principles, techniques, and tools."
, by which the input character stream is split into meaningful symbols defined by a grammar of
and a regular expression engine automatically generating a parser for that language, allowing
examines ways to analyze language use and semiotic events. Persuasive language may be called
The user has to enclose expressions within parentheses. This often is not a viable solution.
The parse tree and the code resulting from it are not correct according to language semantics.
statistics (that is, they consider the identities of the words involved, as well as their
Jia, Robin; Liang, Percy (2016-06-11). "Data Recombination for Neural Semantic Parsing".
pair, or the input (front end parsing) and output (back end code generation) stages of a
Parsing schemata: a framework for specification and analysis of parsing algorithms
Shift 1 onto stack on input 1 in anticipation of rule3. It does not reduce immediately.
rules. Tokens are consumed from left to right. Inclusive choice is used to accommodate
However some systems trade speed for accuracy using, e.g., linear-time versions of the
A context-sensitive graph grammar formalism for the specification of visual languages
Shift "3" onto stack from input (in anticipation of rule3). Input = (empty) Stack =
. Prentice Hall Software Series (2nd ed.). Englewood Cliffs/NJ: Prentice Hall.
." Visual Languages, Proceedings., 11th IEEE International Symposium on. IEEE, 1995.
and LR parsers will generate a rightmost derivation (although usually in reverse).
The implied disadvantages of a one-pass compiler can largely be overcome by adding
These may be applied to different domains, but often appear together, such as the
An important distinction with regard to parsers is whether a parser generates a
are examples of bottom-up parsers. Another term used for this type of parser is
to create some form of internal representation; the parser is a key step in the
languages. To parse a phrase such as "man bites dog" involves noting that the
Shift "*" onto stack from input (in anticipation of rule2). Input = Stack =
Shift "2" onto stack from input (in anticipation of rule3). Input = Stack =
Shift "+" onto stack from input (in anticipation of rule1). Input = Stack =
Shift "1" onto stack from input (in anticipation of rule3). Input = Stack =
Most modern parsers are at least partly statistical; that is, they rely on a
In order to parse natural language data, researchers must first agree on the
Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars
is a software component that takes input data (typically text) and builds a
467:
ambiguous phrase with the telescope could modify the boy saw or the lady.)
Reduce stack item 3 to Expression after seeing end of input based on rule3.
for mental parsing, again because of ambiguity of syntactic relationship.
are sometimes used to indicate relation between elements in the sentence.
10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE
Christopher D. Manning; Hinrich SchĂĽtze (1999).
in the structure of human language, whose usage is to convey meaning (or
Analysing a string of symbols, according to the rules of a formal grammar
. It usually emphasizes the importance of grammatical divisions such as
Some of the well known parser development tools include the following:
showing their syntactic relation to each other, which may also contain
. It is tuned for deterministic grammars, on which it performs almost
Reduce stack items and new input "E" to "E" based on rule2. Stack =
Reduce stack items and new input "E" to "E" based on rule1. Stack =
. For example, a calculator program would look at an input such as "
The traditional grammatical exercise of parsing, sometimes known as
Reduce stack element "3" to expression "E" based on rule3. Stack =
Reduce stack element "2" to Expression "E" based on rule3. Stack =
, but in general, parsing for grammars of this type is known to be
, Volume 4902/2008, Pages: 167 - 181, January 2008, San Francisco.
: an O(n) algorithm for re-estimating production probabilities in
and computational concerns; for instance some parsing systems use
The term has slightly different meanings in different branches of
." Journal of Visual Languages &amp; Computing 8.1 (1997): 27-55.
Defining and parsing visual languages with layered graph grammars
To correctly parse without lookahead, there are three solutions:
The set of expression parsing rules (called the grammar) is as follows:
program that cannot be parsed with less than 2 token lookahead.
. The term may also be used to describe a split or separation.
than non-lookahead parsers. This is the strategy followed in
Within computer science, the term is used in the analysis of
Adaptive Parsing: Self-Extending Natural Language Interfaces
A fast and accurate dependency parser using neural networks
to choose; the latter requires peeking at the second token.
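Schematically, such a two-token decision might look like this; the token kinds and rule names are invented for illustration:

```python
# Toy illustration of choosing between two rules with two tokens of
# lookahead. Both rules begin with a NAME token, so only the second
# token ("=" vs. "(") tells them apart; one token is not enough.

def choose_rule(tokens):
    if not tokens or tokens[0][0] != "NAME":
        return None
    second = tokens[1][0] if len(tokens) > 1 else None
    if second == "=":
        return "assignment"          # e.g.  x = 1
    if second == "(":
        return "call"                # e.g.  x(1)
    return None
```

Peeking at `tokens[1]` is exactly the second token of lookahead: with only `tokens[0]` available, both rules are still possible.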
mark the start of a new token, so meaningless tokens like "
. Programming languages tend to be specified in terms of a
). This normally occurs as words are being heard or read.
of the verb "to bite", and the singular noun "dog" is the
discussing which linguistic cues help speakers interpret
Parser Combinators for Ambiguous Left-Recursive Grammars
algorithms have been used to construct "self-extending"
Approaches which have been used include straightforward
: converts an infix-notation math expression to postfix
Shift 3 onto stack on input 3 in anticipation of rule3.
Shift 2 onto stack on input 2 in anticipation of rule3.
Shift + onto stack on input + in anticipation of rule1.
convert the text into a representation of its meaning.
Foundations of Statistical Natural Language Processing
Reduces "1" to expression "E" based on rule3. Stack =
. Parsers for visual languages are sometimes based on
Brian W. Kernighan and Dennis M. Ritchie (Apr 1988).
. An important class of simple parsing is done using
to be used. The choice of syntax is affected by both
, in which a group of regular expressions defines a
", it cannot decide which of both alternatives for
, involves breaking down a text into its component
2362:"Natural Language Processing Techniques in Prolog"
algorithm. A somewhat recent development has been
Frost, R., Hafiz, R. and Callaghan, P. (2008) "
Frost, R., Hafiz, R. and Callaghan, P. (2007) "
, to complex programs such as the frontend of a
The input to a parser is typically text in some
information. Some parsing algorithms generate a
Zhang, Da-Qian, Kang Zhang, and Jiannong Cao. "
The parse tree generated is correct and simply
Expression is the product of two expressions.
Alternative parser implementation approaches:
to prune away unlikely analyses to save time.
." The Computer Journal 44.3 (2001): 186-200.
A graph grammar approach to graphical parsing
The cognitive basis for linguistic structures
Reduce stack items E + E to E based on rule1.
Reduce stack items E * E to E based on rule2.
. Looking only at the first lookahead token "
, such as parsers that are typically used by
Syntactic parsing (computational linguistics)
or list of parse trees from a string that is
https://doi.org/10.1016/0749-596X(86)90006-9
The first stage is the token generation, or
Patrick Blackburn and Kristina Striegnitz.
the following is syntactically valid code:
, which can be quite intricate for heavily
Expression is the sum of two expressions.
" and is about to choose a rule to derive
can also be used to define these actions.
). However such systems are vulnerable to
. Most of the more successful systems use
. Springer Science &amp; Business Media.
https://escholarship.org/uc/item/4j00v1j2
Klein, Dan, and Christopher D. Manning. "
. Springer Science &amp; Business Media.
parsing algorithm for a limited class of
: another O(n) algorithm for parsing any
of the sentence, the verb "bites" is the
parsing algorithm for a larger class of
using a top-down expansion of the given
Chen, Danqi, and Christopher Manning. "
). LL parsers will generate a leftmost
(probabilistic context-free grammars),
Parsing Techniques - A Practical Guide
, Pages: 109 - 120, June 2007, Prague.
Publishing Co., Inc. Boston, MA, USA.
Short history of parser construction
Jill Fain Lehman (6 December 2012).
Berant, Jonathan, and Percy Liang. "
Head-driven phrase structure grammar
of the sentence. Techniques such as
(Appendix A.13 "Grammar", p.193 ff)
https://doi.org/10.1162/nol_a_00029
probabilistic context-free grammars
Simple non-lookahead parser actions
front-ends by "pulling" input text.
Grune, Dick; Jacobs, Ceriel J.H.,
DMS Software Reengineering Toolkit
parsing algorithm supporting some
a parser has digested the tokens "
algorithms have been designed for
deterministic context-free grammar
Semantic parsing via paraphrasing
A maximum-entropy-inspired parser
Masaru Tomita (6 December 2012).
Example: Parsing the Expression
History of compiler construction
: an O(n) algorithm for parsing
natural language user interfaces
Flow of data in a typical parser
Comparison of parser generators
The Lemon LALR Parser Generator
LR Parsing: Theory and Practice
Rekers, Jan, and Andy SchĂĽrr. "
: an algorithm for parsing any
Comparison of parser generators
ambiguous context-free grammars
" and split it into the tokens
, a parser is a component of a
, conforming to the rules of a
https://doi.org/10.2307/414532
Accurate unlexicalized parsing
Expression is a simple number
Lookahead has two advantages.
. Parsing is complementary to
natural language understanding
is the process of analyzing a
Rekers, Jan, and A. Schurr. "
Sikkel, Klaas, 1954- (1997).
+ has less precedence than *
computer programming language
Vrije Universiteit Amsterdam
"Classic Parsing Algorithms"
visual programming languages
(contextual analysis) step.
Operator-precedence grammar
parsing expression grammars
LALR (look-ahead LR) parser
Syntax Definition Formalism
Parser development software
, which produces formatted
natural language processing
Cambridge University Press
The C Programming Language
10.1207/s15516709cog2002_1
. dictionary.reference.com
Parsing expression grammar
Operator-precedence parser
List of parsing algorithms
call registered handlers (
lexical functional grammar
3090:Sequential pattern mining
3057:
3004:
2971:
2938:
2930:Commentz-Walter algorithm
2918:Multiple string searching
2917:
2859:
2851:Wagner–Fischer algorithm
2791:
2673:
2642:
2565:
2530:
2078:10.1080/01690960042000085
1923:"Grammar and Composition"
1477:
1436:
1358:
1336:
882:" will not be generated.
671:text; these examples are
568:
395:and require some kind of
141:The term is also used in
120:computational linguistics
3100:String rewriting systems
3085:Longest common substring
2996:Smith–Waterman algorithm
2821:Gestalt pattern matching
1735:Recursive descent parser
1711:Simple precedence parser
1655:Inside-outside algorithm
1551:Lookahead parser actions
1398:for his Ph.D. thesis, a
960:recursive-descent parser
890:formally expressed with
764:
735:
175:Natural language parsing
3034:Generalized suffix tree
2958:Thompson's construction
2686:Definite clause grammar
2482:Turin University Parser
2103:Thomas G Bever (1970).
1749:Shunting-yard algorithm
1651:and O(n) in worst case.
1223:Spirit Parser Framework
1180:Definite clause grammar
1130:converting this article
136:syntactically ambiguous
2986:Hirschberg's algorithm
2190:Addison-Wesley Longman
1898:Generalized LR Parsing
1850:Source code generation
1835:Program transformation
1706:SLR (Simple LR) parser
1666:: a relatively simple
1368:
800:
32:Parse (disambiguation)
3136:Compiler construction
3131:Algorithms on strings
2841:Levenshtein automaton
2831:Jaro–Winkler distance
2691:Deterministic parsing
1790:Deterministic parsing
1726:context-free grammars
1685:context-free grammars
1672:context-free grammars
1619:context-free grammars
1388:programming languages
1323:
1080:(such as incremental
798:
677:programming languages
588:– often some kind of
236:Computational methods
214:third person singular
148:garden-path sentences
2889:Rabin–Karp algorithm
2846:Levenshtein distance
2207:. Berlin: Springer.
1641:context-free grammar
1632:context-free grammar
1410:is any fixed value.
1282:improve this section
1044:multi-pass compilers
999:context-free grammar
995:rightmost derivation
984:context-free grammar
887:context-free grammar
594:abstract syntax tree
530:improve this section
411:, usually with some
270:improve this article
3044:Ternary search tree
2728:Scannerless parsing
2696:Dynamic programming
2478:The Stanford Parser
2422:Chapman, Nigel P.,
2023:Charniak, Eugene. "
1845:Sentence processing
1691:Canonical LR parser
1623:Chomsky normal form
1497:Initially Input =
1331:C grammar excerpt.
1078:incremental parsers
991:leftmost derivation
897:The final phase is
812:regular expressions
791:Overview of process
709:multi-pass compiler
687:, which parses the
654:regular expressions
603:scannerless parsing
306:machine translation
180:Traditional methods
2973:Sequence alignment
2940:Regular expression
2524:Parsing algorithms
2260:2018-08-22 at the
2030:2019-04-01 at the
1815:Left corner parser
1369:
1132:, if appropriate.
892:attribute grammars
801:
498:Computer languages
488:Discourse analysis
483:Discourse analysis
353:dependency grammar
208:noun "man" is the
155:computer languages
62:computer languages
46:syntactic analysis
3113:
3112:
3105:String operations
2751:
2750:
2550:Recursive descent
2347:978-1-4615-3622-2
1976:Cognitive Science
1960:978-0-262-13360-9
1908:978-1-4615-4034-2
1785:Compiler-compiler
1679:: A more complex
1482:
1481:
1406:) parsers, where
1402:for efficient LL(
1318:
1317:
1310:
1151:
1150:
1012:graphical parsing
939:Bottom-up parsing
724:semantic analysis
705:one-pass compiler
697:compiler frontend
675:. In the case of
634:computer language
566:
565:
558:
449:psycholinguistics
443:Psycholinguistics
399:to be effective.
302:
301:
294:
226:sentence diagrams
143:psycholinguistics
105:sentence diagrams
76:comes from Latin
16:(Redirected from
3143:
3070:Pattern matching
3024:Suffix automaton
2826:Hamming distance
2778:
2771:
2764:
2755:
2754:
2706:Parser generator
2629:Recursive ascent
2517:
2510:
2503:
2494:
2493:
2409:
2407:
2395:
2384:
2378:
2371:
2365:
2358:
2352:
2351:
2331:
2325:
2318:
2312:
2305:
2299:
2292:
2286:
2275:
2269:
2251:
2245:
2244:
2234:
2226:
2200:
2194:
2185:
2176:
2169:
2163:
2156:
2150:
2147:
2141:
2135:
2129:
2123:
2117:
2116:
2100:
2094:
2088:
2079:
2075:
2069:
2068:
2066:
2054:
2048:
2041:
2035:
2021:
2015:
2008:
2002:
2001:
1991:
1971:
1965:
1964:
1944:
1938:
1937:
1935:
1934:
1925:. Archived from
1919:
1913:
1912:
1892:
1886:
1885:
1883:
1881:
1870:
1820:Lexical analysis
1760:Lexical analysis
1741:suitable for LL(
1591:
1488:
1434:
1433:
1431:
1400:parser generator
1362:
1361:
1352:
1351:
1348:
1345:
1342:
1339:
1313:
1306:
1302:
1299:
1293:
1262:
1254:
1146:
1143:
1137:
1128:You can help by
1110:
1109:
1102:
1028:Adaptive parsing
1014:
1013:
967:production rules
919:Top-down parsing
906:Types of parsers
899:semantic parsing
881:
877:
873:
869:
865:
861:
857:
853:
849:
845:
841:
837:
833:
829:
825:
821:
817:
808:lexical analysis
786:
783:
780:
777:
774:
771:
768:
757:
754:
751:
748:
745:
742:
739:
729:For example, in
673:markup languages
662:pattern matching
658:regular language
607:parser generator
599:lexical analyser
561:
554:
550:
547:
541:
510:
502:
437:semantic parsers
367:machine learning
297:
290:
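The chart-style, context-free approach mentioned under computational methods above centers on the CYK algorithm. The sketch below shows CYK recognition for a grammar in Chomsky normal form; the toy grammar, lexicon, and nonterminal names are invented for illustration, not drawn from any treebank.

```python
from itertools import product

# CNF binary rules: (B, C) -> set of nonterminals A with A -> B C.
# This tiny grammar and lexicon are illustrative assumptions.
rules = {
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
lexicon = {"the": {"Det"}, "dog": {"N"}, "man": {"N"}, "bites": {"V"}}

def cyk_recognize(words):
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexicon.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between the two halves
                for b, c in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= rules.get((b, c), set())
    return "S" in table[0][n - 1]

print(cyk_recognize("the dog bites the man".split()))  # True
print(cyk_recognize("bites the dog".split()))          # False
```

The O(n³) table fill is what makes heuristic pruning attractive for long natural-language sentences.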
Computer languages

Parser

A parser is a software component that takes input data (typically text) and builds a data structure – often some kind of parse tree, abstract syntax tree or other hierarchical structure – giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate lexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined in scannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a parser generator. Parsing is complementary to templating, which produces formatted output. These may be applied to different domains, but often appear together, such as the scanf/printf pair, or the input (front end parsing) and output (back end code generation) stages of a compiler.

The input to a parser is typically text in some computer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as scanf, to complex programs such as the frontend of a C++ compiler or the HTML parser of a web browser. An important class of simple parsing is done using regular expressions, in which a group of regular expressions defines a regular language and a regular expression engine automatically generates a parser for that language, allowing pattern matching and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser.

The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or XML text; these examples are markup languages. In the case of programming languages, a parser is a component of a compiler or interpreter, which parses the source code of a computer programming language to create some form of internal representation; the parser is a key step in the compiler frontend. For compilers, the parsing itself can be done in one pass or multiple passes – see one-pass compiler and multi-pass compiler.

Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited: the grammar cannot remember the presence of a construct over an arbitrarily long input, which is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the semantic analysis (contextual analysis) step.

For example, in Python the following is syntactically valid code:

x = 1
print(x)

The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use:

x = 1
print(y)

Overview of process

[Figure: flow of data in a typical parser.]

The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic.

The first stage is the token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated.

The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a context-free grammar which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars.

The final phase is semantic parsing or analysis, which is working out the implications of the expression just validated and taking the appropriate action. In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions.
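The token-generation stage just described can be sketched in a few lines of Python. The token-class names and the regular expression here are illustrative assumptions, not part of any standard lexer:

```python
import re

# One alternative per token class: numbers, or single-character symbols.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|([*+^()]))")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}: {text[pos]!r}")
        number, symbol = m.groups()
        tokens.append(("NUMBER", number) if number else ("SYMBOL", symbol))
        pos = m.end()
    return tokens

print(tokenize("12 * (3 + 4)^2"))
```

Run on the article's example "12 * (3 + 4)^2", this yields exactly the nine tokens 12, *, (, 3, +, 4, ), ^, 2; because the symbol characters each terminate a match, a meaningless token like "12*" cannot be produced.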
Types of parsers

The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways:

Top-down parsing – Top-down parsing can be viewed as an attempt to find left-most derivations of an input stream by searching for parse trees using a top-down expansion of the given formal grammar rules. Tokens are consumed from left to right. Inclusive choice is used to accommodate ambiguity by expanding all alternative right-hand sides of grammar rules.

Bottom-up parsing – A parser can start with the input and attempt to rewrite it to the start symbol. Intuitively, the parser attempts to locate the most basic elements, then the elements containing these, and so on. LR parsers are examples of bottom-up parsers. Another term used for this type of parser is Shift-Reduce parsing.

LL parsers and recursive-descent parsers are examples of top-down parsers that cannot accommodate left recursive production rules. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left recursion and may require exponential time and space while parsing ambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created that accommodate ambiguity and left recursion in polynomial time and that generate polynomial-size representations of the potentially exponential number of parse trees.

An important distinction with regard to parsers is whether a parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). LL parsers will generate a leftmost derivation and LR parsers will generate a rightmost derivation (although usually in reverse).

Some graphical parsing algorithms have been designed for visual programming languages; parsers for visual languages are sometimes based on graph grammars. Adaptive parsing algorithms have been used to construct "self-extending" natural language user interfaces.
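A top-down parser of the kind just described can be sketched as a recursive-descent evaluator. Because top-down parsers cannot accommodate left-recursive rules, the arithmetic grammar is first rewritten without left recursion; the function names below are illustrative.

```python
# Grammar, rewritten without left recursion:
#   Expr   -> Term   ("+" Term)*
#   Term   -> Factor ("*" Factor)*
#   Factor -> NUMBER | "(" Expr ")"

def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    value, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "*":
        rhs, pos = parse_factor(tokens, pos + 1)
        value *= rhs
    return value, pos

def parse_factor(tokens, pos):
    if tokens[pos] == "(":
        value, pos = parse_expr(tokens, pos + 1)
        assert tokens[pos] == ")", "expected closing parenthesis"
        return value, pos + 1
    return int(tokens[pos]), pos + 1

value, _ = parse_expr("1 + 2 * 3".split())
print(value)  # 7: Term binds * tighter than Expr binds +
```

Each nonterminal becomes one function, so the call tree mirrors the leftmost derivation, and operator precedence is encoded by which function calls which.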
Implementation

Parser implementations differ in how they deliver results and when they do their work. Push parsers call registered handlers (callbacks) as soon as relevant tokens are detected in the input stream (the XML parser Expat is an example), while pull parsers are driven by the consuming code. Incremental parsers (such as incremental chart parsers) re-parse text as it is edited rather than reprocessing the entire input; a related distinction is drawn between active and passive parsers. Compilers, in turn, may consume parse results in one pass or several (see multi-pass compilers).

Parser development software

Some of the well-known parser development tools include the following:

- ANTLR
- Bison
- Coco/R
- Definite clause grammar
- GOLD
- JavaCC
- Lemon
- Lex
- LuZc
- Parboiled
- Parsec
- Ragel
- Spirit Parser Framework
- Syntax Definition Formalism
- SYNTAX
- XPL
- Yacc

Lookahead

[Figure: C grammar excerpt. Top: grammar rules; bottom: a parse of a program beginning "int v; main(){ ... }".]

Lookahead establishes the maximum incoming tokens that a parser can use to decide which rule it should use. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1).

Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one token, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value.

LR parsers typically have only a few actions after seeing each token: shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies), or conflict (does not know whether to shift or reduce).

Lookahead has two advantages.

- It helps the parser take the correct action in case of conflicts, for example parsing the if statement in the case of an else clause.
- It eliminates many duplicate states and eases the burden of an extra stack. A C language non-lookahead parser will have around 10,000 states; a lookahead parser will have around 300 states.

Example: parsing the expression 1 + 2 * 3

The set of expression parsing rules (called grammar) is as follows:

Rule1: E → E + E   (Expression is the sum of two expressions.)
Rule2: E → E * E   (Expression is the product of two expressions.)
Rule3: E → number  (Expression is a simple number.)
Rule4: + has less precedence than *

Most programming languages and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is 1 + (2 * 3). Note that Rule4 is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax, but not all such rules can be translated into syntax.

Simple non-lookahead parser actions: initially Input = [1, +, 2, *, 3]

1. Shift "1" onto the stack (in anticipation of Rule3). Input = [+, 2, *, 3]. Stack = [1].
2. Reduce "1" to expression "E" based on Rule3. Stack = [E].
3. Shift "+" onto the stack (in anticipation of Rule1). Input = [2, *, 3]. Stack = [E, +].
4. Shift "2" onto the stack (in anticipation of Rule3). Input = [*, 3]. Stack = [E, +, 2].
5. Reduce "2" to "E" based on Rule3. Stack = [E, +, E].
6. Reduce stack items "E + E" to "E" based on Rule1. Stack = [E].
7. Shift "*" onto the stack (in anticipation of Rule2). Input = [3]. Stack = [E, *].
8. Shift "3" onto the stack (in anticipation of Rule3). Input = []. Stack = [E, *, 3].
9. Reduce "3" to "E" based on Rule3. Stack = [E, *, E].
10. Reduce stack items "E * E" to "E" based on Rule2. Stack = [E].

The parse tree and resulting code from this sequence are not correct according to language semantics: the parser has effectively computed (1 + 2) * 3.

To correctly parse without lookahead, there are three solutions:

- The user has to enclose expressions within parentheses. This often is not a viable solution.
- The parser needs more logic to backtrack and retry whenever a rule is violated or not complete. A similar method is followed in LL parsers.
- Alternatively, the parser or grammar needs extra logic to delay reduction and reduce only when it is absolutely sure which rule to reduce first. This method is used in LR parsers.

Lookahead parser actions: initially Input = [1, +, 2, *, 3]

1. Shift "1" onto the stack. Input = [+, 2, *, 3]. Stack = [1].
2. Reduce "1" to "E" based on Rule3. Stack = [E].
3. Shift "+" onto the stack. Input = [2, *, 3]. Stack = [E, +].
4. Shift "2" onto the stack. Input = [*, 3]. Stack = [E, +, 2].
5. Reduce "2" to "E" based on Rule3. Stack = [E, +, E].
6. On lookahead "*", which has higher precedence than "+" (Rule4), shift "*" onto the stack rather than reducing by Rule1. Input = [3]. Stack = [E, +, E, *].
7. Shift "3" onto the stack, then reduce it to "E" based on Rule3. Input = []. Stack = [E, +, E, *, E].
8. Reduce stack items "E * E" to "E" based on Rule2. Stack = [E, +, E].
9. Reduce stack items "E + E" to "E" based on Rule1. Stack = [E].

The parse tree generated is correct and simply more efficient than that of non-lookahead parsers. This is the strategy followed in LALR parsers.
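The one-token-lookahead strategy traced above can be sketched as a shift-reduce evaluator: "2 * 3" is reduced before "1 + ..." because the lookahead operator outranks the one on the stack. The precedence table encodes Rule4; the names are illustrative, not a fixed API.

```python
PRECEDENCE = {"+": 1, "*": 2}  # Rule4: + has less precedence than *

def evaluate(tokens):
    values, ops = [], []

    def reduce_top():
        # Pop "E op E" off the two stacks and push the combined value.
        op = ops.pop()
        rhs, lhs = values.pop(), values.pop()
        values.append(lhs + rhs if op == "+" else lhs * rhs)

    for token in tokens:
        if token in PRECEDENCE:
            # Lookahead decision: reduce while the stacked operator has
            # precedence >= the incoming one, otherwise shift.
            while ops and PRECEDENCE[ops[-1]] >= PRECEDENCE[token]:
                reduce_top()
            ops.append(token)
        else:
            values.append(int(token))  # shift a number (Rule3)
    while ops:
        reduce_top()
    return values[0]

print(evaluate("1 + 2 * 3".split()))  # 7, i.e. 1 + (2 * 3)
```

On "1 + 2 * 3" the "+" stays on the stack when "*" arrives (shift), so the multiplication is reduced first, matching the lookahead action trace above.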
List of parsing algorithms

This section is an excerpt from List of algorithms § Parsing.

- CYK algorithm: an O(n³) algorithm for parsing context-free grammars in Chomsky normal form
- Earley parser: an algorithm for parsing any context-free grammar
- GLR parser: an algorithm for parsing any context-free grammar, by Masaru Tomita
- Inside-outside algorithm
- LL parser: a relatively simple linear time parsing algorithm for a restricted class of context-free grammars
- LR parser: a more complex linear time parsing algorithm for a larger class of context-free grammars. Variants:
  - Canonical LR parser
  - LALR (look-ahead LR) parser
  - Operator-precedence parser
  - SLR (Simple LR) parser
  - Simple precedence parser
- Packrat parser: a linear time parsing algorithm supporting some context-free grammars and parsing expression grammars
- Recursive descent parser: a top-down parser suitable for LL(k) grammars
- Shunting-yard algorithm: converts an infix-notation expression to postfix
- Pratt parser

See also

- Backtracking
- Chart parser
- Compiler-compiler
- Deterministic parsing
- Grammar checker
- Inverse parser
- LALR parser
- Left corner parser
- Lexical analysis
- Parsing expression grammar
- Pratt parser
- Program transformation
- Sentence processing
- Shallow parsing
- Source code generation

Further reading

- Chapman, Nigel P., LR Parsing: Theory and Practice, Cambridge University Press, 1987. ISBN 0-521-30413-X
- Grune, Dick; Jacobs, Ceriel J.H., Parsing Techniques – A Practical Guide, Vrije Universiteit Amsterdam. ISBN 0-13-651431-6

External links

- The Lemon LALR Parser Generator
- The Stanford Parser
- Turin University Parser