Context Dependency

The geyacc paradigm is to parse tokens first, then group them into larger syntactic units. In many languages, the meaning of a token is affected by its context. Although this violates the geyacc paradigm, certain techniques (known as "kludges") may enable you to write geyacc parsers for such languages. (Actually, "kludge" means any technique that gets its job done but is neither clean nor robust.)

Semantic Info in Token Types

The C language has a context dependency: the way an identifier is used depends on what its current meaning is. For example, consider this:

foo (x);

This looks like a function call statement, but if foo is a typedef name, then this is actually a declaration of x. How can a geyacc parser for C decide how to parse this input?

One method is to have two different token types, IDENTIFIER and TYPENAME. When read_token finds an identifier, it looks up the current declaration of the identifier to decide which token type to return: TYPENAME if the identifier is declared as a typedef, IDENTIFIER otherwise.
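In a hand-written scanner, this lookup can be sketched as follows. This is only an illustration, not part of geyacc: the token codes, the typedef_names table, and the classify_identifier helper are all assumptions made for this sketch.

```python
# Sketch: choosing between two token types for an identifier by
# consulting the current declarations. Token codes are arbitrary here.

IDENTIFIER = 257
TYPENAME = 258

# Names currently declared with `typedef', maintained by parser actions
# as declarations are reduced.
typedef_names = {"size_t", "FILE"}

def classify_identifier(name):
    """Return TYPENAME if `name' is currently a typedef, else IDENTIFIER."""
    if name in typedef_names:
        return TYPENAME
    return IDENTIFIER
```

When a parser action processes a typedef declaration, it adds the declared name to typedef_names, so later occurrences of that name are returned as TYPENAME.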

The grammar rules can then express the context dependency by the choice of token type to recognize. IDENTIFIER is accepted as an expression, but TYPENAME is not. TYPENAME can start a declaration, but IDENTIFIER cannot. In contexts where the meaning of the identifier is not significant, such as in declarations that can shadow a typedef name, either TYPENAME or IDENTIFIER is accepted - there is one rule for each of the two token types.
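As an illustration, a grammar using this scheme might contain rules along these lines (the nonterminal names here are made up for this sketch, not taken from a real C grammar):

```
primary: IDENTIFIER
    | constant

typespec: TYPENAME

declared_name: IDENTIFIER
    | TYPENAME
```

Here primary accepts only IDENTIFIER as an expression, typespec accepts only TYPENAME, and declared_name accepts either one by providing a rule for each token type.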

This technique is simple to use if the decision of which kinds of identifiers to allow is made at a place close to where the identifier is parsed. But in C this is not always so: C allows a declaration to redeclare a typedef name provided an explicit type has been specified earlier:

typedef int foo, bar, lose;
static foo (bar); /* redeclare `bar' as static variable */
static int foo (lose); /* redeclare `foo' as function */

Unfortunately, the name being declared is separated from the declaration construct itself by a complicated syntactic structure - the declarator.

As a result, part of the geyacc parser for C needs to be duplicated, with all the nonterminal names changed: once for parsing a declaration in which a typedef name can be redefined, and once for parsing a declaration in which that cannot be done. Here is a part of the duplication, with actions omitted for brevity:

initdcl: declarator maybeasm '=' init
    | declarator maybeasm

notype_initdcl: notype_declarator maybeasm '=' init
    | notype_declarator maybeasm

Here initdcl can redeclare a typedef name, but notype_initdcl cannot. The distinction between declarator and notype_declarator is the same sort of thing.

There is some similarity between this technique and a lexical tie-in (described next), in that information which alters the lexical analysis is changed during parsing by other parts of the program. The difference is that here the information is global, and is used for other purposes in the program. A true lexical tie-in has a special-purpose flag controlled by the syntactic context.

Lexical Tie-ins

One way to handle context dependency is the lexical tie-in: a flag which is set by geyacc actions and whose purpose is to alter the way tokens are parsed. For example, suppose we have a language vaguely like C, but with a special construct hex (HEX-EXPR). After the keyword hex comes an expression in parentheses in which all integers are hexadecimal. In particular, the token a1b must be treated as an integer rather than as an identifier if it appears in that context. Here is how you can do it:

expr: IDENTIFIER
        { ... }
    | constant
        { ... }
    | HEX '('
        { hexflag := True }
      expr ')'
        { hexflag := False; $$ := $4 }
    | expr '+' expr
        { $$ := sum ($1, $3) }

constant: INTEGER
        { ... }
    | STRING
        { ... }

hexflag: BOOLEAN

Here we assume that read_token looks at the value of hexflag; when it is true, all integers are parsed in hexadecimal, and tokens starting with letters are parsed as integers if possible.

Lexical Tie-ins and Error Recovery

Lexical tie-ins make strict demands on any error recovery rules you have. The reason for this is that the purpose of an error recovery rule is to abort the parsing of one construct and resume in some larger construct. For example, in C-like languages, a typical error recovery rule is to skip tokens until the next semicolon, and then start a new statement, like this:

stmt: expr ';'
        { ... }
    | IF '(' expr ')' stmt
        { ... }
    | error ';'
        { hexflag := False }

If there is a syntax error in the middle of a hex (EXPR) construct, this error rule will apply, and then the action for the completed hex (EXPR) will never run. So hexflag would remain set for the entire rest of the input, or until the next hex keyword, causing identifiers to be misinterpreted as integers. To avoid this problem, the error recovery rule itself clears hexflag.

There may also be an error recovery rule that works within expressions. For example, there could be a rule which applies within parentheses and skips to the close-parenthesis:

expr: ...
    | '(' expr ')'
        { $$ := $2 }
    | '(' error ')'

If this rule acts within the hex construct, it is not going to abort that construct (since it applies to an inner level of parentheses within the construct). Therefore, it should not clear the flag: the rest of the hex construct should be parsed with the flag still in effect.

What if there is an error recovery rule which might abort out of the hex construct or might not, depending on circumstances? There is no way you can write the action to determine whether a hex construct is being aborted or not. So if you are using a lexical tie-in, you had better make sure your error recovery rules are not of this kind. Each rule must be such that you can be sure that it always will, or always won't, have to clear the flag.

Copyright 1997, Eric Bezault
Last Updated: 7 September 1997