Python

Universal Set in Python

Posted in Mathematics, Python on April 16th, 2009 by kay – 7 Comments

Building sets from set() and ANY

When defining a regular expression there is a well-known duality between a set of characters, specified by enclosing characters or character ranges within square brackets […], and the pattern for the complement set [^…], specified by prefixing them with the caret symbol ^.

  • [0-9] — matches all digits
  • [^0-9] — matches all non-digits

The nature of the complement set mentioned here is less precisely defined and is implementation-dependent in practice. If a regexp matcher knows only about ASCII then [^0-9] is the set of all ASCII characters which are not digits.

Suppose now we want to build a character matching set from sets in the sense of the set data-type that Python provides as a builtin. This is easy because we start from the empty set() and just add more elements. Each finite set can be constructed in this way and we can establish a set ~ pattern correspondence:

set(range(10)) ~ [0-9]

But how do we find a corresponding set for [^0-9]?

This cannot be finitely constructed from the bottom up using set() as a start value; it has to be finitely de-constructed from the set of all elements. This looks like we are running into unlimited comprehension and the Russell paradox, but actually the set theory we are using is quite weak and element membership can be decided just by doing hash-table lookups.

Instead of constructing a set from set() we want to (de-)construct it from ANY. The name ANY is not chosen by chance because we want to express the correspondence with the . pattern:

ANY ~ .

( Actually ANY is a little stronger because . doesn't match newline characters unless the DOTALL flag is passed to the re.compile function. )

Using ANY we can easily state the above correspondence:

ANY - set(range(10)) ~ [^0-9]

Implementation of ANY

We define ANY as an instance of univset. The univset has a set-like interface and implements set operations consistently on sets of the type ANY-A, ANY-B,… where A, B… are finite sets constructed from set(). For example

ANY-A ∩ ANY-B = ANY - ( A ∪ B )
ANY-A ∪ B = ANY - ( A - B )

class univset(object):
    def __init__(self):
        self._diff = set()
 
    def __sub__(self, other):
        S = univset()
        if type(other) == set:
            S._diff = self._diff | other
            return S
        else:
            S._diff = self._diff | other._diff
            return S
 
    def __rsub__(self, other):
        return other & self._diff
 
    def __contains__(self, obj):
        return obj not in self._diff
 
    def __and__(self, other):
        return other - self._diff
 
    def __rand__(self, other):
        return other - self._diff
 
    def __repr__(self):
        if self._diff == set():
            return "ANY"
        else:
            return "ANY - %s"%self._diff
 
    def __or__(self, other):
        S = univset()
        S._diff = self._diff - other
        return S
 
    def __xor__(self, other):
        return (self - other) | (other - self)
 
    def add(self, elem):
        if elem in self._diff:
            self._diff.remove(elem)
 
    def update(self, other):
        self._diff = self._diff - other
 
    def __ror__(self, other):
        return self.__or__(other)
 
    def union(self, other):
        return self.__or__(other)
 
    def difference(self, other):
        return self.__sub__(other)
 
    def intersection(self, other):
        return self.__and__(other)
 
    def symmetric_difference(self, other):
        return self.__xor__(other)
 
    def issubset(self, other):
        if type(other) == set:
            return False
        # ANY - d1 is a subset of ANY - d2 iff d2 is a subset of d1
        return other._diff.issubset(self._diff)
 
    def issuperset(self, other):
        if self._diff & other:
            return False
        return True
 
    def __lt__(self, other):
        return self.issubset(other) and not self == other
 
    def __eq__(self, other):
        if type(other) == set:
            return False
        try:
            return self._diff == other._diff
        except AttributeError:
            return False
 
    def __ne__(self, other):
        return not self.__eq__(other)
 
    def __le__(self, other):
        return self.__lt__(other) or self.__eq__(other)
 
    def __gt__(self, other):
        return self.issuperset(other) and not self == other
 
    def __ge__(self, other):
        return self.issuperset(other)

Examples

ANY = univset()
NON_DIGITS = ANY - set(range(10))
assert 8 not in NON_DIGITS
assert 'a' in NON_DIGITS
assert 0 in NON_DIGITS | set([0])
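
The two set identities stated above can be checked directly against the implementation. A minimal check, assuming the univset class and the ANY instance from the examples are in scope:

A = set([1, 2])
B = set([2, 3])
# ANY-A ∩ ANY-B = ANY - (A ∪ B)
assert (ANY - A) & (ANY - B) == ANY - (A | B)
# ANY-A ∪ B = ANY - (A - B)
assert (ANY - A) | B == ANY - (A - B)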

The statement that never was

Posted in General, Python on April 15th, 2009 by kay – 1 Comment

The Python release archive is a nice repository for archaeological findings. My biggest surprise was the existence of an access statement and access keyword in the pre-history of Python 1.5.0. The syntax was already specified in release 1.0 with an accompanying accessobject.c and it began to fade in Python 1.4.

The access statement

This is how the access statement was defined in release 1.0:

access_stmt: 'access' ('*' | NAME (',' NAME)*) ':' accesstype  (',' accesstype)*
accesstype: NAME+
# accesstype should be ('public' | 'protected' | 'private') ['read'] ['write']
# but can't be because that would create undesirable reserved words!

The access_stmt was a small statement, i.e. one that doesn't contain a block, and could be used like print, del or exec.

So proper use of the access statement would have looked like this:

class Point:
    access x, y, z: private write, public read

or more compact:

class Point:
    access *: private write, public read

The word public wasn’t a language keyword which made it possible to use it in this way:

class AccessModifier:
    access public: public
    access private: public
    access protected: public

Note that the access statement was purely declarative, even more so than Java's, where you can write

public int x = 10;

and define x while specifying its type and access modifier. It was also more elaborate in terms of intended protections, differentiating between read and write access.

How it never took off and went away

In the HISTORY file of Python 1.0 we find the notice

There’s a new reserved word: “access”. The syntax and semantics are still subject of research and debate (as well as undocumented), but the parser knows about the keyword so you must not use it as a variable, function, or attribute name.

In Python 1.4 the access statement was already commented out in the Grammar.

The story ends with the following remark in the HISTORY of Python 1.5:

‘access’ is no longer a reserved word, and all code related to its implementation is gone (or at least #ifdef’ed out). This should make Python a little speedier too!

Pyload part I ( Path objects )

Posted in Python on April 7th, 2009 by kay – Be the first to comment

This article is the first in a series that studies the design of a module import system. Although this work is situated in the Python context it stands on its own to a large extent and ideas may be transferred to other systems and languages. We will provide as much integration with Python as necessary but keep the structure as general as possible. The article series will roughly cover the following topics:

  1. Path objects – module paths and their representation
  2. ModuleNode objects – loading and caching modules using Path objects
  3. Import system – binding ModuleNode objects to import statements
  4. Implementation – putting it all together in an EasyExtend langlet as a reference implementation

In this article we will discuss Path objects. This topic is foundational and a bit dry but I hope the reader will be compensated by the elegance of some of its constructions. Path objects are used to establish a relationship between internally used name spaces and external path structures in physical or logical “media” like file-systems, zip-files or the web. Those path structures can also be fully abstract and represented by special data-structures only. We’ll provide several examples.

First some terminology is introduced. Many of the notions given here are rather abstract but they mostly capture what people know about Python modules, packages and file system paths anyway. They provide a conceptual background for specifications given later.

Terminology

A module name is an ordinary Python name: a finite sequence of alphanumeric characters and underscores starting with an alphabetic character or underscore. A module path is a dot-separated sequence of module names. It is possible for a module path to be preceded by dots. In that case a module path is called a relative path. A.B.C and ..A.B.C are both module paths but only the latter one is relative. The names of a module path are also called its components. So A, B, C are the components of the module path A.B.C.

Besides module paths we consider external paths. The intuitive meaning of an external path is that of a pointer to a location of a module in some medium. Most commonly file system paths are used as external paths: modules are represented as files and the dot separators are mapped onto file-system separators. Throughout this article we use a slash “/” as an external path separator. So A.B.C is a module path and A/B/C is an external path. A proper external path definition is given below using a Path abstract base class.

A module can be loaded from an external path which yields an interpreter level <module> object. Each <module> object shall have a unique module name. If M is a known module name we write <M>. It is also possible to load <module> objects from builtins or even create fresh <module> objects on the fly. In any case we still consider a <module> as being loaded from a path. If no such path is available we associate the <module> with the empty path.

A module path A.B.C… is valid if an external path …/A/B/C/… exists and <A> can be loaded from …/A, <B> can be loaded from …/A/B etc.

We call a module P the package of a module M if P.M is a valid module path. Packages are unique: if P1.M and P2.M are valid module paths for the same module M then P1 = P2. So each module has at most one package. If a module has no package we call it an unpackaged or a free module. For any module M the chain of packages P1.P2….M containing M shall be finite. This implies that each such chain has a maximum length. If P0.P1.P2…M is a module path with a maximal chain of packages we call P0 a top level module and the module path a full path. Each unpackaged module is by definition also a top level module.

Notice that the concept of a namespace is a little more general than the ones we have defined. Each <module> has an associated namespace, usually a __dict__ for user defined modules. This namespace can contain other modules as well and might be changed dynamically. We rather intend the module path concept to correspond closely to the way paths are used in an import statement.

With PYTHONPATH we denote a set of external paths from which modules can be found. Those paths can represent file-system paths, zip-files or other entities. The PYTHONPATH may contain external paths that are paths of some package. In this case not all modules reachable from the PYTHONPATH are top-level modules: the top-level module P0 of a full path P0.P1…..M may not itself be reachable from the PYTHONPATH. We call such a full path a finger.

The intuitive idea of an external path as a file path is actually a bit too simplistic. In practice various files may correspond to one module. For example A.py, A.pyc, A.pyd are all files that correspond to a Python module A. An external path is therefore a class of file paths which are equivalent in the sense that they all describe a single module entity. A set of file suffixes thus determines an equivalence class of file paths.

Path objects

Path objects are defined as abstract base classes. Concrete subclasses of Path are FSPath, which uses file system operations to implement Path methods, and ZipPath, which combines file system path operations with those provided by the <zipfile> module. We have also defined a silly Path subclass called TuplePath. All of those Path objects are used to represent external paths or path-like objects.

from abc import ABCMeta, abstractmethod
class Path:
    __metaclass__ = ABCMeta
    def __init__(self, pathobj):
        self.path = pathobj
 
    @abstractmethod
    def fullpath(self):
        '''
        Derives full module path from given path.
        '''
 
    def find(self, modulepath):
        '''
        Interprets the modulepath as a sequence of parent / joinpath operations.
        '''
        P = self
        components = modulepath.split(".")
        if components[-1] == '':
            components.pop()
        for component in components:
            if component == "":
                P = P.parent()
            else:
                P = P.joinpath(component)
            if not P.exists():
                raise ValueError("invalid path object: '%s'"%P.path)
        return P
 
    @abstractmethod
    def ispackage(self):
        '''
        Returns true if path belongs to a package and false otherwise.
        '''
 
    @abstractmethod
    def ismodule(self):
        '''
        Returns True if path corresponds with a Python module and False otherwise.
        '''
 
    @abstractmethod
    def isempty(self):
        '''
        Returns True if this path object is empty and False otherwise.
        '''
 
    @abstractmethod
    def exists(self):
        '''
        Returns True if the path object is valid and False otherwise.
        '''
 
    @abstractmethod
    def parent(self):
        '''
        If path is .../A/B/C a new path object initialized by .../A/B will be
        created and returned.
        '''
 
    @abstractmethod
    def children(self):
        '''
        All valid one element extensions of path. If there are no such children return
        None.
        '''
 
    @abstractmethod
    def joinpath(self, *args):
        '''
        Joins the current path object with those provided by args in the sequence of
        occurrence in args. So if this path is ../A/B then self.joinpath('C/D', 'E') will
        return ../A/B/C/D/E.
        '''
 
    @abstractmethod
    def base(self):
        '''
        Returns the rightmost element of a path. If ../A/B/C is a path then C
        will be returned.
        '''
 
    @abstractmethod
    def split(self):
        '''
        Returns a tuple containing path components.
        '''
 
    @abstractmethod
    def splitbase(self):
        '''
        Returns the tuple (parent, base) for a given path.
        '''
 
    def __repr__(self):
        return "<%s : %s>"%(self.__class__.__name__, self.path)

The methods find and fullpath establish the relationship between Path objects and module paths. The fullpath method is intended to return the module path that corresponds to the Path object. The concrete find method represents a module path as a sequence of parent and joinpath operations, which are also declared in the Path class. So each module path has an unambiguous interpretation as a sequence of operations on a Path object.
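
For illustration, here is how find interprets a relative module path, using the FSPath subclass introduced below and a purely hypothetical directory layout; the concrete paths are assumptions, not part of the implementation:

P = FSPath("/projects/pkg/sub")    # hypothetical layout
# "..A.B".split(".") yields ['', '', 'A', 'B']: each empty component
# triggers a parent() step, each name a joinpath() step
Q = P.find("..A.B")                # <FSPath : /projects/A/B>, provided every
                                   # intermediate path exists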

FSPath objects

Now we start looking at concrete examples. The first and most important one is the FSPath class which relies on the file system operations defined in os.path. The FSPath object enables configurable file suffixes. In Python the file-suffix data are hardcoded; they can be accessed using the <imp> module function get_suffixes(). In our implementation we provide a class wrapper called SuffixInfo for the 3-tuples that describe a suffix.

import os
import imp
 
class SuffixInfo(object):
    def __init__(self, suffix_data):
        self.suffix    = suffix_data[0]
        self.read_mode = suffix_data[1]
        self.priority  = suffix_data[2]
 
class FSPath(Path):
    modulesuffixes = set()
    def __init__(self, pathobj):
        self.path = pathobj
 
    @classmethod
    def add_suffix(cls, suffixinfo):
        cls.modulesuffixes.add(suffixinfo)
 
    def fullpath(self):
        if self.ismodule():
            module = self.base()
            name, ext = os.path.splitext(module)
            P = self.parent().fullpath()
            return P+"."+name if P else name
        elif self.ispackage():
            directory, name = self.splitbase()
            P = directory.fullpath()
            return P+"."+name if P else name
        else:
            return ""
 
    def ispackage(self):
        if os.path.isdir(self.path):
            if os.path.isfile(self.path+os.sep+"__init__.py"):
                return True
        return False
 
    def isempty(self):
        return self.path == ""
 
    def exists(self):
        return self.ispackage() or self.ismodule()
 
    def ismodule(self):
        suffixes = [""]+[suffix.suffix for suffix in self.modulesuffixes]
        for suffix in suffixes:
            if os.path.isfile(self.path+suffix):
                return True
        return False
 
    def parent(self):
        return self.__class__(os.path.dirname(self.path))
 
    def children(self):
        if os.path.isdir(self.path):
            return [self.__class__(os.path.join(self.path, f)) for f in os.listdir(self.path)]
 
    def joinpath(self, *args):
        return self.__class__(os.sep.join([self.path]+list(args)))
 
    def base(self):
        return os.path.basename(self.path)
 
    def split(self):
        return self.path.split(os.sep)
 
    def splitbase(self):
        return self.parent(), self.base()
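
A small usage sketch, under an assumed layout /projects/app/__init__.py plus /projects/app/util.py; the suffix registration simply feeds the hardcoded data from imp.get_suffixes() into the class:

for suffix_data in imp.get_suffixes():
    FSPath.add_suffix(SuffixInfo(suffix_data))

p = FSPath("/projects/app/util")   # assumed layout, see above
print p.ismodule()     # True: /projects/app/util.py exists
print p.ispackage()    # False: util is a plain module
print p.fullpath()     # "app.util" since /projects is not a package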

The next two path objects have separate listings.

ZipPath objects

The ZipPath object is similar to the FSPath object. The main difference is that there is no natural way to walk within a zip-file, so the ZipPath class pulls out the packed directory structure and makes it explicit. This step has to be performed only once per zip-file, which means that the parent() and joinpath() operations that create new ZipPath objects from an existing ZipPath require no new unzip operations and no new inspections. Those become relevant at the places where we have to load module content – but load operations don’t affect Path objects.
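
A hedged sketch of that unpacking step, using only the standard <zipfile> API; the resulting entry set can then be shared by all ZipPath objects derived from the same archive:

import zipfile

def zip_entries(archive_name):
    zf = zipfile.ZipFile(archive_name)
    entries = set()
    for name in zf.namelist():                # archive members like 'A/B/C.py'
        parts = name.rstrip('/').split('/')
        for i in range(1, len(parts)+1):      # record every path prefix, which
            entries.add('/'.join(parts[:i]))  # makes directories explicit
    return entries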

TuplePath objects

The TuplePath object may be entirely useless, but since we enter here the level of plain data structures it might inspire more exotic Path objects that could be useful to someone.

Broken beyond repair – Python’s import system

Posted in Python on April 3rd, 2009 by kay – 9 Comments

It often happens that language features we once found distasteful become comfortable after a while. We learn to live with them, take their advantages and forget about their annoyance. For many Python developers the explicit self in the argument list of a method is among them, or len as a function. Experienced Python programmers are usually only weakly excited by alternative proposals, and the longer such a quirk is discussed the more their mood turns towards -0, which decodes as “why bother?”.

Other features are the cause of continuous struggle: we never get used to them but don’t talk much about them because they aren’t low hanging fruits. They might even turn from bad to worse in the course of improving them in some respects. Such features feel broken beyond repair, which means that it’s not easy to patch them; they need a fundamental redesign. In my opinion Python’s import machinery has this quality and it was regrettably forgotten in the many “cleanups” of Python 3.

Python’s import machinery is complicated to understand but this alone isn’t an argument against it. Many systems are far more complex but they are well thought out. The import machinery seems to be designed by implementation and cobbled together. This has finally led to accidental design artifacts like the module/script distinction which makes sense only in the light of Python’s import implementation.

Python always suffered from the problem that two modules M1.py and M2.py which import a third module M3.py from different import paths also receive different module objects <M3>. This is as if we gave up coordinate independence and believed that, as we move through space, all objects change continuously: the objects themselves, not our perspective on them. We really get different objects and we can’t compare our data with each other.

Not only are module objects path dependent, the success of a module import may also depend on which other modules are already alive. Suppose M1.py defines the following relative import: from . import M2. This import fails if the package P containing both M1.py and M2.py has not been imported already. When running M1.py from the command line it is irrelevant that we declared P as a package by placing an __init__.py file into it; we still get the increasingly famous error message:

ValueError: Attempted relative import in non-package

This is not a Monty Python skit. The package exists and exists not. The interpreter doesn’t make any use of the availability of package P as long as it hasn’t been imported explicitly. But why does it have to be imported only to find M2.py? There is no obvious answer. It’s just the way it is implemented.

“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” (Donald E. Knuth)

Back to the first problem. Once M1.py has imported M3.py, <M3> will be accessible under M1’s import path in the module cache sys.modules, which is a flat dictionary of {module-path: <module>} pairs. The module M2.py uses a different import path, so a new and different module <M3> is created and cached under another module-path.
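
The effect is easy to reproduce. Assuming a package P containing M3.py, run from P’s parent directory:

import sys
import P.M3                # cached as 'P.M3' in sys.modules
sys.path.append("P")       # now P's own directory is searched as well
import M3                  # cached again, this time as plain 'M3'
assert M3 is not P.M3      # one file, two distinct <module> objects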

This lookup with a module-path string is fast, usually faster than ordinary attribute lookup on objects, which has to jump through nested dictionaries. I do think that’s basically it. If we implement a proper module cache providing the moderate performance of attribute lookup on objects, we can get rid of the tedious import machinery Python has right now. But I’d like to go even a little further and claim that a whole import system can be designed directly from a tiny set of objects and their properties. No Finders, Loaders, Importers and all the clunky stuff which pretends to be objects but is actually a service which serves the void.

I’ll flesh this idea out in a series of articles on this blog soon. This can be understood then as the constructive part of my criticism.

Newsflash: chances are that higher Python officers will succeed in confusing the situation even more. PEP 382 is on the way. Good luck with explaining why Python needs two notions of packages when it is difficult enough to justify even one. It’s not a coincidence though that this is placed in the context of setuptools. If Python’s import system is a tragedy, then distutils and the layers above it are the accompanying farce.

Contextual Keywords

Posted in csharp, EasyExtend, Grammars, Python on March 22nd, 2009 by kay – Be the first to comment

Keywords and Contextual Keywords

A language keyword is a reserved word that cannot be used as an identifier for variables, constants, classes etc. Often it is an element of language syntax. Syntactical structure is simplified by means of keywords. A keyword can often be thought of as a “semantic marker”, i.e. it denotes a location where something particularly meaningful is happening. The while keyword points to a while-loop, the class keyword to a class definition etc. Other keywords like as or in link together expressions and have a fixed location within a statement.

A contextual keyword is sort of a hybrid. It acts much like a keyword but is not reserved. The word as was such a contextual keyword in Python until it became a proper keyword in version 2.5. The C# language defines lots of contextual keywords besides the regular ones. MSDN defines a contextual keyword as:

A contextual keyword is used to provide a specific meaning in the code, but it is not a reserved word [in C#].

Contextual keywords in C# are used in the grammar description of the language just like regular keywords. For example the add_accessor_declaration is defined by the following C# rule:

add_accessor_declaration: [attributes] "add" block

Keywords in EasyExtend

In EasyExtend keywords are implicitly defined by their use in a grammar file. The “add” string in the add_accessor_declaration automatically becomes a keyword. Technically a keyword is just an ordinary name and the tokenizer produces a NAME token. It’s easy to verify this: when we inspect the token stream of a function definition def foo(): pass we’ll notice that def and pass are mapped onto the NAME token:

>>> def foo(): pass
[token>
----------------------------------------------------------------------.
 Line  | Columns | Token Value         | Token Name    | Token Id     |
-------+---------+---------------------+---------------+--------------+
 1     | 0-3     | 'def'               | NAME          | 1 -- 1       |
 1     | 4-7     | 'foo'               | NAME          | 1 -- 1       |
 1     | 7-8     | '('                 | LPAR          | 7 -- 7       |
 1     | 8-9     | ')'                 | RPAR          | 8 -- 8       |
 1     | 9-10    | ':'                 | COLON         | 11 -- 11     |
 1     | 11-15   | 'pass'              | NAME          | 1 -- 1       |
 1     | 15-15   | '\n'                | NEWLINE       | 4 -- 4       |
 2     | 0-0     | ''                  | ENDMARKER     | 0 -- 0       |
----------------------------------------------------------------------'
>>

In the parse tables keywords are preserved and we will find state descriptions like ('pass', 1, 273) which explicitly refer to the pass keyword. They are clearly distinguished from token ids of type NAME which are used otherwise ( e.g. for foo ). The pass token in the token-stream has precisely the following structure: [1, 'pass', 1, (11, 15)]. Usually the token id is all the parser needs to know, but when the parser encounters a NAME token whose token value is a keyword, the keyword is turned into a token type. We can summarize this in the following function:

def compute_token_type(tok):
    tok_type  = tok[0]      # the standard token type
    tok_value = tok[1]
    if tok_type == token.NAME:
        if tok_value in keywords:
            return tok_value    # tok_value is a keyword and used as a token type
    return tok_type
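
Applied to the pass token shown in the stream above, and assuming keywords holds the keyword strings extracted from the grammar, the function behaves like this:

import token

keywords = set(['def', 'pass'])            # assumed: derived from the grammar
tok = [token.NAME, 'pass', 1, (11, 15)]    # the token structure shown above
assert compute_token_type(tok) == 'pass'   # keyword value becomes the token type
assert compute_token_type([token.NAME, 'foo', 1, (4, 7)]) == token.NAME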

Contextual Keywords in EasyExtend

Unlike keywords, which can simply be determined from the grammar, this isn’t possible with contextual keywords. They have to be made explicit elsewhere. When I considered contextual keywords in EasyExtend I refused to create special infrastructure for them but used a simple trick instead. Suppose one defines the following token in a Token.ext file

    CONTEXT_KWD: 'add' | 'remove'

This is just an ordinary token definition. When parse_token.py is generated from Token+Token.ext we’ll find the following settings

    CONTEXT_KWD = 119896

and

    token_map = {
        ...
        'add|remove': 119896,
        ...
    }

The parse_token.py module obviously provides sufficient information to determine contextual keywords from the CONTEXT_KWD token definition. We exploit this in the next function definition:

def generate_contextual_kwds(parse_token):
    try:
        ctx_kwd = parse_token.CONTEXT_KWD
        t_map   = swap_dict(parse_token.token_map)      # swap_dict turns dict keys
                                                        # into values and values into keys
        contextual_keywords = set( t_map[ctx_kwd].split("|") )
        assert contextual_keywords <= keywords
        return contextual_keywords
    except AttributeError:
        return set()
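
The swap_dict helper used above is assumed to simply invert a dictionary, turning keys into values and values into keys; a minimal version:

def swap_dict(d):
    # {'add|remove': 119896}  ->  {119896: 'add|remove'}
    return dict((value, key) for key, value in d.items())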

Now we have to understand the semantics of contextual keywords. The following code illustrates the behavior of the parser in the presence of contextual keywords.

def compute_next_selection(token_stream, parse_table):
    tok = token_stream.current()
    tok_type = compute_token_type(tok)
    selection = parse_table.selectable_ids()   # provides all token ids and nonterminal ids
                                               # which are admissible at this state
    for s in selection:
        if s == tok_type or tok_type in first_set(s):
            return s
    # here we start to deal with contextual keywords
    if tok_type in contextual_keywords:
        # replace the token type of the contextual keyword by NAME
        tok_type = token.NAME
        for s in selection:
            if s == tok_type or tok_type in first_set(s):
                return s
    raise ParserError

When a keyword is needed in the particular context ( tok_type was accepted in the first for-loop ), either the tok_type itself is returned or the node id of a non-terminal that derives the tok_type ( tok_type is in the first_set of the node id ). If this fails it is still permitted that the NAME token is examined in the same way, in place of the contextual keyword. So if a particular context requires a contextual keyword this keyword is provided; otherwise it is checked whether the context just requires a NAME, and the NAME is provided instead. This has the practical implication that a contextual keyword can still be used as an identifier of a variable.

CPython vs IronPython

Posted in Python on March 10th, 2009 by kay – Be the first to comment

Two video shots tell more than 2000 words.

CPython

IronPython

The thunk_stmt and programmable semantics in Python

Posted in DSL, Grammars, Python on March 9th, 2009 by kay – 4 Comments

It seems as if there is just a single direction of syntactical improvement of Python these days: the introduction of Ruby style blocks. Tav suggests reusing Python’s with-statement:

with_stmt: "with" expression (["as" target] | ["do" [parameter_list]]) ":" suite

Guido responds promptly and rejects the semantic dispersion of the with-statement: no changing of meaning just by means of a magical do keyword. In Tav’s vision the do-branch of the with-statement is turned into an anonymous closure that is called by a special __do__ function. The __do__ shall also be customizable, which finally leads to DSLs:

But more enterprising __do__ functions could do interesting things like rewriting the AST of the BLOCK_CODE. Given that this is already done by many many Python libraries, it’s not much of a stretch of the imagination. The benefit? Super nice Pythonic DSLs!

Well, it is quite easy to turn this idea upside down. Instead of re-purposing a special purpose statement like with, one can simply turn each expression and “small statement”, e.g. raise, assert, return, assignment etc., into a compound statement using the following definition

thunk_stmt: small_stmt ':' suite

One has to elaborate on it a little and change some rules of Python’s existing grammar to preserve LL(1) conformance

stmt: compound_stmt
single_input: NEWLINE | compound_stmt NEWLINE
compound_stmt: ( if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | 
                 funcdef | classdef | thunk_stmt )
thunk_stmt: small_stmt ( ':' suite | (';' small_stmt)* [';'] NEWLINE )

Here the thunk_stmt is a fusion of the definition stated above and the simple_stmt of Python’s grammar. This definition doesn’t lead to programmable syntax but to programmable semantics, which is what Tav intends to support with a customizable __do__.

An obvious use case would be property definitions:

x = property():
    def get(self):
        return self._x
 
    def set(self, value):
        self._x = value

It simply appends a block of code to a valid assignment x = property().
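
For contrast, a sketch of what the appended block could effectively desugar to in today’s Python, collecting the accessors in a throwaway helper ( the helper name is purely illustrative ):

def make_property():
    def get(self):
        return self._x
    def set(self, value):
        self._x = value
    return property(get, set)

x = make_property()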

Same goes with

employees.select.do(emp)

which might be extended to

employees.select.do(emp):
    if emp.salary > developer.salary:
        return fireEmployee(emp)
    else:
        return extendContract(emp)

Without an appended block the expression

employees.select.do(emp)

might simply mean

employees.select(lambda emp: emp)

It should be clear that Python needs a protocol to support this, which means here that a __special__ function is attached to an object and that for user-defined classes this __special__ function can be overwritten.
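
A minimal sketch of what such a protocol could look like; the __thunk__ name and the run_thunk driver are hypothetical, not part of any concrete proposal:

def run_thunk(obj, block_namespace):
    # look up the hook on the type, like other __special__ methods,
    # so user defined classes can override it
    hook = getattr(type(obj), '__thunk__', None)
    if hook is None:
        raise TypeError("object does not support an appended block")
    return hook(obj, block_namespace)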

Lonely Python is Intelligent Design and how to liberate from it

Posted in DSL, Grammars, Python on March 7th, 2009 by kay – 2 Comments

Python is doomed. Well, of course it is not yet doomed. Python is an evolutionary dead end living lonely on its own fitness plateau. It is clear that Python won’t have offspring because of significant whitespace aka indentation sensitivity. No children, no future.

You might ask: where is the news? There is none; we have known this forever and a decade, but sometimes it is in the news again. Read this post by the author of a programming language called Reia. He struggles endlessly with indentation delimited blocks and finally abandons them for Ruby style end terminators ( not before a popularity poll in the Web 2.0 age and consulting the author of the failed Logix project, who is now a Ruby advocate ). Finally his language looks like Ruby, which demonstrates the evolutionary fitness of Ruby. Python on the other hand is a piece of Intelligent Design. Guido van Rossum managed a “beautiful set of compromises” when he designed the Python grammar. So it ends up in overspecialization whereas languages like JavaScript flow more with nature. To illustrate this nature you might examine the file jQuery.js that implements the popular jQuery library. The impatient might immediately scroll to the last line. It’s sweet.

Some people have argued that Haskell style layouts may save the soul of significant whitespace. Transplanted to Python this would imply explicit block delimiters that can also be omitted in favour of the now classical indentation style. If you are not willing to decide, just give multiple choice style freedom to everyone.

In the rest of the article I’m discussing optional explicit delimiters for Python built into the grammar and making the statement/expression distinction weightless. I promise not to cringe when it becomes “ugly”. Not behaving like little girls seems to be an evolutionary dead end among male, adult programmers as well but I take it with stoicism.

Grammar Extension

atom: ( '(' [embedded_stmt|yield_expr] ')' |'[' [listmaker] ']' |
        '{' [dictmaker] '}' |
         '`' testlist1 '`' |
         NAME |  NUMBER | STRING+)
trailer: '(' [arglist|embedded_suite] ')' | '[' subscriptlist ']' | '.' NAME
 
embedded_stmt: ( embedded_funcdef |
    embedded_if_stmt |
    embedded_while_stmt |
    embedded_for_stmt |
    embedded_try_stmt |
    embedded_with_stmt |
    embedded_classdef |
    embedded_small )
 
embedded_small: small_stmt (';' small_stmt)*
embedded_suite: embedded_stmt+
embedded_funcdef: [decorators] 'def' NAME parameters ':' embedded_suite
embedded_if_stmt: ( 'if' test ':' embedded_suite ('elif' test ':' embedded_suite)*
                   ['else' ':' embedded_suite])
embedded_while_stmt: 'while' test ':' embedded_suite ['else' ':' embedded_suite]
embedded_for_stmt: ( 'for' exprlist 'in' testlist ':' embedded_suite
                     ['else' ':' embedded_suite] )
embedded_try_stmt: ('try' ':' suite ((except_clause ':' embedded_suite)+
                    ['else' ':' embedded_suite] ['finally' ':' embedded_suite] |
                   'finally' ':' embedded_suite))
embedded_with_stmt: 'with' test [ with_var ] ':' embedded_suite
embedded_classdef: 'class' NAME ['(' [testlist] ')'] ':' embedded_suite

This is a conservative grammar extension of the official Python 2.5 Grammar. Conservative means that all expressions / statements written in Python 2.5 remain valid. The non-terminals that substitute existing ones are atom and trailer. In detail, the testlist_gexp non-terminal in atom is replaced by the newly defined embedded_stmt non-terminal, and instead of an arglist an embedded_suite can be used. The embedded_stmt non-terminal is the gateway for all kinds of statements embedded into expressions. The statement-expression distinction becomes de facto obsolete. What happens to significant whitespace? Post-processing the token stream filters whitespace inside of parentheses. Hence the token stream won’t contain NEWLINE, INDENT and DEDENT tokens within expressions. All those embedded_xxx non-terminals are only adjustments of existing ones reflecting the absence of the mentioned whitespace tokens in expressions. Notice that you can’t mix: once you are living in a grammatical expression, whitespace becomes definitely insignificant.

The grammar is not LL(1) and you need a stronger parser than Python’s own one to handle it. There is also no behavior associated with the new definitions. Semantics shall be entirely neglected; only the future shape of things shall be highlighted. Here are a few examples:

Multiline Lambda

lambda x,y: (
    z = x+y;              # semicolons always separate small statements
    print z;
    return z
)(3, 2)

Long Assertions

assert ( if x>0:
             x == 2
         else:
             x == -2 )

The Return of the Return

def foo(x):
    return ( return )

My Property

x = property(
    def fget(self):
        return self._x
    def fset(self, value):
        self._x = value
)

Function in the middle

x + (def fact(n):
     if n<=1:
          return 1
     else:
          return fact(n-1))(7) + z