“Resolver Games” are alive!

Resolver One competition

I admit I participated in the first round of the Resolver One competition in January as one of the losers. When Resolver Systems announced their challenge I got the impression they encouraged using their spreadsheet almost as a medium for expressing ideas and thinking outside the box. However, Resolver Systems is yet another company which exists to sell products and consulting, not a patron of modern art or hacking experiments. So the awarded spreadsheets look a bit conventional and technically unsophisticated. Their sophistication lies in external factors like the scientific ideas they exercise. Some make extensive use of the graphics capabilities of .NET, which are also outside of the Resolver One API. It’s good demo stuff nevertheless, and this might be its main purpose in the end.

Resolver Games

I’m glad to see that Resolver Games, my own contribution, is online now. Resolver Games is about simple learning games for word lists. One example I used was from “Teuton” ( Teuton is a German Python dialect, inspired by an Artima posting of Andy Dent, which replaces Python’s English keywords and builtins with German translations – Teuton is one of my fun projects and a langlet demo for EasyExtend ); the other one is IPA – the International Phonetic Alphabet – which is just a great learning target.

A Resolver Game consists of a pair of Resolver One spreadsheets: one for the word-list/game data and the other for the game board. The game data spreadsheet is conventional and stateless. The game board is designed to be generic and independent of the particular game. The game board was tricky to program because it uses the spreadsheet for event handling and user interactions. Writing an event handler is a bit like scanning a screen and noticing changes by comparing the actual with the previous image point by point. Resolver One stores data in the background and this data affects re-computations. Sometimes I wanted a user event to cause re-computations without changing the displayed data. I used the following trick: add and remove a blank in the cell data and swap between the two representations periodically.

"+" -> "+ " -> "+" -> "+ " -> ...

When the cell content is “+” change it to “+ ” and vice versa. This goes unnoticed because there is no visual effect associated with the blank. Once I got into whitespace oriented programming the hard problems with state changes in Resolver Games became solvable.
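The toggling itself is trivial; here is a minimal Python sketch of the idea ( the function name is my own illustration, not part of the Resolver One API ):

```python
def toggle(cell_value):
    # Append or strip a trailing blank: the cell content changes,
    # which forces a re-computation, but nothing visible changes.
    if cell_value.endswith(" "):
        return cell_value[:-1]
    return cell_value + " "
```

Calling `toggle` on every event tick swaps the cell between the two representations.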

One could argue that Resolver One is simply not the right tool for the job and is overstretched in this application. I don’t disagree, but this line of argument has always appeared philistine to me and I reserve it for those who simply don’t know better. A more serious objection to Resolver Games might be the fun aspect. Is it really fun to play? Resolver Games are surely a bit poor in game dramaturgy and visual effects. So I’d rather say NO, but I’m not a gamer anyway.

Posted in Python | 3 Comments

The future of EasyExtend

The state of EasyExtend

Maybe an autumnal look at EasyExtend is justified. EE was and is an alien project which never resonated well with the Python community and its potential users. Actually, up to now I don’t know anyone who has ever used it ( besides myself, of course ) and I wouldn’t be surprised if that never changes. For a Python programmer there are numerous alternatives now, like python4ply, MetaPython and also 2to3 – why not? – which can be used to extend Python. None of them were available when I started with EE in 2006. Some people might also attempt to revive Logix, which is among the more famous “dead” projects in the Python community. Logix might be in style and ambition precisely what Python users are looking for. EasyExtend isn’t even tangential.

Whenever I thought EE had become stable I challenged it with bigger, more difficult problems: simultaneous transformations of multiple langlets, context sensitive languages, quirky real world grammars, online syntax definitions, source directed transformations, more expressive grammar syntax, language agnosticism etc.

Another major issue is performance. In the past I’ve used Psyco and also Cython. They boosted performance quite well and I got a 3-5 times speedup for lexer+parser, but I clearly have no performance model and I don’t see why those speedups should be the limit. Python isn’t the right tool for the job here, and I suspect this has been an impediment for the current implementation already, since I overused containers like tuples and dicts instead of classes and objects with their slow attribute access.

From EasyExtend to Langscape

The most likely path into the future of EasyExtend is to factor out components like the parser generator, the langlet transformer and most of the csttools and rewrite them in C++. I’ll probably start a completely new project which I intend to call “Langscape”. By means of SWIG it should be possible to use Langscape also from environments like the JVM or the CLR. As a Python front end I’ll use the code I’ve developed for EasyExtend 4, which will probably never go public in its current form. I’ll still consider doing the functional testing in Python and I also want to preserve interactivity. Both the language front end and the back-end bindings become separated from Langscape. Langscape only deals with source code, grammars and CSTs.

Posted in General | 4 Comments

Bickering about unit testing

Doubts on the effectiveness of unit testing

Unit testing has entered the programming mainstream with XUnit packages and derivations of them. They are available for all mainstream programming languages. It is unusual today to ship an OSS project without any tests. Programmers can read test cases like behavioral specifications of APIs and they often learn a lot about a system from this sort of code reading ( at least I do ).

Still, unit testing is disputed as a reasonable practice by many respected programmers, and I wonder if guys like Joel Spolsky or James Coplien aren’t basically right. Isn’t it true that UTs have to be permanently adapted as our code base changes, and doesn’t this imply a significant maintenance overhead, even and foremost in early phases? Coplien suggests design-by-contract as a more lightweight and DRY alternative to writing UTs: place pre- and post-conditions directly into the code and check the available units, i.e. the interface specifications. Isn’t this far more agile, and won’t better coding practices make UTs go away, just like many of the once celebrated design patterns went away when using powerful language-level concepts like multimethods and higher-order functions?
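In Python, Coplien’s suggestion can be sketched with a small decorator; this is my own illustration of the idea, not code from any particular design-by-contract library:

```python
import functools

def contract(pre=None, post=None):
    """Check a pre-condition on the arguments and a post-condition on the result."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "pre-condition violated"
            result = func(*args, **kwargs)
            if post is not None:
                assert post(result), "post-condition violated"
            return result
        return wrapper
    return deco

@contract(pre=lambda items: len(items) > 0,
          post=lambda res: res >= 0)
def spread(items):
    # The conditions document and check the unit right where it is defined.
    return max(items) - min(items)
```

The conditions live next to the code they specify, so they evolve with it instead of in a separate test module.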

Black box testing

When you work as a tester in the industry you essentially specify and implement test suites according to specifications. Your product is not the system under test ( SUT ). You are not interested in the inner workings of a system and its components. The SUT is a black box and the SUT code might change arbitrarily. If any code is exposed it is SUT API code accessible by client applications like your test app. The API might even be entirely absent; instead you’ll test in- and outgoing commands sent back and forth between your test app and the SUT according to a specified command protocol. All of those tests are functional- or system-level tests and the tested units remain hidden. As a tester you don’t care about the way the system is built but only how it behaves.

Can we use our standard UT frameworks to implement black box tests? Well, isn’t this actually their most frequent use?

Are there any UTs around?

What if the most common unit tests we find in the wild are functional or system black-box tests applied to API-level functions/classes, implemented in one of the available unit testing frameworks? Some of the system components are abstracted away and get replaced by mock objects representing networks or C/S databases, but this just avoids system integration tests. A close reading of unit testing might indeed lead to Jim Coplien’s conclusion that they are better implemented as pre- and post-conditions, but you won’t test a system on such a fine-grained level. Using UT frameworks for functional tests has shortcomings, but that doesn’t mean they are not used for them. When the interface is kept small, the likelihood that it gets badly broken as you evolve your system is manageable. This is the prime reason why programmers do not suffer from writing UTs and maintenance costs are kept under control. Every software tester in the industry knows that writing tests takes much effort and is very costly, but changes in public APIs aren’t a major reason.

UTs and beyond

The missing link between current UT systems and a test system for all kinds of SUTs is a dataflow connection which triggers tests in a particular order. By this I mean that each test can produce data as a side effect which may be required within the setup of another test case. In JUnit 4 we have `@Before` and `@After` annotations for running setups and tear-downs unconditionally. By adding two more annotations, `@require` and `@provide`, it becomes possible to specify conditions on running tests by means of the need for data. A test runner has to match the `@require`d data against the `@provide`d ones and determine a schedule.

In case of Java this can be checked at compile time using an annotation processor. In .NET one might apply those checks once the assemblies are loaded during initialization of the test-runner. The only disadvantage of load-time checks is that all available test-modules have to be loaded initially and not on demand.
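The matching step of such a test runner can be sketched in a few lines of Python; the `(name, requires, provides)` triples are my own invented stand-in for the annotated test methods:

```python
def schedule(tests):
    """Order tests so that everything a test requires has been
    provided by some earlier test in the schedule.

    Each test is a (name, requires, provides) triple with a unique name."""
    produced = set()
    ordered = []
    pending = list(tests)
    while pending:
        # a test is runnable once all of its required data is produced
        runnable = [t for t in pending if set(t[1]) <= produced]
        if not runnable:
            raise RuntimeError("unsatisfiable @require dependencies")
        for name, _requires, provides in runnable:
            ordered.append(name)
            produced |= set(provides)
        pending = [t for t in pending if t[0] not in ordered]
    return ordered
```

A test providing `"user_id"` would then always be scheduled before every test requiring it.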

Posted in Testing | 1 Comment

Choosers and ChooserMixins in C++ and Python

Chooser Objects

From time to time I’m amazed to find a simple algorithm which seems like a low-hanging fruit that was just overlooked. In this particular case it is about generating and utilizing test data in a both simple and flexible manner. Mark Seaborn described the method in his outstanding blog article How to do model checking of Python code. He distilled what we might call the Chooser Algorithm from a scientific paper which buries the message under all sorts of methodological considerations, special case treatments and other bloat. This is sad because good algorithms are the crown jewels of programming. It also helped that he provided an implementation in Python and not in C or some sloppy computing-scientist-only pseudocode notation which changes from author to author.

We can motivate `Chooser` objects as follows.

Suppose you have a control flow statement defined in a function `f`. The path the flow control takes is determined by the value of some variable `x`:

def f(*args):
    ...
    x = g(*args)
    if x>0:
        ...
    else:
        ...

When we want to test the if-statement alone we can ignore the value of `x` computed by `g`. A simple method to achieve this is to introduce a for-loop in the code which iterates over a range of values which represent jumps to the individual if-statement branches:

def f(*args):
    ...
    x = g(*args)
    for x in (1,-1):
        if x>0:
            ...
        else:
            ...

However, this is quite a heavy change and we would likely not want to repeat it in other places. Instead of adding a for-loop we can introduce a non-deterministic choice over the values 1 and -1 and pull the iteration, represented by the loop, out of the function:

def test(chooser):
    def f(*args):
        ...
        x = g(*args)
        x = chooser.choose([1,-1])
        if x>0:
            ...
        else:
            ...
    f(1,2,3)  # call f with appropriate arguments

Here we inserted a call to `choose` which represents a set of choices. No new control flow is introduced. The function `f` must be called as many times as there are choices passed to `choose`.

The repeated call of `f` is managed by a new function `check` which is part of the Chooser Algorithm. It actually calls the `test` function, which has a uniform interface and takes a single `chooser` parameter.

class ModelCheckEscape(Exception): pass
 
def check(func):
    stack = [[]]
    while stack:
        chosen = stack.pop()
        try:
            func(Chooser(chosen, stack))
        except ModelCheckEscape:
            pass

The `check` function creates a `Chooser` object and passes it to `func`, which represents the system under test. The `Chooser` constructor takes two arguments: one is a list called `chosen`, popped from a stack of such lists; the other is the stack itself, which might be filled with new lists.

class Chooser(object):
    def __init__(self, chosen, stack):
        self._chosen = chosen
        self._stack  = stack
        self._it     = iter(chosen)
 
    def choose(self, choices):
        try:
            choice = self._it.next()
            if choice not in choices:
                raise Exception("Program is not deterministic")
            return choice
        except StopIteration:
            self._stack+=[self._chosen + [choice] for choice in choices]
            raise ModelCheckEscape()

This is the definition of the `Chooser` object. It is a tiny bit of elementary but ingenious code. In order to understand what it does, consider the following test function with its three calls of `choose`:

def test(chooser):
    x = chooser.choose([True, False])
    if x:
        y = chooser.choose(["a", "b"])
    else:
        z = chooser.choose(["y", "z"])

On each `choose` call a value is returned from the `_it` iterator. Those values must conform to the choices passed to `choose` for every call of `choose`; otherwise an exception ( “Program is not deterministic” ) is raised. So we expect `_it` to be an iterator wrapped around lists like `[True, "a"]`, `[True, "b"]`, `[False, "y"]`, `[False, "z"]`. Those lists are associated with the choices being made at (`x`, `y`) or (`x`, `z`).

In fact we observe some more of those lists, starting with the empty list `[]` and the incompletely filled lists `[True]` and `[False]`. When `_it` is wrapped around an incomplete list, one of the `choose` calls will raise a `StopIteration` exception at `_it.next()`. Assume for example that `_it = iter([True])`: then `_it` is already exhausted after the choice for `x` and will raise `StopIteration` at the definition of `y`. At this point each of the choices at `y`, i.e. "a" and "b", will produce a new list. Those lists are `[True, "a"]` and `[True, "b"]`, which are now complete. New lists are pushed onto the stack as long as incomplete lists are popped from the stack in `check()`.

As a special case we consider a simple linear sequence of `choose` calls

def test(chooser):
    x = chooser.choose([True, False])
    y = chooser.choose(["a", "b"])
    z = chooser.choose(["y", "z"])

The set of complete lists according to this sequence will be the Cartesian product of the choices: `{True, False} x {"a", "b"} x {"y", "z"}`. If you just want Cartesian products there are more efficient alternatives to create them, though.

These are the `Chooser` basics. For Python you can download the used code here.

Choosers in C++

I gave a C++ and STL based implementation of `Chooser` objects. The Chooser C++ API closely follows the Python example. You can download the code from the linked document.

In its most general form the `choose` method has following signature:

    template <typename Container>
    typename Container::value_type choose(Container& choices)

The return type is derived from the container’s `value_type` attribute. Other than that the algorithm relies only on iterators, which means that any STL container can be used. We can rewrite the simple `test` function above in C++:

void test(Chooser& chooser) {
    int x = chooser.choose(2);
    if (x) {
        string s = "ab";
        char y = chooser.choose(s);
    }
    else {
        string s = "yz";
        char z = chooser.choose(s);
    }
}

This is not all that much overhead. In case of the `x` definition we use an overloaded version of `choose` which takes a single integer parameter `k`. This is equivalent to a choice of values within the range `{0, 1, …, k-1}`. The most relevant case may be `choose(2)` which is the boolean choice.

The `string` type is an STL container type as well. More precisely it is a `typedef` for `basic_string<char>`. We can create a `string` object with a string literal but we cannot pass a string literal directly to `choose`, which expects an explicit reference to a container from which the return type is derived ( `char` in this case ).

ChooserMixin classes

Suppose we want to introduce `Chooser` objects into arbitrary methods of an existing class. The Chooser Algorithm is implemented such that a `Chooser` object is explicitly passed as a parameter, but this would require changes in a method’s interface, something we try to avoid.

Visibility of `Chooser` instances in the local scope of a method can also be achieved by making them global or member variables. An inexpensive method which is safer than using globals is a mixin class. The mixin class defines a `Chooser` instance, and if some class wants to use it, it derives from the mixin.

class ChooserMixin {
protected:
    Chooser chooser;
public:
    virtual void test() = 0;
 
    void check()
    {
        ...
        this->chooser = Chooser(chosen, queue);
        test();
        ...
    }
};

The `test` method is abstract. If `f` is the method we want to check, then the implementation of `test` would just invoke `f` with appropriate parameters:

void test() {
    f(arg1, arg2, ...);
}

It’s easy to change `test` without touching any other source code.

More advantages of ChooserMixins

When we use `ChooserMixin` we can define the choices `C` being used in `chooser.choose(C)` also as member variables. This makes choices configurable. A subclass of a `ChooserMixin` might read data from an external file or a database and populate the `C` container.

I wonder if it’s even possible to get rid of `T x = chooser.choose(C)` assignments in the method source by using data binding techniques. In JavaFX we can restate the assignment in the form

`var x = bind chooser.choose(C)`

The bound variable `x` is updated whenever `C` is changed. Instead of creating a new instance of `Chooser` on each iteration, we replace the members defined in a single instance and trigger updates of `C` which in turn causes `chooser.choose(C)` to produce a new value. It remains to be examined if this idea is somehow practical.

Posted in Chooser, CPP, Python, Testing | 2 Comments

Python – Hibernate – Jynx

Jynx 0.4 goes Hibernate

In Jynx 0.4 JPA/Hibernate annotations are supported. Although this is still work in progress, some of the more complex nested annotations have been tested, as well as Hibernate extension annotations, which cannot be single-name imported along with the corresponding JPA annotations without conflicts.

Jynx 0.4 provides other new features as well. One can now use `@signature` decorators to express Java method overloading. A simple Java parser is integrated; it was necessary to improve the Java class detection heuristics used to determine required imports when a Java proxy is created from a Jython class and compiled dynamically. Finally there is a new `@bean_property` decorator which, given a `bean_property`-decorated method `def foo(_):_`, creates a private attribute `foo` along with public getters and setters. Full documentation of Jynx as well as its changes can be found here.

Using Hibernate from Jython

Starting and closing sessions and managing simple transactions is not difficult in Hibernate. In Jynx two context managers for with-statements are defined which hide open+close and begin+commit/rollback boilerplate from the programmer. Code for Hibernate sessions and transactions lives then in with-statement blocks.

class hn_session(object):
    '''
    Context manager which opens/closes hibernate sessions.
    '''
    def __init__(self, *classes):
        sessionFactory = buildSessionFactory(*classes)
        self.session   = sessionFactory.openSession()
 
    def __enter__(self):
        return self.session
 
    def __exit__(self, *exc_info):
        self.session.close()
 
class hn_transact(object):
    '''
    Context manager which begins Hibernate transactions and performs commits/rollbacks.
    '''
    def __init__(self, session):
        self.tx = session.beginTransaction()
 
    def __enter__(self):
        return self.tx
 
    def __exit__(self, type, value, traceback):
        if type is None:
            self.tx.commit()
        else:
            self.tx.rollback()

A simple session using a single Entity Bean may then look like:

from __future__ import with_statement
 
from jynx.lib.hibernate import *
 
@Entity
class Course(Serializable):
    @Id
    @Column(name="COURSE_ID")
    @signature("public int _()")
    def getCourseId(self):
        return self.courseId
 
    @Column(name="COURSE_NAME", nullable = False, length=50)
    @signature("public String _()")
    def getCourseName(self):
        return self.courseName
 
    @signature("public void _(String)")
    def setCourseName(self, value):
        self.courseName = value
 
    @signature("public void _(int)")
    def setCourseId(self, value):
        self.courseId = value
 
with hn_session(Course) as session:
    course  = Course()
    course.setCourseId(121)
    course.setCourseName(str(range(5)))
    with hn_transact(session):
        session.saveOrUpdate(course)

Boilerplate Reduction

The standard class decorator for creating a Java class from a Jython class in Jynx is `@JavaClass`. Jynx 0.4 introduces some slightly extended decorators, in particular `@Entity` and `@Embeddable`. Not only do they make Jython code more concise because one doesn’t have to stack `@Entity` and `@JavaClass`, but translating with `@Entity` also turns some automatically generated Java attributes into transient ones, i.e. a `@Transient` annotation is applied which prevents those attributes from being mapped to table columns.

The massive boilerplate needed for defining a valid Entity Bean in the preceding example can be reduced using the `@bean_property` decorator:

@Entity
class Course(Serializable):
    @Id
    @Column(name="COURSE_ID")
    @bean_property(int)
    def courseId(self): pass
 
    @Column(name="COURSE_NAME", nullable = False, length=50)
    @bean_property(String)
    def courseName(self): pass

Applied to `def courseId(self): pass` the `@bean_property` decorator will cause the following Java code translation

    @Id @Column(name="COURSE_ID") private int courseId;
    int getCourseId() { return courseId; }
    void setCourseId(int value) { courseId = value; }

which specifies a complete Java Bean property.
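For illustration only, here is a rough pure-Python analogue of what `@bean_property` provides; the real decorator generates the Java code above, not a Python property:

```python
def bean_property(jtype):
    # jtype mirrors the Java type argument; this sketch only uses
    # the name of the decorated method and backs it by a "private"
    # underscore-prefixed attribute.
    def deco(method):
        priv = "_" + method.__name__
        def getter(self):
            return getattr(self, priv, None)
        def setter(self, value):
            setattr(self, priv, value)
        return property(getter, setter)
    return deco

class Course(object):
    @bean_property(int)
    def courseId(self):
        pass
```

Reading and writing `course.courseId` then goes through the generated getter/setter pair.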

Example

In the following example two Entities are associated using a one-to-one mapping between primary keys.

@Entity
class Heart(Serializable):
    @Id
    @bean_property(int)
    def id(self):pass
 
@Entity
class Body(Serializable):
    @Id
    @bean_property(int)
    def id(self):pass
 
    @OneToOne(cascade = CascadeType.ALL)
    @PrimaryKeyJoinColumn
    @bean_property(Heart)
    def heart(self):pass

Now we can check the behavior:

# session 1
with hn_session(Heart, Body) as session:
    body = Body()
    heart = Heart()
    body.heart = heart
    body.id = 1
    heart.id = body.id
    with hn_transact(session):
        session.saveOrUpdate(body)
        session.saveOrUpdate(heart)
 
# session 2
with hn_session(Heart, Body) as session:
    with hn_transact(session):
        b = session.get(Body, 1)
        assert b
        assert b.heart
        assert b.heart.id == 1

Summary

With Hibernate support in Jython we notice another clear departure from the CPython world and its web frameworks and components. Hibernate is distinctively Java, and special techniques are needed to create compile-time Java properties in a dynamic language. Jython has long been a second-class citizen in Python land. I suspect this is going to change with support for Java frameworks which alone have as many users/downloads as Python.

Posted in Hibernate, Jynx, Jython | Leave a comment

Jynx 0.3 – how to fix custom class loaders for use with Jython

Broken class loaders

Jynx 0.2 contained an ugly workaround for a bug I couldn’t fix for quite a while. The bug can be described as follows: suppose you defined the code of a Java class `A` and compiled it dynamically:

A = JavaCompiler().createClass("A", A_source)

When you attempt to build a subclass

class B(A): pass

a `NoClassDefFoundError` exception was raised:

Traceback (most recent call last):
  File "C:\lang\Jython\jcompile.py", line 185, in <module>
    class B(A):pass
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:466)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
java.lang.NoClassDefFoundError: org/python/core/PyProxy (wrong name: A)

In that case the Jython runtime failed to create a proxy class for `B` while locating `PyProxy`, which is a Jython core interface. From the traceback it wasn’t clear how to locate the error, so I started to debug into Jython from NetBeans.

This is what happened: Jynx defines a `ByteClassLoader` class which is a custom class loader for the dynamic compilation of `A`. When `A` is loaded with `loadClass`, a `findClass` method is called to locate `A`, and this method had to be overridden. The `ByteClassLoader` was bound to `A` automatically and used by Jython to locate interfaces such as `org.python.core.PyProxy`. This didn’t work, which explains the failure. A possible fix is to respond to classes which cannot be dealt with by `ByteClassLoader` and delegate the `findClass` call to the parent class loader.

Curiously Jython stopped using `ByteClassLoader` after I changed the inheritance hierarchy from

class ByteClassLoader(ClassLoader):
    def __init__(self, code):
        super(ByteClassLoader, self).__init__(ClassLoader.getClassLoader())
        ...

to

class ByteClassLoader(URLClassLoader):
    def __init__(self, code):
        super(ByteClassLoader, self).__init__([], ClassLoader.getSystemClassLoader())
        ...

The `URLClassLoader` provides the opportunity to add `URL`s at runtime, thereby modifying the `CLASSPATH` dynamically.

No disk dumps in Jynx 0.3

Prior to Jynx 0.3 a workaround had been to dump `A` to disk and load the class from there. We discussed the subtle nuances of selecting the right class loader, and loading `A` from disk moved the machinery into a correct state. This wasn’t only cumbersome but also a hurdle when a programmer intended to work within a Java sandbox. With Jynx 0.3 I feel prepared to explore Java integration with Jynx on GAE-J.

Posted in Jynx, Jython | Leave a comment

Jynx 0.2 released

I’ve released Jynx 0.2. Jynx is a Jython package which utilizes dynamic Java compilation from Jython and improves on Java scripting. With Jynx 0.2 two major new features are implemented now.

Annotation Extraction

In the initial Jynx 0.1 release an `annotation` object was defined which can be used as a decorator. A Python class such as

@JavaClass
class TestClass(Object):
    @annotation("Test")
    @signature("public void _()")
    def test_report_test_failure(self):
        assertTrue("length of empty list is 0", len([]) != 0)

equipped with the `JavaClass` decorator is compiled on the fly into a Java class which acts as a proxy for a Python object and provides the correct interface for being used within a Java framework that expects methods with particular type signatures and annotations. The class defined above can be used within JUnit 4.X.

Jynx 0.2 provides a new classmethod `extract` of the annotation class which can be used to extract Java annotation classes and acts as a factory function for Jython annotation objects.

# import Test annotation in JUnit 4.X
from org.junit import Test      
 
# a Python annotation object
Test = annotation.extract(Test) 
 
# takes a signature object as a parameter and returns a new Jython
# annotation object. The Java code generator will create a method
# with the correct signature and the @Test annotation
Test = Test(signature("public void _()"))
 
@JavaClass
class TestClass(Object):
    @Test
    def test_report_test_failure(self):
       assertTrue("length of empty list is 0", len([]) != 0)

As we see there is no overhead left here. When programming against a Java API / framework, Jython annotations can be defined within a single file and used application wide.

Classpath Manipulation

For reasons which are not completely transparent to me, Java doesn’t permit runtime classpath manipulations. The JDK defines an `addURL` method in a special class loader called `URLClassLoader`. This method is protected and cannot generally be accessed without reflection. Internally the Sun JVM uses such a loader class ( or a subclass of it ), and if you are willing to accept a hack and program against an implementation detail, you can use the JVM’s default class loader and add new paths to the classpath:

from java.lang import ClassLoader
from java.net import URL

systemLoader = ClassLoader.getSystemClassLoader()
systemLoader.addURL(URL("file:///C:/junit-4.6.jar"))

Jynx defines a `ClassPath` class and a new `sys` module attribute `classpath`. Adding a file system path `P` to `sys.classpath` results in a method call

systemloader.addURL(URL("file:"+pathname2url(pth)))

which converts the file system path into a Java URL object and adds it to the classpath. Additionally the same path is added to the `PYTHONPATH` via `sys.path`:

sys.classpath.append(r"C:\junit-4.6.jar")

The advantage is that each Python package can maintain the Java packages it depends upon and no global `CLASSPATH` environment variable has to be adapted unless a Java or Jython class defines its own class loader.
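A possible shape of such a `ClassPath` class, sketched in plain Python with a fake loader so it runs outside the JVM; names and details are assumed here, not Jynx’s actual implementation:

```python
class ClassPath(list):
    """A list whose append also feeds the path, converted to a URL,
    into the class loader's addURL method."""
    def __init__(self, loader, to_url):
        super(ClassPath, self).__init__()
        self.loader = loader    # e.g. the JVM system class loader
        self.to_url = to_url    # converts a file-system path to a URL

    def append(self, pth):
        super(ClassPath, self).append(pth)
        self.loader.addURL(self.to_url(pth))

# a stand-in for the JVM system class loader
class FakeLoader(object):
    def __init__(self):
        self.urls = []
    def addURL(self, url):
        self.urls.append(url)

loader = FakeLoader()
classpath = ClassPath(loader, lambda p: "file:" + p.replace("\\", "/"))
classpath.append("C:\\junit-4.6.jar")
```

In Jynx the loader would be the real system class loader and `to_url` would use `pathname2url` as shown above.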

Posted in Java, Jynx, Jython | Leave a comment

Four things I’d change in Python – and a little more

1. Import system

Replace the flat module cache by a set of ModuleTree objects rooted in nodes living on the PYTHONPATH. Apply relative path semantics by default and treat absolute paths as special cases. Internal paths, which are used in import statements or for traversing ModuleTree objects, and external ones ( file system, zip files, URLs etc. ) are related through representations of internal paths [1]. Representations shall be user-definable. For ModuleTree objects custom import semantics may be defined. This replaces “import hooks” and provides similar functionality in a much safer and more object-oriented manner. Further effects: no physical module is imported twice for two different import paths; each module can be used as a script no matter how the path is written. No changes to the language syntax.

[1] What I mean here is a representation of a path algebra in systems which can be considered as the “environment” of Python. This sounds more grandiose than it actually is.

2. Decorators everywhere

This basically reflects my interest in improving Jython compliance with Java: lifting Jython classes to Java classes and turning Java classes into Jython class proxies – everything at runtime. This doesn’t work without specifying Java interfaces in Jython. Those consist of two parts: type signatures + annotations. For functions and classes this works in Python without much hassle. With Python 3.0 style function annotations one can even remove the decorator for type signatures. It doesn’t work for members though. In Java you can write

public class CustomerId {
    @Id
    @Column(name = "CustId", nullable = false)
    private Integer cust_id;
}

In Python I want to write similarly

class CustomerId:
    @Id
    @Column(name = "CustId", nullable = False)
    cust_id = jproperty("private int")

which translates into

class CustomerId:
    cust_id = Id(Column(name="CustId", nullable=False)(jproperty("private int")))

This requires that assignment statements ( grammatically expr_stmt’s ) may be decorated, not just functions and classes.
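Until assignments can be decorated, the translated form can at least be emulated today with plain callables. A minimal sketch, with `Id`, `Column` and `jproperty` as hypothetical stand-ins rather than a real Jython API:

```python
# Hypothetical sketch: annotation "decorators" as callables that attach
# metadata to a property descriptor. Not a real Jython/Java API.

class JProperty:
    """Carries a Java type signature plus accumulated annotations."""
    def __init__(self, signature):
        self.signature = signature
        self.annotations = []

def jproperty(signature):
    return JProperty(signature)

def Id(prop):
    # Marker annotation, like Java's @Id: takes the property directly.
    prop.annotations.append(("Id", {}))
    return prop

def Column(**kwargs):
    # Parameterized annotation, like Java's @Column(...): returns the
    # actual decorator, mirroring parameterized decorators on functions.
    def annotate(prop):
        prop.annotations.append(("Column", kwargs))
        return prop
    return annotate

class CustomerId:
    # Manual form of the proposed decorated assignment: innermost
    # (Column) applies first, outermost (Id) last.
    cust_id = Id(Column(name="CustId", nullable=False)(jproperty("private int")))
```

After class creation, `CustomerId.cust_id` carries both the signature string and the annotation list, which is exactly the information a Jython-to-Java lifter would need; the decorated-assignment syntax would only remove the nested-call noise.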

3. A new opcode for code monitoring

I know Ned Batchelder’s coverage tool, and I have written one myself using EasyExtend. EasyExtend is more powerful in that it doesn’t only provide the simplest type of coverage, namely statement coverage. However, it uses source code weaving, which might affect functionality in a highly reflective language. It would be far better to introduce a new opcode which is woven into Python’s bytecode and acts as a sensor. The weaving could be activated using a command line option. The overall achievement is improved code monitoring. This solution might also be applied to improve debuggers by setting breakpoints within expressions.
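For comparison, plain statement coverage can already be sketched with `sys.settrace`; a sensor opcode woven into the bytecode would deliver the same information without the per-line tracing overhead. All names here are illustrative:

```python
import sys

def trace_lines(covered):
    """Return a trace function recording (filename, lineno) per executed line."""
    def tracer(frame, event, arg):
        if event == "line":
            covered.add((frame.f_code.co_filename, frame.f_lineno))
        # Returning the tracer installs it as the local trace function
        # of each newly called frame, so "line" events keep arriving.
        return tracer
    return tracer

def run_with_coverage(func, *args):
    """Run func under line tracing and return (result, covered lines)."""
    covered = set()
    sys.settrace(trace_lines(covered))
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, covered

def demo(x):
    if x > 0:
        return "positive"
    return "non-positive"

result, covered = run_with_coverage(demo, 1)
# Only the lines on the branch actually taken end up in `covered`.
```

The weakness is visible right here: the tracer fires per line, so every executed statement pays a Python-level callback, which is exactly what a dedicated sensor opcode would avoid.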

4. Function annotation and the nonlocal statement backports

I wish to see function argument annotations and the nonlocal statement in Python 2.x.

Other things

Admittedly I felt a little depressed after the huge disappointment which was Python 3. Instead of a bright future it looked much like a minor legacy transformation which essentially missed the relevant trends in language design, marked by concurrency orientation and the unification of declarative dataflow and OO in frameworks + languages like WPF/Silverlight, Flex and JavaFX. The best thing that can be said about Python 3 is that it didn’t turn into a running gag and actually shipped code.

However, there is lots of positive news and much progress in many other areas. On many fronts Python performance is being targeted: PyPy, Unladen Swallow, Psyco 2, Cython, Shedskin. Package distribution and deployment are being addressed, just like the renovation of the standard library. With PyPy, Unladen Swallow, Jython and IronPython, Python becomes or already is GIL free and fit for multicore. The one improvement I’m personally most pleased about is that of Jython. Aside from my eternal pets ( Trail + EasyExtend ) I enjoy exploring the Javaverse, which is incredibly rich, from the Jython + scripting angle, with some promising first results, new challenges and also some disappointments. I actually expect that the next 600 Python web frameworks of interest will not be written in CPython anymore but in Jython and IronPython using Java/.NET components. When will we see a Jython enterprise framework on the JVM which is as powerful as Spring but as lightweight as Pylons?

Posted in Python | 4 Comments

Redesign of the code.py and codeop.py modules

Brett Cannon asks for modules of the stdlib to be redesigned. I find it rather bizarre to initiate a poll for this, but maybe that’s just the future of programming, where the quality of an implementation is judged by democratic voting. So I immersed myself into the hive mind and voted for distutils. Seems like Tarek Ziade is addressing this already, but I’m not entirely sure he goes far enough. Last time I looked at the source code there were still all kinds of compiler modules in the lib which contain config information closely coupled with application code. That’s not so nice, and mostly a refactoring bit.

Some other stdlib modules I’d rewrite are not mentioned in the voting list. Maybe they are not sexy enough for the majority of web programmers that dominate all the discussions about Python? Among my favorites are `code.py` and `codeop.py`. Here is a brief but incomplete list of requirements and refactorings.

  • The heuristics used in `_maybe_compile` to determine incomplete Python commands are pretty weak.
  • Can you tell the difference between `Compile`, `CommandCompiler` and `compile_command` in `codeop.py`?
  • Encapsulate the `raw_input` function in `interact` within a method that can be overridden.
  • Provide two methods `at_start` and `at_exit` in `InteractiveConsole` to make startup and shutdown customizable.
  • Separate the interactive loop from line processing and implement the line processor as a generator. This makes it easier to write custom interactive loops for systems that interface with Python. The default `interact` method becomes
    def interact(self):
        self.at_start()
        try:
            gen_process = self.process_line()
            line = None      # first send(None) primes the generator
            while True:
                try:
                    prompt = gen_process.send(line)
                    line   = self.user.get_input(prompt)
                except StopIteration:
                    break
        finally:
            self.at_exit()
  • Move the line-terminating heuristics from `_maybe_compile` into `process_line`, and define a `try_parse` function together with a `try_compile` function. I’d even go a little further and define a `try_tokenize` function, which isn’t essential though.
  • Provide a subclass for interactive sessions which can be recorded and replayed, with corresponding command line options. This is optional though and not part of a redesign strategy.
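One possible shape of such a `process_line` generator, sketched with the existing `codeop.CommandCompiler` standing in for the proposed `try_compile`; the prompts and the bare `exec` are illustrative assumptions:

```python
import codeop

class LineProcessor:
    """Sketch of generator-based line processing: the generator receives
    input lines via send(), yields the next prompt, and compiles buffered
    input once it forms a complete command."""

    def __init__(self):
        self.compiler = codeop.CommandCompiler()

    def process_line(self):
        buffer = []
        prompt = ">>> "
        while True:
            line = yield prompt          # hand the prompt out, wait for input
            if line is None:             # priming send(None); nothing buffered yet
                continue
            buffer.append(line)
            source = "\n".join(buffer)
            code = self.compiler(source) # None means "command still incomplete"
            if code is None:
                prompt = "... "
            else:
                exec(code, {})           # run the completed command
                buffer = []
                prompt = ">>> "
```

An interactive loop then only has to prime the generator with `send(None)` and feed it one line per iteration, which is exactly the division of labor the list above asks for: the loop owns the I/O, the generator owns the parsing state.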

There are other modules I’d like to rewrite, such as `tokenize.py`. Having a lexer in the stdlib which handles Python as a special case would be quite a big deal IMO. But it’s delicate, and I struggle with writing lexers which can both be extended in a simple way ( without running into the ordered choice problems of the current regular expression engine ) and have high performance. So far I have only accomplished the first of the goals, at least partially, but not the second one.

Posted in Python | 2 Comments

A simple Spring challenge

I got some comments on my Biggus Dickus article, in its own comments section as well as on programming.reddit. Many people defended Spring on grounds of its usability, whereas others identified the author of these lines as completely clueless. I don’t want to argue against the latter, and they are certainly right that Spring is the way enterprise software shall be written to eliminate Java complexity.

SpringSource just made the mistake of supporting dynamic languages while omitting Jython, which wasn’t hip for a while, and now countless amateurs, Java mavericks and inevitable crackpots feel attracted by Spring and seek their luck. Once you open the door to these folks they want to feel comfortable in their own way, which means they want to get rid of XML configuration files and enable self-management for dynamic languages.

The problem description

It is not much effort to use Spring’s dependency injection (DI) machinery. The Spring user can follow a bean creation protocol which looks cumbersome at first sight, but one gets used to it very soon.

Spring defines a `BeanDefinitionReader` interface implemented by the `PropertiesBeanDefinitionReader` and `XmlBeanDefinitionReader` classes. So what about adding a `JythonBeanDefinitionReader` along with a `JythonBeanFactory`, replacing the `DefaultListableBeanFactory` or another factory of this kind which is typically used? The following protocol shows how to couple both types and how to create new bean instances without letting the application know anything about configuration logic:

JythonBeanFactory factory = new JythonBeanFactory();
JythonBeanDefinitionReader reader = new JythonBeanDefinitionReader(factory);
reader.loadBeanDefinitions(path);
SomeBean obj = (SomeBean)factory.getBean(name);

How can `JythonBeanFactory` and `JythonBeanDefinitionReader` possibly work? In a simple case the reader uses the Jython API and imports a Python module which defines parameter-less functions like the following:

def source():
    # creates a SomeBean object and returns it
    return SomeBean()

Calling `factory.getBean("source")` will invoke the `source()` function, which returns a `SomeBean` object. The object might possibly be cached, but at this stage I do not want to complicate the design if it can be avoided.

Both classes can be implemented on an elementary level as a simple exercise of embedding Jython in Java and using the Jython API.
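To illustrate the protocol on the Python side alone, here is a toy sketch in which a plain Python factory stands in for the Java `JythonBeanFactory`. `SomeBean`, the module layout and the factory are illustrative assumptions, not Spring or Jython API:

```python
import sys

# beans.py -- hypothetical Jython-side configuration module.
# Each parameter-less function plays the role of a bean definition.

class SomeBean:
    def __init__(self, message):
        self.message = message

def source():
    # creates a SomeBean object and returns it
    return SomeBean("configured in plain Python")

class ToyJythonBeanFactory:
    """Toy stand-in for the Java-side JythonBeanFactory: it looks the
    requested definition function up in the module and calls it."""

    def __init__(self, module):
        self.module = module

    def get_bean(self, name):
        definition = getattr(self.module, name)
        return definition()   # no caching: a fresh bean per call

# The reader's job reduces to handing the imported module to the factory.
factory = ToyJythonBeanFactory(sys.modules[__name__])
obj = factory.get_bean("source")
```

In the real challenge the factory would of course be written in Java against the Jython API, but the lookup-and-call core is no more than this.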

The Challenge

Now try to write both classes such that they fit into the Spring framework. As I said above, it is a basic exercise to write them, but side-stepping the Spring interface hierarchy would just mean creating another DI framework, which is off topic here. They shall implement Spring interfaces and they shall replace existing bean readers and factories. This is not hackish, and Spring itself has foreseen such extensions as use cases of the framework.

Posted in Java | 3 Comments