Sunday, July 13, 2008

Dynamic Languages - The FUD Continues...

Cedric Beust has been spreading his anti-dynamic-language propaganda in a presentation he gave at JAZOON. I really find it strange that someone who is respected in the Java community would spend so much effort trying to discredit alternative languages. In an attempt to set the record straight for the "busy" Java developer I tried to enter the comment below. Unfortunately Cedric's blog wouldn't accept what it felt to be "questionable content", so I have posted my comment here instead:

Hi Cedric,

I didn't mention why I believe there is so much Fear, Uncertainty and Doubt when it comes to dynamic languages. Two reasons: politics and fear.

Dynamic languages have been hugely successful dating back to the 1950's, so whether they are viable or not should be beyond debate.

So why are we debating?

The real issue is the right tool for the right job. The problem is that lots of programmers only know how to use one type of tool so aren't in a position to make an informed choice.

This ignorance breeds fear. The politics comes from the proprietary languages (Java, C#), whose proponents have a vested self-interest in keeping us all fearful of alternatives.

I have been using dynamic languages (Smalltalk) since 1993 and Java since 1996, and I started out programming with Modula-2 and C in 1987. They all have their strengths and weaknesses and none of them are a silver bullet.

The simple truth is that for web applications dynamic approaches are massively more productive. Take a look at Seaside (Smalltalk), Grails (Groovy) or Rails (Ruby) and it's clear that Java has nothing to compare. The DSLs provided by these languages make web development a cinch. Productivity improvements of 2-3 times are not uncommon. This translates to a reduced time to market, and a better response to business needs.

So the real question is: why are these languages excelling in this way? You seem never to address this issue, assuming that the people who choose to use these languages are somehow misguided or confused. Well, they've been misguided since 1958 and the advent of Lisp :) They choose dynamic languages because they value a higher level of expression, allowing them to achieve more with less. This doesn't only apply to the web; it applies to any scenario where a high level, domain specific language is applicable.

You advertise your talk as a guide for the busy Java developer, yet you do very little to educate him or allay his fears.

Let me:

1. Programming is hard and there are no Silver Bullets.

2. The biggest determining factor for success is the skill of the programmers.

3. Dynamic languages are different, requiring different skills and a different programming style.

4. If you take the time to master these skills then you are in a position to choose the right tool for the job: Either static or dynamic, or perhaps both.

5. With the right skills and the right tools you have a built-in competitive advantage. Human-centric computing applications call for higher level languages. Dynamic languages allow for the creation of higher level domain specific languages in a way that static languages don't.

The last point deserves to be backed up. Take a look at the Groovy HTML builder as an example and compare it with Java's JSP. An even better (although more esoteric) example is Seaside in Smalltalk.
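To make the point concrete, here is a minimal sketch of a builder-style HTML DSL in Ruby. The class name and structure are my own illustration (not the actual Groovy builder): each undefined method call becomes a tag, and nested blocks become nested elements.

```ruby
# Minimal builder-style HTML DSL sketch (illustrative, not a real library).
# Each unknown method call becomes a tag; nested blocks become nested tags.
class HtmlBuilder
  def initialize
    @out = String.new
  end

  # Build the document by evaluating the block in the builder's context.
  def build(&block)
    instance_eval(&block)
    @out
  end

  # Any undefined method is treated as a tag name.
  def method_missing(tag, text = nil, &block)
    @out << "<#{tag}>"
    if block
      instance_eval(&block)   # recurse into nested tags
    elsif text
      @out << text.to_s
    end
    @out << "</#{tag}>"
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

page = HtmlBuilder.new.build do
  html do
    body do
      h1 "Hello"
    end
  end
end
# page is "<html><body><h1>Hello</h1></body></html>"
```

Compare that with the equivalent JSP boilerplate and the productivity argument makes itself: the markup structure *is* the program structure.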

The domains where Java makes sense are shrinking. Given the performance of dynamic languages nowadays and their ability to interoperate with lower-level, high-performance system languages like C/C++, I see Java and C# being squeezed.

If you want productivity and a higher level domain specific language then Ruby, Groovy, Python etc. are a natural choice. If you are on the JVM or CLR then you can always fall back to Java or C# when you have to. If you are on a native VM then you can fall back to C/C++.

The right tool for the job may mean splitting the job in two (front-end, back-end) and using different tools for different parts. With messaging systems and SOA, "splitting-the-job" is easy.

Dynamic languages will only get better, incorporating better Foreign Function Interfaces and better tooling support, in the same way Java did back in the late 90's. BTW, adding type annotations is always an option if people think they are really needed, but like I say, a sizeable community has thrived very well without them since the 1950's :)

Cedric, you do yourself no service by dressing up your prejudices as scientific fact. How about a balanced exposé?

Go on surprise me :)

Paul.

Friday, July 11, 2008

The Self Language

In my last post I said that I wasn't too impressed by Self's idea of using prototypes. Well, I've changed my mind. My initial complaint was a lack of structure. When you open the Self BareBones snapshot, all you get is a graphical representation of a shell object and a waste bin object. You don't get to see all the classes in the image like you do with the Smalltalk Class Browser. There is no Class Browser in Self because there aren't any Classes.

This doesn't mean there isn't structure, though. If you look at the lobby object you will notice a slot called globals, one called traits and one called mixins. As I mentioned in my last post, traits are objects that encapsulate shared behaviour (methods). Globals is a parent slot on the lobby; inside globals are all the prototypical objects. Each prototype object has a trait object: the prototype holds state whilst the trait holds behaviour, so between the two you have the same structure as a Class. You create new objects by copying prototypes, which inherit shared behaviour through an associated trait object.

Since the traits slot is not a parent slot of the lobby, you must send the message 'traits' to access trait objects from the lobby. So 'traits list' gets you a reference to the list trait object and 'list' gets you the list prototype. Why is the lobby important? All new objects are created in the context of the lobby, so the lobby object acts like a global namespace.

My explanation makes it sound more awkward than it actually is in practice. The bottom line is that Self has a lot of structure, as much as Smalltalk in fact. The structure is just different and more granular. Working with this structure is actually very pleasant. You still think in terms of classes, but only after thinking about the object first. So with Self you create a prototypical instance of what you want, then you refactor it into common shared parts (traits) and an instance part (prototype).
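As a rough analogy (in Ruby, since I can't embed a live Self image here), the trait/prototype split might be sketched like this. All of the names are my own illustrations, and Ruby's singleton classes only approximate Self's parent slots:

```ruby
# Sketch of Self's trait/prototype split, approximated in Ruby.
# The trait holds shared behaviour only; the prototype holds default state;
# new objects are made by copying the prototype, as in Self.
module TraitsPoint
  def +(other)
    moved = clone          # copying, rather than instantiating a class
    moved.x += other.x
    moved.y += other.y
    moved
  end
end

point = Object.new         # the prototype, analogous to Self's 'point'
class << point
  include TraitsPoint      # analogous to a parent slot pointing at 'traits point'
  attr_accessor :x, :y     # state slots live on the prototype
end
point.x = 0
point.y = 0

p1 = point.clone           # Object#clone copies singleton methods too
p1.x = 3
p1.y = 4
p2 = p1 + p1               # behaviour comes from the shared trait module
# p2.x is 6, p2.y is 8
```

The copy gets the prototype's slots and the trait's behaviour, which is exactly the division of labour between a Self prototype and its parent trait.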

The traits are more or less Classes. Self objects support multiple parent slots, but by convention multiple inheritance is not used. Instead there is usually one parent trait, and additional shared behaviour is achieved by adding mixins to further parent slots.

I am beginning to agree with Dave Ungar that the Self way of thinking about objects is more natural and more simple. What convinced me is the ease with which objects can be created:

(| x <- 100. y <- 200 |)

is an object literal which you can type at the shell, getting an instant graphical representation. The graphical representation of an object in Self is called an Outliner, which is basically an editor that allows you to view, add and modify slots on the associated object. The Outliner also has an evaluator, where you can type in messages and have them sent to the target object.
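For comparison, the closest everyday Ruby analogue I can think of is an OpenStruct, though it lacks Self's uniform slot model (no parent slots, no method slots in the literal):

```ruby
require 'ostruct'

# Rough Ruby analogue of the Self literal (| x <- 100. y <- 200 |).
obj = OpenStruct.new(x: 100, y: 200)
obj.x          # 100
obj.z = 5      # new slots can be added on the fly, loosely like an Outliner
```

The difference is that Self gives you this literal-plus-live-editing experience for *every* object, behaviour included, not just for ad hoc data bags.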

So in Self you create objects by entering literal text, test them out by sending messages, and extend them by adding new slots. This is all achieved with instant feedback. You then factor your objects into traits and prototypes by creating new objects and moving slots through drag-and-drop.

Is all this important? I'm not sure, but I think so. The fact that objects have a literal representation that includes their behaviour is quite interesting, and I like the drag-and-drop refactoring. What I can say is that the Self approach is fun and feels more concrete, as if you are using real building blocks to create your program.

Would a Self approach lead to higher productivity? With a bunch of keyboard accelerators, so that you didn't need to use the mouse much, I think so. To me feedback is king, and Self offers plenty of feedback. I think that the Self approach also leads to a more exploratory programming style, which in my opinion is a good thing. Above all, manipulating objects as if they are 'real' is a lot of fun, which has got to be worth something in itself :)

Friday, July 04, 2008

Objects revisited - Don't Generalise?

I've been playing with Self and it has got me thinking about why prototype-based OO is not more prevalent. Generalising and categorising a bunch of things as all being the "same thing" is something we all do all the time. Yet we know that we shouldn't generalise this way, since each individual "thing" is unique :) I have come across this paper, which takes a philosophical look at the difference between prototypes and classes. It concludes that prototypes are ultimately more expressive, but that generalising into classes is "good enough" most of the time.

Just from a practical viewpoint, I find classes much easier to work with thus far. This could be due to my vast experience with classes versus prototypes. Classes impose structure, which I find aids comprehension. I need to play with Self some more, but at the moment I find myself translating Self's idea of a parent "trait object" into the more familiar concept of a Class. Here is another paper that takes the opposite point of view, arguing that prototypes are more useful on practical grounds.

The motivation for prototypes, as I understand it, was the fragile base class problem. Representational independence and mixins largely solve this problem. Bob Martin takes another slant on the idea of a brittle base class, stating that base classes should be stable by design: the fact that other classes depend heavily upon them should then not cause a problem, because classes that change should not be base classes. So base classes should encapsulate stable policies.

One thing that is clear to me is that classification and classes can be viewed as an additional structure imposed upon an object (prototype) based environment. So prototypes are the more general mechanism. The Self image I have been playing with has an emulated Smalltalk environment built from prototypes. So based on this, prototypes are the more fundamental abstraction. Following this logic, Lisp with its multi-methods and macros (code as data) is also more fundamental, and hence more expressive than a prototype-based language.

So it all boils down to Lisp :) I guess what we have with OO is a language-imposed structure that enforces the encapsulation of state. This structure reduces the gap between the base language (Lisp-like) and the problem domain in many instances. So OO itself can be considered a domain specific language, where the domain is "the physical world". In many scenarios, classifying objects and sharing a common behaviour (class) object across a set of instance objects maps well to how we mostly think about the world, and hence is a "good enough" template with which to model our world in a number of useful ways. But we know from philosophy that classifications aren't concrete and are arbitrary to some degree. If we choose to apply this deeper appreciation of the world around us to our models, then we must do away with some structure. To model a world where objects aren't constrained by classification, we can choose to use prototypical objects, allowing for object-specific behaviour.

So there appears to be a trade-off between structure and expressiveness. It follows that we gain more flexibility and freedom of expression if we fall back to a language with a less rigid structure, like Lisp, where we are free to model the problem in any way we wish. The downside is that we then have to take more responsibility for structuring our program ourselves.

The bottom line is usefulness, I think. For most use cases prototypes do not seem to provide any additional utility over classes. I'm curious to know whether there are cases where prototypes excel. From what I've seen so far, the Self demo could just as easily have been written in Smalltalk.

(An afterthought: Ruby also allows you to add behaviour to specific objects. This is not the same as Self, since a Ruby object must still have a class, and doesn't have parent slots through which it can inherit traits dynamically.)
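To illustrate that afterthought, here is what object-specific behaviour looks like in Ruby via a singleton method (the names here are my own):

```ruby
greeter = Object.new

# Behaviour attached to this one object only, not to its class.
def greeter.hello
  "hello"
end

other = Object.new
greeter.hello               # "hello"
other.respond_to?(:hello)   # false: the class Object was not touched
```

Under the hood Ruby creates a hidden singleton class for greeter, so you are still firmly in class-land; there is no Self-style parent slot you could repoint at a different trait object at runtime.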