I've been playing with Self, and it has got me thinking about why prototype-based OO is not more prevalent. Generalising and categorising a bunch of things as all being the "same thing" is something we all do all the time. Yet we know that we shouldn't generalise this way, since each individual "thing" is unique :) I have come across this paper, which takes a philosophical look at the difference between prototypes and classes. It concludes that prototypes are ultimately more expressive, but that generalising into classes is "good enough" most of the time.
From a purely practical viewpoint I find classes much easier to work with so far, though that could simply be down to having far more experience with classes than with prototypes. Classes impose structure, which I find aids comprehension. I need to play with Self some more, but at the moment I find myself translating Self's idea of a parent "trait object" into the more familiar concept of a class. Here is another paper that takes the opposite point of view, arguing that prototypes are more useful on practical grounds.
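To help that translation along, here is a rough Ruby sketch of the parent-slot idea (my own analogy, not Self syntax and not taken from either paper): shared behaviour sits in a separate traits object, and a child object forwards any message it can't answer from its own slots to its parent. Unlike real Self, the delegation here doesn't preserve the original receiver, but it gives a feel for behaviour being inherited per object rather than per class.

```ruby
class ProtoObject
  def initialize(parent = nil, slots = {})
    @parent = parent   # plays the role of a Self parent slot
    @slots  = slots    # per-object data slots
  end

  def method_missing(name, *args, &block)
    return @slots[name] if @slots.key?(name)                   # answer from own slots first
    return @parent.public_send(name, *args, &block) if @parent # otherwise delegate to the parent
    super
  end

  def respond_to_missing?(name, include_private = false)
    @slots.key?(name) || !!(@parent && @parent.respond_to?(name)) || super
  end
end

# Shared behaviour lives in a "traits" object, roughly like Self's traits point.
point_traits = ProtoObject.new
def point_traits.print_string
  "a point"
end

a_point = ProtoObject.new(point_traits, x: 1, y: 2)
a_point.x            #=> 1          (found in the object's own slots)
a_point.print_string #=> "a point"  (delegated to the traits parent)
```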
The motivation for prototypes, as I understand it, was the fragile base class problem. Representational independence and mixins largely solve this problem. Bob Martin takes another slant on the idea of a brittle base class, arguing that base classes should be stable by design: the fact that other classes depend heavily upon them should then not cause a problem. Classes that change should not be base classes; base classes should encapsulate stable policies.
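As a quick reminder to myself of what the mixin alternative looks like, a small Ruby sketch (the account example is just made up for illustration): the shared behaviour lives in a module that talks only to a narrow accessor interface, rather than in a concrete base class whose internal representation subclasses end up depending on.

```ruby
# Shared behaviour as a mixin: it relies only on #balance / #balance=,
# not on how the including class stores its state.
module Depositable
  def deposit(amount)
    raise ArgumentError, "amount must be positive" unless amount > 0
    self.balance += amount
  end
end

class SavingsAccount
  include Depositable
  attr_accessor :balance

  def initialize
    @balance = 0
  end
end

account = SavingsAccount.new
account.deposit(50)
account.balance #=> 50
```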
One thing that is clear to me is that classification and classes can be viewed as additional structure imposed upon an object (prototype) based environment, which makes prototypes the more general mechanism. The Self image I have been playing with even has an emulated Smalltalk environment built from prototypes, so on this evidence prototypes are the more fundamental abstraction. Following this logic, Lisp, with its multi-methods and macros (code as data), is more fundamental still and hence more expressive than a prototype-based language.
So it all boils down to Lisp :) I guess what we have with OO is a language-imposed structure that enforces the encapsulation of state. In many instances this structure reduces the gap between the base language (something Lisp-like) and the problem domain, so OO itself can be considered a domain-specific language where the domain is "the physical world". In many scenarios, classifying objects and sharing a common behaviour (class) object across a set of instance objects maps well to how we mostly think about the world, and hence is a "good enough" template with which to model our world in a number of useful ways. But we know from philosophy that classifications aren't concrete and are arbitrary to some degree. If we choose to apply this deeper appreciation of the world around us to our models, then we must do away with some of that structure. To model a world where objects aren't constrained by classification, we can use prototypical objects, allowing for object-specific behaviour.
So there appears to be a trade-off between structure and expressiveness. It follows that we gain more flexibility and freedom of expression if we fall back to a language with a less rigid structure, like Lisp, where we are free to model the problem in any way we wish. The downside is that we then have to take more responsibility for structuring our programs ourselves.
The bottom line is usefulness, I think. For most use cases prototypes do not seem to provide any additional utility over classes, and I'm curious to know whether there are uses where prototypes excel. From what I've seen so far, the Self demo could just as easily have been written in Smalltalk. (An afterthought: Ruby also allows you to add behaviour to specific objects. This is not the same as Self, since a Ruby object must still have a class and doesn't have parent slots from which it can inherit traits dynamically.)
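To make that Ruby afterthought concrete, a quick toy example of adding behaviour to one specific object via a singleton method:

```ruby
class Dog; end

rex  = Dog.new
fido = Dog.new

# Behaviour added to rex alone, via a singleton method.
def rex.speak
  "Woof!"
end

rex.speak                #=> "Woof!"
rex.class                #=> Dog   (the class is still there)
fido.respond_to?(:speak) #=> false (only rex gained the behaviour)
```

The class is still there behind the scenes (the extra method lives in rex's singleton class), whereas in Self the equivalent would just be another slot, and the parent slot itself could be reassigned at run time.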