Friday, March 30, 2007

Deep into the Blue - Industry Titbits

I have found the responses to my blog thus far a bit intriguing. The general response has been a stiff defence of the status quo. My opinion (and it is just an opinion) is that the status quo isn't really delivering, and we could all be doing a lot better.

A few articles I've come across recently have reinforced this opinion. The first builds on my view that late-bound OO message sends can form the basis for language interoperability. Peter Frisk has recently implemented high-performance 3D web rendering using Smalltalk. The usual response to using Smalltalk for such a CPU-intensive application is that Smalltalk is too slow. So how does Peter do it?

Well, Peter has utilised the layered DSL idea I've discussed before. The primitive 3D graphics rendering is performed in ActionScript, which as I understand it is a static, high-performance, compiled OO language that runs on the Adobe Flash runtime (virtual machine). On top of this he layers a Lisp interpreter, which allows you to call ActionScript primitives from Lisp. On top of Lisp he then implements a DSL that just so happens to be Smalltalk-80. As I understand it the Smalltalk implementation is fully interpreted, but this doesn't matter, because the bulk of the graphics rendering is delegated to ActionScript. BTW, a domain language programmer using Smalltalk doesn't need to understand ActionScript at all. Pretty impressive. Take a look (requires Flash 9).
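To make the layering idea concrete, here is a minimal sketch of the pattern (written in Scala purely for illustration; Peter's actual stack is ActionScript, Lisp and Smalltalk, and all the names below are mine). A slow, late-bound command layer looks operations up by name at runtime, but every heavy operation is delegated straight to a compiled host "primitive":

    object LayeredDsl {
      // Fast "host layer" primitives, standing in for the compiled ActionScript renderer.
      object HostPrimitives {
        def drawTriangle(x: Double, y: Double, z: Double): Unit =
          println(s"host: rendering triangle at ($x, $y, $z)")
        def rotate(angle: Double): Unit =
          println(s"host: rotating scene by $angle degrees")
      }

      // Slow, late-bound "domain layer": commands are resolved by name at runtime,
      // but each one immediately delegates the real work to a host primitive.
      private val commands: Map[String, List[Double] => Unit] = Map(
        "triangle" -> { (args: List[Double]) => HostPrimitives.drawTriangle(args(0), args(1), args(2)) },
        "rotate"   -> { (args: List[Double]) => HostPrimitives.rotate(args(0)) }
      )

      def eval(script: String): Unit =
        script.split("\n").map(_.trim).filter(_.nonEmpty).foreach { line =>
          val tokens  = line.split("\\s+").toList
          val command = commands.getOrElse(tokens.head, sys.error(s"unknown command: ${tokens.head}"))
          command(tokens.tail.map(_.toDouble))
        }

      def main(args: Array[String]): Unit =
        eval("triangle 0 0 1\nrotate 45")
    }

The domain-level programmer only ever sees triangle and rotate; how fast the interpreter itself runs barely matters, because the expensive work all happens one layer down.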

It may look like Peter has gone to all this trouble for nothing. After all, it could all be done in ActionScript, so why Lisp and Smalltalk? The thing is, Peter appreciates the power of late binding. Smalltalk components written in this way can be mashed up together to create new objects, in the same way that people are using HTML and JavaScript to create mashups on the web today.

Another titbit I have come across that was interesting is a post by Gilad Bracha. Gilad is famous for his work on the Java JVM and worked for Sun until very recently. For me Gilad's most impressive work was done before he joined Sun, over 10 years ago, when he did research on Smalltalk, mixins and traits, which eventually led to Strongtalk, the high-performance Smalltalk implementation with optional manifest type annotations and a static type checking system. I've discussed Strongtalk before. Gilad has been talking about Self and the idea of slots. C# has the idea of properties, which is a way of implementing the getters and setters used in Java. What if you just make the variable public? And later you want to change it to a method? In both Java and C# this could mean changing a significant amount of code. With Self this isn't the case (with Smalltalk you can't make an instance variable public anyway, because it breaks encapsulation, so the problem only exists for subclasses). Gilad's blog has some interesting examples of better ways to solve or avoid common programming problems using late-bound languages.
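The slot idea is easiest to see next to Scala's uniform access principle (my sketch, not Gilad's example): a client writes account.balance the same way whether balance is stored or computed, so promoting a stored value to a computed one doesn't ripple through client code the way changing a public field to a getter does in Java or C#.

    // Version 1: balance is a stored value (a "slot" holding data).
    class Account(val balance: BigDecimal)

    // Version 2: balance is now computed on demand from a transaction log.
    // Client code is unchanged: it still just says account.balance.
    class AccountV2(transactions: List[BigDecimal]) {
      def balance: BigDecimal = transactions.sum
    }

    object Client {
      def main(args: Array[String]): Unit = {
        println(new Account(BigDecimal(100)).balance)
        println(new AccountV2(List(BigDecimal(60), BigDecimal(40))).balance)
      }
    }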

Finally, Croquet has announced the release of version 1.0, and is no longer in beta. At the same time the Croquet Consortium was officially launched. The consortium is a body to promote the development and adoption of Croquet. Along with a number of universities, the consortium also includes Hewlett-Packard and a new start-up, Qwaq, a commercial company that will focus solely on collaborative applications built on Croquet.

There seems to be growing momentum in the blue plane. Peter Frisk and Vista Smalltalk are definitely worth watching, along with Croquet. I also see Strongtalk as promising, not so much for its superior performance, but as a bridge into late-bound programming for programmers who are reluctant to relinquish their preference for manifest type annotations and static type checking.

Monday, March 19, 2007

Deep into the Blue with Croquet

Time to look forward. My last couple of blogs on object technology have focused on the perceived benefits of the current crop of incumbent mainstream OO languages. We explored a bit of history and got a bit bogged down, IMO, over the subject of Type Safety and Program Correctness.

If anything I think the discussion demonstrated the point that we still don't know how to write safe programs with any degree of certainty, and that any program is only as good as the programmers who produced it. So for me the term "Type Safety" is a bit of an oxymoron, because being type safe doesn't imply program 'safety' at all!

Accepting that there are no guarantees, perhaps we should let go of the pink past and explore the new blue OO idea a bit further. To do this we need to take a pure OO approach, with scant regard for incumbent technology. Croquet is a project that chooses to look at software engineering afresh from a pure OO perspective. The question posed by Croquet is:

If we were to start again, and build an Operating System with modern assumptions about computer power, what could we do today?

To this question the Croquet team have come up with some answers:
  • A VM that behaves bit-identically on all platforms. They achieve this by writing the Squeak Smalltalk VM in a subset of Smalltalk itself, called Slang.
  • Given bit-identical behaviour, replicate objects across the web, with the guarantee that every replica behaves identically (a minimal sketch of this idea follows the list).
  • Using object replication and synchronised message sends, create a shared virtual time and space across the web; they call this TeaTime.
  • Use peer-to-peer communications to remove the bottleneck of centralised servers.
  • Use late binding to ensure that the system can grow and change organically, and to allow non-Croquet components to be consumed into the Croquet world.
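Here is the promised sketch of the replication idea (mine, and nothing like Croquet's actual TeaTime machinery): if every replica starts from the same state, behaves bit-identically, and applies the same messages in the same order, the replicas stay in sync without ever shipping object state across the network.

    object ReplicationSketch {
      // A deterministic object: the same message sequence always produces the same state.
      final case class Counter(value: Int) {
        def receive(message: String): Counter = message match {
          case "increment" => Counter(value + 1)
          case "double"    => Counter(value * 2)
          case other       => sys.error(s"does not understand: $other")
        }
      }

      def main(args: Array[String]): Unit = {
        // A router sends the same ordered message stream to every replica.
        val messages = List("increment", "increment", "double")
        val replicaA = messages.foldLeft(Counter(0))(_ receive _)
        val replicaB = messages.foldLeft(Counter(0))(_ receive _)
        assert(replicaA == replicaB) // identical behaviour keeps the replicas identical
        println(replicaA)
      }
    }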

I will explore Croquet in detail over the next few blogs. Here is an article which is an excellent primer on Croquet for the uninitiated. It is difficult to describe Croquet, because like the Sony Walkman, Croquet is something new and innovative, and unlike anything we have seen before. The closest description to the vision held out by Croquet is the virtual computer world presented in the movie "The Matrix".

Croquet is The Matrix.

Friday, March 09, 2007

Type safety, An Oxymoron?

I think I've found a concise definition for type safety. I found it on the C2 wiki, which is a great source for programming related info. Anyway here it is:

Type Safe
Any declared variable will always reference an object of either that type or a subtype of that type.

A more general definition is that no operation will be applied to a variable of a wrong type. There are additionally two flavors of type safety: static and dynamic. If you say that a program is type safe, then you are commenting on static type safety. That is, the program will not have type errors when it runs. You can also say that a language or language implementation is type safe, which is a comment on dynamic type safety. Such a language or implementation will halt before attempting any invalid operation.


Taking the first sentence: this rules out any type of conversion, so int->float is type unsafe, and it rules out any kind of dynamic cast too. So that basically rules out C, C++, Java and C# as type safe. Moving on to the main paragraph, we see that as well as static type safety there is also the concept of dynamic type safety. Using this as our benchmark still rules out C and C++, but deems Java and C# to be dynamically type safe (if we choose to ignore the issues surrounding conversions of primitives, of course). This laxer definition of type safety also includes languages like Smalltalk, Python and Ruby. So all modern OO languages are dynamically type safe.
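A small illustration of the static/dynamic distinction (sketched in Scala, which comes up again later; Java behaves the same way): the cast below compiles, so the static checker hasn't guaranteed anything, but the runtime halts with a ClassCastException before any invalid operation is attempted, and that is exactly dynamic type safety.

    object DynamicSafetyDemo {
      def main(args: Array[String]): Unit = {
        val something: Any = "I am really a String"

        // Statically accepted: the compiler trusts the cast.
        // Dynamically safe: the runtime checks it and halts before
        // any Integer operation is applied to a String.
        try {
          val n = something.asInstanceOf[Integer]
          println(n + 1)
        } catch {
          case e: ClassCastException =>
            println(s"halted before an invalid operation: ${e.getMessage}")
        }
      }
    }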

If this is true, what is the dynamic versus static typing debate all about? Is type safety an oxymoron? Reading further on the C2 wiki:

There are various degrees of type safety. This is different from TypeChecking. See also StronglyTypedWithoutLoopholes, which is another term for (at least) dynamic type safety.
CategoryLanguageTyping

So using the "degrees of type safety" argument, Java could be said to be more type safe then say Smalltalk. This kind of makes sense, since even though Java is not fully static type safe, it is partially so. So type safety is relative. So you can rate languages on their degree of type safety. Statically typed languages are more type safe then dynamically typed languages generally. If you click on the link CategoryLanguageTyping you will find out that what we usually refer to as static typing isn't actually called static typing at all, the proper name is Manifest Typing, Static Typing means something else and includes Type Inference. Given the common use of the term static typing, I have chosen up to now not to use the proper term which is in fact Manifest Typng.

So what does all this buy us? At best we are partially type safe if we choose to use a language like Java. Partially? Is that useful? Either I'm safe or I'm not, right? For example, when releasing to production I can't tell the QA manager that I believe my program is partially safe. He wants to know whether my program is safe.

So how do I know that my program is Safe? Well simple, I test it!

I could go into strong versus weak typing and the consequences, but the links are there if you're interested. No program is Type Safe, and to claim so is a bit of an oxymoron. IMO typing is no substitute for well-thought-out tests, but type checks can help to detect and track down bugs (either at compile time or at runtime). Where I believe manifest typing is useful is in improving the readability of code, improving the comprehension of large systems, and improving tooling support for code browsing and editing. Examples of this are the code completion and refactoring features in Eclipse. Smalltalk has these features too, but with manifest type annotations, tools have that much more information to work with.

The downside of manifest typing is that the annotations are 'structural': they are tied to the static structure of your code. Depending on the annotations available, manifest structural types can limit expressiveness. This is why languages like Scala have invented a more expressive set of type annotations, to overcome the constraints imposed by languages like Java. Strongtalk's type annotations are more expressive still. They had to be, because they were applied to the existing Smalltalk 'blue book' library, which was originally written without any manifest type constraints whatsoever. The other downside of manifest types is that your code is more verbose.
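As a rough sketch of the extra expressiveness I mean (my Scala example, not Strongtalk's notation): a structural refinement type accepts any object that understands close(), whatever class it belongs to, which is much closer to the way untyped Smalltalk library code was actually written.

    import scala.language.reflectiveCalls

    object StructuralTypeSketch {
      // Accepts anything that understands close(), regardless of its declared class.
      type Closeable = { def close(): Unit }

      def closing[A](resource: Closeable)(body: => A): A =
        try body finally resource.close()

      class LogFile { def close(): Unit = println("log file closed") }
      class Socket  { def close(): Unit = println("socket closed") }

      def main(args: Array[String]): Unit = {
        closing(new LogFile) { println("writing log") }
        closing(new Socket) { println("sending bytes") }
      }
    }

The Java equivalent would force LogFile and Socket to share a named interface; the structural annotation only cares which messages the object understands.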

So ideally what you want is:

* Manifest type annotations that can express intent and do not constrain you (or no type annotations at all or type inference)
* Strongly typed without loopholes at runtime
* Tests that tell you whether your code is safe.

Type safety doesn't exist, and partial type safety is a poor substitute for the above!

Wednesday, March 07, 2007

"Sorry, you're not my Type!"

Before we explore the future potential with Blue OOP, I thought it only fair to address the perceived advantages of pink OOP first. After all, I have labelled pink OOP as just an extension of the "old thing", but who says that the old thing was all that bad? Was there anything about the "old thing" worth holding onto?

The old thing I am referring to is C. C was one of the first third-generation languages to be used to write an operating system (I think?). That operating system was Unix, which started out on minicomputers and was later ported to microcomputers. Prior to C, most OSes were written in assembly. I mention microcomputers, as this is/was the name for computers built using microprocessors. Prior to the microprocessor, computers were huge boxes of electronics built from discrete components.

Early microelectronics placed considerable constraints on computer software. Many of the computer languages used on big "mainframe" computers just weren't suitable for microcomputers, especially personal computers. Outside research organisations, personal computers had very little processing power and very little memory.

The success of C was largely due to the success of Unix, which was ported to a wide range of computer systems. Also, with C you could get very close to the efficiency of assembly language, and unlike assembly language your code was portable.

This is a longer introduction than I had hoped, but a lot of people have forgotten this history and it is useful to remind ourselves of it. So by the early 80's C was the personal computer language of choice.

Then along came Objects. So the challenge was how to bring OOP to PCs and still retain the efficiency of C. There were two candidate languages, both derivatives of C: C++ and Objective-C. C++ replaced the message passing of Smalltalk with virtual function calls, ensuring that method dispatch would be as efficient as possible. The downside is that C++ is essentially an early-bound language: the set of methods a call site can bind to is fixed at compile time. Objective-C, however, chose to retain message sends, which are resolved at runtime. This makes Objective-C late-bound, but as a consequence its method dispatch is less efficient than C++'s.
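A rough JVM-flavoured sketch of the difference (in Scala, standing in for the C++/Objective-C contrast, with names of my own invention): the first call is bound against a method the compiler can see on the static type, while the second is a "message send" resolved by name at runtime via reflection, so it can only fail at runtime if the receiver doesn't understand it.

    object BindingSketch {
      class Shape { def draw(): Unit = println("drawing a shape") }

      // Late-bound "message send": the selector is just a string,
      // looked up on the receiver when the send happens.
      def send(receiver: AnyRef, selector: String): Unit =
        receiver.getClass.getMethod(selector).invoke(receiver)

      def main(args: Array[String]): Unit = {
        val shape = new Shape

        shape.draw()        // early bound: the compiler verifies draw() exists on Shape
        send(shape, "draw") // late bound: resolved at runtime, works

        try send(shape, "fly") // late bound: fails only at runtime
        catch { case _: NoSuchMethodException => println("Shape does not understand fly") }
      }
    }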

Given the hardware constraints at the time, the majority of the industry went with C++. The only PC company I know of that went with Objective-C was Steve Jobs's NeXT, with their NeXTStep OS.

So the big advantage of pink OOP is efficiency. As time has moved on however, some in the industry have tried to re-write history and claim that the big advantage of pink OOP is type safety. Now I must admit, I do not know exactly what type safety means. There are a few things that I do know however:

* A Class is not a Type
* Late/early binding and Static type checking are orthogonal concerns
* Static typing is usually associated with early binding
* Static typing can be applied to a late-bound dynamic language like Smalltalk.

The first bullet is a conceptual flaw in C++ that Java attempts to solve by introducing Interfaces. The problem with Interfaces, though, is that they are optional: you only get a separate Type where someone has bothered to declare one. So sometimes in Java you bind to a Type and at other times you bind to an Implementation (a Class), an unsatisfactory compromise IMO.

I'm going to get myself up to speed on "type safety". My experience has shown that static typing as used in languages like C++ and Java can greatly reduce the expressiveness of the language. So instead of the compiler being my friend, it ends up being a straitjacket, stopping me doing what I know would work best, if only I were allowed.

This is just opinion of course. I have come across one static type system that I believe will allow me to have full flexibility. This is a Typechecking system for Smalltalk called Strongtalk. Here is a link to a paper on The Strongtalk Type checking System. The current Strongtalk is slightly different to the description in this paper. If you are interested in the difference you will need to look in the Strongtalk documentation in the Strongtalk download bundle. I believe Scala is an attempt to bring more expressiveness to static typing on the JVM so I will be taking a more detailed look at Scala too.

It should make a neat comparison: two static OO type systems, one targeting a late-bound language (Smalltalk), the other targeting an early-bound language (Java). It will be interesting to see how they compare.

BTW, if there is anyone out there who can answer the question "What is type safety?", I would be more than happy to hear from you.

Revised 07/03/2007: Modified to acknowledge the role of Unix in the rise in popularity of C - Thanx Steve.

Sunday, March 04, 2007

What Colour do you like your Objects? Pink or Blue?

It's late and it's a Sunday, but I thought I'd just make a quick post to clarify a few things. What is OOP? Since Alan Kay's team (who coined the term 'Object Orientated') released Smalltalk to the world in the early 80's, OOP has become one of the most exploited marketing terms in programming.

It would be interesting to see when the term was first used. It wouldn't surprise me if the first published use of the term was in the original Byte Magazine article on Smalltalk in August 1981. So OOP was born with Smalltalk. Before Smalltalk Simula extended Algol to allow data structures to contain function pointers, but this was seen as an extension of data abstraction, and the term OOP wasn't used.

In Alan Kay's keynote speech at OOPSLA in 1997 he talks about a blue plane and a pink plane. The pink plane represents ideas which are an incremental improvement of existing ideas. The blue plane which runs orthogonal to the pink represents revolutionary ideas that break the old way of doing things, setting you off in a new direction.

Since the creation of C++, OOP has borne these two identities. Firstly a pink identity, where OOP is seen as an extension of the existing thing; this was the view of Bjarne Stroustrup and what led to C++ and ultimately Java. Secondly there is a blue identity, where OOP is seen as a new thing, which breaks with the old and has new requirements all of its own. This second identity is most closely associated with Smalltalk and Self. It has also influenced other OO languages like CLOS, Ruby and Python.

These two identities happen to deal with types differently, and the difference between the two is often characterised as static versus dynamic, but in truth this dichotomy is a false one. The difference runs much deeper. The real difference between the two stems from their goals and their vision.

The C++ goal was to introduce OOP-like constructs to C in an efficient way. To do this Stroustrup avoided the garbage collection, byte code, VM and late binding of Smalltalk and went back to the much simpler and more efficient model presented by Simula. The strength of this approach is that C++ is very efficient; the downside is that C++ is decidedly pink.

Self built on the platform of Smalltalk in an attempt to push further into the blue plane. The goals of Self were:

* Objects that are tangible just like physical objects in the real world
* Objects that share uniformity, just like physical objects do (everything is an object)
* Objects that exhibit liveliness, removing the modal nature of programming (no edit/build/run cycle)

All these goals are characteristics of Smalltalk, but Self wanted to take these characteristics much further, creating a fully graphical programming experience, where objects could be handled and manipulated from a visual palette, just like physical objects in the real world.

You can see that this 'blue' vision is very different from the pink one. One of the most obvious consequences is that with Smalltalk and Self there is no difference between graphical objects on the user's desktop and 'programmable objects' in the programmer's IDE. In a sense the desktop becomes the IDE and the IDE becomes the desktop. Following from this, the distinction between programmer and end user starts to blur. Also the distinction between object and application disappears altogether. Each object is an application in its own right; even the humble Number object '1' or '3' is an entity that can be manipulated at runtime through its own GUI. The VM contains a large collection of such objects and becomes more than just a runtime: it becomes a graphical operating system.

In fact the object instance '1' is more than just an application. It also encapsulates its own server, with its own synchronous message queue and its own virtual processor. Adopting this semantic view of OOP means the runtime is now analogous to a NOS (Networked Operating System) spanning several virtual processing nodes. This is the semantic goal of blue OOP, and why Alan Kay used the analogy of the encapsulated biological 'cell' in his keynote speech. I will expand on this blue OOP vision in a future post. But as you can see, pink OOP is very different from blue OOP, and the difference has very little to do with types.
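A minimal sketch of that "every object is its own little computer" idea (my own toy, not Smalltalk's or Croquet's machinery): an object that owns its own mailbox and its own processor, and does everything by taking one message at a time off the queue.

    import java.util.concurrent.LinkedBlockingQueue

    // Each instance owns its own mailbox and its own (virtual) processor;
    // all interaction happens by putting messages on the queue.
    class CellObject(name: String) {
      private val mailbox = new LinkedBlockingQueue[String]()

      private val processor = new Thread(() => {
        var running = true
        while (running) {
          mailbox.take() match {
            case "stop"  => running = false
            case message => println(s"$name processing: $message")
          }
        }
      })
      processor.setDaemon(true)
      processor.start()

      def send(message: String): Unit = mailbox.put(message)
    }

    object CellDemo {
      def main(args: Array[String]): Unit = {
        val one = new CellObject("object '1'")
        one.send("increment")
        one.send("printYourself")
        one.send("stop")
        Thread.sleep(200) // give the cell time to drain its mailbox before the demo exits
      }
    }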

Revised 06/03/07: Replaced 'Real Objects' with 'Physical Objects' in line with the terminology used by the Self team - Thanx Isaac

Programming Languages - Follow the leader

A short interlude from my series of posts on Objects. Steve's last comment got me thinking. Why are some programming languages more popular than others? It would be easy to put it all down to cynical marketing by vendors, but that can’t be the whole story. It was this sentence in particular that got me thinking:
It may sound neat to allow developers to modify the language, but having used Smalltalk for more than 20 years, I have had to deal with the chaos that can result when different developers modifications conflict. I would rather have a controlled and organised process.
So an ordered and controlled process is seen as desirable. OK, but controlled by whom exactly? The truth is that most people feel more comfortable being led. I can wax lyrical about the technical superiority of languages like Self, Smalltalk and Lisp as compared to lesser languages like Java and C# (and even Ruby and Python), but this doesn't matter a jot if people just aren't 'comfortable' with these supposedly 'superior' languages.

With Java there are minimal degrees of freedom. If you want to iterate, there is one (non-deprecated) way. Want a callback? There is one way. You do things the 'Gosling way'. It is all pre-packaged and rather reassuring. I must admit that when I first used Java I found its simplicity reassuring too. It was definitely welcome after the explosion of constructs that accompanied the transition from C to C++. C# has used the same formula; after all, it worked for Java. Java has taken the shrink-wrapped approach further, beyond the base language: the whole J2EE application stack was supposed to result in "one way" to build enterprise applications, reducing software development to painting by numbers.

This all works up until the point where the 'one way' just isn't the best way for you. What do you do then? Well, you live with it, like the EJB community did for years, or you jump to something better suited, like PicoContainer or Spring.

Many Java developers are now jumping to Ruby and Rails for precisely the same reason. For many web apps, the full J2EE stack even with Spring and Hibernate, is just seen as overkill. Interestingly though, very few have moved to Squeak and Seaside, and even fewer to Lisp. Why?

Well, in Matz and David Heinemeier Hansson, Ruby and Rails respectively have strong leaders: benign dictators who prescribe "how things should be done". Ruby developers can model themselves on the approaches recommended by these leaders. Better still, these leaders are developers themselves, so there is an instant bond of trust. The Python community has demonstrated this phenomenon even more so, with a single all-knowing leader, Guido van Rossum. Van Rossum even dictates how code should be laid out and how tab spaces should be used!

So in contrast, how do languages like Lisp and Smalltalk compare? Well, let's start with Lisp. I like to think of Lisp as a meta-language: a programming language for writing other programming languages. A good example of this can be seen at the Vista Smalltalk blog, where Peter Frisk is using Lisp to build a Smalltalk interpreter on top of Flex. So as far as Lisp is concerned, Smalltalk is just a DSL, created using Lisp macros.

With Lisp you deal with fundamentals. The smallest constructs in Lisp are called atoms (things like symbols and numbers), and you can combine atoms into s-expressions (lists), and s-expressions into larger s-expressions, all the way up to a full class hierarchy of objects and associated functions. You can even determine how s-expressions are evaluated with Lisp macros, so basically you can do what you like!
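A back-of-the-envelope model of that build-up (sketched in Scala rather than Lisp, to keep this blog's examples in one language): atoms and lists as the only data, plus a tiny evaluator, are already enough to start layering a language on top.

    object SexprSketch {
      // Atoms (numbers, symbols) and lists are the whole data model.
      sealed trait Sexpr
      final case class Num(value: Double)        extends Sexpr
      final case class Sym(name: String)         extends Sexpr
      final case class SList(items: List[Sexpr]) extends Sexpr

      // A tiny evaluator that understands (+ ...) and (* ...); everything else
      // is where macros, objects and whole languages would get layered on.
      def eval(expr: Sexpr): Double = expr match {
        case Num(n)                  => n
        case SList(Sym("+") :: args) => args.map(eval).sum
        case SList(Sym("*") :: args) => args.map(eval).product
        case other                   => sys.error(s"cannot evaluate: $other")
      }

      def main(args: Array[String]): Unit = {
        // (* 2 (+ 1 2 3))
        val program = SList(List(Sym("*"), Num(2), SList(List(Sym("+"), Num(1), Num(2), Num(3)))))
        println(eval(program)) // 12.0
      }
    }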

This power puts a great deal of control and responsibility in the hands of the programmer. Of course there are established patterns to help guide you, but there is no benign dictator making a bunch of design choices upfront. You have to make your design decisions yourself. You are on your own!

Some people will revel in this power and flexibility. Others, though, are likely to find it daunting! Smalltalk follows Lisp's lead, but provides a lot more pre-defined structure. It has a small syntax, just like Lisp, and like Lisp it has a meta-model built to support meta-classes, classes and object instances. Unlike Lisp though, all objects interact through message passing and are fully encapsulated. Many objects in Smalltalk are part of the language, such as the Context object used as a stack frame, the block closure object used as a lambda expression, and the compiler objects used to turn strings into byte code. So Smalltalk gives you a lot of structure.

Smalltalk wears its heart on its sleeve. With Smalltalk all this structure is written in Smalltalk, so as a programmer you can change any part of it as you see fit. This is fantastic if you want to create your own specific Smalltalk dialect. But if you do, Dan Ingalls or Adele Goldberg won't be there to help you out, and you won't be able to turn to the Smalltalk-80 "Blue Book" either. You will be in the same camp as the Lispers: on your own!

When I first came across Smalltalk I saw all the dialects as a concern. All these semi-compatible versions surely can't be a good idea? As I have become more experienced as a programmer, though, I have come to see diversity as a good thing. Two analogies come to mind. The first is biological. In nature, species maintain sufficient diversity in the gene pool. Each individual is not a clone of all the others, so if a sudden virus attacks, some individuals will be wiped out, but hopefully others will have immunity, so the species as a whole survives. I think Smalltalk has this strength. Depending on what is important, there is a variant of Smalltalk to fit the bill, and if there isn't, a dialect can be readily mutated to meet the need (in most cases). Languages that can't adapt in this way face the risk of dying out through natural selection (something I believe Java is in danger of).

The other analogy is spoken language. Spoken language is a living and changing thing. We do not speak the same way today as we spoke 300 years ago. We also have regional dialects: a Scouser, for example, sounds very different to a Cockney, yet they both claim to speak English (the Queen's English, not US English :^)).

In their own domains Scousers and Cockneys get on fine speaking their own dialect. But in situations where they have to communicate with each other, as with written English, they both fall back on "Standard English". For Smalltalk, "Smalltalk-80" is the equivalent of Standard English.

So that's the language landscape as I see it from a cultural perspective. Where I think I agree with Steve is that change is slow in software for a number of reasons, many of which are cultural. Where I believe things are inevitably heading, though, is into a pluralistic world containing many languages and dialects, but also sharing a common base, a lingua franca. I see the lingua franca as being based on late binding and message passing, but I'll save a detailed discussion of this for a later blog. In this new world I see many domains, with leadership dispersed across them, and with different individuals taking a leadership role at different times and in different circumstances.

For this to occur, developers will need to be more comfortable taking the lead themselves and getting rid of the "training wheels". Technically, there are tools on the horizon that could help here, protecting the less self-assured. Language workbenches, as described by Martin Fowler, could perhaps help: a language workbench could provide a wall between the meta-language and the domain-specific language, offering reassurance and safety for domain language programmers.

Supporting tools aside, with the rise of open source and open source languages, I believe there is strong evidence of this cultural change happening already! I see this change as inevitable as the industry grows up and matures.

Saturday, March 03, 2007

Objects - I know that already!

I recently received an email from an old adversary from TSS (The Server Side). Steve and I are kind of friends now - which is nice considering that we have never met, and only know each other through posts on TSS, e-mail and through our blogs.

Anyone who follows the news threads on TSS knows that I can be pretty vociferous with my opinions about Objects and the shortcomings of Java. Well, I've infuriated Steve on many occasions, leading to long exchanges... One of Steve's pet peeves is me continually quoting Alan Kay. So you can imagine my surprise when Steve sent me this link to a keynote speech given by Alan Kay at OOPSLA in 1997.

BTW, for anyone interested in Object technology, there is a whole set of videos available on the web showing the history of Objects and the primary players involved, going back to the 1950s.

Steve is an ardent Java supporter, and I had posted a link to this same video and several others many months ago in an attempt to cure him of this unfortunate affliction :^). Well, many months later he stumbled across the same video himself, and he wanted to discuss it with me. Steve has over 20 years of software experience (a fact that he is fond of sharing :^)), and has used several OO languages over the years, including Smalltalk. So what was there to discuss, prompted by a 10-year-old video of Alan Kay?

Well, you can all judge for yourselves. I would urge any programmer to watch this video. It deals with fundamental programming concepts, which most of us dismissed from our consciousness long ago. Why? Because we know it already! We all know what an operating system looks like. We know what professional, industrial-strength programs look like too. And we all know an "enterprise strength" programming tool (language + IDE) when we see one! We've all used or seen Eclipse, IntelliJ and Visual Studio. All of these tools are marketed as 'Object Orientated', and all of them are supposedly state of the art!

If you look a little closer, though, and peel off the shiny veneer from these tools, underneath they look remarkably like 'C', 'vi', 'make' and 'cc'. Not much has changed since C/Unix in the 1970s. We still use the same old while loops and if statements, still the same edit/build/run cycle. If a C/Unix programmer had been put in a time capsule in 1977 and re-awakened today, he would find tools like Java and Eclipse pretty familiar and would be up and running with them in days.

So why has so little changed in 30 years? Here is an explanation I've lifted from an Article by Dafydd Rees on Croquet and Squeak:

"Kay blames this lack of innovation on the fact that most adults employ instrumental reasoning to evaluate and apply new ideas. This means that adults have difficulty evaluating new ideas because they're carrying too many existing goals, and too much context to be able to see the full potential of new ideas."

One of the beauties of children is that they are untainted by our pre-conceptions. Each new generation looks at the world afresh, with new eyes, and kids perennially ask the question why?

My plan is that this post will be the first in a series where I will be questioning strongly held assumptions about object technology. Hopefully Steve will comment too (apparently his epiphany was only short-lived!). Free from marketing and spin, the idea is to have a useful exchange on where we've been with objects, where we could or should have been, and where we should go next.

Like Alan Kay says: "The Computer Revolution hasn't happened yet".

If you are genuinely interested in object technology, in a language-neutral sense, then bookmark this blog. It should be interesting, and your input is welcomed.