Saturday, November 03, 2007

Monopolies, Cartels and Developer Choice

This recent article on TheServerSide has got me thinking (yet again) about the state of the software industry as a whole. The article is about a blog entry that asks: why are Sun and Microsoft now clambering onto the Ruby bandwagon? And can Ruby be the next Visual Basic?

These questions expose some interesting assumptions: A) that there is a gap (or perhaps a gulf) in the software development market left by VB6 that still needs filling; B) that this gap can only be filled by a language backed by a major vendor like Sun or Microsoft. The second of these assumptions appears to be true when you look at the demise of Dolphin Smalltalk. Dolphin Smalltalk was an ideal OO replacement for VB6. Yet it perished. Why?

Thinking about these questions raised an interesting thought. I have long concluded that Managers/Strategists and Architects make technology purchase decisions, not programmers. Managers are of course risk averse and like to stay near to the technology herd. This makes sense given that they often know little about technology themselves. As the saying goes, no one ever got fired for buying IBM. So vendors have learnt over the years how best to sell to Managers and decision makers. As for programmers, well, they are an easier proposition. If the Managers have bought into a given technology and the herd is clearly moving in a given direction, then the programmers will clamber to get the "new" technology onto their resumes.

These cultural patterns have traditionally led to monopolies, such as with IBM and then Microsoft. With the rise of Java and J2EE we have witnessed yet another phenomenon: the cartel. In response to Microsoft's near monopoly, a group of companies decided to gang together to form a cartel. On the face of it J2EE is standards based, but until very recently it has been the cartel (Sun, IBM, Oracle, BEA etc) that has dictated the overall direction. In so doing, mainstream developers are left choosing between either Java/J2EE or C#/.Net. This boils down to practically no choice at all.

So how will things be in the future? Well, the cartel have split up the J2EE cake amongst themselves along database lines. But it is not just the J2EE market that is split this way; the whole server-side cake seems to be split along database lines. The thought that came to me is that Oracle and DB2 shops tend to use Java/J2EE, and Microsoft SQL Server shops tend to use .Net.

So the key to sales seems to be the database. This makes sense, because from a management perspective, "the database" is the company's most important technology asset. Well, actually the important asset is the corporate data, not the database server, but people seldom seem to make this distinction :^). So after spending almost eight years contracting and 17 years in the industry, what have I witnessed? Well, the database salesman comes in and sells the managers a database. On top of that they then convince the managers of the need for middleware. If the database is DB2 then the natural choice is WebSphere; if Oracle, then OAS; if SQL Server, then .Net; and so on. They then convince them that it is a good thing to mandate a single technology stack for everything, because having a "corporate strategy" is important. So all teams end up using DB2 and WebSphere whether the application requirements and timescales merit it or not.

Into this mix has landed open source. Some parts of the open source community are aping this traditional vendor behaviour. So JBoss and Red Hat see themselves as traditional server-side technology companies. Others are just programmers interested in programming, and in sharing tools and languages that help them become more effective at delivering. But wait a minute: how can programmer-centric technology succeed if it is the managers who get to make all the decisions?

Well, if the managers are themselves programmers then this approach works very nicely. Paul Graham has been blogging on this very topic for quite some time, and the Agile community worked this one out some time ago too. Everything we do needs to be programmer-centric, because at the end of the day it is the quality of the code and the productivity of programmers that counts the most. Organisations that take this approach are massively productive when compared to traditional software development organisations.

So it seems to me that I need to find work with people who realise the importance of programming and understand that they don't need Oracle (or IBM, or Microsoft) to look after their precious data. Freed from management ignorance, I'll get to choose the best tools for the job, and who knows, programming could even become fun again :^). It sounds like I need to be looking for work with a startup!

Sunday, October 14, 2007

Ruby, Smalltalk and Lisp Revisited

In my last post I suggested that Ruby may have some advantages over Smalltalk, especially when it comes to borrowing some of the more powerful features of Lisp. The first thing to say is that titling the post "Ruby versus Smalltalk" wasn't the cleverest thing to do. The responses tended to contain more heat than light. I hold up my hands and take full responsibility for this. Given the amount of FUD that Smalltalk has suffered over the years, it is hardly surprising that the Smalltalk community are protective.

I've recently been looking closely at Rubinius, a Ruby implementation that uses a Smalltalk/Squeak-like VM. Rubinius borrows many of the architectural ideas of Smalltalk/Squeak and is a Ruby implemented mostly in Ruby. Eventually even the VM will be implemented in a C-like variant of Ruby called Cuby, much like how the Squeak VM is implemented in Slang.

I'm pretty taken by Rubinius. The approach taken is to implement Ruby right, and in so doing they've gone back to Smalltalk. The other thing that is very impressive is that all the Ruby libraries in Rubinius are implemented using BDD. So there are RSpec specifications for all library methods. The full Ruby library is not complete yet, but using RSpec is an excellent way to finally specify the Ruby language, and in so doing allow lots of people (like myself :^)) to get involved in the Rubinius project.

So back to Lisp. The Rubinius compiler generates a simple Lisp (s-expressions) as an intermediate form on the way to byte code. There are lots of good reasons to do this, many of which are discussed here:

http://on-ruby.blogspot.com/2007/01/will-rubinius-be-acceptable-lisp.html

Another reason not mentioned is that Lisp can become a common intermediate form allowing several source languages to interoperate. So Ruby or Smalltalk could be compiled to Lisp, then Lisp to byte code. Peter Frisk has already implemented a Smalltalk to Lisp parser in Vista Smalltalk, so this is possible. It also fits in nicely with the language-oriented programming ideas that Martin Fowler has been touting for quite some time now.
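To make the idea concrete, here is a toy sketch in Ruby (my own illustration, not Rubinius's actual intermediate representation): s-expressions represented as nested arrays, which any front end could emit and a single back end could consume.

```ruby
# A toy s-expression evaluator. Any front end (a Ruby parser, a
# Smalltalk parser) that can emit these nested arrays gets evaluated
# by the same back end.
def eval_sexp(exp)
  case exp.first
  when :lit                 # [:lit, 42] -> a literal value
    exp[1]
  when :send                # [:send, receiver, :message, arg...]
    _, recv, msg, *args = exp
    eval_sexp(recv).public_send(msg, *args.map { |a| eval_sexp(a) })
  else
    raise "unknown form: #{exp.first}"
  end
end

# Ruby's "3 + 4" and Smalltalk's "3 + 4" could both compile to:
ir = [:send, [:lit, 3], :+, [:lit, 4]]
puts eval_sexp(ir)   # 7
```

Because the intermediate form is just data, a real implementation can transform it (optimise, inline) before lowering it to byte code.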

Rubinius is gaining a lot of momentum in the Ruby community. So why is Lispyness important? I think the answer is flexibility and allowing change. Something that Alan Kay said seems relevant here. The first Smalltalk implementations were built with Lisp, I believe, and whilst Smalltalk was still within the confines of Xerox PARC the language was being experimented with and changed all the time. Alan Kay complains that once Smalltalk became public in 1983, the language became fixed. He had hoped that Smalltalk-80 would be redundant by now, and that it would have been replaced by something better.

The Ruby community seem up for evolution, whilst also being a very practical bunch. The Rubinius project is cooperating with the JRuby project to specify a common Ruby across all implementations, something that I believe was a problem within the Smalltalk community. By considering how best to bring Lisp to Ruby they are also showing an openness to change. Matz seems open to change too: Ruby 2.0, although not revolutionary, does show a willingness to deprecate old features and break backwards compatibility, in the same spirit as pre-Smalltalk-80 Smalltalks.

I still think that Ruby is closer to Lisp than Smalltalk is, but that doesn't stop Smalltalk inlining Lisp or adopting more Lisp-like features if it wants to. In a sense the two languages share a common root. The difference seems to be history and the two communities. The Ruby community seem to be more willing to experiment and evolve, whilst the Smalltalk community is still holding onto Smalltalk-80 as the common Smalltalk dialect. I guess the question is whether the Ruby community can evolve together and stay as one, avoiding the fragmentation that occurred with Smalltalk.

From what I've seen on the web, there is plenty of reason for optimism. If the Rubinius/JRuby people pull it off we may end up in a situation where we don't need to choose. We can mix Ruby, Smalltalk and Lisp as and when appropriate.

Friday, August 03, 2007

Ruby versus Smalltalk

Is Ruby a lesser Smalltalk? Well, I used to think so, but now, after using Ruby for a while, I'm not so sure. Smalltalk definitely excels when it comes to tools, but as with Java, Ruby's increasing success will mean that better Ruby tools are sure to appear soon.

The reason why I'm not so sure is the ability to create macros in Ruby, in much the same way you would with Lisp. For those that don't know, a macro function is a function that writes functions: code that writes code. Rails uses this technique all over the place. For example, the famous scaffold method is a macro. Your Controller class writes itself when it gets defined at runtime. AFAIK this just isn't possible in Smalltalk, or is it? Why is this important? Because you can add new control structures to your language, extending the language and creating DSLs in much the same way you can with Lisp macros.
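To illustrate (a made-up example of my own, not Rails code): in Ruby a "macro" is typically just a class-level method that calls define_method while the class body is being evaluated. The hypothetical has_flags macro below writes a query method and a setter for each flag name it is given.

```ruby
module FlagMacro
  # A "macro": a method that writes methods. Calling it in a class
  # body defines new instance methods on that class at load time.
  def has_flags(*names)
    names.each do |name|
      define_method("#{name}?") { !!instance_variable_get("@#{name}") }
      define_method("#{name}=") { |value| instance_variable_set("@#{name}", value) }
    end
  end
end

class User
  extend FlagMacro
  has_flags :admin, :active   # runs while the class body is evaluated
end

u = User.new
u.admin = true
puts u.admin?    # true
puts u.active?   # false
```

The class body is ordinary executable code, which is what makes this style of code-writing-code feel so natural in Ruby.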

It looks as though Ruby is a true successor to Lisp in a way that Smalltalk never has been. Is this true or am I missing something?

Wednesday, July 04, 2007

Open Source Software Businesses

Following on from my last post, which was motivated by this discussion on TSS. In answer to the question, what motivates Gavin King when he is selling Seam? Well, after some mis-communication, Bill Burke of JBoss was very honest about this. JBoss is a software business; they make their money mostly through support subscriptions. So JBoss sells support licenses for a software product, the JBoss Application Server. This is the business that Red Hat bought.

So is there anything wrong with this? Well, no, if you're open and transparent about it. I definitely do not work for free, and as Bill rightly points out, much of what JBoss has achieved in the open source arena would not have been possible if they hadn't grown the business. Also, one size doesn't fit all, so although J2EE5 isn't the simplest way to build a web application, as some of the comments to my last post rightly point out, in some cases J2EE5 is a good option.

So where does that leave us? Well, I still feel that describing JBoss as "community driven" like they do on their website isn't completely "accurate" :^). JBoss, like any other business, will do what it sees to be in its own interest. So, putting my cynical hat back on: Gavin King's motivation when selling Seam? To create an opportunity to sell more JBoss subscriptions, perhaps?

Again, fine - but does this qualify him as a "community leader"? In my opinion, no.

Sunday, July 01, 2007

The J2EE King is Dead - Long live the J2EE King!

I've quit posting on Objects. Gilad Bracha is much better qualified than I am, and besides, he has a cool blog. Gilad reckons that Java is not OO. Well, that's something coming from the guy who led development on the Java VM for a number of years.

As for me, I've decided to take Gilad's advice and ensure that my next project is not a Java one. I'm seriously getting into Ruby and Rails, and I'm also dabbling with Flex. I'm finding the Ruby forums kind of boring though: basically filled with helpful, knowledgeable people supplying useful technical titbits, without an axe to grind in sight.

In contrast, the Java forums like TSS are a lot more "fun". I've noticed a trend lately: the OSS projects that exposed the swindle known as the J2EE Application Server are now seizing the J2EE crown for themselves. Spring, which is a neat framework that basically helps to overcome a Java language smell (Gilad discusses this too), now sees itself as the new J2EE King. I have read several articles from Rod Johnson, and whilst he is very eloquent in describing the pitfalls in J2EE and in particular EJB, at no point does he mention that IoC is merely a pattern to deal with the fact that object construction cannot be done in an OO fashion with Java. Now, is this just an oversight on his part, or is he simply ignorant of this fact?
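To show what I mean (a toy sketch of my own, not Spring and not anything from Johnson's articles): in a late-bound language like Ruby a class is an ordinary object, so a dependency can be "injected" by simply passing the class in as a value, with no container or configuration files. MemoryStore and Report are hypothetical names.

```ruby
class MemoryStore            # a hypothetical default collaborator
  def write(text)
    "stored: #{text}"
  end
end

class Report
  # The class itself is passed as an ordinary value; the default is
  # looked up at call time, so construction stays late bound.
  def initialize(store = MemoryStore)
    @store = store.new
  end

  def save(text)
    @store.write(text)
  end
end

puts Report.new.save("hello")             # stored: hello

# A test can inject a stub the same way, no framework needed:
stub = Class.new do
  def write(_text)
    "stubbed"
  end
end
puts Report.new(stub).save("hello")       # stubbed
```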

Also, Johnson seems to have got himself into a squabble with the JBoss/Hibernate guys. The JBoss camp has an alternative vision for the new J2EE, and they see themselves as the true successors to the J2EE throne. Again, at the core is a framework to deal with the limits of the Java language. Hibernate, with the use of CGLIB, manages to overcome the static nature of Java by late-binding POJOs to a database-agnostic persistence mechanism at runtime. Has it escaped Gavin King's notice that Rails manages to do the same thing, within the Ruby language and without having to resort to ugly tricks?

You would have thought that both of these clever guys would have realised that the problem is with the language, and that they would be spending their time building OO frameworks using a more appropriate, 'OO' programming language. But they aren't, so why?

Well, the simple answer is money. Java is an established market, and they both intend to exploit it. Unlike the old guard, which they have successfully overthrown, these new upstarts have an open source business model. I'm not an expert on OSS, but apparently you can make money out of software by giving it away. The thing about these new pretenders, however, is that in my opinion they are both fundamentally dishonest. I don't believe either of them is being totally open about their true motives. When an Oracle salesman walks up to you in a pin-striped suit and offers to sell you a product, you know what you're potentially getting into, but when Gavin King offers you Seam as the solution to all your problems, what are his motives? Is he truly altruistic?

Marc Fleury of JBoss fame is a very rich guy. So how did he make his money? The JBoss camp has chosen to stick close to the JCP and build on Java "standards". Their latest offering in this vein is Seam 2.0, which builds on JSF1.? and EJB3.0. As I understand it, JBoss with Seam is fully J2EE 5 compliant. So I guess this is what Red Hat thought they bought. Fleury must have laughed all the way to the bank!

So back to Gavin and his Seam 2.0. Without going into details, Seam tries to do with J2EE5 what Rails does with Ruby. Now I guess you're saying that a full-blown J2EE5 App Server is not needed to do what Rails does, even with a limited language like Java. Yes, true, but... JBoss is J2EE5 standards compliant, so they must use all the J2EE5 standards :^).

From all accounts Seam at its core is an elegant framework; the problem, though, is that it solves the wrong problem. The problem it should be solving is: how do I build web applications with Java quickly? (To which the obvious answer is, you don't :^).) Instead, the problem it chooses to solve is: how do I build web applications using J2EE5, including EJB3.0, JSF and a full-blown App Server? If you sell Application Servers, or product consulting for Application Servers like JBoss does, then perhaps this latter problem does need a solution. So back to the question: what are Gavin King's motives when he is selling Seam? Well, if you are as cynical as me, you would say that it is to push JBoss as an Application Server and to sell more product consulting.

These political and business issues have dogged the IT industry for a very long time, yet they are never adequately debated amongst the developer community, in my view. My concern is that there are a lot of young, naive developers out there who believe that they can trust the likes of Rod Johnson and Gavin King as honest "community" leaders, and will actually invest hours learning either Spring or Seam, only to find out that it was all one big con and that their time would have been much better spent learning Ruby.

Anyway, like I say, I intend to be doing less Java in the future. Personally, I prefer not to identify with a specific programming language; I like to see myself as just a Programmer first and foremost. That way I'm free to choose any programming language that suits me. My advice to the Java community though, if they are listening, is to watch out for the new pretenders. They could turn out to be worse than the 'leaders' we have now.

Tuesday, April 10, 2007

Look Mom, No versions.

Something that has troubled me a little is the link between 'blue' OOP and Agile practices. I say 'troubled' because Agile development is widely misunderstood, and I fear that 'blue' OOP is widely misunderstood for similar reasons. It is not widely acknowledged, but Agile development practices stem from OO programming practices and Smalltalk. Agile development breaks with the traditional idea that software development can ever be deterministic. The desire for a deterministic software development process stems partially, I believe, from the desire for control. And the thing that non-Agile Managers want to control the most is change. We have all heard of the cost of change curve, where the cost of software change increases by a factor of ten the later in the development process the change occurs. With a functional or procedural language this makes sense. Since everything is by default inter-connected, any change is likely to propagate to other areas of the code. So the last thing you want to do is to 'allow change' once you have established a significant code base.

On the surface, this anti-change policy sounds like common sense. It even sounds similar to other common sense ideas like 'get it right first time', which seemingly promote an early determination of requirements and design up front. But if you think about it, not allowing changes once you have code rules out maintenance, and also assumes that your customer knows exactly what he or she wants at the outset.

In practice it turns out that this 'big up front design' approach just doesn't work that well, and in fact exploring and experimenting in code, and allowing change to occur all the time can work much better. This is the philosophy behind dynamic languages, where feedback from experimentation can be gained relatively quickly. The problem though is that the 'embrace change' approach is counter to what most of us have been taught, and is not widely seen as 'industrial strength'.

The other thing about embracing change is that your code base needs to support change. The existence of unit tests and the 'safety' that they provide has been well documented in the Agile world. Tests do allow you to detect bugs introduced by changes, but it would be much better to avoid introducing bugs in the first place. This is where OOP and encapsulation come in. If you can encapsulate your business logic inside objects that only communicate through loose messages, then the chances are that you will be able to introduce changes which remain encapsulated themselves and do not spread throughout your code. So OOP allows you to introduce changes cheaply, by reducing coupling through message sends and increasing cohesion through encapsulation. The biological metaphor of a cell which Alan Kay uses is a good one in this regard. Cells are independent and autonomous and go about their daily function independently of each other. The cell membrane provides encapsulation, so changes that occur within the membrane are localised and specific.
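A small Ruby sketch of the point (a toy example of my own): callers only ever send messages, so the representation behind the "membrane" can change without touching them.

```ruby
class Account
  def initialize
    @entries = []            # internal representation, version 1: a list of amounts
  end

  def deposit(amount)
    @entries << amount
  end

  def balance
    @entries.sum
  end
end

# Later the internals could become a running total (@balance += amount);
# every caller of deposit/balance keeps working unchanged, because they
# only ever talked to the object through its messages.
acc = Account.new
acc.deposit(10)
acc.deposit(5)
puts acc.balance   # 15
```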

Using this technique, people like Jeff Sutherland (Scrum) and Kent Beck (XP) realised that you could flatten the cost of change curve, and thus introduce changes at any time during the project development life-cycle. This simple realisation opens up a number of new possibilities.

Gilad Bracha has a presentation where he takes this idea of allowing continuous change to its logical extreme. Gilad's idea is of bits that rot: software that automatically updates itself continuously whilst running. In such a world version numbers have little meaning. The only version that counts is the latest. Without a fixed version your software is no longer an artifact, but a service consisting of a continuous stream of improvements and fixes. Old code rots away and is replaced by new code continuously. This idea fits well with a biological metaphor too.

Take a look at the presentation by Gilad. It is almost an hour, but is well worth watching.

Friday, March 30, 2007

Deep into the Blue - Industry Titbits

I have found the responses to my blog thus far a bit intriguing. The general response has been a stiff defence of the current status quo. My opinion (and it is just an opinion), is that the status quo isn't really delivering, and we could all be doing a lot better.

A few articles I've come across recently have reinforced this opinion. The first builds on my view that late-bound OO message sends can form the basis for language interoperability. Peter Frisk has recently implemented high performance 3D web rendering using Smalltalk. The usual response to using Smalltalk for such a CPU-intensive application is that Smalltalk is too slow. So how does Peter do it?

Well, Peter has utilised the layered DSL idea I've discussed before. So the primitive 3D graphics rendering is performed in ActionScript, which as I understand it is a static, high performance, compiled OO language which runs on the Adobe Flash runtime (virtual machine). On top of this he layers a Lisp interpreter, which allows you to call ActionScript primitives from Lisp. On top of Lisp he then implements a DSL that just so happens to be Smalltalk-80. As I understand it the Smalltalk implementation is fully interpreted, but this doesn't matter, because the bulk of the graphics rendering is delegated to ActionScript. BTW, a domain language programmer using Smalltalk doesn't need to understand ActionScript at all. Pretty impressive. Take a look (requires Flash 9).

It may look like Peter has gone to all this work for nothing. After all, it can all be done in ActionScript, so why Lisp and Smalltalk? The thing is, though, that Peter appreciates the power of late-binding. Smalltalk components written in this way can be mashed up together to create new objects, in the same way that people are using HTML and JavaScript to create mashups on the web today.

Another titbit I have come across that was interesting is a post by Gilad Bracha. Gilad is famous for his work on the Java JVM and worked for Sun until very recently. For me, Gilad's most impressive work was performed before he joined Sun over 10 years ago, when he did research on Smalltalk, Mixins and Traits, which eventually led to Strongtalk, the high performance Smalltalk implementation with optional manifest type annotations and a static type checking system. I've discussed Strongtalk before. Gilad has been talking about Self and the idea of slots. C# has the idea of properties, which is a way of implementing getters and setters as used in Java. What if you just make the variable public? And later you want to change it to a method? In both Java and C#, this could mean changing a significant amount of code. With Self this isn't the case (with Smalltalk you can't make an instance variable public anyway, because it breaks encapsulation, so the problem only exists for subclasses). Gilad's blog has some interesting examples of better ways to solve or avoid common programming problems using late-bound languages.
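Ruby happens to demonstrate the same point as Self's slots (a toy example of my own, not taken from Gilad's post): because attribute access is always a message send, a stored value can later become a computed method without changing a single caller.

```ruby
# Version 1: `width` is backed directly by an instance variable.
class Box
  attr_reader :width          # generates a trivial reader method

  def initialize(width)
    @width = width
  end
end

# Version 2: `width` becomes a computed method. Callers are unaffected,
# because they were always sending a message, never reading a field.
class ScaledBox
  def initialize(width, scale)
    @width = width
    @scale = scale
  end

  def width
    @width * @scale
  end
end

puts Box.new(4).width            # 4
puts ScaledBox.new(4, 2).width   # 8
```

In Java or C#, a public field promoted to a method would force every call site to be edited and recompiled; here the change stays behind the message.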

Finally, Croquet has announced the release of version 1.0, and is no longer in beta. At the same time the Croquet consortium was officially launched. The consortium is a body to promote the development and adoption of Croquet. Along with a number of Universities the Consortium also contains Hewlett Packard and a new Start-up: Qwaq, a commercial company that will focus solely on collaborative applications using Croquet.

There seems to be growing momentum in the blue plane. Peter Frisk and Vista Smalltalk are definitely worth watching, along with Croquet. I also see Strongtalk as promising, not so much for its superior performance, but as a bridge into late-bound programming for programmers who are reluctant to relinquish their preference for manifest type annotations and static type checking.

Monday, March 19, 2007

Deep into the Blue with Croquet

Time to look forward. My last couple of blogs on Object technology have focused on the perceived benefits of the current crop of incumbent main stream OO languages. We explored a bit of history and got a bit bogged down IMO over the subject of Type Safety and Program Correctness.

If anything, I think the discussion demonstrated the point that we still don't know how to write safe programs with any degree of certainty, and that any program is only as good as the programmers who produced it. So for me the term "Type Safety" is a bit of an oxymoron, because being type safe doesn't imply program 'safety' at all!

Accepting that there are no guarantees, perhaps we should let go of the pink past and explore the new blue OO idea a bit further. To do this we need to take a pure OO approach, with scant regard for incumbent technology. Croquet is a project that chooses to look at software engineering afresh from a pure OO perspective. The question posed by Croquet is:

If we were to start again, and build an Operating System with modern assumptions about computer power, what could we do today?

To this question the Croquet team have come up with some answers:
  • A VM that works bit identical on all platforms. They achieve this by writing the Squeak Smalltalk VM in a subset of Smalltalk itself, called Slang.
  • Given bit identical behaviour, replicate objects across the web, with the guarantee that replicated objects will behave bit identically.
  • Using Object Replication, and synchronised message sends, create a shared virtual Time and Space, across the web, they call this TeaTime.
  • Use Peer-to-Peer communications to remove the bottleneck of centralised servers.
  • Late-binding to ensure that the system can grow and change organically. Also allow non-Croquet components to be consumed into the Croquet world.

I will explore Croquet in detail over the next few blogs. Here is an article which is an excellent primer on Croquet for the uninitiated. It is difficult describing Croquet because, like the Sony Walkman, Croquet is something new and innovative, unlike anything we have seen before. The closest description to the vision held out by Croquet is the virtual computer world presented in the movie "The Matrix".

Croquet is The Matrix.

Friday, March 09, 2007

Type safety, An Oxymoron?

I think I've found a concise definition for type safety. I found it on the C2 wiki, which is a great source for programming-related info. Anyway, here it is:

Type Safe
Any declared variable will always reference an object of either that type or a subtype of that type.

A more general definition is that no operation will be applied to a variable of a wrong type. There are additionally two flavors of type safety: static and dynamic. If you say that a program is type safe, then you are commenting on static type safety. That is, the program will not have type errors when it runs. You can also say that a language or language implementation is type safe, which is a comment on dynamic type safety. Such a language or implementation will halt before attempting any invalid operation.


Taking the first sentence: this rules out any type of conversion, so int->float is type unsafe, and it rules out any type of dynamic cast too. So that basically rules out C, C++, Java and C# as type safe. Moving on to the main paragraph, we see that as well as static type safety there is also the concept of dynamic type safety. Using this as our benchmark still rules out C and C++, but deems Java and C# to be dynamically type safe (if we choose to ignore the issues surrounding conversions of primitives, of course). This laxer definition of type safety also includes languages like Smalltalk, Python and Ruby. So all modern OO languages are dynamically type safe.
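A quick Ruby illustration of dynamic type safety in the C2 sense (my own example): the runtime halts before attempting the invalid operation, rather than carrying on with corrupted data as C might.

```ruby
def shout(x)
  x.upcase          # only valid for objects that understand #upcase
end

puts shout("hello")   # HELLO

begin
  shout(42)           # Integer does not understand #upcase...
rescue NoMethodError => e
  # ...so the VM halts the invalid operation cleanly, with a catchable
  # error, instead of misapplying the operation to the wrong type.
  puts "halted before the invalid operation: #{e.class}"
end
```

There is no compile-time check here, but there is also no loophole: every operation is verified against the receiver at the moment it is attempted.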

If this is true, what is the dynamic versus static typing debate all about? Is type safety an oxymoron? Reading on further on the C2 wiki:

There are various degrees of type safety. This is different from TypeChecking. See also StronglyTypedWithoutLoopholes, which is another term for (at least) dynamic type safety.
CategoryLanguageTyping

So using the "degrees of type safety" argument, Java could be said to be more type safe than, say, Smalltalk. This kind of makes sense, since even though Java is not fully static type safe, it is partially so. So type safety is relative, and you can rate languages on their degree of type safety. Statically typed languages are generally more type safe than dynamically typed languages. If you click on the link CategoryLanguageTyping you will find that what we usually refer to as static typing isn't actually called static typing at all: the proper name is Manifest Typing. Static Typing means something else and includes Type Inference. Given the common use of the term static typing, I have chosen up to now not to use the proper term, which is in fact Manifest Typing.

So what does all this buy us? At best we are partially type safe if we choose to use a language like Java. Partially? Is that useful? Either I'm safe or I'm not, right? For example, when releasing to production, I can't tell the QA Manager that I believe my program is partially safe. He wants to know whether my program is safe.

So how do I know that my program is Safe? Well simple, I test it!

I could go into strong versus weak typing and the consequences, but the links are there if you're interested. No program is Type Safe, and to claim so is a bit of an oxymoron. IMO typing is no substitute for well thought out tests, but type checks can help to detect and track down bugs (either at compile time or runtime). Where I believe manifest typing is useful is in improving the readability of code, improving the comprehension of large systems, and improving tooling support for code browsing and editing. Examples of this are the code completion and refactoring features in Eclipse. Smalltalk has these features too, but with manifest type annotations tools have that much more information to work with.

The downside of manifest typing is that all type systems use 'structural types'. Structural types are based on the structure of your code. Depending on the code annotations available, manifest structural types can limit expressiveness. This is why languages like Scala have invented a more expressive set of type annotations, to overcome the type constraints imposed by languages like Java. Strongtalk's type annotations are even more expressive. This had to be the case because the Strongtalk type annotations had to be applied to the existing Smalltalk 'blue book' library, and this was originally written without any manifest type constraints whatsoever. The other downside of manifest types is that your code is more verbose.

So ideally what you want is:

* Manifest type annotations that can express intent and do not constrain you (or no type annotations at all, or type inference)
* Strong typing without loopholes at runtime
* Tests that tell you whether your code is safe.

Type safety doesn't exist, and partial type safety is a poor substitute for the above!

Wednesday, March 07, 2007

"Sorry, you're not my Type!"

Before we explore the future potential with blue OOP, I thought it only fair to address the perceived advantages of pink OOP first. After all, I have labelled pink OOP as just an extension of the "old thing", but who says that the old thing was all that bad? Was there anything about the "old thing" worth holding onto?

The old thing I am referring to is C. C was one of the first 3rd generation languages used to write an Operating System for micro-computers (I think?). That Operating System was Unix. Prior to C most micro-processor OSes were written in assembly. I mention microcomputers, as this is/was the name for computers built using micro-processors. Prior to the micro-processor, computers were huge boxes of electronics built from discrete components.

Early microelectronics placed considerable constraints on computer software. Many of the Computer languages used on big "mainframe" computers just weren't suitable for microcomputers especially personal computers. Outside research organisations, personal computers had very little processing power and very little memory.

The success of C was largely due to the success of Unix, which was ported to a wide range of computer systems. Also, with C you could get very close to the efficiency of assembly language, and unlike assembly language your code was portable.

This is a longer introduction than I had hoped, but a lot of people have forgotten this history and it is useful to remind ourselves of it. So by the early 80's C was the personal computer language of choice.

Then along came Objects. The challenge was how to bring OOP to PCs and still retain the efficiency of C. There were two candidate languages, both derivatives of C: C++ and Objective-C. C++ replaced the message passing of Smalltalk with a virtual function call. This ensured that method dispatch would be as efficient as possible. The downside is that C++ is an early-bound language, as binding to concrete methods occurs at compile time. Objective-C, however, chose to retain message sends; this means that Objective-C is late bound, but as a consequence is less efficient at method dispatch than C++.
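The distinction is easy to sketch. Here is a hypothetical illustration in Python of a late-bound "message send": the method is looked up by name at runtime, so any receiver that understands the selector will do, whereas a C++ virtual call is resolved against a class hierarchy fixed at compile time:

```python
# A late-bound "message send" sketched in Python: the method is looked
# up by name at runtime, as in Smalltalk/Objective-C, rather than being
# resolved to a fixed vtable slot at compile time as in C++.

class Dog:
    def speak(self):
        return "Woof"

class Robot:
    def speak(self):
        return "Beep"

def send(receiver, selector, *args):
    # Runtime lookup: any object that understands the selector will do.
    method = getattr(receiver, selector)
    return method(*args)

assert send(Dog(), "speak") == "Woof"
assert send(Robot(), "speak") == "Beep"   # no shared base class required
```

Note that Dog and Robot share no common ancestor; the binding happens entirely at the moment of the send.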

Given the hardware constraints at the time, the majority of the industry went with C++. The only PC company I know of that went with Objective-C was Steve Jobs' NeXT with their NeXTSTEP OS.

So the big advantage of pink OOP is efficiency. As time has moved on however, some in the industry have tried to re-write history and claim that the big advantage of pink OOP is type safety. Now I must admit, I do not know exactly what type safety means. There are a few things that I do know however:

* A Class is not a Type
* Late/early binding and Static type checking are orthogonal concerns
* Static typing is usually associated with early binding
* Static typing can be applied to a late-bound dynamic language like Smalltalk.

The first bullet points to a conceptual flaw in C++, which Java attempts to solve by introducing Interfaces. The problem with Interfaces though is that they are selective. So sometimes in Java you bind to a Type and at other times you bind to an Implementation (Class), an unsatisfactory compromise IMO.
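To illustrate the "a Class is not a Type" point, here is a sketch using Python's typing.Protocol as a structural analogue: the type is described purely by shape, and any class conforms without declaring anything (names are hypothetical):

```python
from typing import Protocol, runtime_checkable

# "A Class is not a Type": Quacker is a pure type, described by its
# structure alone, while Duck is a class that conforms without ever
# mentioning Quacker -- unlike a Java Interface, which must be declared.

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:                      # no mention of Quacker at all
    def quack(self) -> str:
        return "Quack"

assert isinstance(Duck(), Quacker)   # conformance by structure, not by name
```

With a Java Interface, Duck would have to opt in with `implements`; here the type and the class remain fully separate concepts.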

I'm going to get myself up to speed on "type safety". My experience has shown that static typing as used in languages like C++ and Java can greatly reduce the expressiveness of the language. So instead of the compiler being my friend, it ends up being a straitjacket, stopping me doing what I know would work best, if only I was allowed.

This is just opinion of course. I have come across one static type system that I believe will allow me to have full flexibility. This is a type checking system for Smalltalk called Strongtalk. Here is a link to a paper on The Strongtalk Type checking System. The current Strongtalk is slightly different to the description in this paper. If you are interested in the differences you will need to look in the documentation in the Strongtalk download bundle. I believe Scala is an attempt to bring more expressiveness to static typing on the JVM, so I will be taking a more detailed look at Scala too.

It should make a neat comparison. Two static OO type systems, one targeting a late-bound language (Smalltalk), the other targeting an early-bound language (Java); it will be interesting to see how they compare.

BTW, if there is anyone out there who can answer the question "What is type safety?", I would be more than happy to hear from you.

Revised 07/03/2007: Modified to acknowledge the role of Unix in the rise in popularity of C - Thanx Steve.

Sunday, March 04, 2007

What Colour do you like your Objects? Pink or Blue?

It's late and it's a Sunday, but I thought I'd just make a quick post to clarify a few things. What is OOP? Since Alan Kay's team coined the term 'Object Oriented' with the release of Smalltalk to the world in the early 80's, OOP has become one of the most exploited marketing terms in programming.

It would be interesting to see when the term was first used. It wouldn't surprise me if the first published use of the term was in the original Byte Magazine article on Smalltalk in August 1981. So OOP was born with Smalltalk. Before Smalltalk, Simula extended Algol to allow data structures to contain function pointers, but this was seen as an extension of data abstraction, and the term OOP wasn't used.

In Alan Kay's keynote speech at OOPSLA in 1997 he talks about a blue plane and a pink plane. The pink plane represents ideas which are an incremental improvement of existing ideas. The blue plane which runs orthogonal to the pink represents revolutionary ideas that break the old way of doing things, setting you off in a new direction.

Since the creation of C++, OOP has borne these two identities. Firstly a pink identity, where OOP is seen as an extension of the existing thing; this was the view of Bjarne Stroustrup and what led to C++ and ultimately Java. Secondly there is a blue identity, where OOP is seen as a new thing, which breaks with the old and has new requirements all of its own. This second identity is most closely associated with Smalltalk and Self. It has also influenced other OO languages like CLOS, Ruby and Python.

These two identities so happen to deal with types differently, and the difference between the two is often referred to as static versus dynamic, but in truth, this dichotomy is a false one. The difference runs much deeper. The real difference between the two stems from their goals and their vision.

The C++ goal was to introduce OOP-like constructs to C in an efficient way. To do this Stroustrup avoided the garbage collection, byte code, VM and late binding of Smalltalk and went back to the much simpler and more efficient model presented by Simula. The strength of this approach is that C++ is very efficient; the downside is that C++ is decidedly pink.

Self built on the platform of Smalltalk in an attempt to push further into the blue plane. The goals of Self were:

* Objects that are tangible just like physical objects in the real world
* Objects that share uniformity, just like physical objects do (everything is an object)
* Objects that exhibit liveliness, removing the modal nature of programming (no edit/build/run cycle)

All these goals are characteristics of Smalltalk, but Self wanted to take these characteristics much further, creating a fully graphical programming experience, where objects could be handled and manipulated from a visual palette, just like physical objects in the real world.

You can see that this 'blue' vision is very different from the pink one. One of the most obvious consequences is that with Smalltalk and Self there is no difference between graphical objects on the user's desktop and 'programmable objects' in the programmer's IDE. In a sense the desktop becomes the IDE and the IDE becomes the desktop. Following from this, the distinction between programmer and end user starts to blur. Also the distinction between object and application disappears altogether. Each object is an application in its own right; even the humble Number object '1' or '3' is an entity that can be manipulated at runtime through its own GUI. The VM contains a large collection of such objects and becomes more than just a runtime, it becomes a Graphical Operating System.

In fact the object instance '1' is more than just an application. It also encapsulates its own server with its own synchronous message queue and its own virtual processor. Adopting this semantic view of OOP means the runtime is now analogous to a NOS (Networked Operating System) spanning several virtual processing nodes. This is the semantic goal of blue OOP and why Alan Kay used the analogy of the encapsulated biological 'cell' in his keynote speech. I will expand on this blue OOP vision in a future post. But as you can see, pink OOP is very different from blue OOP, and the difference has very little to do with types.

Revised 06/03/07: Replaced 'Real Objects' with 'Physical Objects' in line with the terminology used by the Self team - Thanx Isaac

Programming Languages - Follow the leader

A short interlude from my series of posts on Objects. Steve's last comment got me thinking. Why are some programming languages more popular than others? It would be easy to put it all down to cynical marketing by vendors, but that can’t be the whole story. It was this sentence in particular that got me thinking:
It may sound neat to allow developers to modify the language, but having used Smalltalk for more than 20 years, I have had to deal with the chaos that can result when different developers modifications conflict. I would rather have a controlled and organised process.
So an ordered and controlled process is seen as desirable. OK, but controlled by whom exactly? The truth is that most people feel more comfortable being led. I can wax lyrical about the technical superiority of languages like Self, Smalltalk and Lisp as compared to lesser languages like Java and C# (and even Ruby and Python), but this doesn't matter a jot if people just aren't 'comfortable' with these supposedly 'superior' languages.

With Java there are minimal degrees of freedom. If you want to iterate, there is one (non-deprecated) way. Want a callback? There is one way. You do things the 'Gosling way'. It is all pre-packaged and rather reassuring. I must admit when I first used Java I found its simplicity reassuring too. It was definitely welcome after the explosion of constructs that accompanied the transition from C to C++. C# has used the same formula; after all, it worked for Java. Java has taken the shrink-wrapped approach further, beyond the base language. The whole J2EE application stack was supposed to result in "one way" to build enterprise applications, reducing software development to painting by numbers.

This all works up until the point where the 'one way' just isn't the best way for you. What do you do then? Well, you live with it, like the EJB community did for years, or you jump to something better suited, like PicoContainer or Spring.

Many Java developers are now jumping to Ruby and Rails for precisely the same reason. For many web apps, the full J2EE stack even with Spring and Hibernate, is just seen as overkill. Interestingly though, very few have moved to Squeak and Seaside, and even fewer to Lisp. Why?

Well, in Matz and David Heinemeier Hansson, Ruby and Rails respectively have strong leaders: benign dictators who prescribe "how things should be done". Ruby developers can model themselves on the approaches recommended by these leaders. Better still, these leaders are developers themselves, so there is an instant bond of trust. The Python community has demonstrated this phenomenon even more so, with a single all-knowing leader, Guido van Rossum. Van Rossum even dictates how code should be laid out and how tab spaces should be used!

So in contrast how do languages like Lisp and Smalltalk compare? Well let’s start with Lisp. I like to think of Lisp as a Meta-language; a programming language for writing other programming languages. A good example of this can be seen at the Vista Smalltalk blog. Peter Frisk is using Lisp to build a Smalltalk interpreter on top of Flex. So as far as Lisp is concerned, Smalltalk is just a DSL, created using Lisp macros.

With Lisp you deal with fundamentals. The smallest construct in Lisp is called an atom: an indivisible value such as a symbol or a number. You combine atoms to produce s-expressions (lists), and s-expressions to produce everything else, all the way up to a full class hierarchy of objects and associated functions. You can even determine how s-expressions are evaluated with Lisp macros, so basically you can do what you like!
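To get a feel for how little machinery is involved, here is a toy sketch in Python that models s-expressions as nested lists and evaluates them. It is an illustration of the idea only, not real Lisp:

```python
# Lisp's building blocks sketched in Python: atoms (numbers here)
# combine into s-expressions, modelled as nested lists, and a tiny
# recursive evaluator walks them.

def evaluate(expr):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    if isinstance(expr, (int, float)):   # an atom evaluates to itself
        return expr
    op, *args = expr                     # a list is (operator arg1 arg2 ...)
    return ops[op](*[evaluate(a) for a in args])

# (+ 1 (* 2 3))  =>  7
assert evaluate(["+", 1, ["*", 2, 3]]) == 7
```

Everything else in Lisp, macros included, is built up from this same small evaluation core.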

This power puts a great deal of control and responsibility in the hands of the programmer. Of course there are established patterns to help guide you, but there is no benign dictator making a bunch of design choices upfront. You have to make your design decisions yourself. You are on your own!

Some people will revel in this power and flexibility. Others though, are likely to find it daunting! Smalltalk follows Lisp's lead, but provides a lot more pre-defined structure. It has a small syntax, just like Lisp, and like Lisp has a meta-model built to support meta-classes, classes, and object instances. Unlike Lisp though, all objects interact through message passing and are fully encapsulated. Many objects in Smalltalk are part of the language, such as the Context object used as a stack frame, the block closure object used as a lambda expression, and compiler objects used to turn strings into byte code. So Smalltalk gives you a lot of structure.

Smalltalk wears its heart on its sleeve. With Smalltalk all this structure is written in Smalltalk, so as a programmer you can change any part of it as you see fit. This is fantastic if you want to create your own specific Smalltalk dialect. But if you do, Dan Ingalls or Adele Goldberg won't be there to help you out. And you won't be able to turn to the Smalltalk-80 "Blue Book" either. You will be in the same camp as the Lispers: on your own!

When I first came across Smalltalk I saw all the dialects as a concern. All these semi-compatible versions surely can't be a good idea? As I have become more experienced as a programmer though, I have come to see diversity as a good thing. Two analogies come to mind. The first one is biological. In nature animals ensure that there is sufficient diversity in the gene pool. Each individual is not a clone of all the others, so if a sudden virus attacks, some of the species will be wiped out, but hopefully, others will have immunity, so the species as a whole survives. I think Smalltalk has this strength. Depending on what is important, there is a variant of Smalltalk to fit the bill, and if there isn't, a dialect can be readily mutated to meet the need (in most cases). Languages that can’t adapt in this way, face the risk of dying out through natural selection (something I believe Java is in danger of).

The other analogy is spoken language. Spoken language is a living and changing thing. We do not speak the same way today as we spoke 300 years ago. Also we have regional dialects; a Scouser, for example, sounds very different to a Cockney, yet they both claim to speak English (the Queen's English, not US English :^)).

In their own domains Scousers and Cockneys get on fine speaking their own dialects. But in situations where they have to communicate with each other, as with written English, they both fall back on "Standard English". For Smalltalk, Smalltalk-80 is the equivalent of Standard English.

So that's the language landscape as I see it from a cultural perspective. Where I think I agree with Steve is that change is slow in software for a number of reasons, many of which are cultural. Where I believe things are inevitably heading, though, is into a pluralistic world containing many languages and dialects, but also sharing a common base, a lingua franca. I see the lingua franca as being based on late binding and message passing, but I'll save a detailed discussion of this for a later blog. In this new world I see many domains with leadership dispersed across them, with several individuals taking a leadership role at different times and in different circumstances.

For this to occur, developers will need to be more comfortable taking the lead themselves, and getting rid of the "training wheels". Technically, there are tools on the horizon that could help here, protecting the less self-assured. I see Language workbenches as described by Martin Fowler as perhaps helping here. A language workbench could provide a reassuring wall between the meta-language and the domain specific language, providing reassurance and safety for domain language programmers.

Supporting tools aside, with the rise of open source and open source languages, I believe there is strong evidence of this cultural change happening already! I see this change as inevitable as the industry grows up and matures.

Saturday, March 03, 2007

Objects - I know that already!

I recently received an email from an old adversary from TSS (The Server Side). Steve and I are kind of friends now - which is nice considering that we have never met, and only know each other through posts on TSS, e-mail and through our blogs.

Anyone who follows the news threads on TSS knows that I can be pretty vociferous with my opinions about Objects and the shortcomings of Java. Well, I've infuriated Steve on many occasions, leading to long exchanges... One of Steve's pet peeves is me continually quoting Alan Kay. So you can imagine my surprise when Steve sent me this link to a keynote speech given by Alan Kay at OOPSLA in 1997.

BTW, For anyone interested in Object technology, there are a whole set of videos available on the web showing the history of Objects and the primary players involved going back to the 1950s.

Steve is an ardent Java supporter, and I had posted a link to this same video and several others, many months ago in an attempt to cure him of this unfortunate affliction :^) Well many months later he stumbled across the same video himself, and he wanted to discuss it with me. Steve has over 20 years software experience (a fact that he is fond of sharing :^)), and has used several OO languages over the years including Smalltalk. So what was there to discuss prompted by a 10 year old video by Alan Kay?

Well you can all judge for yourselves. I would urge any programmer to watch this video. It deals with fundamental programming concepts, which most of us have dispelled from our consciousness long ago. Why? Because we know it already! We all know what an operating system looks like. We know what professional, industrial strength programs look like too. And we all know an "enterprise strength" programming tool (language + IDE) when we see one! We've all used/seen Eclipse, IntelliJ and Visual Studio. All of these tools are marketed as 'Object Oriented', and all of them are supposedly state of the art!

If you look a little closer though, and peel off the shiny veneer from these tools, underneath they look remarkably like 'C', 'vi', 'make' and 'cc'. Not much has changed since C/Unix in the 1970s. We still use the same old while loops and if statements, still the same edit/build/run cycle. If a C/Unix programmer had been put in a time capsule in 1977 and re-awakened today, he would find tools like Java and Eclipse pretty familiar and would be up and running with them in days.

So why has so little changed in 30 years? Here is an explanation I've lifted from an Article by Dafydd Rees on Croquet and Squeak:

"Kay blames this lack of innovation on the fact that most adults employ instrumental reasoning to evaluate and apply new ideas. This means that adults have difficulty evaluating new ideas because they're carrying too many existing goals, and too much context to be able to see the full potential of new ideas."

One of the beauties of children is that they are untainted by our pre-conceptions. Each new generation looks at the world afresh, with new eyes, and kids perennially ask the question why?

My plan is that this post will be the first in a series, where I will be questioning strongly held assumptions about object technology. Hopefully Steve will comment too (apparently his epiphany was only short lived!). Free from marketing and spin, the idea is to have a useful exchange on where we've been with objects, where we could/should have been, and where we should go next.

Like Alan Kay says: "The Computer Revolution hasn't happened yet".

If you are genuinely interested in Object technology, in a language-neutral sense, then bookmark this blog. It should be interesting and your input is welcomed.

Monday, February 12, 2007

Java, Objects and Static Types

I seem to be on a roll myth busting. This one is prompted by a comment by Peter Kriens, in response to my previous post on Java and component models. The sentences that drew my attention were "interestingly, the type information in the language allows us to provide quite a few guarantees" and "for large systems where you get legacy parts from all over the place there is something to say for type information ..."

Now it would be only fair to allow Peter to explain himself further and not to read too much into what he has said, but his words did bring to mind a common myth, namely that dynamic languages have no type information and perform no type checking. Well they do!

I'm old enough to remember programming in C, with no stack trace and the dreaded "Segmentation fault - core dumped" Unix message on program failure. C retained no type information at runtime. You could declare pointers of type (void *), which meant that they could point to anything you liked. You could cast to an assumed type and the language would not check for you; in the end you would try to access a segment of memory not allocated to you by the system, and bang!

Here is where my memory gets a bit hazy, but I believe things improved with C++. There were still void* pointers, but I believe C++ introduced a dynamic_cast that did some checking at runtime.

So a long intro, but I wanted to get everyone on to the same page. Dynamic languages like Smalltalk do their type checking at runtime. If a type mismatch is detected, the program doesn't go bang like with C; the language notifies you of the error and will even launch a debugger at the appropriate line of code. So Smalltalk is a strongly typed language; the difference between it and, say, C++ is that the type checking occurs at runtime, not compile time (Smalltalk also gets rid of void* pointers; Java does likewise).
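A quick sketch of this in Python (another late-bound language) shows the same behaviour; the class and method names here are hypothetical:

```python
# Dynamic languages do check types -- at runtime. Sending a message an
# object doesn't understand raises a clean, catchable error (Python's
# analogue of Smalltalk's doesNotUnderstand:), rather than corrupting
# memory the way a bad void* cast could in C.

class Account:
    def deposit(self, amount):
        return amount

try:
    Account().withdraw(10)       # message not understood
    outcome = "no error"
except AttributeError:           # reported cleanly, debugger-friendly
    outcome = "caught"

assert outcome == "caught"       # the mismatch was detected and reported
```

The runtime knows exactly what went wrong and where, which is why a debugger can be dropped straight onto the offending send.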

So what are the consequences of these differences?

* Well with Smalltalk, all type mismatch errors can only be detected by running your program. So it is no coincidence that test driven development and SUnit came from the Smalltalk community. With C++ the compiler performs checks statically at compile time, so some believe that unit testing is less critical (I disagree).

* With Smalltalk, the overhead of dynamic message dispatch and the runtime checks makes the language inherently slow. With C++ there are few runtime checks; function call indirection is resolved through a vtable fixed at compile time. This allows for the use of an optimising static compiler, which inherently produces faster runtime code.

* Smalltalk is fully polymorphic at runtime. What this means is that objects can take many forms (implementations) at runtime, so any class of object that satisfies a caller's request can be substituted in at runtime. This is known as late binding, and has significant implications. C++ polymorphism is constrained at compile time: common interfaces must share a common implementation (a common base class or abstract base class), and only classes declared within that hierarchy can be substituted at runtime.

So along comes Java. Without justification, I will assert that Sun produced Java in a hurry. Oak was aimed at low-powered, low-memory home devices, so the static optimising compiler route was the obvious one to take. So Java has inherited many of the properties of C++: namely, highly performant and static (fixed).

For Objects that come 'alive', you require different properties. Sun did a lot of research in this area. The Self project determined that Objects needed to possess a number of properties:

* Directness (you manipulate objects directly)
* Uniformity (everything is an object)
* Liveliness (modeless, no run/edit mode, fully dynamic)

These properties are inherent in Smalltalk, and the Self Project hoped to build on them. Self explored a number of important OO concepts that are still relevant today, but the implementation was a memory hog, and you needed a super fast computer to run Self.

Here is a good video on Self.

So the reason for saying all of this is that compile-time type safety was always an afterthought with Java. The real reason why Java is static is performance. Also, from a marketing point of view the static label was useful, as the bulk of the developer community out there were C++ programmers, comfortable with the static programming approach. To them liveliness meant nothing.

In 1995 some of the original Self team were ready to launch a new Smalltalk implementation known as Strongtalk. The idea was to address what were seen by many as the shortcomings of Smalltalk: slow performance and no compile-time type checking. They addressed the performance problem by optimising out dynamic method dispatch at runtime using a dynamic compiler. The approach was similar to that used in the JVM JIT today, but their approach also allowed for de-optimisation on the fly back to interpreted code, allowing them to satisfy the semantics of dynamic dispatch needed for 'liveliness'.

They also looked at the runtime type checking issue. At first they made the same mistake as C++/Oak etc and assumed that compile-time type checking had to do with the runtime implementation. It doesn't. Type declarations in a static language can be thought of as annotations: they annotate the code and tell the compiler how best to optimise method calls, and they allow the compiler to perform checks. What the Strongtalk team did was to delegate the optimising role to a dynamic runtime and retain the static type checking at compile time. To do this they needed to add type declarations to the Smalltalk source code, by way of annotation. So Strongtalk can compile both type-annotated code and un-annotated code. At runtime Strongtalk maintains full dynamic messaging semantics, so you end up with the best of both worlds.
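Python's optional type hints make a handy present-day sketch of the Strongtalk idea: annotated and un-annotated code mix freely, and the annotations leave the runtime semantics untouched (the functions here are hypothetical):

```python
# Strongtalk-style optional typing, sketched with Python's type hints:
# annotated and un-annotated code compile and run together, and the
# annotations change nothing at runtime -- dispatch stays fully dynamic.

def annotated(x: int) -> int:     # typed, "Strongtalk-style"
    return x * 2

def unannotated(x):               # plain dynamic code
    return annotated(x) + 1      # the two interoperate freely

assert unannotated(3) == 7
assert annotated("ab") == "abab"  # hints are not enforced at runtime
```

A separate checker (mypy in Python's case, the Strongtalk typechecker in Smalltalk's) can flag the `annotated("ab")` call statically, yet the running program keeps its full dynamic semantics.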

As the startup company who built Strongtalk in secret were about to launch it on the world, Sun came along and bought them up. The Strongtalk developers were moved over to work on Java and the JVM, and are responsible for the Java HotSpot JIT technology we have today. Type feedback and type annotations sat on the shelf for over 10 years.

Fortunately Sun has recently released the Strongtalk code as open source:

Strongtalk Home Page

But imagine how things would have been different if they had released Strongtalk as a product in 1995? Or better still if they had ported Java to Strongtalk using Type annotations?

* There would be no OSGi (no need)
* Ruby probably would not have blossomed beyond the Perl community (no need).
* Smalltalk would perhaps still be thriving today.

Paul.

Sunday, February 11, 2007

Java, Component Models and Class Loader Hell

I've been drawn back to TSS again. There is an interesting thread where Bill Burke from JBoss Seam goes head to head with Rod Johnson of Spring. Now it doesn't surprise me that Rod Johnson is a bit of an egocentric jerk; I know someone who worked with him for years who told me as much. It was interesting though to see the JBoss guys break their usual aloof pseudo-intellectual demeanour: read here.

Both of these frameworks claim to provide a component model for Java objects. So the first thing to clear up is: what is a component? Well as far as I can tell, a component is a coarse-grained object that can be bound to other coarse-grained objects after compile time. So components are an invention for static OO languages; in dynamic OO, all objects are components.

Right, with that out of the way (anyone who is still not convinced, I would suggest exploring the runtime differences between a virtual function call as used in C++/Java/C# etc and polymorphic message sends as used in Smalltalk, Python, Ruby etc), here is why I believe that EJBs are a flawed component model.

Let’s take a simple example. Component A uses Component B. A configures B and registers a number of callbacks with B so that B can notify A. A is packaged in its own EAR (or 'war' or 'ejb-jar') and so is B. So what is the problem?

Well A must have access to the interface of B, which I will call B', and B must also have access to the callback interface of A, which I will call A'. So there is a mutual dependency between A' and B'. With the J2EE class loader model, each component has its own class loader (CL). Class loaders are arranged hierarchically. So the CL for B can be a dependent of the CL for A, or vice versa, but two class loaders cannot be mutually dependent.

So you cannot have A<-->B relationships between J2EE components. Apparently OSGi is meant to be addressing this, but it seems a pretty fundamental language flaw to me. It will be interesting to see how they plan to get over this one!
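For contrast, here is the same A <--> B shape sketched in Python with hypothetical classes. In a late-bound runtime the mutual dependency is unremarkable:

```python
# The A <--> B relationship from the example: A configures B and
# registers a callback with it; B notifies A. In a dynamic runtime there
# is no class-loader hierarchy to fight, so the cycle just works.

class A:
    def __init__(self):
        self.events = []
    def wire(self, b):
        b.register(self.on_event)    # A depends on B's interface...
    def on_event(self, msg):
        self.events.append(msg)

class B:
    def __init__(self):
        self.callbacks = []
    def register(self, cb):          # ...and B depends on A's callback
        self.callbacks.append(cb)
    def fire(self, msg):
        for cb in self.callbacks:
            cb(msg)

a, b = A(), B()
a.wire(b)
b.fire("hello")
assert a.events == ["hello"]         # B notified A, no hierarchy required
```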


Oh yes, I forgot Spring. Well, IoC is nothing new. It allows you to separate interface from implementation, deferring binding until runtime (back to message sends versus virtual function calls again). So again, by default all dynamic OO objects have this property. In Java it can be achieved with the use of an Interface and the reflection API, which is effectively what the Spring bean factory does. Fortunately Spring doesn't go in for this separate class loader nonsense, so no class loader hell here. But if you choose to package your Spring application into a WAR, then you have no runtime binding of components either; so A<-->B with Spring becomes AB, and if you want to change implementation to A<-->C at deploy time, where C implements B', you can't.
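A minimal sketch of the IoC idea in Python (the class names are hypothetical): the concrete implementation is chosen at runtime from configuration, which is roughly what a bean factory does via an Interface plus reflection:

```python
# IoC in a nutshell: the caller depends only on the "send a message"
# shape, and the concrete class is bound at runtime from configuration,
# so swapping implementations is a config change, not a code change.

class SmtpMailer:
    def send(self, to, body):
        return f"smtp:{to}:{body}"

class FakeMailer:                    # e.g. a test double
    def send(self, to, body):
        return f"fake:{to}:{body}"

REGISTRY = {"smtp": SmtpMailer, "fake": FakeMailer}

def make_mailer(config):
    # Late binding: look the class up by name and instantiate it.
    return REGISTRY[config["mailer"]]()

assert make_mailer({"mailer": "fake"}).send("bob", "hi") == "fake:bob:hi"
assert make_mailer({"mailer": "smtp"}).send("bob", "hi") == "smtp:bob:hi"
```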

So it turns out that Spring isn't a component model at all - oh dear!

So neither of these so-called "Component frameworks" can do what Smalltalk did out of the box, over 30 years ago. Maybe they should have discussed this poignant fact on TSS, instead of squabbling like school girls.

Paul.

Sunday, February 04, 2007

Java - State of Denial

I've been participating in an interesting discussion on Tuple Spaces over on TSS

Actually the discussion was on JavaSpaces, but since Tuple Spaces is the "non-branded" name for the architectural pattern - we quickly moved on to Tuple Spaces. There were some useful posts by people knowledgeable in the "distributed" Java technology space - like Cameron Purdy of Tangosol, but the debate quickly deteriorated (IMO) into a discussion of what you should and should not do with a Tuple Space.

To me this seemed very odd, because Tuple Spaces as a concept has very few restrictions; in fact the API is very simple, just four verbs (read, write, notify and take). So why the big discussion about what you can and can't do? Then it dawned on me. Some of us were talking architecture, whilst others were still talking implementation. So what Cameron was perhaps saying was that the Java implementations of Tuple Spaces have various limitations. Given that Java and distribution is Cameron's area of expertise, he could well be right.
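The whole API really is that small. Here is a toy in-memory tuple space in Python with the four verbs; it is a sketch of the pattern only (no blocking, no distribution), not any particular JavaSpaces implementation:

```python
# A toy tuple space: write adds a tuple, read matches non-destructively,
# take matches and removes, notify registers a callback fired on a
# matching write. A template matches a tuple field-by-field, with None
# acting as a wildcard.

class TupleSpace:
    def __init__(self):
        self.tuples = []
        self.listeners = []

    def write(self, tup):
        self.tuples.append(tup)
        for template, cb in self.listeners:
            if self._matches(template, tup):
                cb(tup)

    def read(self, template):            # non-destructive
        return next((t for t in self.tuples
                     if self._matches(template, t)), None)

    def take(self, template):            # destructive
        t = self.read(template)
        if t is not None:
            self.tuples.remove(t)
        return t

    def notify(self, template, callback):
        self.listeners.append((template, callback))

    @staticmethod
    def _matches(template, tup):
        return len(template) == len(tup) and all(
            p is None or p == v for p, v in zip(template, tup))

space = TupleSpace()
seen = []
space.notify(("job", None), seen.append)
space.write(("job", 42))
assert space.read(("job", None)) == ("job", 42)
assert space.take(("job", None)) == ("job", 42)
assert space.read(("job", None)) is None
assert seen == [("job", 42)]
```

Everything else people argue about (persistence, transactions, distribution) is an implementation concern layered on top of these four verbs.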

Yet I know of at least one other implementation, not in Java, that isn't limited in the way he describes. So why would someone with his level of expertise not know of other implementations too?

I think this is a symptom of something I've seen descend over the Java community over the years. I think what has gone on with the Bush Administration over Iraq in some ways resembles what has happened to the Java community:


The first similarity is having all the answers. The JCP was meant to be the answer to everything. By committee Java would flourish and evolve. All good ideas would emanate from the JCP, and we didn't need to look elsewhere - Baker/Hamilton, no thanks!


The second similarity is staying the course. What do you mean that AppServers and EJBs are dead and buried? We intend to stay the course despite the fact that no one is adopting EJB 3.0.

The third is denial. Java is still cutting edge, we are leading the vanguard in technology innovation - we will win through!

The Java community has become somewhat inward looking and entrenched IMO. Many of the interesting ideas are coming from outside Java, but the Java community, consumed by its own grandeur, is blissfully unaware of the rest of the world.


The sad thing is that even Microsoft is evolving. WPF and WPF/e will deliver on the promise of Applets, which Sun made years ago. In contrast, the Java community has stagnated, stuck with a language designed in a hurry to power toasters. Nothing is wrong with that, btw; we've all got to start somewhere. The problem, though, is that through denial no one sees the need for change.

So C# 3.0 is adopting even more from functional languages, whilst Java thinks about closures. Ruby comes along and shows what can be done with a dynamic message send, and the Java community slays the messenger, Bruce Tate. The last time I looked, Bruce was hiding out on the IBM forums writing articles for people who were interested in "crossing borders" like he had.

So the experts in distributed data solutions in Java haven't heard of Croquet. I guess it is not of interest, even though the Croquet team lead is the guy who invented Objects in the first place!

I don't believe Java is going to go away any time soon, but unless this culture of denial changes I can see it suffering a long, lingering death. That in itself is not necessarily a bad thing; times move on. The problem, though, is that a lot of programmers have identified themselves with "Java". So they are no longer just programmers, they are "Java" Programmers. I once responded to a post that was clearly antagonistic towards Ruby from someone who openly admitted that he had only "read about it". Given his ignorance, I was surprised at his antagonism. When I asked him why, he came out with all the usual static typing arguments, and he also mentioned that he would not go on to a Ruby forum and make comments on Ruby. I guess by implication, he meant that I should not comment on a Java forum because I was obviously a Ruby programmer. I guess it never crossed his mind that I may program in both!

I could imagine Rumsfeld coming out with a stupid statement like that. So the Java community has become tribal and partisan - and even Microsoft is more open minded and outward looking nowadays (have you noticed that XAML looks a lot like Adobe Flex?).

I guess the final service Java can provide to the developer community is as a repository for developers with a closed mindset. People able to think for themselves can just leave, much like Bruce Tate did, and move on to more interesting things. And the "I've just put the latest JCP acronym on my CV" programmer can stay with Java safely out of everyone’s way!

Paul.