Friday, October 21, 2005

A TDD Episode

I'm at the tail end of a hairy refactor using JFreeChart. My client has assets whose quality they need to manage. The application uses a JFreeChart to show a Colour Coded Quality (CCQ) representation of an asset. The Y axis identifies four quality measures and the X axis is the longitudinal position along the asset. A colour coded item at each X/Y intersection indicates the asset quality at that location, for that quality measure. (Sorry about the terse description, but I'm trying my best not to give away the identity of my client).

Anyway... There is an existing implementation, but the data generation, plotting, and rendering are all terribly intermingled. The rest of the app has been rewritten to get away from an abstract data model (don't ask, it ran like a dog), but the new charting had yet to be done. So we needed the old chart, but we didn't want the old data generation code.

How to proceed? Well, I started with a standard JFreeChart plot that sort of did what we needed, a ScatterPlot - GREEN bar. What to do next? The guts of the dataset code could be clearly identified, scattered around the old code, so I copied and pasted it from the old to the new. A few changes, plus the creation of some canned data, and... GREEN bar (a running scatter plot using a CCQ dataset). The CCQ chart shows rectangular data points (DataItems) of varying colours depending on the quality: for example, a "compliant" DataItem is green and a "failed" item is red. The old renderer did this, so we copied and pasted the old rendering code - GREEN bar (a running scatter plot with CCQ data and CCQ colour rendering). Along the way we identified lots of small refactors that needed to be done, but we just took a note of them. After all, we didn't have a true green bar, as our plot wasn't exactly the same as the original. We did the same process with the "Plot" and... again, with a bit of work, GREEN bar.
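
For flavour, here's a minimal sketch of the kind of renderer this process produces. It leans on JFreeChart's XYLineAndShapeRenderer and its per-item getItemPaint hook; the CCQDataset interface and its quality codes are made up for illustration, not the client's real code:

import java.awt.Color;
import java.awt.Paint;
import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;

// Hypothetical quality lookup; the real CCQ dataset is richer than this.
interface CCQDataset {
    int COMPLIANT = 0;
    int FAILED = 1;
    int getQuality(int series, int item);
}

// Renders each data item in a colour determined by its quality code.
public class CCQRenderer extends XYLineAndShapeRenderer {

    private final CCQDataset ccqData;

    public CCQRenderer(CCQDataset ccqData) {
        super(false, true); // no connecting lines, shapes only
        this.ccqData = ccqData;
    }

    // JFreeChart calls this for every plotted item (row = series, column = item).
    public Paint getItemPaint(int row, int column) {
        switch (ccqData.getQuality(row, column)) {
            case CCQDataset.COMPLIANT: return Color.GREEN;
            case CCQDataset.FAILED:    return Color.RED;
            default:                   return Color.GRAY;
        }
    }
}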

Now we had the three main components needed for a chart: a custom dataset, renderer and plot. Our chart factory was still the original Scatter Plot, but everything else had been migrated to custom CCQ charting code. Yet our plot didn't look like the original. We were missing the labels on the Y axis and couldn't work out why.

After a while of head scratching we decided to plug our new dataset, renderer and plot into the old code. We then refactored the old "factory" until it was all in one place and looked similar to ours. After each small change we would run the code and check that everything was still GREEN. We finally got down to the last difference: the refactored old code didn't set the chart orientation, relying on the default; our new factory did. So we decided to add this line of code to the old code and BANG, RED bar. It turns out that all we needed to do was remove this line from our new code - still not sure why, possibly a bug in the JFreeChart library.
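
For the record, the line in question sets the plot orientation. In JFreeChart terms it looks something like this (illustrative, not the client's exact code):

import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.chart.plot.XYPlot;

public class OrientationDiff {
    // Adding this explicit orientation to the old factory gave a RED bar;
    // removing the equivalent line from our new factory fixed our plot.
    static void setOrientation(JFreeChart chart) {
        XYPlot plot = chart.getXYPlot();
        plot.setOrientation(PlotOrientation.VERTICAL);
    }
}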

So, time for a mini retrospective. What were the lessons learned?

  • You don't need JUnit to do TDD. Often, if the code compiles and runs, that is a good indicator of a GREEN bar. You may only have partial functionality, but if your code does what you expect it to do, then that's a GREEN bar.
  • Refactoring to patterns really works. It allowed us to make some really big refactors in one step. We identified our target design by looking at the JFreeChart ScatterPlot code.
  • When refactoring on the grand scale, getting the overall structure right and getting back to a green bar as soon as possible is the main goal. Small local refactors can happen later. By keeping a list of refactors you are able to choose the order in which to do them.
  • Always "listen" to the code. Code smells will tell you what to refactor. Commenting out declarations and seeing the resultant errors in your IDE also tells you a lot. A refactor with a lot of dependencies isn't a good candidate to do first, because it could leave you on a RED bar for a long time. Do something easier first, and get back to GREEN.
  • If you find yourself on a RED bar for a long time, undo the change, get back to GREEN, and make a series of smaller changes. This is how we identified the orientation bug. If an ambitious refactor is failing, you always have the option to go back and take smaller steps.
  • Small refactors are better. In hindsight, after identifying our target design pattern, we should have refactored the old code to the new design in small steps, rather than copying and pasting. We would probably have maintained a better rhythm and learned more about the old code, whilst spending shorter intervals on RED.

Thursday, October 20, 2005

How ReSTful are you?

I've joined the REST discussion group on Yahoo Groups. And a very knowledgeable and friendly bunch they are too.

After following the group discussions I'm beginning to realise that there is a bit more to this REST stuff than I first thought.

For example, one would assume that Hessian web services are pretty ReSTful (they obey the REST constraints). After all, what could be wrong with:

http://myserver/myServiceInterface?method=doSomething&param1=value1


Where 'myServiceInterface' is a remote web service and 'doSomething' is the method on the remote interface you wish to invoke.

Well, the uniform HTTP connector has its own generic commands: GET, PUT, DELETE, POST, etc. So what happens when your client sends:

http://myserver/myServiceInterface?method=deleteStuff


Using a GET?
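
The answer is that you've just destroyed data with the one verb the web assumes is safe to repeat or cache. Here's a hedged sketch of the contrast in plain Java, using java.net.HttpURLConnection; the resource-style URL is made up for illustration:

import java.net.HttpURLConnection;
import java.net.URL;

public class DeleteStuff {
    public static void main(String[] args) throws Exception {
        // RPC tunnelled through GET: proxies, caches and crawlers are
        // entitled to treat a GET as safe to repeat or prefetch.
        URL rpcStyle = new URL("http://myserver/myServiceInterface?method=deleteStuff");
        HttpURLConnection get = (HttpURLConnection) rpcStyle.openConnection();
        System.out.println("GET returned " + get.getResponseCode());

        // The uniform connector alternative: name the thing to delete as
        // a resource, and apply the generic DELETE command to it.
        URL resource = new URL("http://myserver/stuff/42"); // hypothetical resource URL
        HttpURLConnection del = (HttpURLConnection) resource.openConnection();
        del.setRequestMethod("DELETE");
        System.out.println("DELETE returned " + del.getResponseCode());
    }
}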

Sunday, October 16, 2005

REST: A not so Uniform Connector Example

Still buzzing with the possibilities of REST. One of the things I don't like about HTTP is that it has a tendency to reduce everything down to the lowest common denominator: strings. So all your parameters are strings, and often your data is one long string in the form of XML. This is a result of the REST constraint of a uniform connector across the entire web. But the world isn't uniform; it contains different realms, each with its own unique characteristics. Isn't it sufficient that your connector is uniform within its given realm?

For example, say I want to get access to a resource on a remote server running Squeak. Why not create a Squeak specific connector by subclassing:

HttpConnector subclass: #SqueakConnector
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'REST-Connectors'
"A connector that understands the Squeak object representation"

formUrl: squeakServer object: anObject message: aMessage params: parameters
    "Answers a URL object"

GET: aUrl
    "Answers the Squeak object at aUrl"

A Squeak client would look like:

con := SqueakConnector new.
url := con formUrl: #myServer object: #UserDB message: #getUser
    params: (Dictionary new at: #user put: 'fred'; yourself).
"The cascaded #yourself answers the Dictionary itself, not the value 'fred'."
fred := con GET: url.

The actual URL passed to HttpConnector>>GET: would be:

"http://SqueakUniverse/myServer/UserDB/getUser?user=fred"

As long as all servers in the "SqueakUniverse" agree on their object representation and use an agreed URL format then this should work. And my client code doesn't need to worry about dealing with the actual object representation on the wire.

I'm sure this is still RESTful, honouring the concept of a uniform connector, but constraining its scope to a specified namespace. I need to see if this type of stuff is already being done.

Saturday, October 15, 2005

REST and OO Abstraction

I'm still getting over just how simple REST is. In my first post on REST I was wondering where a protocol like IIOP fits in. Mark's comment kindly explained that, as far as REST is concerned, it doesn't. So I've been thinking about how abstraction fits into REST. In REST a datum is data that is exposed by a component. Which makes sense, since data that you want to communicate to another component cannot be hidden. So the datums exposed by my object are resources that I can look up:

// client side lookup
datum1 = connector.GET("protocol://myServer/myServiceObject/Datum1");

OK, this is pretty fine grained; what if I want to get a lot of data at once? Well, I can think of two options. Option 1: expose all of the data in my object and send it through the connector to my client. This is what IIOP does through marshalling. What IIOP goes on to do is un-marshall my data back into an object at the client end, which breaks the REST constraints on a connector.

So without unmarshalling we have:

// client side lookup for a data structure
structuredDatumXML =
    connector.GET("protocol://myServer/myServiceObject/ObjectDataAsXML");

// data structure access
structuredDatum = new XMLObject(structuredDatumXML);
datum1 = structuredDatum.getElement("datum1").getValue();

But we've broken encapsulation, so the messaging is no longer Object Orientated. Going back to IIOP, my structuredDatum would be a data transfer object (DTO), with a set of getters for each data element. But a DTO is really just a data structure too, so IIOP doesn't help. So how can we do this in REST without breaking encapsulation? Option 2: how about mobile objects:

// client side lookup for a real Object
objectDatumSerialised =
    connector.GET("protocol://myServer/myServiceObject/Serialise");

// client side use of object
objectDatum = ObjectLoader.load(objectDatumSerialised);
datum1 = objectDatum.getDatum1();

Treating an object as a resource should be possible with VM based languages.
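
Here's a concrete sketch of Option 2 in plain Java, using nothing but standard object serialisation over HTTP. The URL and the Datum class are made up; the assumption is that the server writes a serialised object to the response, and that the client has (or can fetch) the class's bytecode:

import java.io.ObjectInputStream;
import java.io.Serializable;
import java.net.URL;

public class MobileObjectClient {

    // Hypothetical datum class. Its field stays private, so
    // encapsulation survives the trip across the wire.
    public static class Datum implements Serializable {
        private String datum1;
        public String getDatum1() { return datum1; }
    }

    public static void main(String[] args) throws Exception {
        // The serialised object is just another resource with a URL.
        URL url = new URL("http://myServer/myServiceObject/Serialise");
        ObjectInputStream in = new ObjectInputStream(url.openStream());
        Datum objectDatum = (Datum) in.readObject();
        in.close();

        // Client side use of the object, through its methods only.
        System.out.println(objectDatum.getDatum1());
    }
}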

So after my fears about abstraction and encapsulation, it turns out that the REST constraints do not preclude abstraction at all. I'm new to this REST stuff, so the above could be very wrong, and all comments are welcomed. In particular, I'm interested in finding out more about "self describing data", as it sounds like a mechanism that will help abstraction.

So it looks like I don't need to lose Object Orientated abstraction to use REST. Until someone explains otherwise.

Friday, October 14, 2005

REST is so Simple

In my last post I was struggling to understand REST. After looking at this example it all made sense. Almost too simple to be true. After reading the dissertation, I was expecting REST to be a lot more complex than it is. In practice the REST style (a set of architectural constraints) is simplicity itself.

If you intend to read the dissertation, make sure to read chapter 1 before reading chapter 5. Chapter 1 gives a precise definition of terms, which you'll need. Or, if you're like me, just jump straight to the example. Don't be put off by its brevity; connectors, components and datums as defined by REST are all there.

Will REST win out over SOAP? Well, I for one won't be using SOAP/WSDL/UDDI and the like. Why would anyone want to use that stuff, given the choice?

Thursday, October 13, 2005

REST to the Rescue?

I've heard a lot of good things about REST (Representational State Transfer). Apparently REST represents a distributed computing architectural pattern that brings simplicity and consistency to the web. Well, at least that is what I have been led to believe. Anyway, I've finally found a description of REST that I can understand.

After reading through the description (and it is quite lengthy) I was somewhat disappointed. I'm not sure why, but somehow I was expecting more. I come from a telecoms background, and I am pretty familiar with network protocols and communications. I remember reading the GSM protocols for the first time; I was immediately impressed. GSM is a very complex system with a number of orthogonal protocols that work together to address the various issues associated with a single application: mobile voice and data services. After reading the REST description, I was left thinking: what is this architecture meant to do? What problem is it meant to solve?

This seems to be the key issue. With the exception of hyper-links, there doesn't appear to be a generic web application; instead there are a number of disparate applications that happen to share the HTTP protocol (out of convenience) and URIs as a common addressing mechanism. REST somehow tries to come up with an architectural pattern that addresses all these disparate needs, and in the end becomes so unspecific it is almost meaningless (IMHO).

The only benefit I can see is that it dissuades extensions to the web that are not consistent with good network communications practice. It appears to be an attempt to limit some of the network protocol abuses that have characterised the web from its inception.

What is missing is specific protocols for specific applications. For example, I once hoped that IIOP would become the established protocol for client/server applications over the web. In my opinion HTTP is just inappropriate for such applications, and if the W3C promoted an alternative I'm sure firewalls would be opened up (in the same way that FTP is accepted as the protocol for file transfer, telnet for terminals, POP3/SMTP for mail, etc.).

The article talks about mobile code, but again nothing specific. I may be missing the point here, in which case can someone please enlighten me. I was hoping that REST would be the antidote to SOAP, UDDI, SOA, etc. Please tell me that my hope isn't misplaced.

Wednesday, October 12, 2005

The Backlash Against Open Source has Started

I've recently been looking into Open Source Software. Whilst searching for blog entries on the subject, I came across this post.

It's worth a read. The basic gist of the argument made against open source is that open source developers are being exploited by unscrupulous software vendors. To stop the 'exploitation', the author suggests that the government should step in and 'regulate'.

It may come as no surprise that the author is a Microsoft supporter.

If anyone out there comes across other interesting blogs/articles on Open Source please let me know.

Monday, October 03, 2005

Is .NET (Mono) The Open Source Platform of the Future?

I've been looking into Mono, the open source .NET clone. In an attempt to win people over from Java, it appears that Microsoft has gone to some lengths to ensure that .NET is perceived as open.

In fact, my understanding is that the .NET VM (Common Language Runtime, or CLR in Microsoft talk) specification has been submitted to ECMA, an international standards body.

I guess the people at Microsoft thought that as long as they controlled the only .NET implementation, .NET would remain effectively proprietary. But with an open standard anyone can create an implementation.

The Mono implementation supports Windows, Linux and OS X on the Mac. In addition, Mono has ported a broad set of languages to the .NET platform, including Ruby.NET, #Smalltalk and even Java.

I first thought that this common language VM thing was just a marketing ploy. But if you think about it, it is a great idea. What normally locks developers into a given platform is the supporting tools. But with a VM your code will run on any platform that the VM has been ported to; thanks to Mono, for .NET that means all major platforms. The second barrier to developers changing technology is library support. If the only good user interface library for Windows is written in C#, then I guess you're stuck using C#. But with .NET/Mono you could choose to program in, say, Ruby.NET and still use your C# libraries.

So you program in Java, but there is a great library in Gtk#? No problem: just compile your Gtk# libraries into a Java jar (Java stubs) and link at run-time.

I'm not sure just how good Mono is, but I do know that it is backed by Novell, and with time it can only get better. From the look of things on the Mono website, it is more than ready for production use. My other concern is that I'm not sure just how well the CLR supports languages other than C#. In particular, I'm not sure how much support is present for dynamic languages like Smalltalk and Ruby. If the CLR byte code fully supports both dynamic and static languages, or is extended to do so, then .NET has got to be the ideal open source platform.

Think about it. You decide that Ruby is more productive than C#? Well, just use Ruby.NET. You've got loads of Java libraries you want to use? Well, if you've got the source, just re-compile for .NET, or if not, dynamically translate the jars to .NET at runtime.

I don't think the Java/J2EE world has an answer to this kind of flexibility. The EJB monolithic container model has failed, and the lightweight, mix and match, services a la carte model is available on .NET as Spring.NET and NHibernate.

I'm keeping an eye on Mono. If it delivers, we as developers will be free to use whatever platform, language or libraries we choose. And with .NET locked into the ECMA standards process, I don't think Microsoft can do anything about it.

Microsoft setting the world free. Isn't that a thought!

Open Source - Who's doing it and why?

In my last post I discussed the effects of open source software on the Java community. In my opinion, Java was a reaction to a perception by many in the software industry that, unless something was done, Microsoft would end up owning both the desktop and the server. If this were to happen, then in time Microsoft would be sure to use its dominant position to shut out other server software vendors. After all, this is exactly what they did to Lotus, WordPerfect and others on the desktop.

As a consequence Sun, Oracle, IBM et al. got together and agreed that the Java VM would be "the common means of passage" on the server. Cross-platform, machine independent, OS-neutral and with open APIs, Java would fend off Microsoft's attempts to own the server.

Interestingly, Java has become a runaway train, no longer under the control of the vendors. The developer community have taken the open Java APIs and crafted tools and APIs of their own, all open source. So who owns Java now? Well, it appears to me that the open source community are in the driving seat.

So who are the open source community? Well, things have changed a bit from the days of Richard Stallman. It appears that many open source projects have big backers. The Mono project, a .NET clone, is backed by Novell, and IBM backs Eclipse (the leading Java IDE). Linux has become an open source phenomenon in its own right, with backers such as Sun, Oracle and most of the big players. The only large player not somehow involved in open source is Microsoft.

Open source has become a viable business model for many. Look at Skype, which was recently sold for billions. A less dramatic open source business model is to offer consulting and bespoke variants on the back of an open source product. It seems that different people are involved in open source for different reasons.

I have always thought that there should be a "common means of interaction" in software, much like language or law in society. This common base should be freely available to everyone and owned by no one. This seems to be exactly what we're now getting in software.

But why are the vendors backing it? One argument is that producing software open source is cheaper. True, but you can't sell it. I've read an article that suggests open source is being used as a means of wrestling market share from the incumbent monopoly (Microsoft). It is difficult even for a monopoly to compete with something that is free. Once the free software becomes the de facto standard (70-80% market share), a new market is available where the backers of the open source product can sell consultancy services and add-ons. I believe this is already happening with Linux.

For example, I wonder how many Oracle licenses have been sold under Linux. Without the success of Linux, what percentage of those licenses would have gone to SQL Server under Windows? Most, right?

If this strategy is true, then it sounds like a dangerous game for the vendors, as they are sure to lose control in the same way that they have lost control of Java. And in a world where customers feel empowered to pick and choose their own software without vendor endorsement, innovation and new ideas could quickly leave the vendors flat footed. Look at the emergence of Ruby on Rails.

Anyway I'm going to get myself a good book on open source. I think it's time I took the open source movement a lot more seriously.

Saturday, October 01, 2005

Java for Sale

I was an early adopter of Java. I was easily persuaded by the arguments in favour. A few years before Java came out, I had dabbled with Smalltalk, so I knew the advantages of a VM. Besides, things couldn't go on the way they were with C++ and memory leaks.

Back then Java still had many detractors. As with Smalltalk, performance was the main criticism. Yet I knew that micro-processors would get faster over time, and Java was fast enough for most applications even back then (desktop GUIs were sluggish though). I knew Java wasn't as elegant as Smalltalk, and the use of a VM wasn't anything new, but Java was type safe, C-like in syntax, and reasonably performant. For the majority of C++ developers trying to produce server side software, I could see that Java and J2EE would be a great leap forward.

It took me about two years to convince my company to use Java. By this time Java had gained quite a momentum in the industry. J2EE application server implementations were beginning to emerge, and consultancies like Valtech were championing Java as the technology of the future. In fact quite a few big name companies had thrown their weight behind Java. By this time Sun had repositioned themselves as a software company and were practically betting the business on Java. Oracle had jumped on the bandwagon, and as well as creating their own J2EE App. Server, had written Java into the core of their database product. Initially IBM were on the fence, backing both Smalltalk and Java as the OO champions that would steal developer mind-share from Microsoft. Over time, though, IBM reduced support for Smalltalk and became massive backers of Java (I think even bigger backers than Sun).

I could see the power behind the Java promise of "Write once, run anywhere", especially in light of the threat from Microsoft. Without Java, we could all end up Microsoft developers, with anti-Microsoft vendors forced out. Microsoft had shown their intent for world domination by refusing to "play fair" within the OMG. It was all out warfare.

The anti-Microsoft camp was throwing millions into Java, downloadable free over the internet. The thought crossed my mind more than once: how did these companies expect to make money? If Java software was available free, then developers would try it? Yeah, OK. So if developers were using Java then companies would need to support it? True, but what could you sell to companies?

The first commercial Java tool that I used was WebLogic (no, not BEA WebLogic, just WebLogic as it was back then). This was a relatively lightweight App. Server written totally in Java. Easy to use, and configured through a single property file. It worked a treat. A developer license of £2K per developer and a run-time license of £11K per server seemed a bit steep, but was reasonable given the gains in developer productivity.

But none of this money was going to Sun. How was Sun going to make money out of Java? I'm not sure the people at Sun knew, but if world domination was the name of the game, it would be better if the glue pulling everything together on the server was the Java VM rather than the Microsoft Windows OS.

The first big non-commercial Java tool I used was the Apache web server with built-in J2EE Servlet support in the form of Tomcat. The Tomcat servlet engine performed much of the server side resource management that full blown App. Servers did, and it wasn't long before some began to recognise that for most applications, the full J2EE stack (Servlets, Stateless/Stateful/Entity Beans, JMS, JNDI, JTA, etc.) just wasn't needed. Besides, if you really did need a full J2EE App. Server you could download one for free: JBoss.

By this time the marketing machine behind Java was in full swing. The App. Server market had matured, with IBM's WebSphere and BEA WebLogic as the market leaders (Sun missing out again). These new App. Servers weren't simple and clean like the original WebLogic. Instead, they were cumbersome and clunky. Awkward web based App. Server administration was in vogue, and complex XML application configuration could take weeks. Feature bloat had set in, and they were sold as all things to all men. The complexity of these App. Servers was a real problem. Class loader hell was a common experience for many J2EE developers, and gaining mastery over your App. Server took time and dedication.

Microsoft had woken up to the marketing power of the application server. Middleware was the new industry buzz. After being forced to give up on "Microsoft Java", Microsoft produced their own "clean-room" Java clone, C#. They then went on to market their new middleware vision. In addition to "write once, run anywhere (Microsoft chooses)" they added "write in any language you like (if we choose to support it)" and "expose your business services over the web (to anyone who uses our protocols)". The .NET era had arrived. It would take about three years before working .NET code would ship, but in marketing terms .NET was real and the industry was holding its breath.

Sun had put in place the JCP process as a "community" led way to create and improve Java standards. The problem was that the JCP was a community of Java vendors, not Java users. Sun has always flirted with the open source community as a way of gaining credibility with developers. The fact that you could download the Java JDK source code for free made Java appear almost open source, yet there was a growing group of vendors who had bet a lot of money on Java and were looking for a return on investment.

Every .NET announcement would result in a corresponding JSR being raised to provide the same functionality in Java. The vendors were playing the "my App. Server has more features than yours" game with Microsoft and each other. Whilst this was going on, the developer community were realising that much of the stuff coming out of the JSRs didn't actually work that well (EJB 2.1 Entity Beans comes to mind), and they were busy producing better open source solutions themselves. Eventually the gap between what worked and what the JSRs had specified became untenable.

Spring and Hibernate showed what could be done when you actually tried to solve problems rather than sell product. The cat was out of the bag, and all the JCP could do was try to standardise what the developer community had already chosen for themselves. I would argue that this is exactly what the JCP should have been doing all along.

They say you can't serve two masters. The JCP process can't be in the interests of both vendors and developers at the same time. The developer community have taken matters into their own hands and are busy shaping their own future. Microsoft has gone very quiet with .NET recently. The open source community has targeted .NET too, with Mono. The Java vendor community are still trying to pretend that it is business as usual with JSF (JSR 127) and EJB3 (JSR 220). The JSF technology is practically obsolete before the first decent vendor tool has shipped. Everyone knows that the future of web development lies with continuations and Ajax, both missing from JSF, but available open source (Squeak Smalltalk and Seaside). EJB3 is available open source too, as Spring and Hibernate. Why wait to pay for a buggy Spring/Hibernate implementation from BEA?

So my take is that Ruby will carry on picking up momentum on the back of Rails, and, who knows, even Smalltalk may see a resurgence in popularity with Seaside (I hope). As for Java, well, it's still for sale, but I think there will be fewer takers in the future.