Saturday, April 26, 2008

Newspeak - First Impressions - Underlying Beliefs

First thing to say, after watching Gilad's presentation on Newspeak, is that I don't feel qualified to deliver a rigorous critique. This in itself raises an interesting question. After 18 years of software engineering experience my "education" is insufficient to evaluate a new OO language that builds on ideas that are over 20 years old. Why? Well, my professional career (the usual diet of Java APIs and frameworks) hasn't prepared me for Newspeak. In fact it is only the self-education I did at home in my spare time that allows me to grasp most of what Gilad is talking about. This telling fact says more about our industry, I believe, than it does about Newspeak.

Beliefs and Motivation
You may find beliefs a strange place to start an evaluation of a programming language, but I think that the beliefs of the protagonist behind a language lie at the core of what the language is about, and of whether it will appeal to me. Language design, like any other form of design, is about trade-offs, and it is your beliefs that determine the relative value you place on competing ideas and principles. The first thing to say is that Gilad is still a Smalltalker at heart, and he describes Newspeak as a direct descendant of Smalltalk. The other significant influences he mentions are Self, Beta and Scala, in that order of significance. Scala seems like the odd man out here, but when you realise that its influence is much smaller than the others' (the formalisation of object constructors, which is something Smalltalk never did), the idea of Newspeak as a successor to Smalltalk makes sense.

My concern was whether Gilad believed in pandering to the needs of the tool over the beauty of the language from the perspective of the programmer. Well, he makes his feelings clear in his preference for:

id = letter, (letter | digit) star. (Newspeak)

in comparison to:

id = letter().seq(letter().or(digit()).star()); (Java)

When attempting to express the BNF Grammar:

id = letter, (letter | digit) *

He refers to the redundant parentheses in the Java version as "solution space noise". He says that what we should be doing is expressing our problem. Stuff in the code that is there to satisfy the tool (the compiler) is noise and should be reduced to a minimum. This belief sits fine with me. It also fits with my artist analogy. An artist is more interested in his subject than he is in his tools. Languages like C++ and Java just miss this point, and when I think about it, this is the central idea I learned from my first encounter with Smalltalk all those years ago. Just going through Squeak the other day I stumbled on things like this:

(2 + 3i) sqrt.

What do you think this does? Yes, you guessed it. It answers the square root of the complex number 2+3i. Now imagine expressing this in Java.
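To make the contrast concrete, here is a sketch of what that one-liner might cost in Java. The `Complex` class below is hypothetical (Java's standard library has no complex-number type), and the method uses the standard polar-form formula for the principal square root; treat it as an illustration of the ceremony, not a definitive numerics library.

```java
// Hypothetical minimal Complex class -- Java has no complex literal, so
// even the smallest version of "(2 + 3i) sqrt" needs all of this.
public class Complex {
    private final double re, im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    // Principal square root via the polar-form identity:
    // sqrt(a+bi) = sqrt((r+a)/2) + sign(b)*sqrt((r-a)/2) i, r = |a+bi|
    public Complex sqrt() {
        double r = Math.hypot(re, im);
        return new Complex(Math.sqrt((r + re) / 2),
                           Math.copySign(Math.sqrt((r - re) / 2), im));
    }

    public double re() { return re; }
    public double im() { return im; }

    public static void main(String[] args) {
        Complex root = new Complex(2, 3).sqrt();  // vs. (2 + 3i) sqrt.
        System.out.println(root.re() + " + " + root.im() + "i");
    }
}
```

The Smalltalk version reads as the problem; the Java version is mostly solution-space scaffolding around it.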

OK, if Smalltalk is so beautiful, what is left to be solved in Newspeak? Well, plenty actually. In terms of beauty, Smalltalk has its warts too, like having to place the word "self" before each message send when sending a message to yourself. Newspeak, like Self, makes "self" as the receiver implicit.

The previous BNF grammar expressed in Smalltalk is:

id := self letter, ((self letter) | (self digit)) star.

Which is prettier than Java, but not as pretty as Newspeak.

Smalltalk also lacked the ability to define encapsulated abstractions larger than a single class. So although Smalltalk is uniform and "turtles" all the way down, when it comes to classes everything is horribly coupled in a single global hashmap. Newspeak solves this modularity problem too. So Newspeak is turtles all the way down and all the way up as well, leading to pluggable libraries etc. It achieves this by borrowing the idea of nested classes from Beta: a module or library is a class that contains other nested classes. I'll discuss this feature in more detail in a future post, but it is very significant, and Gilad highlights it as the major difference between Newspeak and Smalltalk. It allows you to swap out abstractions at a category or library level without worrying about ugly inter-class dependencies (coupling). So Newspeak can be shrunk as well as extended, unlike Smalltalk.
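The flavour of this can be sketched even in Java, though only loosely: Java's nested classes are not Newspeak's (no message-based class lookup, no true mixins), and the names below are invented for illustration. The idea is that a "module" is a class whose nested classes are reached only through an enclosing instance, so an application can be handed a whole different library in one go, with no global class namespace involved.

```java
// Loose Java analogy of class-as-module. All names are hypothetical.
interface Platform { String greet(); }

interface Module { Platform platform(); }

class ConsoleModule implements Module {
    // Nested class: visible only through its enclosing "module".
    class ConsoleGreeter implements Platform {
        public String greet() { return "hello from console"; }
    }
    public Platform platform() { return new ConsoleGreeter(); }
}

class StubModule implements Module {
    class StubGreeter implements Platform {
        public String greet() { return "hello from stub"; }
    }
    public Platform platform() { return new StubGreeter(); }
}

public class Modules {
    static String run(Module m) {
        // The application depends only on whichever module instance it
        // was handed, so the whole family of classes can be swapped.
        return m.platform().greet();
    }

    public static void main(String[] args) {
        System.out.println(run(new ConsoleModule()));
        System.out.println(run(new StubModule()));
    }
}
```

In Newspeak the swap is pervasive and built into name resolution; in Java it only works where you have threaded the module through by hand.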

I've said that Smalltalk is turtles all the way down. Well that isn't quite true. All OO abstractions (turtles) should encapsulate their insides from the outside world. Smalltalk does do this for member variables, but in Smalltalk-80 all methods are accessible from outside the class in which they are declared, including methods that are meant to be private. When I first came across this, I thought it was a minor oversight that would be fixed soon, but interestingly the situation is still the same today in Squeak. Gilad intends to fix this oversight in Newspeak. So one of Gilad's motives seems to be to finally address the design flaws in Smalltalk.

I find that many of Gilad's throwaway ad-lib comments are more informative than much of his precise technical explanation. Throughout the presentation, and especially when he would switch to his Newspeak IDE, Gilad would make reference to the past failures of Smalltalk, and how he hoped to address those issues in Newspeak. In the eyes of many, Smalltalk never really made the transition from a research project to an industry-ready software engineering platform. Strongtalk was one attempt to complete that transition. Strongtalk dropped the "many windows" class browser approach of Smalltalk-80 and adopted a collapsible-pane approach that would be familiar to users of Eclipse today, and feels more natural to people coming from a "file" based metaphor for organising classes. Newspeak does the same. Thankfully Newspeak drops the type annotations of Strongtalk, with Gilad admitting that many of the late-bound scenarios that Newspeak is designed to address would be almost impossible to verify statically. Gilad also plans to have a much improved FFI (foreign function interface) for Newspeak, compared with a typical Smalltalk dialect like Squeak.

Smalltalk has always been an island. Once you were on it the weather and the scenery were great, but you couldn't bring anything with you from the outside world except your bathing trunks, and you found yourself missing a bunch of stuff you had gotten attached to. In contrast Ruby fits into the Unix ecosystem like a dream, and you can use almost any windowing system, foreign language library and API available under Unix (/Windows/Mac OS X) from within Ruby. You can also "shell out" to a Ruby script from a Unix program, so there is two-way communication. Gilad spoke about a similar ambition for Newspeak, with an FFI that allows callbacks from foreign functions, although he didn't go into this in detail. He does mention the use of Actors as a means of inter-process communication, so perhaps this is a pointer to how some integration may be done.

So to sum up: Gilad believes in clearly expressing the problem over assisting the tool (Man over Machine), and in uniformity and modularity ("turtles") all the way up as well as all the way down. He is motivated by creating a successor to Smalltalk-80 that complies with sound software engineering principles and interfaces well with the current incumbent technology ecosystem. His aim is a language which is both aesthetically pleasing and sound from a software engineering perspective. The end goal is an elegant and consistent software engineering platform for use in industry.

I was going to give my impressions on how well Newspeak in its current incarnation actually delivers on these lofty goals, but this post is already long enough. Besides, you can watch the presentation for yourself :) I'll be coming back to Newspeak in future posts, picking up where I've left off here.

Update 29th April 2008
Gilad has kindly offered the following corrections and clarifications:

Re: your review. Well thanks! A quibble or two:

1. The basic FFI with callbacks is working; the higher level convenience layer isn't there yet. We are just calling to/from C (or Objective C) at the moment. Using actors to interface to the outside world was Alan Kay's original plan, but it was far ahead of its time.

2. Types. I haven't given up yet. It's just our lowest priority and a very hard problem. I would like to support pluggable types in Newspeak, and we currently support and encourage type annotations even though they are not checked.

Friday, April 25, 2008

Making Great Music

Thinking out loud on the web is a risky business, but it has its benefits. With a little help from your friends it can lead to a deeper insight. The logic of the case made by Yardena and Andrew in response to my original post on Man versus Machine made sense, but it still left me feeling a bit uncomfortable.

Had my artistic analogy run out of steam?

I love reggae music. I grew up on it: Bob Marley, Dennis Brown, Gregory Isaacs, the reggae greats of the 1970s. In recent times reggae has become more commercial, with modern "artists" placing less emphasis on spirituality. I find this change rather regrettable, because without spirit the music becomes just noise.

Then I got to thinking more about those old reggae classics. If you listen to the original vinyl the music is punctuated by pops and clicks, and even drop-outs in places. Sir Clement "Coxsone" Dodd, the owner of the Studio One record label, had humble facilities at his recording studio in Kingston, Jamaica. I believe many of those originals were recorded using nothing but a simple eight-track in a small room without soundproofing.

Some of those classics have been remastered, which gets rid of the pops and clicks, but sometimes you lose a bit of the feel too. Don't get me wrong, those old classics are still great pieces of music, but imagine how they would have sounded if Clement Dodd had had the benefit of modern recording facilities back then? Great musicians and singers with great tools. Hmm...

The Answer: Man and Machine in Unison

In my last post I posed the question: what is more important when it comes to software development, the Tool or the Man? It was in response to Gilad's attempts to eliminate certain dangerous "features" from programming languages. I questioned whether this was the right focus, and I got some interesting responses.

This one in particular rang a bell:

Rather than looking at it as Man against the Machine, I see this as a challenge of building a bridge between the two, giving both sides respect and attention. I agree with Andrew - if the machine can't solve the problem it should delegate to the person, but do it simply and clearly. In a more philosophical sense, I think because computer science is so young, we have not figured out properly how to transition from "science" to "engineering" yet. To do this we need tools that are controllable and safe alongside creating good programming curricula to train "the pilots".

I think both Andrew and Yardena are right in their responses. It isn't necessarily one or the other, Man versus Machine. We need both working in unison.

Wednesday, April 23, 2008

Who rules - Man or Machine?

I've been following Gilad Bracha's blog on programming language design. The concepts Gilad touches on are profound and his blog is a really interesting read; I recommend it.

Some of Gilad's most recent posts have left me pondering whether he is focusing on the right things. Gilad calls himself a Computational Theologist. This is an interesting title, since theology has to do with beliefs and belief systems. From Gilad's posts on Monkey Patching and Cutting out Static, I get the feeling that Gilad believes that the machine should 'help' the programmer to do the right thing. What beliefs underpin such an assertion? Perhaps such a belief stems from an acceptance that most programmers are poor to mediocre and need all the help they can get? Maybe it stems from an idea that a computer program is more a piece of mathematical logic than a piece of creative art?

I am assigning a lot of beliefs here to Gilad, which are probably things he doesn't believe. The point remains, though: at a fundamental level we have the choice of either believing in people or believing in machines. Andrew has a post on Smalltalk where he refers to the original Smalltalk Byte article. The Smalltalk researchers had a human-powered approach to 'computer research'. Reading their paper it is clear that they believe in people and our inherent creative abilities. To them the machine is merely a tool. Man, the creative artist, exploits tools and mediums to express himself. A computer is merely one such tool with a unique set of characteristics. The purpose of a programming language is to allow Man to exploit the Computer.

It was interesting to hear John McCarthy say a similar thing. For example, he rubbishes the commonly held belief that "goto" is evil and should be banned. As an artist I'm not sure how I feel about people banning things and limiting my expression. Imagine a word processor that didn't allow you to use certain words like "fuck" or "piss". There is a strong argument that such words aren't the most effective form of communication, but can we say they should be banned in all instances?

There is an interesting point of debate here, and I have no firm conclusions, other than to say that people have far more potential than machines. At best, machines can help people to explore their full potential and extend their influence and reach. But a machine has no consciousness, no intelligence, no imagination. A machine has no spirit.

As humans we have two parts to our brain: a logical side and an emotive, artistic side. I believe that both sides are involved in everything we do, including programming, and we need to appeal to both. This means that the most correct and safe programming language may not necessarily be the most useful.

I love reggae music. I grew up on it: Bob Marley, Dennis Brown, Gregory Isaacs, the reggae greats of the 1970s. In recent times reggae has become more commercial, with modern "artists" placing less emphasis on spirituality. I find this change rather regrettable, because without spirit the music becomes just noise.

Thursday, April 17, 2008

What Really Matters?

My posts of late have been pretty opinionated and uncompromising. Why? Well, as a software industry we are pretty good at creating reasons to do a bunch of stuff that doesn't really matter. I guess doing this other stuff is easier than tackling the difficult task of doing what does matter.

Anyway, I stumbled on this quote on the C2 Wiki:
What really matters?

Software is too damned hard to spend time on things that don't matter. So, starting over from scratch, what are we absolutely certain matters?

1. Coding. At the end of the day, if the program doesn't run and make money for the client, you haven't done anything.
2. Testing. You have to know when you're done. The tests tell you this. If you're smart, you'll write them first so you'll know the instant you're done. Otherwise, you're stuck thinking you maybe might be done, but knowing you're probably not, but you're not sure how close you are.
3. Listening. You have to learn what the problem is in the first place, then you have to learn what numbers to put in the tests. You probably won't know this yourself, so you have to get good at listening to clients - users, managers, and business people.
4. Designing. You have to take what your program tells you about how it wants to be structured and feed it back into the program. Otherwise, you'll sink under the weight of your own guesses.

Listening, Testing, Coding, Designing. That's all there is to software. Anyone who tells you different is selling something.

-- KentBeck, author of ExtremeProgrammingExplained
If I had my way these four points would be prominently displayed in every IT Department out there.

Monday, April 14, 2008

Doing It Twice

As an Agile Coach I have found it somewhat disheartening the way software development organisations choose to cherry-pick Agile practices that fit in with their current beliefs and culture. I mentioned this in my "Me too Agile" post.

Interestingly, it is not the first time this has happened. I had heard several reports that Winston Royce's original paper on Managing the Development of Large Systems proposes a very different approach to software development than the way "Waterfall" has since been interpreted, so I went and read it.

It turns out that Royce understood the importance of feedback and Emergent Design too:

After documentation, the second most important criterion for success revolves around whether the product is totally original. If the computer program in question is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operational areas are concerned

If you replace the word documentation with communication then there is nothing here that would look out of place in a modern paper on Emergent Design. It is a shame that people chose to cherry-pick back then too, taking the bits that were convenient and leaving the bits that weren't.

Skimming through the paper, there are a few eye-openers that show that nothing is new in software development; we just choose not to listen. The quote above was one of the more surprising ones that jumped out at me.

Friday, April 11, 2008

Architects - Who needs them?

William has responded to my post on "Me too Agile", and his response made me think that my post required further explanation. An Architect as a software development role is a relatively new idea, appearing in the mid 90's along with the boom in 'OO' technologies and middleware.

Before this the only architects I was aware of produced paper drawings for buildings. The architect would design, and builders would construct. The term Architect is borrowed from the building trade. Its roots in the building industry are quite revealing I believe and say a lot about the assumptions, philosophy and organisational cultures that led to its use in Software development.

It seems strange to have to say this, but in the early days of software development there weren't roles like Project Manager, Business Analyst, Systems Analyst, Architect, Test Manager etc. There were usually a couple of guys who sat down and decided that they wanted to write a computer program. When Thompson and Ritchie decided that they wanted to write the Unix operating system they didn't have an Architect. They did the design and wrote the code themselves. They even went as far as producing their own programming language, so they created their own tools too. As a team they were self-reliant, with a single focus: the creation of a working program.

The purpose of my post was to question the relevance of the role of architect in an organisation that subscribes to Agile values. William in response quoted the Agile Manifesto as a definitive statement of Agile values. Well, before the term Agile was coined, the category of software development methodologies we now call Agile went under the name of lightweight processes. What all these methodologies had in common was a rejection of a high-ceremony approach to software development: approaches like OMT, Objectory and RUP (which now, coincidentally, claims to be (me too) Agile :)) that prescribe a number of distinct development phases with a number of distinct roles, handing off work to each other through a number of concrete intermediate work products. 'Lightweight' meant getting rid of as much of this as possible, and getting back to a way of working that would be more familiar to Thompson and Ritchie.

This view of the world builds on a management philosophy that says that decision making should be delegated to the lowest level possible within an organisation. In short, the people who do the work are best placed to make the right decisions. This idea of worker empowerment builds on the work of W. Edwards Deming, and reaches a pinnacle in the Toyota Production System as described in the writings of Taiichi Ohno.

In keeping with this philosophy, the Japanese have developed a number of approaches to new product development, which we commonly refer to as Lean, Just-in-Time, Agile, etc. One of these approaches is deferring design decisions until the last responsible moment. The reason for this is to allow concurrent design. Toyota do not have Architects. They have a Chief Engineer who champions the project and has an overall vision, but he does not make design decisions or tooling decisions. Those decisions are delegated to the design teams, who are autonomous. These teams try to keep their options open, avoiding locking themselves into decisions which are not easily reversed later. The reason they do this is that they do not own the whole design. For example, the boot design team may need to change their design late in the development cycle: if the team responsible for the interior space of the car decides that people in the back really need an extra inch of legroom to compete with the latest models in their target market segment, then the boot design team need to be in a position to respond quickly, changing their design to reduce the boot space and accommodate the larger interior. This agility is why they defer irreversible design decisions.

Simon's paper on architecture, although it doesn't use the word "Agile" explicitly, makes much play of this Agile principle. The point I was making is that this principle is part of a wider philosophy, and that philosophy is not consistent with the idea of Architects and Architecture as borrowed from the building industry.

Taiichi Ohno would not recognise the use of deferred decision making in a context where the architect is the technical authority yet not responsible for doing the actual work. What we have here is a mix of two separate belief systems. The belief system that says that decision making should be centralised stems from the management teaching of Frederick Winslow Taylor and his idea of Scientific Management. A good example of this philosophy in practice was Henry Ford and the Ford Motor Company. The Japanese came to the US after WWII, looked at the Ford production system, and largely rejected it as inefficient and wasteful.

The point I'm making is that these are two separate contradicting belief systems, based on different values. We can question the relative merits of each, but cherry picking practices from one belief system and applying them out of context into an environment with the opposite belief system is not likely to produce the intended results. The beliefs come first and the organisational structure stems from those beliefs.

So whilst it is true that Simon did not appropriate the label Agile, he did attempt to reconcile the role of Architect, which is essentially a Taylorist construct, with the principles of Agile product development, which stem from the beliefs of Deming.

This is the background to the point I was trying to make.

Thursday, April 10, 2008

Generics - An OO Anti-Pattern

I'm obliged to use Java 1.5 at my latest client. One of my gripes with Java is that it doesn't encourage an OO programming style. As a Coach, I tend to find that most Java programmers lack a full understanding of OO design principles. In fact I can count on one hand the number of Java programmers I've come across who have an understanding of OO which is at least as good as mine.

In contrast, all the Smalltalk programmers I've met understand objects very well, and I'm sure most could teach me a thing or two. So Sun decided to revamp Java, a supposedly OO language. You would have assumed that they would have borrowed even more from Smalltalk, but no. Instead we get Generics. So why?

Before answering this question, I should really spell out why Generics are not compatible with OO design principles. Objects are meant to be loosely coupled runtime entities. Objects do not exist at compile time; they come into existence when you run your program (or, with Smalltalk, the moment you load your image). Objects should hide both their state and their implementation from others; this is how they achieve low coupling. All that is exposed is their message interface. In Smalltalk this interface is known as the object's Protocol. So to communicate with an object, and get it to do something useful, you need to know its protocol and nothing else.

OK, let's compare this with Generics. Firstly, in the Java view of the world, message sends are replaced with virtual function calls. A function call as a way of sending a message to an object is only possible in Java if you know the type of the receiver. In Java the receiver's type is either a reference to its implementation (its class) or to one of its implemented interfaces. So straight away the idea of hiding knowledge of the implementation through message sends is lost. So along comes generics. Does it improve matters any? Well no; in fact it makes things worse, a lot worse.

If the answer (return value) to the message sent to the receiver is a collection, then the type of the objects it may contain becomes part of the message interface, in addition to the type of the collection (container) itself. So generics leak a lot more information about containers. It gets worse when you consider subclassing and method overriding with generic containers: the complexity of what should represent a valid answer to the same message is mind-boggling.

So generics leak information like a sieve, and break the basic OO tenet of information hiding (encapsulation) and low coupling. Information hiding is useful because it allows you to substitute objects at runtime. The substitute could extend the protocol of the original, or implement the same protocol in new ways with different side effects. This idea is what is commonly known as polymorphism, and it gave birth to the idea that OO programming could lead to components and re-use.
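A small, hedged sketch of the leakage (method names here are invented for illustration): once the element type is declared, it becomes part of the contract, and because Java generics are invariant, a `List<Integer>` is not a `List<Number>`; substituting one for the other needs an explicit wildcard annotation.

```java
import java.util.List;

public class Leak {
    // The element type is now part of the contract: callers are coupled
    // not just to "a list of numbers" but to exactly List<Number>.
    static double sumExact(List<Number> xs) {
        double total = 0;
        for (Number n : xs) total += n.doubleValue();
        return total;
    }

    // Invariance means substitution needs an explicit wildcard --
    // more solution-space annotation on the interface.
    static double sumFlexible(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) total += n.doubleValue();
        return total;
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        // sumExact(ints);  // does not compile: List<Integer> is not
        //                  // a subtype of List<Number>
        System.out.println(sumFlexible(ints));
    }
}
```

The receiver's internals are no longer the only thing on show; the types flowing through its answers are too.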

So back to the question: why? Information hiding is powerful when it comes to malleability and extensibility, but for some perhaps it is too powerful. In Smalltalk, to know the type of an object you need to send it a message. There is no other way. The type is not manifest in the code. The Smalltalk IDE is a running Smalltalk program containing a bunch of objects. Classes themselves are objects, and to find out what type an object is, the Class Browser object sends it a message to which the answer is the object's class. So with Smalltalk you only get to know anything about an object once it is running. Because of this the Smalltalk environment is always alive, always running, all the way through the programming cycle. The Smalltalk image, when loaded, contains both IDE objects such as the Class Browser and developer objects such as application classes. The Browser sends messages to class objects to reveal their methods, and to method objects to reveal their source code. Programmers edit the code, and then send a message to the Compiler object to compile the method. None of this is possible without running the image.

This approach doesn't help you if you don't like running the code to find out what it does. If you want to perform static analysis you need more information at compile time. Using Generics is a way of providing this information, removing the need for dynamic casts. So why would you want to get rid of casts? Casts are one of the gaps in your ability to fully analyse your program statically. A cast is an explicit admission that static analysis can't help in some scenarios, and that you still need to defer some type checking until runtime. Generics is an attempt to bridge this gap and provide complete type safety at compile time.
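The trade can be shown in a few lines (helper names are mine, purely illustrative): the pre-generics style defers the element-type check to a runtime cast, while the generic style moves the check to compile time at the price of declaring the element type up front.

```java
import java.util.ArrayList;
import java.util.List;

public class Casts {
    // Pre-generics style: the element type is checked only at runtime,
    // via an explicit cast -- a small admission of late binding.
    static String firstPreGenerics(List raw) {
        return (String) raw.get(0);   // runtime type check
    }

    // Generic style: the check moves to compile time and the cast
    // disappears, but the element type is now manifest in the code.
    static String firstGeneric(List<String> typed) {
        return typed.get(0);          // no cast needed
    }

    public static void main(String[] args) {
        List<String> xs = new ArrayList<>();
        xs.add("hello");
        System.out.println(firstPreGenerics(xs).equals(firstGeneric(xs)));
    }
}
```

Both return the same value; the difference is purely in when the machine gets to check, and how much the programmer must declare to let it.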

For programmers who like their code to tell them what it will do before they run it, this metadata-laden declarative approach is viewed as a benefit; but as Steve Yegge points out, programmers should not need such "training wheels". To know what a program does, you should run it (test it). This unnecessary metadata obfuscates the code and limits the degree to which the code can be deemed fully Object Oriented.

True Object Orientation relies on late binding, which occurs at runtime. The whole point is that "you don't know for sure" what it is you are sending a message to, allowing the receiver to be substituted. Manifestly stating that you know limits polymorphism and artificially restricts the computational model.
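Late binding, as used above, can be sketched in a handful of lines (the names are illustrative): the sender is written against a message interface only, and which object actually answers is decided at runtime, so receivers can be substituted freely.

```java
public class LateBinding {
    interface Greeter { String greet(); }

    static String run(Greeter g) {
        // The sender does not know (or care) which class answers this
        // message; the receiver is chosen at runtime.
        return g.greet();
    }

    public static void main(String[] args) {
        Greeter plain = () -> "hello";
        Greeter loud  = () -> "HELLO";
        System.out.println(run(plain));
        System.out.println(run(loud));  // same send, different receiver
    }
}
```

This is the substitution that manifest typing narrows: the more precisely `run` names what it expects, the fewer receivers can stand in.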

Thursday, April 03, 2008

Pattern Languages and Painting by Numbers

Steve Yegge's post on Noobs has really got me thinking about models and the purpose they serve. I think that models help fill a void in the design space. To begin with, all we have is a problem and a blank canvas. A skilled artist will look at his subject and start sketching, perhaps at first with a pencil. He will then refine and re-work his sketch before moving on to oils.

To the novice artist the blank canvas must be daunting. I have noticed that novice developers with a poor understanding of OO find OO programming from first principles a daunting task too. They are much more comfortable starting out with a template or a framework, a standard sketch. So if the subject is a web page, then the Struts framework, which implements the Model-2 web application pattern, is a comforting place to start. All the novice then has to do is colour in the details. Just like painting by numbers.

When coaching, I have had an uncomfortable feeling about patterns. Novices are great at learning patterns by rote, not so good at knowing when to apply them. And even when they do apply them correctly, few know how to tailor such patterns to create a novel design themselves. Worse still is where someone else has imposed a pattern and the programmer follows it religiously, not noticing that the pattern doesn't quite fit the problem at hand.

Artists do not paint by numbers. Sure, they learn a lot of styles (patterns), but ultimately their creations are their own. So how do they create and come up with something novel when faced with a blank canvas? Well, they rely on first principles. They understand perspective and how it works. They understand light and how it works too. They understand their medium: oils, canvas, brushes and strokes. They have a deep understanding of their subject as well: bone structure, muscles, skin tone etc. They also have a good eye for detail and can see things in their subject of interest, drawing out important nuances and giving them prominence in their final piece of work.

Sound technical understanding based on first principles, a catalogue of prior work (patterns) for inspiration, a keen eye for significant details, and a well-honed instinct for creativity. These are the qualities of a good artist. They are also the qualities of a good programmer.

I might be over-ambitious, but I believe that average programmers can attain this level of skill with coaching and practice, and that model obsession as we have come to know it is not needed and can be replaced with something better. In a series of blog posts I hope to strip away standard patterns and get back to first principles, in an attempt to show the cost we are enduring by painting by numbers. I also want to show that going back to first principles isn't that daunting, and can be both beneficial and a lot of fun.

Me too Agile

William Martinez has pointed me to a paper which talks about Emergent Design from an Architect's perspective. Apparently Architects are now deferring architectural decisions until the last responsible moment in an attempt to be more Agile.

On the same site they present a skills matrix defining the role of an architect. Looking at the matrix, it looks as though the new Agile Architect is a cross between a Systems Design Engineer, a Senior Development Lead and an Agile Coach.

To me the term Agile Architect is a misnomer. Architecture as I understand it is about technology selection and high-level design. In an Agile team, the team is responsible for making such decisions. If the Architect is part and parcel of the team, or is an outside consultant that the team can consult, then I guess this approach is still consistent with, say, Scrum. But ultimately the team must decide, since the team is ultimately responsible for delivery. Anything else would break the central tenet of Scrum, which is that the team should be self-organised and make its own decisions.

It puts me in mind of Scott Ambler's website on Agile Modeling. Again, a central tenet of Agile development is feedback, yet you don't get much feedback from a non-executable UML model until pretty late in the development process. So Agile Modeling is a bit of a misnomer too.

So why are we seeing such things? Well, in my opinion this is the consequence of organisations that have structured themselves around a waterfall "production line" view of software development, who nevertheless like the sound of Agility yet do not want to confront the need for organisational change. To me these are just symptoms of Agile as a fad. Wannabes jumping on the bandwagon. Me too Agile.

True Agile seems to be taking root in environments with less cultural baggage, such as new startups. But those who are merely interested in trying out the latest fad without making a real commitment to cultural change can tweak their practices a tad and re-brand themselves as Agile. "Me too" Agile. Which is what I believe we are seeing here.