Accessing External APIs


In my previous post we saw how to connect to the Hive AlertMe API to track the variation of temperature in a home over time to produce a simple graph.  In this post we are going to correlate these readings with the external temperature as recorded by the UK Met Office.


The Met Office provides a publicly accessible API for both weather forecasts and historic meteorological observations called DataPoint.  To access the API you need to register for an account here http://www.metoffice.gov.uk/datapoint.  The account includes the provision of a personal API key which is required to call the API.

The basic service is free, with quotas imposed on the total number of calls and the maximum frequency of those calls.  The quota is more than enough for individual usage.  DataPoint is a RESTful API providing data in either XML or JSON format.  The API is fully documented here, http://www.metoffice.gov.uk/datapoint/support/api-reference.  As before when exploring the Hive AlertMe API, DataPoint can easily be investigated using the Postman App within Google Chrome.

Sites, Forecasts and Observations 

The API is designed to provide access to both forecast and observation data for a number of UK locations, or in DataPoint parlance, sites.  The Met Office provides forecast data for around 5,000 sites and collects observations at approximately 120 sites.  Here we will be using the JSON-based API.

Two API functions are provided by DataPoint for obtaining the list of available sites, one for forecasts and one for observations.  Substitute your own API key to make these work.
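For reference, the sitelist resources for forecasts and observations take the following form in the DataPoint API reference, where {key} is a placeholder for your personal API key:

```
http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/sitelist?key={key}
http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/sitelist?key={key}
```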


As we are interested in tracking historic weather conditions we will use the latter API function, which will return a result similar to the following (this has been abridged and only shows the first of the 120 entries).


You can see the basic information returned includes the site name, elevation (I have been unable to confirm the units, but this might be feet, as wind speeds elsewhere in the API are reported in miles per hour, an imperial unit), plus latitude and longitude.  We use this call to identify the sites we are interested in, and in particular their ids.

A Weighty Problem

Because observation coverage is sparse compared to forecast coverage, my home town is listed for forecast data but not for observations.  Since we wish to track historic temperatures we will calculate a weighted average of the temperatures at the three sites nearest to my home town, weighting by distance.  If there are any meteorologists reading this, I am open to suggestions on whether there is a better approach given the data available.


In the above diagram we can see three separate locations, shown by grey diamonds, with the three temperatures 3.5C, 4.0C and 5.0C and the corresponding distances 10, 12 and 11 respectively.

The temperature at ‘Home’ is calculated as follows,

t = (t1 × 1/d1 + t2 × 1/d2 + t3 × 1/d3) / (1/d1 + 1/d2 + 1/d3)

  = (3.5 × 0.1 + 4.0 × 0.0833 + 5.0 × 0.0909) / (0.1 + 0.0833 + 0.0909)

  = (0.35 + 0.3333 + 0.4545) / 0.2742

  = 4.15C
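As a worked check, here is the same inverse-distance weighting sketched in Java, using the illustrative temperatures and distances from the diagram:

```java
// Inverse-distance weighting: closer observation sites get more weight.
public class WeightedTemperature {
    static double weightedAverage(double[] temps, double[] distances) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < temps.length; i++) {
            double w = 1.0 / distances[i]; // weight is the inverse of the distance
            num += temps[i] * w;
            den += w;
        }
        return num / den;
    }

    public static void main(String[] args) {
        // Values from the diagram: 3.5C at distance 10, 4.0C at 12, 5.0C at 11
        double t = weightedAverage(new double[] {3.5, 4.0, 5.0},
                                   new double[] {10, 12, 11});
        System.out.printf("Weighted temperature: %.2fC%n", t); // 4.15C
    }
}
```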


The three nearest Met Office locations to my home town are Charlwood, Gravesend-Broadness and Kenley, which have the site ids 3769, 3784 and 3781 respectively.


The above chart shows the variation of temperature for these three sites over a 24-hour period, plus the calculated weighted temperature for my home town, shown in red.  The data was obtained by calling the DataPoint API three times, once with the site id of each location, using the following API call,
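The hourly observation call takes the following form in the DataPoint API reference, where {key} is a placeholder for your personal API key and 3784 is the site id:

```
http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/3784?res=hourly&key={key}
```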


In the above, the site id is incorporated into the URL (here 3784, for Gravesend-Broadness), and the res parameter requests that the data is provided on an hourly basis.  Hourly is currently the finest resolution available for observational data.  Below is an example of the response provided.

             {"name":"G","units":"mph","$":"Wind Gust"},
             {"name":"D","units":"compass","$":"Wind Direction"},
             {"name":"S","units":"mph","$":"Wind Speed"},
             {"name":"W","units":"","$":"Weather Type"},
             {"name":"Pt","units":"Pa\/s","$":"Pressure Tendency"},
             {"name":"Dp","units":"C","$":"Dew Point"},
             {"name":"H","units":"%","$":"Screen Relative Humidity"}

The temperature values that we are interested in are reported under the "T" parameter in degrees Celsius; that entry is omitted from the abridged listing above.
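Because the hourly observation JSON has a regular shape, the temperature values can be pulled out with a regular expression rather than a full parser.  This is only a sketch, assuming the temperature parameter is reported as "T" as listed in the API reference; the sample string below is illustrative rather than a real DataPoint response:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemperatureExtract {
    // Extracts the value of every "T" (temperature) field from a JSON string.
    static List<Double> temperatures(String json) {
        List<Double> result = new ArrayList<>();
        Matcher m = Pattern.compile("\"T\":\"(-?[0-9.]+)\"").matcher(json);
        while (m.find()) {
            result.add(Double.parseDouble(m.group(1)));
        }
        return result;
    }

    public static void main(String[] args) {
        // Illustrative fragment shaped like two hourly observation entries.
        String sample = "[{\"T\":\"3.5\",\"$\":\"0\"},{\"T\":\"4.0\",\"$\":\"60\"}]";
        System.out.println(temperatures(sample)); // [3.5, 4.0]
    }
}
```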

Pulling It All Together

Using the data gathered from the Hive AlertMe API and the Met Office DataPoint API we can see the correlation between the two over a 24-hour period.  I extended the simple Java client from last time to call the DataPoint API to generate the dataset used to produce the following graph.


The red line at the top represents the internal temperature gathered via the Hive AlertMe API and the line at the bottom is the weighted external temperature gathered from the three nearest Met Office observation sites.  Note that no data was gathered from the Hive AlertMe API prior to 11:00 AM on 8th February.  Also notice that the heating was switched off from around midnight to 6:00 AM, where you can see the temperature fall.  It is interesting to note the ripple effect from about 16:00 to 23:00 as the temperature is maintained at the 20C set value.

Over the next couple of weeks I plan to capture a much larger dataset which can be analysed for trends and correlations.

In the meantime in my next blog we will explore how to build a Graph Database on a Relational Database, some of the challenges that this provides and the benefits this gives.


An exploration of connected devices


Recently chez Architect has acquired a number of Internet of Things (IoT) devices, from a couple of Sonos wireless music speakers to, more recently, Hive, British Gas’s wireless home heating thermostat and boiler management system.  If you are familiar with Nest you get the idea.  More than 360,000 units have been installed across the UK alone, and the rate of adoption appears to be increasing, according to Nina Bhatia, Managing Director of Centrica’s Connected Home Unit.

Hive has a number of features for controlling the heating and hot water in your home.  It comes with a fancy mobile app (I was impressed by the UX) and website for controlling your appliances remotely.  You can even set up a schedule for when devices are on or off.  The app allows you to adjust the temperature of your heating and also reports the current temperature in your home.  This article is not intended to be a review of Hive but an exploration of the APIs that these types of ecosystems provide.

After seeing the mobile app I realised that there must be an API to support it.  It is not widely advertised, or promoted by British Gas, but there are a number of blogs that make reference to it.  The original company that developed and then operated the Hive ecosystem was called AlertMe.  AlertMe was acquired by Centrica, the parent of British Gas, in 2015.  Slightly out-of-date documentation for the API can be found here http://www.smartofthehome.com/wp-content/uploads/2016/03/AlertMe-API-v6.1-Documentation.pdf.  This is the documentation for V6.1 of the API although the most current version is V6.5.  I’ve been unable to locate the latest documentation so here is a gentle appeal to Centrica to release it.

Hive supports a RESTful JSON API, so I thought I would write a simple application to record the historic temperature within the home using the API.

Logging into Hive

The base URL for the Hive API is https://api-prod.bgchprod.info:443/omnia.  In order to use this interface, clients must authenticate using the user-id and password set up for the device.  Once authenticated, a 24-hour access token is provided that is used for all subsequent calls.  In order to experiment with the API I would recommend using the Postman App available for use with the Google Chrome browser.

To use the RESTful API a number of attributes need to be set in the HTTP header.  For the initial client authentication call, [POST] https://api-prod.bgchprod.info:443/omnia/auth/sessions, three header attributes need to be set with the following values.

Content-Type: application/vnd.alertme.zoo-6.5+json
Accept: application/vnd.alertme.zoo-6.5+json
X-Omnia-Client: Hive Web Dashboard

Along with setting the header attributes, the POST request requires data to be passed in the HTTP body as a JSON object.  Below is an example of the request body.

    {
        "sessions": [{
            "username": "xxxxxx",
            "password": "yyyyyy",
            "caller": "WEB"
        }]
    }

Looking at the sessions structure, it appears the call could be used to establish multiple sessions in a single call, but I have not tried this as I do not have access to multiple user accounts.
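Putting the headers and body together, a minimal Java sketch of the authentication request looks like this.  It uses only the standard library and stops short of the actual HTTP send (via HttpURLConnection or similar), which is omitted for brevity:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HiveLogin {
    static final String AUTH_URL = "https://api-prod.bgchprod.info:443/omnia/auth/sessions";

    // The three header attributes required by the authentication call.
    static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Content-Type", "application/vnd.alertme.zoo-6.5+json");
        h.put("Accept", "application/vnd.alertme.zoo-6.5+json");
        h.put("X-Omnia-Client", "Hive Web Dashboard");
        return h;
    }

    // The JSON request body; username and password are your Hive credentials.
    static String body(String username, String password) {
        return "{\"sessions\": [{"
             + "\"username\": \"" + username + "\", "
             + "\"password\": \"" + password + "\", "
             + "\"caller\": \"WEB\"}]}";
    }

    public static void main(String[] args) {
        System.out.println("POST " + AUTH_URL);
        headers().forEach((k, v) -> System.out.println(k + ": " + v));
        System.out.println(body("xxxxxx", "yyyyyy"));
    }
}
```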

Once posted, the AlertMe API will respond by returning a JSON object in the HTTP body.  Below is an example response.

  {
    "meta": {},
    "links": {},
    "linked": {},
    "sessions": [{
        "id": "…",
        "links": {},
        "username": "xxxxxxx",
        "userId": "vvvvvvv",
        "extCustomerLevel": 1,
        "latestSupportedApiVersion": "6",
        "sessionId": "…"
    }]
  }

The session id is valid for 24 hours and is used in all subsequent API calls, where it is added as a fourth HTTP header attribute with the key X-Omnia-Access-Token.

Accessing Hive

Hive provides a number of function calls, such as those for accessing the topology of your devices and discovering their status.  In Hive terminology, devices are called nodes.

The API function [GET] https://api-prod.bgchprod.info:443/omnia/nodes returns a JSON structure with all the details of the device nodes.  It is a bit of a kitchen-sink function, returning many details such as the thermostat settings and the schedule settings for each node.  When a node id is specified, the results for a single node can be returned.  For instance, by using [GET] https://api-prod.bgchprod.info:443/omnia/nodes/ the following structure is returned (collapsed).

  {
    "meta": {},
    "links": {},
    "linked": {},
    "nodes": [{
        "id": " … ",
        "href": "…://api.prod.bgchprod.info:8443/omnia/nodes/",
        "links": {},
        "name": "Thermostat",
        "nodeType": "…/node.class.thermostat.json#",
        "parentNodeId": " … ",
        "lastSeen": 1485983701626,
        "createdOn": 1481892545442,
        "userId": "vvvvvvv",
        "ownerId": "vvvvvvv",
        "features": {
          "transient_mode_v1": { …. },
          "temperature_sensor_v1": {
            "temperature": {
              "reportedValue": 20.35,
              "displayValue": 20.35,
              "reportReceivedTime": 1485983701626,
              "reportChangedTime": 1485983461820
            }
          },
          "featureType": { …. },
          "on_off_device_v1": { …. },
          "heating_thermostat_v1": { …. },
          "thermostat_frost_protect_v1": { …. }
        }
    }]
  }
By examining the nodes structure it is possible to determine the currently reported temperature of the thermostats in your home.  The time values are Unix timestamps representing milliseconds since 1st January 1970.
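For example, the lastSeen value from the node structure above can be converted with java.time:

```java
import java.time.Instant;

public class HiveTimestamps {
    public static void main(String[] args) {
        // lastSeen from the node structure above, in milliseconds since the epoch
        Instant lastSeen = Instant.ofEpochMilli(1485983701626L);
        System.out.println(lastSeen); // 2017-02-01T21:15:01.626Z
    }
}
```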


AlertMe appears to sample Hive devices on a two-minute interval, so there is no benefit in calling the API more frequently.  Sampling is also unreliable, so updates to the sample values may be missed.  I have observed periods of up to eight minutes where the same value was reported before a revised sample update was successfully received.

The thermostat value is reported to a precision of two decimal places, which was a surprise.  I am not convinced the temperature reading is that accurate, but it was refreshing to obtain values to a fraction of a degree.

Hive Analytics

Now the real reason for why I was interested in accessing the AlertMe API for Hive was to build a picture of how temperature varies over time.

I wrote a simple Java application that logged in and polled the state of the devices in my house every two minutes, initially outputting the results to a CSV file.  I used the GSON JSON parser, which I found very straightforward and intuitive to use.
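The polling loop itself can be sketched as follows.  The Supplier stands in for a hypothetical helper that calls the nodes API and extracts the reported temperature; in practice the interval would be 120,000 ms to match the two-minute sampling described above:

```java
import java.util.function.Supplier;

public class TemperatureLogger {
    // Polls a temperature source at a fixed interval, appending CSV rows.
    // 'source' is a stand-in for the AlertMe nodes call plus JSON extraction.
    static void poll(Supplier<Double> source, int samples, long intervalMs,
                     StringBuilder csv) throws InterruptedException {
        for (int i = 0; i < samples; i++) {
            csv.append(System.currentTimeMillis()).append(',')
               .append(source.get()).append('\n');
            if (i < samples - 1) Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StringBuilder csv = new StringBuilder("timestamp,temperature\n");
        poll(() -> 20.35, 3, 0, csv); // zero interval for the demo; 120_000 in practice
        System.out.print(csv);
    }
}
```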

Below is a graph of temperature variation over twenty-four hours.  The left axis is temperature in Celsius and the bottom axis is date and time.  You can see where the temperature dips when the heating was switched off overnight and also during the day.


In my next blog I will describe how these values can be correlated to historical weather temperatures provided by the Met Office through their DataPoint API.

This is a departure from the usual blog entry, but I thought I would share the findings from a little research that I had been conducting recently on tech start-ups near Silicon Roundabout.  I wanted to get a feel for the number and location of these companies.

For international readers, Silicon Roundabout is the name given to a small but growing tech cluster that has mushroomed around Old Street roundabout in London.  Start-ups are attracted to this area of London due to the relatively affordable office rental whilst still being in the centre of London.  A number of large international firms, such as Google, have also been attracted to the area to be at the centre of this cluster.  Google have recently opened their London campus here.

Silicon Roundabout Map Image

Above is an interactive map pinpointing the companies that are within a mile (1.6 km) of Silicon Roundabout.  Clicking each pin reveals details of each company including a link to its web-site.

Although the survey was not especially scientific (a couple of Google searches), I did manage to identify approximately thirty companies.  The data is here to share in the attached Excel file, Tech Companies Survey 2012.

Based on the sample the majority of companies have been created in the last four years.

Silicon Roundabout Start-ups Bar-chart

In constructing the survey I also categorised each company fairly arbitrarily, resulting in the following table showing the frequency of each category.



  • Visualization or presentation of data.
  • Retail financial services.
  • Fashion retail.
  • Provision of an end-user service such as booking a cab.
  • Cloud provided services.
  • Software companies.
  • Travel sites.
  • Advertising and marketing.
  • Food retail.
  • Games vendors.
  • Mobile phone related services.
  • General retail.
  • Computer and web security.

Although the sample is not big, Analytics and Finance are at the top, together accounting for a third of the start-ups.

I’m unsure what can be concluded from this data (all thoughts are welcome), but I thought I would share what I had found.

Just a brief entry to let you know that I have added a description of the Bluprint Architecture to the project web-site.

A few years ago I received a lift home from a colleague of mine after a very pleasant dinner. It was a windy New England fall night, and the red and gold coloured leaves were swirling around as our headlights picked them out in the road ahead. During the journey home I mused, "could you name one thing that Microsoft has innovated?" This might seem an unusual thing to raise, however I was genuinely interested as my driver had spent fourteen years at that illustrious company. I was stumped and so was my friend. Here were some of the things we batted about.

  • HTML Browser – in the early days of the Web there were a few, remember Cello and Mosaic, however it was Netscape that innovated the browser so that it became popular and usable. Microsoft was initially very dismissive until they realised that it potentially threatened their livelihood and out sprung Internet Explorer.
  • MP3 Player – again there were many initially on the market, but it wasn’t until the iPod that they became really cool. Wasn’t the iPod invented by Micro … no, no, it was that other computer company, Apple. Hmmm, Zune isn’t exactly cool and doesn’t roll off the tongue. I never hear children demanding a Zune for their birthday, “Mummy, mummy can I have a Zune?”. At this point, we best not mention the iPhone …
  • Windowing Operating Systems – ok, it was an idea perfected at Xerox PARC, which Apple pinched for the Lisa and then the Mac, but it was years later that Microsoft released Windows, and it’s been clunky and quirky ever since.
  • Next there is DOS, the first PC operating system. It wasn’t even developed by Microsoft originally, but licensed from another company when IBM came calling for a PC operating system. A pity, because there were much better rivals. Raise your hand if you have heard of CP/M, or better still CP/M-86. What a frustrating world it was, being forced to limit your file names to eight characters plus a dot and a file extension. How daft!
  • Games Console – Xbox 360 – if I had a billion or so dollars to burn I could come out with a pretty nifty games console – but it was a market built by the likes of Sony and Nintendo.
  • Search Engines – remember Alta Vista before Google; now we have Bing, how very me-too!

I could go on and on, the spreadsheet, no, the word processor, no, drawing packages, no, the database, no …

You would think it would be easy to recall something from such a successful company, but given its size shouldn’t innovation be raining down like those leaves off the trees?

In this entry I wanted to share my thoughts on how Agile MDA can be used for the 99% of projects that don’t have the luxury of being new Greenfield Blue Sky developments.

The Reality of the Real World

I am sure we have all experienced joining a new company or transferring to a new team and being assigned to a project that has an existing code base.  Usually the code base, often your company’s flagship product, has been carefully crafted, lovingly maintained and nurtured over a number of years.

Being a top-down kind of person, one of the first questions I ask is ‘can you show me the architecture?’  This is met with varying reactions, ranging from embarrassed laughter to frenzied scribbling on a whiteboard.  It is very unlikely (in fact I have never encountered it) that a set of up-to-date UML diagrams representing the current state of the application will be produced.  In more mature organizations, if we are lucky, we may be pointed at a file directory containing the detailed design documentation for the various components that have been implemented over the years.  Naturally these are never maintained, as there isn’t time.

So as a new team member you are usually left to familiarize yourself with the code under the careful guidance of a developer who is experienced with the code base of the application.  As familiarity builds you may fix a few simple bugs and after a few weeks or months enough confidence may be established to implement some functional enhancement or even new components.  Most of these changes are made directly to the code without a formal design and review process being followed.

The above is an illustration of the reality for the majority of us.  We are rarely in the position to be starting from scratch.  So how can we start picking ourselves up by the bootstraps to improve our productivity?  In this type of environment it appears to be next to impossible to introduce improved productivity techniques such as Agile MDA.

Architecture Modernization, Software Archaeology and Surveying

Many people assume that MDA is only suitable for brand-new implementations where code is generated fresh from a model.  If you recall, a tool like Bluprint allows existing code to be merged with the engineering model describing the software.  In order to perform code merging, Bluprint internally constructs a model representing the existing code.  The model of the existing code is then merged with the engineering model.  It is from the merged model that the new code is then generated.

Now imagine the situation where we do not have an engineering model, i.e. an existing code base without a description of its design – much like the scenario we described earlier.  You can see that Bluprint would still work in this case, as Bluprint would create a model of the existing code that, when merged with an empty or nil engineering model, would remain the same.  If we could preserve the resulting model, we would have uncovered the original blueprint for the code.

This is incredibly useful, because now we have bootstrapped ourselves by creating a description of the application code in the form of a model.  We can then proceed to maintain the application by making changes to our newly recovered model.  At this point it would be as if we had been using Agile MDA all along.

The process of model recovery is sometimes called Software Archaeology, but often I like to think of it as more like software surveying as it is consistent with the metaphor of a model blueprint.  The Object Management Group (OMG) has a task force looking specifically at evolving software assets using model driven approaches.  It operates under the rather grand title of Architecture-Driven Modernization (ADM).

Some pragmatics

Part of the value of adopting MDA is having a diagrammatic representation of the application.  When recovering a model from its code, there is no diagrammatic representation for it.  As a consequence, any recovery tool will need to generate this automatically.  This presents a few challenges: what is the best approach to generating the model diagram?  Many software products, especially UML modelling tools, incorporate functionality to automatically lay out a model diagram.  However, no matter how sophisticated they are, they never perform this in an entirely satisfactory manner; manual tweaking is always required afterwards.  From practical experience, it is better for the tool to create an initial layout of the model in diagrammatic form, with the expectation that this will subsequently be rearranged.

If we were creating a UML model from a code base, such as Java, we would create a class diagram for each package.  On each class diagram the classes that inherit from each other would be arranged in balanced class hierarchy trees.  Classes from other packages would also be included on the diagram if there are direct associations from the classes within the package represented by the diagram.  These external classes would be placed at the periphery of the diagram.

The layout of associations is a complex problem to solve, and this is where most tools flounder.  It is difficult to design an algorithm that ensures the lines representing associations avoid crossing over or under other classes and that also minimises the number of times these lines cross each other.  In practice it is better to include all associations as straight lines, creating a tangled web of lines, and to assume that an engineer will manually rearrange these in their modelling tool.  This usually only takes a few minutes to perform.  Once an engineer has rearranged a diagram, its layout can be saved and preserved.  By rearranging the diagram, engineers also start to familiarize themselves with the structure of the application.

Enabling Bluprint for ADM

Currently Bluprint contains most of the components needed for deployment within existing software implementations.  As described previously, it constructs an internal model of the existing code when creating code from a model.  However, it does not yet contain the components for generating the diagrammatic representation of the constructed model, or a means of persisting it.  These changes are currently in the works and I will keep you abreast of developments.

I am pleased to announce that Project Bluprint is now up and running.  The project web-site is here: http://bluprint.sourceforge.net/.  Bluprint is a pragmatic Agile-MDA code generator.  It is a fuss-free tool supporting code generation from UML modelling tools.  It supports model/code merging features so that engineers can easily extend and maintain generated code.  Bluprint is unique in that it has been developed using itself.  From the site you can download the software or get involved as a developer.

Bluprint Icon


I just wanted to update you on some of the exciting things that have been happening since my last post. Previously I mentioned that I am incubating an open source software project. That project is called Bluprint. Bluprint is an MDA tool designed to support Agile MDA. Key-points are:

  1. It is built using itself – although to the end user this isn’t too relevant, it does mean it has been put through its paces and can accommodate non-trivial software projects. The current Bluprint software model and code contains over a hundred classes.
  2. It makes very little use of UML stereotypes. I was aiming to use none, but I had to concede and adopted one, called <<nogen>>. This is used to prevent the generation of code from some parts of the model and is useful when you are modelling third-party APIs.
  3. It supports full lifecycle software development, you can generate code from models, amend the generated code, amend the model and re-generate. All user modifications to the code are preserved because the tool supports intelligent model/code merging. There is no need to use sub-classing for user written extensions or annotations to mark and protect sections of the code.
  4. Bluprint interprets UML models to create Java code currently (but other target languages will be possible in the future). It supports:
    • Classes
    • Interfaces
    • Enumerations
    • Inheritance, both extends and implements
    • Associations
    • Generation of member variables and corresponding accessors.
      • unary members
      • non-unary members, from associations with cardinality greater than one
      • qualified associations
    • Generation of constructors
    • Generation of stub-methods, including throws clauses

Family Tree Example

Let me show you Bluprint in action, to give you a taste of some of Bluprint’s capabilities. It will demonstrate some of Bluprint’s code generation and model/code merging features showing Agile MDA in action.

Imagine we are going to create a program for the creation and management of family trees. In the following example we’ll be able to create people and define their descendants.

  • The Initial Model
    We start by modelling a simple class for representing people in our family tree called Person, which initially looks like this.
    Person class step 1

    Person class step 1

    Bluprint has been developed and initially tested with MagicDraw 16.0. The model is created in MagicDraw and exported as an EMF UML 2 (v1.x) XMI file for use by Bluprint. Bluprint is designed to work with any tool that can export UML models in this format.

    After running Bluprint for the first time, the following code is generated. Notice the generation of member variables, accessors and stub methods for main(), print() and the constructor.

    package org.family;
    /**
     * Class Person has been generated from a UML model by @author Bluprint.
     * Date: Tue Mar 24 15:18:23 GMT 2009
     */
    import java.util.Date;
    import java.lang.String;
    public class Person {
        private String name;
        private Date dob;
        public Person(String name) {
            // TODO - Auto-generated
        }
        public static void main(String[] args) {
            // TODO - Auto-generated
        }
        public void print() {
            // TODO - Auto-generated
        }
        public String getName() {
            return name;
        }
        public void setName(String value) {
            this.name = value;
        }
        public Date getDob() {
            return dob;
        }
        public void setDob(Date value) {
            this.dob = value;
        }
    }

  • Edit the Code
    We fill in the details of the stub methods, as follows. For the constructor we just set the name of the Person.

    	public Person(String name) {
    		this.setName( name );
    	}

    For now the print() method is very simple

    	public void print() {
    		System.out.println( getName() );
    	}

    And in main() we create a single Person object and ask it to print itself

    	public static void main(String[] args) {
    		Person fred = new Person( "Fred" );
    		fred.print();
    	}

    At the moment the model and the code only support the definition of a single stand-alone person. Executing the program we get the following output.


  • Amend the Model and Code Merge
    Now we have the basics working, we want to be able to define the descendants of a Person. We amend the model to add a one-to-many association from Person to Person. One end represents the parent and the other the descendants. We could have made it many-to-many, but I wanted to show that Bluprint can handle associations of different cardinalities.
    Person class step 2

    Person class step 2

    We also add a new private method for printing. Note, we don’t need to add this to the model, we could have added it straight to the code, but it is easy to do it here while we are thinking about it.

    We export the model again and run Bluprint against the new model and our original code. Bluprint interprets both and generates the following result for the Person class.

    package org.family;
    /**
     * Class Person has been generated from a UML model by @author Bluprint.
     * Date: Tue Mar 24 15:44:16 GMT 2009
     */
    import org.family.Person;
    import java.util.Date;
    import java.lang.String;
    import java.util.Vector;
    public class Person {
        private String name;
        private Date dob;
        private Person parent;
        private Vector descendants = new Vector();
        public Person(String name) {
            this.setName( name );
        }
        public static void main(String[] args) {
            Person fred = new Person( "Fred" );
        }
        public void print() {
            System.out.println( getName() );
        }
        private void print(Person person, int depth) {
            // TODO - Auto-generated
        }
        public String getName() {
            return name;
        }
        public void setName(String value) {
            this.name = value;
        }
        public Date getDob() {
            return dob;
        }
        public void setDob(Date value) {
            this.dob = value;
        }
        public Person getParent() {
            return parent;
        }
        public void setParent(Person value) {
            this.parent = value;
        }
        public void addDescendant(Person aDescendant) {
            this.descendants.add( aDescendant );
        }
        public void removeDescendant(Person aDescendant) {
            this.descendants.remove( aDescendant );
        }
        public Person getDescendantAt(int index) {
            return (Person) this.descendants.get( index );
        }
        public int getDescendantsSize() {
            return this.descendants.size();
        }
    }

    Notice the original code changes have been preserved. Bluprint has helpfully added member variables for the parent and the descendants. Some helper methods have been created to allow descendants to be added and removed. In addition members of the descendants collection can also be accessed.

    (The sharp-eyed amongst you will notice that Bluprint understands plural names for collections and has created singular forms for adding and removing individual members. So although a Person has many descendants, you add, remove and get individual descendants.)

    A stub method has also been created for the private print() method, which will be used for indenting the name of a person based upon their depth in a descendant tree.

  • Edit the code again
    We add a printIndented() function

    	private void printIndented( String string, int tabs ) {
    		for( int i = 0; i < tabs; i++ ) {
    			System.out.print( "\t" );
    		}
    		System.out.println( string );
    	}

    We don’t need to add every method to the model since Bluprint is designed to be pragmatic. You can decide what to include in your model and what to leave out as detailed implementation. Bluprint provides reporting so you can identify what has been coded and not included in the model.

    Next we amend our print() methods to look like this

    	public void print() {
    		print( this, 0 );
    	}
    	private void print(Person person, int depth) {
    		printIndented( person.getName(), depth );
    		for( int i = 0; i < person.getDescendantsSize(); i++ ) {
    			print( person.getDescendantAt( i ), depth + 1 );
    		}
    	}

    Finally we set up some test data in our main() function

    	public static void main(String[] args) {
    		Person fred = new Person( "Fred" );
    		Person mary = new Person( "Mary" );
    		Person bill = new Person( "Bill" );
    		Person benn = new Person( "Benn" );
    		Person jill = new Person( "Jill" );
    		fred.addDescendant( mary );
    		fred.addDescendant( bill );
    		mary.addDescendant( benn );
    		mary.addDescendant( jill );
    		fred.print();
    	}

    When we run the program we get the following output

    	Fred
    		Mary
    			Benn
    			Jill
    		Bill
    The above is a very simple example, but it does demonstrate some of the powerful features of Bluprint, including code generation and intelligent model/code merging. Bluprint removes the burden of creating and maintaining a lot of boilerplate code. The engineer can focus on the value-add part of their task: coding the business logic. The power of Bluprint comes into its own for much larger, real world models containing many classes, interfaces, associations and inheritance.
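    Pulling the generated fragments and the hand-written additions together gives a complete, compilable Person class along the following lines. This is a sketch: the constructor, the name member, getName() and the way the parent back-reference is maintained do not appear in the snippets above, so their exact form here is my assumption.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class Person {
    	private String name;
    	// members added by Bluprint for the association (assumed shapes)
    	private Person parent;
    	private List<Person> descendants = new ArrayList<Person>();

    	public Person( String name ) {
    		this.name = name;
    	}

    	public String getName() {
    		return this.name;
    	}

    	// generated helper methods for the descendants collection
    	public void addDescendant( Person aDescendant ) {
    		aDescendant.parent = this;
    		this.descendants.add( aDescendant );
    	}

    	public void removeDescendant( Person aDescendant ) {
    		this.descendants.remove( aDescendant );
    	}

    	public Person getDescendantAt( int index ) {
    		return this.descendants.get( index );
    	}

    	public int getDescendantsSize() {
    		return this.descendants.size();
    	}

    	// hand-written code, preserved by Bluprint across regeneration
    	public void print() {
    		print( this, 0 );
    	}

    	private void print( Person person, int depth ) {
    		printIndented( person.getName(), depth );
    		for( int i = 0; i < person.getDescendantsSize(); i++ ) {
    			print( person.getDescendantAt( i ), depth + 1 );
    		}
    	}

    	private void printIndented( String string, int tabs ) {
    		for( int i = 0; i < tabs; i++ ) {
    			System.out.print( "\t" );
    		}
    		System.out.println( string );
    	}

    	public static void main(String[] args) {
    		Person fred = new Person( "Fred" );
    		Person mary = new Person( "Mary" );
    		fred.addDescendant( mary );
    		mary.addDescendant( new Person( "Benn" ) );
    		fred.print();
    	}
    }
    ```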

  • Things have been a little quiet lately in blog-world as I have been incubating an open source model driven project – watch this space. This was motivated by taking a look at AndroMDA. Although AndroMDA is excellent I have a number of quibbles. Some of these aren’t unique to AndroMDA.

    1. Lack of ‘eating your own dog food’. If an MDA tool is worth its salt, surely it must be capable of generating itself from a model. None of the tools that I have seen ships with a UML model describing the tool itself, from which it could be regenerated. Obviously there needs to be a bootstrapping process, but once the kernel of the tool is written it can be used to further refine and develop the tool.

      It would be a pretty good acid test, since it would prove that the tool is sophisticated and pragmatic enough to develop a moderately complex application. In addition it would be put through its paces, as the model for the application would go through multiple edit, generate and test cycles. (Oh, did I mention Agile-MDA? MDA doesn’t preclude being used within an agile development project. I’ll expand upon Agile-MDA in a future blog entry if there is interest.)

    2. AndroMDA makes use of stereotypes to help drive the code generation process. For example the classes within a model being used to represent entities, those classes that are to be made persistent, are flagged with the «entity» stereotype. This helps direct the code generator to create the appropriate persistent code. To this end, the model must include the AndroMDA profile containing the valid AndroMDA stereotypes.

      Although the use of stereotypes provides expressiveness, it does mean that there is a deviation between the classes represented in the model and classes that get generated as code. The MDA purists would argue that this level of abstraction and transformation is acceptable.

      I personally prefer to adopt a WYSIWYG (what you see is what you get) approach, where the model is a pretty faithful representation of the classes that will be generated. For example it is possible to model the frameworks and APIs that will be exploited by the implementation model. I argue that by modelling the frameworks and APIs you get a better insight into the structure of that which is being extended and used. In this way you can see clearly the relationship between the classes in your model and those of the frameworks that you’ll exploit.

    3. I’m not a fan of using inheritance to separate generated code and developer written code. We all realise that, to be pragmatic, developers will need to write code in addition to that which is generated. UML (+OCL) isn’t practical for detailed program description.
      Tools like AndroMDA allow developer written code to be contained in an ‘implementation’ class that inherits from a generated class. The generated class can be changed and regenerated without losing the developer written code in the inherited implementation class.

      Simple Inheritance
      However this creates an artificial separation between the generated and user written code. If I wish to sub-class the generated class, I can, but I will lose the extensions in the developer written implementation class. There are techniques for getting around this, but this is an extra burden which most developers cannot be bothered with.

      The answer I feel is not to use sub-classing, but to allow developer written code and generated code to be contained within the same class. The tool should be able to merge user written code with generated code so that no developer written code is lost. This is also in line with my previous point, that the model represents more closely what is being generated.

    4. Support of OO concepts. Most tools like AndroMDA can generate simple attributes and accessors (getters/setters). For example if I have a Customer class and an attribute called ‘id’ of type long, this will be faithfully generated. However when we start to use more complex modelling constructs, such as a 1 to many association between classes or qualified associations, these tools do not provide much assistance.

      Customer and Product
      For instance if the aforementioned Customer class is associated with a Product class via a 1 to many association as shown, most tools will create something like the following when generating Java code,

      public Vector products = new Vector();

      Sometimes it may be declared private and a pair of accessors to the products variable will be provided. However in both cases the implementation generated does not enforce encapsulation, something that, as good OO practitioners, we all know we want.

      What would be better, would be for the collection to be encapsulated and for the following kind of interface with corresponding implementation to be generated,

      public void addProduct( Product aProduct );
      public void removeProduct( Product aProduct );
      public Iterator getProductsIterator();

      And perhaps optionally,

      public int getProductsSize();
      public Product getProductAt( int index );

      Notice that we don’t break encapsulation. We can add and remove Products to the Customer, and we also provide mechanisms for iterating over the Products associated with the Customer.

      In an ideal world we would like qualified UML associations to be supported too. These are very useful for representing key, value pairs between classes.

      With this, a lot of boilerplate code can be generated, removing the burden from the developer and enabling them to concentrate on creating the unique business logic.
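    One common way to achieve the in-class merging described in point 3 is to delimit generated members with marker comments, so the tool knows which regions it owns and rewrites, and which it must preserve. The @generated/@not-generated tags in this sketch are purely illustrative; they are not the syntax of AndroMDA or any particular tool, and the Account class is hypothetical:

    ```java
    public class Account {
    	// @generated -- this member is rewritten on every run of the tool
    	private String id;

    	// @generated
    	public String getId() {
    		return this.id;
    	}

    	// @generated
    	public void setId( String id ) {
    		this.id = id;
    	}

    	// @not-generated -- hand-written business logic; the merge step leaves it untouched
    	public boolean hasValidId() {
    		return this.id != null && this.id.length() > 0;
    	}
    }
    ```

    With markers like these, generated and hand-written code live in the same class, so sub-classing remains free for genuine specialisation rather than being consumed by the tool.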
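    As a sketch of what the encapsulated interface from point 4 could look like when implemented, consider the following. This is illustrative rather than the output of any real tool: the backing ArrayList, the unmodifiable iterator and the minimal Product class are my choices.

    ```java
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.List;

    public class Customer {
    	// the collection itself stays private, so encapsulation is preserved
    	private List<Product> products = new ArrayList<Product>();

    	public void addProduct( Product aProduct ) {
    		this.products.add( aProduct );
    	}

    	public void removeProduct( Product aProduct ) {
    		this.products.remove( aProduct );
    	}

    	// callers can traverse the Products but cannot replace the collection
    	public Iterator<Product> getProductsIterator() {
    		return Collections.unmodifiableList( this.products ).iterator();
    	}

    	public int getProductsSize() {
    		return this.products.size();
    	}

    	public Product getProductAt( int index ) {
    		return this.products.get( index );
    	}
    }

    class Product {
    	private String name;
    	public Product( String name ) { this.name = name; }
    	public String getName() { return this.name; }
    }
    ```

    A qualified association would follow the same pattern with a private java.util.Map and key-based accessors in place of the List.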

    AndroMDA is an open source model driven architecture project, which is both surprisingly sophisticated and mature.  It allows, for instance, complete J2EE applications to be created from UML models.  The user-interface, business logic and persistence tiers can all be generated.  It supports both BPM4Struts and JSF UI frameworks, Spring, EJBs and Hibernate.  It even has the ability to generate .NET code.  Given its sophistication, I am surprised it isn’t more extensively used.

    In summary AndroMDA is a transformation engine, taking as input a UML model in XMI format conforming to the UML 1.4 metamodel, together with a set of pluggable cartridges.  Each cartridge transforms the model into a particular kind of output code.  There are separate cartridges, for example, for jBPM, Hibernate, Spring and EJBs.

    Version 3.3 of AndroMDA provides full Maven integration, so the configuration and operation of the tool is very much simplified.  The Maven automation allows new AndroMDA projects to be created, developed and subsequently deployed.

    Virtually any UML tool can be used.  I have been using AndroMDA quite happily with MagicDraw 16.0 CE.  The AndroMDA site includes a UML tool support matrix, however this seems a little out of date.

    Installing AndroMDA
    Instructions for installing AndroMDA in a Windows environment can be found at the AndroMDA site.  I have successfully installed and operated AndroMDA on my Mac under Mac OS X 10.5 (Leopard).  It works surprisingly well.  Below are outlined the installation steps for AndroMDA 3.3 on a Mac, which mirror those for Windows.

    To set up the AndroMDA development environment you will need to install:

    • Maven
    • AndroMDA itself
    • A UML Tool such as MagicDraw
    • An IDE such as Eclipse

    For the run-time environment you’ll need:

    • A J2EE Application Server such as JBoss
    • A database, MySQL on the Mac works excellently

    Maven Installation

    1. Download Maven from here, http://maven.apache.org/download.html.  I installed and have been using version 2.0.9.
    2. Unzip the contents and place it in a folder such as /Users/peterlong/usr/Java.
    3. Create a, or edit your existing, .bash_profile in your home directory (~) so that it includes the following

      export M2_HOME=/Users/peterlong/usr/Java/apache-maven-2.0.9
      export M2=$M2_HOME/bin
      export M2_REPO=/Users/.m2/repository
      export MAVEN_OPTS='-XX:MaxPermSize=128m -Xmx512m'
      export JAVA_HOME=/Library/Java/Home/
      export JBOSS_HOME=/Applications/jboss-4.2.3.GA
      export PATH=$PATH:$M2

    Seeing Hidden Files within the Mac Finder

    By default on a Mac the Finder does not show hidden files and directories that are prefixed with a dot/period (‘.’).  Within a Terminal window use the following so that hidden files can be seen.

    defaults write com.apple.finder AppleShowAllFiles TRUE
    killall Finder

    To disable the display of these hidden files perform the following.

    defaults write com.apple.finder AppleShowAllFiles FALSE
    killall Finder

    Testing the Maven Installation

    1. Open a Terminal window.
    2. Create a temporary project directory, say ~/usr/java/test
    3. cd to the directory and test execution of the maven command with
      mvn --version
      Something similar to the following should have been produced.
      Maven version: 2.0.9
      Java version: 1.5.0_16
      OS name: "mac os x" version: "10.5.6" arch: "i386" Family: "unix"
    4. If this is successful, test the creation of a temporary project, by issuing the following
      mvn archetype:create -DgroupId=testapp -DartifactId=testapp
      After a minute or so you should see
      [INFO] ------------------------------------------------------------------------
      [INFO] Using following parameters for creating OldArchetype: maven-archetype-quickstart:RELEASE
      [INFO] ------------------------------------------------------------------------
      [INFO] Parameter: groupId, Value: testapp
      [INFO] Parameter: packageName, Value: testapp
      [INFO] Parameter: basedir, Value: /Users/peterlong/usr/java/test
      [INFO] Parameter: package, Value: testapp
      [INFO] Parameter: version, Value: 1.0-SNAPSHOT
      [INFO] Parameter: artifactId, Value: testapp
      [INFO] ********************* End of debug info from resources from generated POM ***********************
      [INFO] OldArchetype created in dir: /Users/peterlong/usr/java/test/testapp
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESSFUL
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 1 minute 6 seconds
      [INFO] Finished at: Wed Jan 28 16:47:17 GMT 2009
      [INFO] Final Memory: 7M/14M
      [INFO] ------------------------------------------------------------------------
    5. Check that you got BUILD SUCCESSFUL and that a testapp directory was set up containing the corresponding project files.
    6. Delete the test directory when you are finished as you have successfully set up the Maven environment.

    Install the AndroMDA application plugin

    1. Download the AndroMDA Maven plug-in from here.
    2. Unzip the contents and place the org folder into the Maven repository, in my case /Users/peterlong/usr/Java/repo.
      Now we need to build our plugin.
    3. Create a temporary directory for building AndroMDA, say ~/Java/andromda
    4. Create the following pom.xml file in this newly created directory

      <name>AndroMDA Repository</name>
    5. Run Maven, by just issuing the mvn command in that directory.
      You should get BUILD SUCCESSFUL as before,
      [INFO] [compiler:compile]
      [INFO] No sources to compile
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESSFUL
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 26 seconds
      [INFO] Finished at: Wed Jan 28 17:23:23 GMT 2009
      [INFO] Final Memory: 4M/8M
      [INFO] ------------------------------------------------------------------------
    6. Delete the temporary directory when you have finished.

    Install a UML tool
    I have installed MagicDraw on my Mac and it works surprisingly well (it is a Java app after all).  I am using MagicDraw 16.0 community edition.

    Note that the projects generated by AndroMDA have a dependency on a number of AndroMDA profiles.  MagicDraw expects to find these in the Maven Repository.  For some reason although the directory structure was there, the required xml.zip files were missing.  I manually copied these from the AndroMDA installation profiles directory to the .m2 repository.

    I had to copy the following files:

    Installing Eclipse
    Download and install Eclipse for Mac OS X from http://www.eclipse.org/downloads/.
    Installing JBoss on Mac
    This is really straightforward.

    1. Download JBoss from http://www.jboss.org/jbossas/downloads/.  I have successfully deployed and used 4.2.3 GA.
    2. Unzip the file and move the contents to your chosen installation directory.  I put mine in /Applications.
    3. Edit .bash_profile so that JBOSS_HOME is defined, for example include the following:
      export JBOSS_HOME=/Applications/jboss-4.2.3.GA

    Installing MySQL

    1. Download the latest community edition of MySQL from http://www.mysql.com/.  At the time of writing this is 5.1.31.  Download the version for Mac OS X 10.5 x86.  I elected to use the package installer version rather than the tar archive.
    2. Once downloaded, open the disk image.  The readme file tells you what to do.  Basically double click on the mysql-5.1.31-osx10.5-x86.pkg.  There is an option to install auto-start by clicking on the MySQLStartupItem.pkg.  We’ll skip this for now.
    3. Once installed, you can start MySQL using the following (if auto start hasn’t been used).
      cd /usr/local/mysql
      sudo ./bin/mysqld_safe
    4. Modify your ~/.bash_profile with the following
      export MYSQL=/usr/local/mysql/bin
      export PATH=$PATH:$MYSQL
    5. This will enable the mysql and mysqladmin commands to be used directly without having to specify the full path.
    6. Download the MySQL GUI Tools from http://dev.mysql.com/downloads/gui-tools/5.0.html.  Again I elected to choose the Mac OS X version.
    7. Open the Disk Image and drag the MySQL Tools folder into Applications.
    8. Download the MySQL Connector/J from http://www.mysql.com/products/connector/j/.  I chose the latest 5.1.7.zip.
    9. Unzip the file and copy the mysql-connector-java-5.1.7-bin.jar file to your JBoss Server installation into server/default/lib.  In my case I am using JBoss 4.2.3.  In the process I changed the file name so that it didn’t contain ‘-bin’.

    Testing your AndroMDA environment
    The AndroMDA site includes a comprehensive tutorial for the creation of a complete J2EE application, called TimeTracker, which includes a Web UI and a MySQL backend database.  Follow the steps outlined in the TimeTracker tutorial to verify your installation.  Note that you won’t need to install the MySQL driver if you have followed the steps above.