Accessing External APIs


In my previous post we saw how to connect to the Hive AlertMe API to track the variation of temperature in a home over time and produce a simple graph.  In this post we are going to correlate these readings with the external temperature as recorded by the UK Met Office.


The Met Office provides a publicly accessible API, called DataPoint, for both weather forecasts and historic meteorological observations.  To access the API you need to register for an account at http://www.metoffice.gov.uk/datapoint.  Registration provides a personal API key, which is required to call the API.

The basic service is free, with quotas imposed on the total number of calls and on the maximum frequency of those calls.  The quota is more than enough for individual usage.  DataPoint is a RESTful API providing data in either XML or JSON format.  The API is fully documented at http://www.metoffice.gov.uk/datapoint/support/api-reference.  As with the Hive AlertMe API, DataPoint can be easily investigated using the Postman App within Google Chrome.

Sites, Forecasts and Observations 

The API is designed to provide access to both forecast and observation data for a number of UK locations, or in DataPoint parlance, sites.  The Met Office provides forecast data for around 5,000 sites and collects observations at approximately 120 sites.  Here we will be using the JSON-based API.

Two API functions are provided by DataPoint for obtaining the list of available sites, one for forecasts and one for observations.  Substitute your own API key to make these work.
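Based on the DataPoint API reference, the two sitelist calls take the following form (the exact URLs here are taken from the current documentation rather than the original post):

```
GET http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/sitelist?key=<your-api-key>
GET http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/sitelist?key=<your-api-key>
```

The first lists forecast sites, the second observation sites.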


As we are interested in tracking historic weather conditions we will use the latter API function, which returns a result similar to the following (this has been abridged to show only the first of the 120 entries).
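As a sketch based on the DataPoint documentation, the response has roughly this shape (values elided):

```
{
  "Locations": {
    "Location": [
      {
        "elevation": "…",
        "id": "…",
        "latitude": "…",
        "longitude": "…",
        "name": "…"
      },
      …
    ]
  }
}
```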


You can see that the basic information returned includes the site name, elevation (I have been unable to confirm the units, though it may be feet, as wind speeds elsewhere in the API are reported in miles per hour, an imperial unit), plus latitude and longitude.  We use this call to identify the sites we are interested in, and in particular their ids.

A Weighty Problem

Because observation coverage is sparse compared with forecast coverage, my home town is listed for forecast data but not for observations.  Since we wish to track historic temperatures, we will calculate the weighted average of the temperature at the three sites nearest to my home town, weighting by distance.  If there are any meteorologists reading this, I am open to suggestions on whether there is a better approach given the available data.


In the above diagram we can see three separate locations, shown by grey diamonds, with temperatures of 3.5C, 4.0C and 5.0C and corresponding distances from ‘Home’ of 10, 12 and 11 units respectively.

The temperature at ‘Home’ is calculated as follows,

t = (t1 × 1/d1 + t2 × 1/d2 + t3 × 1/d3) / (1/d1 + 1/d2 + 1/d3)

  = (3.5 × 0.1 + 4.0 × 0.0833 + 5.0 × 0.0909) / (0.1 + 0.0833 + 0.0909)

  = (0.35 + 0.3333 + 0.4545) / 0.2742

  = 4.15C
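The same inverse-distance weighting is easy to sketch in code.  Below is a minimal Java version using the worked example's values; the class and method names are my own:

```java
// Inverse-distance-weighted average: nearer sites get larger weights.
public class WeightedTemperature {

    // Returns the weighted average of temps[] given the distances dists[].
    static double weightedAverage(double[] temps, double[] dists) {
        double numerator = 0.0, denominator = 0.0;
        for (int i = 0; i < temps.length; i++) {
            double weight = 1.0 / dists[i];   // inverse distance
            numerator += temps[i] * weight;
            denominator += weight;
        }
        return numerator / denominator;
    }

    public static void main(String[] args) {
        // The three sites from the diagram above.
        double t = weightedAverage(new double[]{3.5, 4.0, 5.0},
                                   new double[]{10, 12, 11});
        System.out.printf("Weighted temperature: %.2fC%n", t);  // 4.15C
    }
}
```

Note that the nearest site (distance 10) contributes the largest weight to the result.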


The three nearest Met Office locations to my home town are Charlwood, Gravesend-Broadness and Kenley, which have the site ids 3769, 3784 and 3781 respectively.


The above chart shows the variation of temperature for these three sites over a 24-hour period plus the calculated weighted temperature for my home town shown in red.  The data was obtained by calling the DataPoint API three times with the site id of each location.  The data is obtained using the following API call,
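Based on the DataPoint API reference, the hourly observations call takes this form (URL reconstructed from the documentation; substitute your own API key):

```
GET http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/3784?res=hourly&key=<your-api-key>
```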


In the above call the site id is incorporated into the URL, here 3784 for Gravesend-Broadness, and the res parameter requests that the data is provided on an hourly basis.  Hourly is currently the finest resolution available for observational data.  Below is an example of the response provided.

             {"name": "G", "units": "mph", "$": "Wind Gust"},
             {"name": "T", "units": "C", "$": "Temperature"},
             {"name": "D", "units": "compass", "$": "Wind Direction"},
             {"name": "S", "units": "mph", "$": "Wind Speed"},
             {"name": "W", "units": "", "$": "Weather Type"},
             {"name": "Pt", "units": "Pa\/s", "$": "Pressure Tendency"},
             {"name": "Dp", "units": "C", "$": "Dew Point"},
             {"name": "H", "units": "%", "$": "Screen Relative Humidity"}

The temperature values that we are interested in are those reported for the Temperature parameter, in degrees Celsius.

Pulling It All Together

Using the data gathered from the Hive AlertMe API and the Met Office DataPoint API we can see the correlation between the two over a 24-hour period.  I extended the simple Java client from last time to call the DataPoint API to generate the dataset used to produce the following graph.


The red line at the top represents the internal temperature gathered via the Hive AlertMe API and the line at the bottom is the weighted external temperature gathered from the three nearest Met Office observation sites.  Note that no data was gathered from the Hive AlertMe API prior to 11:00 AM on 8th February.  Also notice that the heating was switched off from around midnight to 6:00 AM, where you can see the temperature fall.  It is interesting to note the ripple effect from about 16:00 to 23:00 as the temperature is maintained at the 20C set value.

Over the next couple of weeks I plan to capture a much larger dataset which can be analysed for trends and correlations.

In the meantime in my next blog we will explore how to build a Graph Database on a Relational Database, some of the challenges that this provides and the benefits this gives.

An exploration of connected devices


Recently chez Architect has acquired a number of Internet of Things (IoT) devices, from a couple of Sonos wireless music speakers to, more recently, Hive, British Gas’s wireless home heating thermostat and boiler management system.  If you are familiar with Nest you get the idea.  More than 360,000 units have been installed across the UK alone and the rate of adoption appears to be increasing, according to Nina Bhatia, Managing Director of Centrica’s Connected Home unit.

Hive has a number of features for controlling the heating and hot water in your home.  It comes with a fancy mobile app (I was impressed by the UX) and website for controlling your appliances remotely.  You can even set up a schedule for when devices are on or off.  The app allows you to adjust the temperature of your heating and also reports the current temperature in your home.  This article is not intended to be a review of Hive but an exploration of the APIs that these types of ecosystems provide.

After seeing the mobile app I realised that there must be an API to support it.  It is not widely advertised or promoted by British Gas, but there are a number of blogs that make reference to it.  The company that originally developed and operated the Hive ecosystem was called AlertMe, which was acquired by Centrica, the parent of British Gas, in 2015.  Slightly out-of-date documentation for the API can be found at http://www.smartofthehome.com/wp-content/uploads/2016/03/AlertMe-API-v6.1-Documentation.pdf.  This documents V6.1 of the API, although the most current version is V6.5.  I’ve been unable to locate the latest documentation, so here is a gentle appeal to Centrica to release it.

Hive supports a RESTful JSON API, so I thought I would write a simple application to record the historic temperature within the home using the API.

Logging into Hive

The base URL for the Hive API is https://api-prod.bgchprod.info:443/omnia.  To use this interface, clients need to be authenticated using the user id and password set up for the device.  Once authenticated, a 24-hour access token is provided that is used for all subsequent calls.  To experiment with the API I would recommend the Postman App, available for use with the Google Chrome browser.

To use the RESTful API a number of attributes need to be set in the HTTP header.  For the initial client authentication call, [POST] https://api-prod.bgchprod.info:443/omnia/auth/sessions, three header attributes need to be set with the following values.

Content-Type: application/vnd.alertme.zoo-6.5+json
Accept: application/vnd.alertme.zoo-6.5+json
X-Omnia-Client: Hive Web Dashboard

Along with setting the header attributes, the POST request requires data to be passed in the HTTP body as a JSON object.  Below is an example of the request body.

    {
        "sessions": [{
            "username": "xxxxxx",
            "password": "yyyyyy",
            "caller": "WEB"
        }]
    }
Looking at the sessions array, it appears the call could establish multiple sessions in a single request, but I have not tried this as I do not have access to multiple user accounts.

Once posted, the AlertMe API will respond by returning a JSON object in the HTTP body.  Below is an example response.

  {
    "meta": {},
    "links": {},
    "linked": {},
    "sessions": [{
        "id": "…",
        "links": {},
        "username": "xxxxxxx",
        "userId": "vvvvvvv",
        "extCustomerLevel": 1,
        "latestSupportedApiVersion": "6",
        "sessionId": "…"
    }]
  }

The session id is valid for 24 hours and is used in all subsequent API calls, where it is added as a fourth HTTP header attribute with the key X-Omnia-Access-Token.
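Putting the pieces above together, here is a sketch in Java (using the JDK 11 java.net.http types) of how the login request might be assembled.  The class and method names are my own, and actually sending the request and parsing sessionId out of the response are left to the caller:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of assembling the Hive login call described above.
public class HiveLogin {
    static final String BASE = "https://api-prod.bgchprod.info:443/omnia";
    static final String MEDIA_TYPE = "application/vnd.alertme.zoo-6.5+json";

    // Builds POST /auth/sessions with the three required header attributes
    // and the JSON body from the example (credentials are placeholders).
    static HttpRequest buildLoginRequest(String username, String password) {
        String body = "{\"sessions\": [{"
                + "\"username\": \"" + username + "\","
                + "\"password\": \"" + password + "\","
                + "\"caller\": \"WEB\"}]}";
        return HttpRequest.newBuilder(URI.create(BASE + "/auth/sessions"))
                .header("Content-Type", MEDIA_TYPE)
                .header("Accept", MEDIA_TYPE)
                .header("X-Omnia-Client", "Hive Web Dashboard")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildLoginRequest("xxxxxx", "yyyyyy");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The request would then be sent with an HttpClient and the sessionId field extracted from the JSON response.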

Accessing Hive

Hive provides a number of function calls, such as those for accessing the topology of your devices and discovering their status; in Hive terminology, devices are called nodes.

The API function [GET] https://api-prod.bgchprod.info:443/omnia/nodes returns a JSON structure with the details of all the device nodes.  It is a bit of a kitchen-sink function, returning many details such as the thermostat settings and the schedule settings for each node.  When a node id is specified, the details of a single node are returned.  For instance, [GET] https://api-prod.bgchprod.info:443/omnia/nodes/<node-id> returns the following structure (collapsed).

  {
    "meta": {},
    "links": {},
    "linked": {},
    "nodes": [{
        "id": "…",
        "href": "…://api.prod.bgchprod.info:8443/omnia/nodes/",
        "links": {},
        "name": "Thermostat",
        "nodeType": "…/node.class.thermostat.json#",
        "parentNodeId": "…",
        "lastSeen": 1485983701626,
        "createdOn": 1481892545442,
        "userId": "vvvvvvv",
        "ownerId": "vvvvvvv",
        "features": {
          "transient_mode_v1": { … },
          "temperature_sensor_v1": {
            "temperature": {
              "reportedValue": 20.35,
              "displayValue": 20.35,
              "reportReceivedTime": 1485983701626,
              "reportChangedTime": 1485983461820
            }
          },
          "featureType": { … },
          "on_off_device_v1": { … },
          "heating_thermostat_v1": { … },
          "thermostat_frost_protect_v1": { … }
        }
    }]
  }

By examining the nodes structure it is possible to determine the currently reported temperature of the thermostats in your home.  The time values are Unix timestamps in milliseconds since 1st January 1970.
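As a quick sanity check, the 13-digit lastSeen value from the example above only decodes sensibly as milliseconds, which java.time confirms:

```java
import java.time.Instant;

// Decoding the lastSeen timestamp from the example node structure.
public class HiveTimestamps {
    public static void main(String[] args) {
        // 1485983701626 ms since the epoch.
        System.out.println(Instant.ofEpochMilli(1485983701626L));
        // → 2017-02-01T21:15:01.626Z
    }
}
```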


AlertMe appears to sample Hive devices on a two-minute interval, so there is no benefit in calling the API more frequently.  Sampling is also unreliable, so updates to the sample values may be missed.  I have observed periods of up to eight minutes where the same value was reported before a revised sample was successfully received.

The thermostat value is reported to a precision of two decimal places, which was a surprise.  I am not convinced the temperature reading is that accurate, but it was refreshing to obtain values to a fraction of a degree.
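For completeness, here is a sketch of how the nodes call might be assembled with the JDK 11 java.net.http types.  The class and method names are my own; the session id obtained at login is passed as the X-Omnia-Access-Token header:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: building GET /nodes with the access token from the login call.
public class HiveNodes {
    static final String MEDIA_TYPE = "application/vnd.alertme.zoo-6.5+json";

    static HttpRequest buildNodesRequest(String accessToken) {
        return HttpRequest.newBuilder(
                    URI.create("https://api-prod.bgchprod.info:443/omnia/nodes"))
                .header("Content-Type", MEDIA_TYPE)
                .header("Accept", MEDIA_TYPE)
                .header("X-Omnia-Client", "Hive Web Dashboard")
                .header("X-Omnia-Access-Token", accessToken)  // session id from login
                .GET()
                .build();
    }

    public static void main(String[] args) {
        System.out.println(buildNodesRequest("<session-id>").uri());
    }
}
```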

Hive Analytics

The real reason I was interested in accessing the AlertMe API for Hive was to build a picture of how temperature varies over time.

I wrote a simple Java application that logged in and polled the state of the devices in my house every two minutes, initially outputting the results to a CSV file.  I used the GSON JSON parser, which I found very straightforward and intuitive to use.
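The shape of that polling loop can be sketched as follows.  This is a minimal illustration, not the original client: the names are my own and a placeholder stands in for the API call and GSON parsing that produce the temperature:

```java
import java.io.IOException;
import java.nio.file.*;
import java.time.Instant;
import java.util.concurrent.*;

// Minimal sketch of a two-minute polling loop writing readings to CSV.
public class TemperatureLogger {

    // Appends one "timestamp,temperature" row to the CSV file.
    static void appendReading(Path csv, Instant when, double celsius) throws IOException {
        String row = when + "," + celsius + System.lineSeparator();
        Files.writeString(csv, row, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Placeholder for the nodes API call that reads reportedValue.
    static double readTemperature() {
        return 20.35;
    }

    // Schedules a reading every two minutes, matching AlertMe's sample rate.
    static ScheduledExecutorService startPolling(Path csv) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                appendReading(csv, Instant.now(), readTemperature());
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, 0, 2, TimeUnit.MINUTES);
        return scheduler;
    }

    public static void main(String[] args) throws Exception {
        Path csv = Files.createTempFile("hive-temps", ".csv");
        appendReading(csv, Instant.now(), readTemperature());
        System.out.println("Wrote sample reading to " + csv);
    }
}
```

There is no point polling faster than every two minutes, given the sampling interval observed above.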

Below is a graph of temperature variation over twenty-four hours.  The left axis is temperature in Celsius and the bottom axis is date and time.  You can see where the temperature dips when the heating was switched off overnight and also during the day.


In my next blog I will describe how these values can be correlated to historical weather temperatures provided by the Met Office through their DataPoint API.

This is a departure from the usual blog entry, but I thought I would share the findings from a little research that I had been conducting recently on tech start-ups near Silicon Roundabout.  I wanted to get a feel for the number and location of these companies.

For international readers, Silicon Roundabout, is the name given to a small and growing tech cluster that has started to mushroom around Old Street roundabout in London.  Start-ups are attracted to this area of London due to the relatively affordable office rental whilst still being in the centre of London.  A number of large international firms such as Google have also been attracted to the area to be at the centre of this cluster.  Google have recently opened their London campus here.

Silicon Roundabout Map Image

Above is an interactive map pinpointing the companies that are within a mile (1.6 km) of Silicon Roundabout.  Clicking each pin reveals details of each company including a link to its web-site.

Although the survey was not especially scientific (a couple of Google searches), I did manage to identify approximately thirty companies.  The data is shared in the attached Excel file, Tech Companies Survey 2012.

Based on the sample the majority of companies have been created in the last four years.

Silicon Roundabout Start-ups Bar-chart

In constructing the survey I also categorised each company, fairly arbitrarily, resulting in the following categories.

  • Visualization or presentation of data.
  • Retail financial services.
  • Fashion retail.
  • Provision of an end-user service such as booking a cab.
  • Cloud provided services.
  • Software companies.
  • Travel sites.
  • Advertising and marketing.
  • Food retail.
  • Games vendors.
  • Mobile phone related services.
  • General retail.
  • Computer and web security.

Although the sample is not big, Analytics and Finance come out on top, together accounting for a third of the start-ups.

I’m unsure what can be concluded from this data (all thoughts are welcome), but I thought I would share what I had found.

Just a brief entry to let you know that I have added a description of the Bluprint Architecture to the project web-site.

A few years ago I received a lift home from a colleague of mine after a very pleasant dinner. It was a windy New England fall night; the red and gold coloured leaves were swirling around as our headlights picked them out on the road ahead. During the journey home I mused, “could you name one thing that Microsoft has innovated?” This might seem an unusual thing to raise, however I was genuinely interested as my driver had spent fourteen years at that illustrious company. I was stumped and so was my friend. Here were some of the things we batted about.

  • HTML Browser – in the early days of the Web there were a few, remember Cello and Mosaic, however it was Netscape that innovated the browser so that it became popular and usable. Microsoft was initially very dismissive until they realised that it potentially threatened their livelihood and out sprung Internet Explorer.
  • MP3 Player – again there were many initially on the market, but it wasn’t until the iPod that they became really cool. Wasn’t the iPod invented by Micro … no, no, it was that other computer company, Apple. Hmmm, the Zune isn’t exactly cool and doesn’t roll off the tongue. I never hear children demanding a Zune for their birthday, “Mummy, mummy can I have a Zune?”. At this point, we best not mention the iPhone …
  • Windowing Operating Systems – OK, it was an idea perfected at Xerox PARC, which Apple pinched for the Lisa and then the Mac, but it was years later that Microsoft released Windows and it’s been clunky and quirky ever since.
  • Next there is DOS, the first PC operating system. It wasn’t even developed by Microsoft originally but licensed from another company when IBM came calling for a PC operating system. A pity, because there were much better rivals. Raise your hand if you have heard of CP/M, or better still CP/M-86. What a frustrating world it was to be forced to limit your file names to eight characters plus a dot and a file extension. How daft!
  • Games Console – Xbox 360 – if I had a billion or so dollars to burn I could come out with a pretty nifty games console – but it was a market built by the likes of Sony and Nintendo.
  • Search Engines – remember Alta Vista before Google? Now we have Bing – how very me-too!

I could go on and on, the spreadsheet, no, the word processor, no, drawing packages, no, the database, no …

You would think it would be easy to recall something from such a successful company, but given its size shouldn’t innovation be raining down like those leaves off the trees?

In this entry I wanted to share my thoughts on how Agile MDA can be used for the 99% of projects that don’t have the luxury of being new Greenfield Blue Sky developments.

The Reality of the Real World

I am sure we have all experienced joining a new company or transferring to a new team and being assigned to a project with an existing code base.  Usually the code base, often your company’s flagship product, has been carefully crafted, lovingly maintained and nurtured over a number of years.

Being a top-down kind of person, one of the first questions I ask is ‘can you show me the architecture?’  This is met with varying reactions, ranging from embarrassed laughter to frenzied scribbling on a whiteboard.  It is very unlikely (in fact, I have never encountered it) that a set of up-to-date UML diagrams representing the current state of the application will be produced.  In more mature organizations, if we are lucky, we may be pointed at a file directory containing the detailed design documentation for the various components that have been implemented over the years.  Naturally these are never maintained, as there isn’t time.

So as a new team member you are usually left to familiarize yourself with the code under the careful guidance of a developer who is experienced with the code base of the application.  As familiarity builds you may fix a few simple bugs and after a few weeks or months enough confidence may be established to implement some functional enhancement or even new components.  Most of these changes are made directly to the code without a formal design and review process being followed.

The above is an illustration of the reality for the majority of us.  We are rarely in the position to be starting from scratch.  So how can we start picking ourselves up by the bootstraps to improve our productivity?  In this type of environment it appears to be next to impossible to introduce improved productivity techniques such as Agile MDA.

Architecture Modernization, Software Archaeology and Surveying

Many people assume that MDA is only suitable for brand-new implementations where code is generated fresh from a model.  If you recall, a tool like Bluprint allows existing code to be merged with the engineering model describing the software.  In order to perform code merging, Bluprint internally constructs a model representing the existing code.  This model is then merged with the engineering model, and it is from the merged model that the new code is generated.

Now imagine the situation where we do not have an engineering model, i.e. an existing code base without a description of its design – much like the scenario we described earlier.  You can see that Bluprint would still work in this case: Bluprint would create a model of the existing code that, when merged with an empty or nil engineering model, would remain the same.  If we could preserve the resulting model, we would have uncovered the original blueprint for the code.

This is incredibly useful, because now we have bootstrapped ourselves by creating a description of the application code in the form of a model.  We can then proceed to maintain the application by making changes to our newly recovered model.  At this point it would be as if we had been using Agile MDA all along.

The process of model recovery is sometimes called Software Archaeology, but often I like to think of it as more like software surveying as it is consistent with the metaphor of a model blueprint.  The Object Management Group (OMG) has a task force looking specifically at evolving software assets using model driven approaches.  It operates under the rather grand title of Architecture-Driven Modernization (ADM).

Some pragmatics

Part of the value of adopting MDA is having a diagrammatic representation of the application.  When recovering a model from its code, there is no diagrammatic representation for it, so any recovery tool will need to generate one automatically.  This presents a few challenges: what is the best approach to generating the model diagram?  Many software products, especially UML modelling tools, incorporate functionality to automatically lay out a model diagram.  However, no matter how sophisticated they are, they never perform this in an entirely satisfactory manner; the result always requires manual tweaking afterwards.  From practical experience, it is better for the tool to create an initial layout of the model in diagrammatic form, on the understanding that this will subsequently be rearranged.

If we were creating a UML model from a code base, such as Java, we would create a class diagram for each package.  On each class diagram the classes that inherit from each other would be arranged in balanced class hierarchy trees.  Classes from other packages would also be included on the diagram if there are direct associations to them from the classes within the package represented by the diagram.  These external classes would be placed at the periphery of the diagram.

The layout of associations is a complex problem to solve, and this is where most tools flounder.  It is difficult to design an algorithm that ensures the lines representing associations avoid crossing over or under other classes and that also minimises the number of times these lines cross each other.  In practice it is better to include all associations as straight lines, creating a tangled web, and to assume that an engineer will manually rearrange them in their modelling tool.  This usually only takes a few minutes to perform.  Once an engineer has rearranged a diagram, its layout can be saved and preserved.  By rearranging the diagram, engineers also start to familiarize themselves with the structure of the application.

Enabling Bluprint for ADM

Currently Bluprint contains most of the components needed for deployment within existing software implementations.  As described previously, it internally constructs a model of the existing code when creating code from a model.  However, it does not yet contain the components for generating the diagrammatic representation of the constructed model, or a means of persisting it.  These changes are currently in the works and I will keep you abreast of developments.

I am pleased to announce that Project Bluprint is now up and running.  The project web-site is at http://bluprint.sourceforge.net/.  Bluprint is a pragmatic Agile-MDA code generator.  It is a fuss-free tool supporting code generation from UML modelling tools.  It supports model/code merging so that engineers can easily extend and maintain generated code.  Bluprint is unique in that it has been developed using itself.  From the site you can download the software or get involved as a developer.