Using Yahoo pipes with ArcGIS Online

I recently read a blog post about using Yahoo Pipes with ArcGIS Online to map out Flickr photos. That was the first time I’d heard of Yahoo Pipes, and it got me curious about what pipes actually are. After a little digging around, I found that they are a genuinely useful set of tools. Pipes essentially enable you to ‘mash up’ content and create new content dynamically over the web, much like ArcGIS Online, except with any content (including non-geographic content). This is not a new concept – speak to any web developer and they’ll tell you they’ve been doing this for years. The crucial advantage of Yahoo Pipes, however, is its relatively user-friendly interface for building these ‘pipes’. I think it’s quite comparable to the ModelBuilder interface in ArcGIS for Desktop, where a user can drag and drop tools to put together a single geoprocessing task.


The end products of these pipes are web services, serving up simple CSV files or KML/RSS feeds. These URLs are dynamic, which means their contents are constantly updated. Many pipes are not related to mapping at all – they are simply mash-ups of various RSS feeds, or an updated listing of TV programmes from various websites. However, pipes can also provide some really good, dynamic content for ArcGIS Online. In this blog post, I’ll take you through how I created a useful pipe for news feeds from the BBC website.

The BBC serves up a lot of good news content as RSS feeds, and one such feed is the World news feed, which carries the top headlines from around the world. One drawback of having this data as an RSS feed is that we can’t put it on a map. However, we can use Yahoo Pipes to do the clever bit of processing that converts it into a GeoRSS feed, which we can then use in ArcGIS Online. The first step is to import our feed into the pipe:



We use the Text Input module to get the URL as text, supplying the URL of the news feed in the ‘default’ field. This gives us a link to the BBC news feed so that it can be passed into the pipe.

We then need to pass this URL through a geocoder that uses the place names in the RSS feed to assign each item a latitude and longitude. In this example, I use the GeoNames service, which is a free, global, city-level geocoder. To use the service, I just need to add a bit of text at the beginning of the URL – I can use the URL Builder module to create my URL, which sends the feed to GeoNames and receives a latitude and longitude for each record.



We can then use the Fetch Feed module to bring that feed back into Yahoo Pipes, and finally use the Pipe Output module to generate a dynamic link to my brand new GeoRSS feed!
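Outside of the Pipes editor, the URL that the URL Builder module assembles can be sketched in a few lines of ordinary Python. This is a hedged sketch: the `rssToGeoRSS` endpoint and `feedUrl` parameter are my assumptions about the GeoNames service used here, and the service details may well have changed since.

```python
from urllib.parse import urlencode

# Assumed GeoNames endpoint that converts an RSS feed into GeoRSS by
# geocoding the place names in each item (treat the exact URL as an assumption).
GEONAMES_ENDPOINT = "http://ws.geonames.org/rssToGeoRSS"

def georss_url(feed_url):
    """Build the request URL that the pipe's URL Builder module assembles."""
    return GEONAMES_ENDPOINT + "?" + urlencode({"feedUrl": feed_url})

# The BBC World news feed used in this post:
print(georss_url("http://feeds.bbci.co.uk/news/world/rss.xml"))
```

Requesting that URL would return the geocoded feed that the Fetch Feed module then pulls back into the pipe.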


On the Yahoo pipes run page, I can see a yahoo map with my geocoded news articles:


To use this in ArcGIS Online, all we need to do is grab the link from the ‘more options’ menu: right-click on ‘Get as KML’ and copy the link location. If we then switch over to ArcGIS Online, we can use the ‘Add Layer from Web’ option to add this dynamic KML layer. If you need control of the symbology, you can add it as a CSV file instead – just change the ‘=kml’ in the URL to ‘=csv’:



Once we hit add layer, voila! We have a dynamic GeoRSS feed on ArcGIS online (with some nice symbology to boot!):

Once it is in ArcGIS Online, we can add other content (either from ArcGIS Online or from our own geographic data) to add more intelligence to this web map. Note that the web map will only work as long as it can establish a connection to the Yahoo Pipes service.

Related Content:

Help documentation for Yahoo Pipes -

Tutorials -

FAQ and further information -

My top ten of what’s new in Desktop 10.1

There are many enhancements coming to ArcGIS for Desktop in 10.1 (opens PDF), and here are my top ten favourite features (in no particular order), using a scenario based on a long weekend in Edinburgh. For my basemap, I’m using the Esri UK online OS MasterMap ‘Carto’ premium map service, which we can see if we open the Service Credits icon at the bottom right of the map.

1. Support for GPS data

There are 85 new tools and 117 improved tools available in the toolbox, and for my first enhancement I’d like to show you one of the new tools that provides better support for working with GPS data.

You really can’t come to Edinburgh just now without making a trip to the zoo to see the pandas Tian Tian and Yang Guang (Sweetie and Sunshine). Whilst visiting the pandas, I took some photos with my GPS-enabled smartphone, and I can use the new GeoTagged Photos to Points tool to create a new point layer showing the locations where I took them on my map.

Obviously all the media attention has been too much for Yang Guang!

2. Editor Tracking

Editor tracking has been implemented within geodatabases, so you can now record information about who made changes to datasets and when the edits were performed, using auto-populated fields.

Here I can see that the routes of the two Olympic torch relay stages in Edinburgh on the 13th and 14th of June have been digitised so people can go and cheer the runners along their mile.

Through editor tracking, an editor’s username and time stamp are stored in attribute fields directly in the dataset (shown in the image above). This can help you enforce quality control standards, maintain responsibility and also create a log of changes that have happened on each dataset.

3. Symbol Support

My third improvement is greater image support for symbols. As you can see, I have another layer displaying the start and end points of the relay routes using picture symbols. At 10.1 you can now use PNG and JPG images for your symbols.

4. Python Support

Python is now supported as a scripting language for all locations where scripting is used. You can program your own buttons and tools using Python add-ins and you can now create geoprocessing toolboxes entirely in Python.
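As a flavour of what “toolboxes entirely in Python” means, here is a minimal sketch of a Python toolbox (.pyt) file. The tool itself is a hypothetical stub: a real tool would return arcpy.Parameter objects from getParameterInfo and do its geoprocessing work inside execute.

```python
# Minimal sketch of a Python toolbox (.pyt). The tool and labels are
# hypothetical; a real tool would declare arcpy.Parameter objects and
# call geoprocessing functions inside execute().

class ExampleTool(object):
    def __init__(self):
        self.label = "Example Tool"
        self.description = "Hypothetical stub tool."
        self.canRunInBackground = False

    def getParameterInfo(self):
        return []  # arcpy.Parameter definitions would go here

    def execute(self, parameters, messages):
        pass  # geoprocessing logic would go here

class Toolbox(object):
    """ArcGIS looks for this class when the .pyt file is added to ArcMap."""
    def __init__(self):
        self.label = "Example Toolbox"
        self.alias = "example"
        self.tools = [ExampleTool]
```

Saving a file like this with a .pyt extension is all it takes for the toolbox to appear in the Catalog window alongside the built-in ones.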

For example, with the new label expression parser, you can now use Python to add logic to your label expressions, including conditional logic and looping.

The screenshots below show a before and after of using the label expression parser. The first image shows the names of all the torch bearers running one mile of that leg of the relay, but they aren’t displayed in a suitable format – the label is clearly far too long!

A Python expression that creates stacked labels from the text of one field, using the comma between the torch bearers’ names to specify where the stack happens, makes the labels much easier to read.

So, for Python users, we have far more flexibility in how we can label features without modifying the field contents.
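A hedged sketch of what such a stacked-label expression might look like: with the Python parser selected and the Advanced box ticked, the Label Expression dialog calls a FindLabel function, and the field name used to call it (e.g. `FindLabel(!Torchbearers!)`) is a hypothetical stand-in here.

```python
# Sketch of an Advanced Python label expression. The field passed in
# (e.g. FindLabel(!Torchbearers!)) is a hypothetical field name.
def FindLabel(names):
    # Put each comma-separated name on its own line to stack the label.
    return "\n".join(part.strip() for part in names.split(","))
```

So a value like “A Smith, B Jones” becomes a two-line label, with no change to the underlying field.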

5. Maplex Label Engine

The functionality of the Maplex for ArcGIS extension has been moved into core Desktop and is now called the Maplex Label Engine, so it’s now available for everyone to use!

It provides a special set of tools that help you to improve the quality of the labels on your map, tools that were previously only used for specialist cartographic production.

I’ve heard that there is a great mountain biking centre just south of Edinburgh at Glentress so a day’s biking is on the agenda. The trails are colour coded by their level of difficulty – green, blue, red and black, just like for skiing.

As you can see from the screenshot below, important bends or trail features are shown with labels. The labels are currently using the standard label engine. By changing to using the Maplex Label Engine, they look much better, even before applying any of the label placement rules available.

By default, new Map Documents (.mxd’s) open using the Standard Label Engine, but you can change to Maplex and specify a different font name and size for labelling via the Customise menu > ArcMap Options > Data View tab, as shown in the screenshot above.

6. GPX to Features Tool

At number six is another tool providing better support for working with GPS data called GPX to Features. This converts the point information inside a GPX file into features, storing the geometry (including elevation or Z-value) as well as attribute fields for name, description, elevation etc. So this could be useful for all sorts of field data capture such as asset inventory or mapping the extent of a flood.

Whilst mountain biking at Glentress I’ve captured my bike route with my phone and can now use this new tool to add that to my map. Searching for the tool, we can see it’s a Python script that can be edited if you need it to do something different.
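The conversion at the heart of that script can be approximated with nothing but the standard library. This is a rough sketch of the idea rather than the Esri tool itself: it pulls the latitude, longitude and elevation out of each GPX 1.1 track point.

```python
import xml.etree.ElementTree as ET

# GPX 1.1 namespace as published by Topografix.
GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def track_points(gpx_xml):
    """Parse a GPX 1.1 document and return (lat, lon, elevation) tuples
    for each track point - roughly what the tool turns into point features.
    Elevation is None when a track point carries no <ele> element."""
    root = ET.fromstring(gpx_xml)
    points = []
    for trkpt in root.iter(GPX_NS + "trkpt"):
        ele = trkpt.find(GPX_NS + "ele")
        points.append((float(trkpt.get("lat")),
                       float(trkpt.get("lon")),
                       float(ele.text) if ele is not None else None))
    return points
```

The real tool goes further, of course: it writes proper geometry with Z-values and carries across name, description and the other GPX attributes.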

Help for tools has also been enhanced to show the licence level required (whether Basic, Standard or Advanced).

I’ve created a simple workflow using ModelBuilder to automate this conversion process and take it a bit further. This model creates the point features and then selects just the track points to create a linear track layer.

As you can see, we did the red route and because I have converted the waypoints and tracks, you can see that we stopped for some well-earned carrot cake at the end...showing that descriptive information can also be captured and converted along with the geometry.

7. Search Window

After the ride, I’d like to see where the showers and bike wash are. The new spatial filtering options can help me to narrow down my search for relevant facilities using map-based, text-based and scale-dependent searches and the results can be sorted and grouped.

With no spatial filter set, my search for ‘facilities’ finds both the Glentress and Innerleithen layer files. If I change it to ‘within or overlapping current extent’ it filters down to just the Glentress data.        



I can also view thumbnails and preview the result contents to check the layer is what I want. Each search result also has a context menu for adding the layer and zooming to the layer’s extent.

8. Map Legends

I want to create a map in layout view to remind myself of the biking trip.

The new dynamic legends in 10.1 will be particularly useful for people using data driven pages and creating map books. They support displaying only the features in the visible extent, and can show feature counts, i.e. how many features of a certain kind are showing on my map.

The new Map Extent Options enable me to limit the features displayed in the legend to only those features currently visible on the map and also add feature counts. This works on an individual layer basis so can be set for only selected layers. I only want feature counts displayed for the facilities, not the trails.

Fitting Strategy is also really useful if, like me, you’ve ever added a new layer to your map and the legend has shot off the top of the layout! You can fix the frame size so the items will change size to fit within the frame, rather than the other way around. Here are a few examples showing the dynamic legend and how the legend items fit into the available space when fixed frame is set.

9. Password Protected PDFs

If I have sensitive information in the map that I want only certain people to see (maybe I took a sneaky shortcut) I can now set a password to limit access to it if I save it as a PDF. This was possible at 10 with python scripting but at 10.1 you can use the security tab in the PDF options of the Map Export dialog box to set a document-open password and other PDF security features.

10. Share As...

Lastly, I want to make my Glentress route map available to my biking friends and the new sharing tools make this really easy to do.  

The wizard makes it very simple to take my map and publish it to both my server and ArcGIS Online. 

In the service editor I can specify all the service properties. I can also choose to share my map with any of the groups I belong to. Then I just click Publish and it’s done.

So that concludes my top ten enhancements in ArcGIS for Desktop at 10.1, but of course we’ve only just scratched the surface. There are many more, including new analytical and geodatabase management tools, which you can read about here.


Using 3D tools to make informed decisions

There is a lot of 3D development going on with Esri software at the moment, part of which has led to the acquisition of a gaming-graphics company called Procedural (we reported this in an earlier blog post). The software Procedural developed is called CityEngine, and it provides a rule-based parametric modelling engine that allows you to build impressive 3D cityscapes in minutes. I’ve been lucky enough to get my hands on this piece of kit, and it’s fair to say that I’ve been blown away by its capability. In addition to CityEngine, I’ve also been able to test the new LiDAR tools available in the soon-to-be-released ArcGIS 10.1. It’s all very well having new and fancy 3D tools, but the question is: what does that really mean?

GIS has traditionally enabled informed decision making using complex spatial analytics. With the improvement in 3D tools, this analysis can be extended to 3D, for city planners, landscape designers, emergency services etc, to understand how 3D space affects decision making. In this blog post we will look at how we can use LiDAR data, CityEngine and the new 3D tools to make an informed decision about the design and construction of a hotel complex in Bristol.

Our first task is to create an accurate 3D model of the surrounding buildings. This will allow us to see how our planned hotel will fit in with the neighbourhood, and might flag up restrictions such as limits on the hotel’s height based on the surrounding building heights. This is where LiDAR data can help us out. LiDAR, for those of you who aren’t familiar, stands for Light Detection and Ranging; the data is generated by a device that emits laser pulses at the ground to measure heights. Within ArcMap, we can use the new LiDAR tools to append the height values from the LiDAR data to our building footprints.
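As a much-simplified illustration of the idea (the real tools work on LAS datasets and proper polygon containment, not bounding boxes), the height assignment boils down to finding the highest LiDAR return over each footprint:

```python
def footprint_height(points, bbox, ground=0.0):
    """Crude stand-in for the LiDAR workflow: take the highest return
    falling inside a footprint's bounding box and subtract the ground
    elevation to get a building height.
    points: iterable of (x, y, z) returns; bbox: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    zs = [z for x, y, z in points if xmin <= x <= xmax and ymin <= y <= ymax]
    return max(zs) - ground if zs else None
```

The model shown below does the equivalent with the geoprocessing tools, reading the LiDAR surface and writing a height attribute onto each building footprint.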


Model used to append heights to building footprint


Data sourced from BLOM

Now that we’ve got accurate heights for our buildings we can import this data into CityEngine and start creating our 3D models. The great thing about CityEngine is that it is really quick and easy to generate 3D content and allows you to create impressive visualisations of cities in seconds. All it requires is a set of ‘rules’ that describe how the GIS data is to be rendered within CityEngine. In this instance we’ve written a rule file which extrudes the buildings to the heights given by the LiDAR and also adds textures to give a good idea of what the area will look like.

However, we don’t just want a visualisation of the development; we want to do some powerful 3D analysis. Using ArcScene, we can perform different types of 3D analysis to aid our decision-making process. In order to do this, we must first export these models into ArcScene. CityEngine provides a standard export function that outputs the Collada data format (Collada is an industry-standard format for the display of 3D data). Within the ArcGIS 3D toolset, there is an importer that then converts Collada into multipatch (which is the Esri standard format for analysing and managing 3D datasets). Once in multipatch format, you can use the 3D tools to do some analysis.

The first thing I want to know is what shadow the proposed building might cast on the surrounding area. In order to do this, I can use the shadow analysis tool (new at version 10.1). We could also run a line of sight analysis to work out which rooms in the hotel will have ‘good’ views of the surrounding area. Perhaps, this could provide us with a metric to price the rooms accordingly (good views = more expensive?).


Shadow cast by planned building                   View from each room, where Green = good, Yellow = moderate, Red = poor

So as you can see, the new 3D tools working in unison provide great potential for not only analysing the world in 3D, but also being able to make informed decisions based on 3D analysis. The future is bright, the future is 3D.

Thank you to BLOM UK for supplying us with the LiDAR data that we used in this post. 

What is BIM and where does GIS fit in?

Update: since this post was written the BIM landscape in the UK has matured. You can read a more up to date overview here and about the role of GIS through the building lifecycle here.


Over recent months, I’ve been looking at BIM (Building Information Modelling) and how GIS could play a role within the engineering and construction industry. Over the last year or so, BIM has become a hot topic of debate due to a government regulation that requires every government construction project to have a full 3D BIM implementation by 2016 - see for more information. 

So the question is: what is BIM?

I like to describe BIM as a ‘workflow’ in which all of the building objects that combine to make up the building design coexist in a single database. This concept is important because it allows us to understand the entire building lifecycle (encompassing design, build and operation) from a single, central data store. So, in theory, a BIM implementation should allow a single, logical, consistent source of information associated with the building.

BIM can be defined as a process or methodology that:

  1. Facilitates the integration of data from multiple sources/formats
  2. Combines the data to form a Common Operating Picture for the entire building lifecycle
  3. Enables a comprehensive assessment of the financial and environmental cost of any building project

Where does GIS fit within BIM?

So now we’ve defined BIM, where does GIS fit into an industry that has been predominantly CAD and architecture based? The remit of a BIM implementation requires that projects have a holistic view of the data and an integrated set of processes. This is also a common framework of an enterprise level GIS implementation.

An enterprise GIS system will ensure that:

  1. All the data is stored in a central repository
  2. Any data with a common geography can be related to each other
  3. The project can be assessed at any scale (from individual assets to a whole country)

Of the three points made above, the third is possibly the most significant. The strength of a GIS is its ability to work at any scale (and move between scales seamlessly). It is for this reason that we see GIS gaining importance in the BIM and facilities management space. With the ability to scale, facilities managers and building contractors can have a common operating picture that delivers an appropriate level of detail based on the context. The real power of scalability comes with the ability to view data at both the individual-asset level and the wider geographic level. This way, questions around lifecycle cost, carbon footprint etc. can be answered easily.

Benefits of using GIS within BIM

Some of the major benefits that a GIS will bring to a Facilities Management system include the following capabilities:

  1. Spatial analytics – which enables project managers and designers to understand the impact of their design and proposed implementation before the project has even started. This is a concept known as Geodesign.
  2. Logistics and networks – This is applicable for both the transport of materials as well as movement of people.
  3. Ability to model and forecast – for both the actual construction and also specific instances like emergency scenarios. An example case study can be found here.
  4. Use throughout lifecycle – the same platform can be used in the planning, construction and operation phase
  5. Enable sharing of information – a GIS will standardize data and processes, ensuring that different organisation or departments will be able to share information (also commonly called a common operational picture).  An example case study can be found here.
  6. Visualization – Creating maps, models and reports for board level audiences is easier using a mapping solution. See here for an example.  

So, as it looks like BIM is here to stay, many organisations will have to find a way to implement it as a core business process. I think GIS is a crucial component of the BIM process, but rather than hearing it from me, I’ll give the last word to Kendall James in his blog post “BIM – can GIS & CAD make peace?”.


Working with Date Fields in Field Calculator

First things first: I am not a developer.  Sounds a simple enough thing, but sometimes, when I want to work with date fields in my feature classes, it feels like I need to be.  If this statement resonates with you, then help is at hand.  If you spend hours staring blankly at the field calculator, trying to work out what Date ( ) and Now ( ) actually mean, then I share your pain.

In my work, I frequently have to work with date and time data in tables; creating new data, reformatting existing data or performing analysis using this data.  The trouble is, I always forget the correct expression required to perform some of these calculations, and as the required syntax is not always straight-forward, I usually end up pestering my colleagues who have developer experience to help me out.

And so, as an aide-memoire for me and hopefully for the benefit of a few of you, I have listed below some of the most common challenges I face, with examples of the expressions required to get the desired results.  If this inspires you to try some more, I have also listed some other resources (which I used in preparing this list) at the bottom of this post.

Although my preference has been to write expressions in VB Script, I’ve also included equivalent Python examples. Python is becoming more and more integrated within ArcGIS, so if you’re not familiar with it, the examples I’ve included are an easy way to dip your toe in.

Example Data Calculations include:

  1. The difference between Shapefiles and Geodatabase date fields – not an expression, but very useful to understand before you carry any out!
  2. How to field-calculate today’s date
    • As Date and Time
    • As Date only
  3. How to field-calculate a specific date
  4. How to field-calculate random dates for a specific time period
  5. How to convert dates in a String/Text field into a Date field

1. The difference between Shapefiles and Geodatabase date fields

One of the first things to be aware of is a subtle, yet crucial, difference between the way a shapefile and a geodatabase store date values.

  • A shapefile (shp) stores dates in a date field with this format: yyyy-mm-dd.
  • A geodatabase (gdb, mdb or sde) formats the date as datetime yyyy-mm-dd hh:mm:ss AM or PM.

Therefore, if your data contains times, you must use a geodatabase (file, personal or SDE), or your times will be truncated to dates.

Settings on your Windows system determine how dates are displayed in ArcMap – M/d/yy, MM/dd/yy, yy/MM/dd, and so on. ArcMap uses the system short date format (numerical) for displaying dates. To alter your settings, go to Start > Control Panel > Region and Language.

2. How to field-calculate today’s date

This may seem a simple task, but even this requires the correct Expression.  These expressions may be used to populate a Date field or a Text field.

a. As Date and Time (for geodatabases only – the expression will run on shapefiles but will return the date only)

Using VB Script

MyField = Now ( )

Using Python

MyField = datetime.datetime.now()

b. As Date only

Using VB Script

MyField = Date ( )

Using Python

MyField = time.strftime("%d/%m/%Y")

3. How to field-calculate a specific date

Sometimes you have a specific date that you want to populate numerous records with.  To do this, you simply surround the date with the appropriate symbols, as shown below:

Using VB Script

Date format: #DD-MM-YYYY# or #DD-MM-YYYY HH:MM:SS#

Example expressions:

e.g. for dates only:                         

MyField = #30-01-2012#

 e.g. for dates and times:              

MyField = #30-01-2012 12:35:15#

Using Python

Date format: “DD-MM-YYYY” or “DD-MM-YYYY HH:MM:SS”

Example expressions:

e.g. for dates only:                         

MyField = "30-01-2012"

 e.g. for dates and times:              

MyField = "30-01-2012 12:35:15"

4. How to field-calculate random dates for a specific time period

This may not be an everyday requirement, but it is something I need to do a lot when creating fictitious demo data.  For example, the following code will create random dates between 01/01/2010 (inclusive) and 01/01/2011 (exclusive). In other words, the random dates go from 01/01/2010 to 31/12/2010, i.e. any date in 2010.

Using VB Script

Check “Show Codeblock”, and, in the “Pre-Logic Script Code” section, enter the following:

Dim MinDate, MaxDate, RandDate
Randomize
MinDate = #2010-01-01#
MaxDate = #2011-01-01#
RandDate = MinDate + INT((MaxDate - MinDate)*RND)

Edit the MinDate and MaxDate as appropriate. The MaxDate should always be the day after the maximum date you want to allow. So if you want to allow all dates in February 2010, “MinDate” should be 2010-02-01 and “MaxDate” should be 2010-03-01.

Then, beneath:

MyField = RandDate


Using Python

Check “Show Codeblock”, and, in the “Pre-Logic Script Code” section, enter the following:

import random
import datetime

def randomDate(minDate, maxDate):
    minDate = datetime.datetime.strptime(minDate, "%Y-%m-%d")
    maxDate = datetime.datetime.strptime(maxDate, "%Y-%m-%d")
    diff = (maxDate - minDate).days
    return minDate + datetime.timedelta(random.randrange(0, diff))

Then, beneath:

randomDate("2010-01-01", "2011-01-01")


5. How to convert dates in a String/Text field into a Date field

Often when data is provided to me or imported into ArcMap from Excel, dates are brought in as text fields rather than date fields.  On some occasions the dates are nicely formatted in one field, whilst at other times the date may be spread over a number of fields.

If the date is already correctly formatted (e.g. ‘30/01/2012’, ‘30-01-2012’ or ‘30 Jan 2012’), then the field may be calculated directly by referencing the source field, as follows:

Using VB Script


MyField = [StringDateField]

Using Python


MyField = !StringDateField!

If, however, the required date elements are split over numerous text fields, or an element of the date is missing, the following style of expression may be used, which forms the date by joining the date elements together:

Using VB Script

Example input 1:              3 fields formatted as 30 | 01 | 2012 (Day|Month|Year)

MyField = [Day]&"-"& [Month]&"-"& [Year]

Example input 2:              2 fields formatted as 30 | 01 (Day|Month) with no year

                                                let’s assume we want to set the year as 2009

MyField = [Day]&"-"& [Month]&"-"& 2009

Using Python

Example input 1:              3 fields formatted as 30 | 01 | 2012 (Day|Month|Year)

MyField = !Day! + "-" + !Month! + "-" + !Year!

Example input 2:              2 fields formatted as 30 | 01 (Day|Month) with no year

                                                let’s assume we want to set the year as 2009

MyField = !Day! + "-" + !Month! + "-" + "2009"
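Outside the field calculator, it is easy to sanity-check that strings built this way really are valid dates. A small sketch using Python’s datetime module (the function name is mine, not an Esri one):

```python
import datetime

def combine_dmy(day, month, year="2009"):
    """Join day/month/year text fields as in the expressions above,
    then parse the result to confirm it is a valid DD-MM-YYYY date."""
    text = "{}-{}-{}".format(day, month, year)
    return datetime.datetime.strptime(text, "%d-%m-%Y").date()
```

A benefit of parsing rather than just concatenating is that an impossible value, such as day 31 in month 02, raises a ValueError instead of silently producing a bad date.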


If that has left you wanting more, a good place to start is with the following references, which I used in preparing this post:

The Field Calculator Unleashed:

Simplify Date and Time Calculations

Fundamentals of Date Fields:

Date/Time Field manipulation in Python:

Useful Python Resources for ArcGIS:

CityEngine - Esri’s gone all 3D

If you go to Esri’s Aylesbury office, climb the stairs to the 2nd floor, turn left and see someone wearing a granddad-style jumper, you’ve probably found me. My name is Caroline Steer and I’m the Technical Solution Group’s (TSG) placement student for the year, having taken some time out from studying geography at UCL. I’ve been at Esri since September, and so far I’ve been working on a wide range of things, including creating web maps, a bit of geocoding, a smattering of Python, some beta testing and, now, starting to demo our online solution for local government, LocalView Fusion.

So in my first week at Esri, I was asked to investigate some new 3D technology that Esri Inc. had recently acquired. The software is called CityEngine, and it hails from the exciting world of movie graphics. It’s been used for the likes of the cityscapes in the movie Cars 2, adverts for the Ministry of Sound and hard-core gaming. All very exciting, but what’s it got to do with GIS? Well, CityEngine’s main function is to quickly create 3D models from 2D data and then make them look life-like using a rules engine, allowing us GIS folk to build up cityscapes in no time. These buildings can then be imported into ArcScene, where we can do shadow or line-of-sight analysis. This process will be of interest to the urban planning market, for example, but it also links into some really interesting research some of us in TSG have been doing around Building Information Modelling (BIM).

There are clearly many useful and interesting applications for CityEngine in generating 3D urban environments for city planners, architects, the military and of course those working in film and entertainment.

So now you’ve got a rough idea of what CityEngine does, I’ll share my experience of it. I sat down at my new desk and, after reading several help files, trying out lots of bits and pieces and overcoming various challenges, I can now say I am a competent user of CityEngine. I was impressed by how I was able to create relatively realistic cities with no programming skills. Those lucky enough to have some Python skills will be able to create some amazing cityscapes.

To test out my new-found knowledge and CityEngine’s capabilities, we decided to set ourselves the challenge of creating a 3D tour of the area surrounding our Aylesbury offices. The buildings were created by taking a shapefile of Ordnance Survey MasterMap building outlines and applying a series of rules which apply images to the building facades. The streets were created by importing street networks, again from MasterMap, and applying rules to insert 3D cars and texturing. The final detail was provided by an aerial photograph taken from the Esri Imagery basemap. The result was a really impressive 3D model of Aylesbury which took only three days; to make it look even better, I could have added lampposts, used higher-quality aerial imagery and used the Facade Wizard to create highly detailed facades.

So what have I learnt about CityEngine? I found the software pretty simple to use, and its unique scripting language (Computer Generated Architecture) is relatively easy to pick up. I’m sure the future looks bright, as Esri plans to develop tighter integration between the existing ArcGIS software and CityEngine, making it even easier to use.

I’d like to thank BLOM for letting us use their aerial photography and model data for testing. CityEngine is still evolving, so look out for updates on the blog to see what’s been happening. For example, our friends over at the Centre for Advanced Spatial Analysis (CASA) have already started using CityEngine with a renderer and have created some really impressive models.

To see what CityEngine can do see:

GIS, Crowdsourcing, Social media and Crisis Mapping, Day 2

The second day started early and switched over to what the conference called ‘self-organised’ talks, where those who have something to say can put up a title and summary of their talk and see who signs up. Last night, a whole bunch of topics were put forward for consideration, and throughout the night delegates were voting: the more popular the talk, the more people signed up for it, while the boring ones got zilch. Quite democratic in my view, and in keeping with the very self-help, ‘do-ocracy’ nature of this conference. The top ten (or so) choices were then split across six rooms and five time slots. Yes, I can count – some talks were repeated, while empty slots were used for social chat and networking. Additionally, there was a technology demonstration area in the main foyer of the conference hall, with people milling around.

I attended a number of these self-organised talks:

'Making Crisis Mapping data information actionable for the humanitarian community.'

'Best practices for verifying crowdsourced data'

'Google Mapping Tools. What's new? Q&A and Brainstorming'

I don't want to go through a complete regurgitation of the above talks (they are available on the Crisis Mappers site) - but I can summarise some as follows:

Making crowdsourced data 'actionable' remains a problem. While everyone is familiar with Twitter, Facebook etc., representatives of 'user' organisations, especially the major UN agencies such as the World Food Programme (WFP) and UNICEF, say it would be very unlikely that they would rely solely on crowdsourced data to trigger relief actions. Certainly crowdsourced data would give strong corroboration to more established lines of communication and could speed up the pace of actions. For example, tweets (if they're geocoded) could give strong evidence of a specific event with a geographical property - very useful for speedily allocating resources. However, it is unlikely that an international response will be triggered by a single tweet or a million tweets. There are major issues around the standard of data collected via crowdsourcing. The data collected gives an indication of something, but not the priorities. The utility of crowdsourced data as an exploratory tool was readily acknowledged.


  • We must be accountable for the data we are gathering and the actions we take on it.
  • The use of social media might change the way humanitarian agencies work by making them accountable to the beneficiaries through rapid feedback. Once you increase understanding and knowledge within the beneficiaries, and increase the quality of information, you increase their ability to articulate how they evaluate the impact of humanitarian agencies.
  • The goal of a map is not the map itself; it’s the data behind it: what created it and how it is represented. Maybe we need to look into sorting the data differently?

Verifying crowdsourced data. Can one trust crowdsourced data? A great example of trusted crowdsourced data is OpenStreetMap (OSM), where tens of thousands of users can each edit any part of the OSM map, with their work being peer reviewed, probably in real time. However, the thrust of the talk was the better-known crowdsourced data sources, which are very subjective in nature - tweets, short wave radio and SMS, to name three. Questions over quality, accuracy and 'trustfulness' were all explored. We cannot verify and QA a tweet, can we? Thus event-driven data, ephemeral in nature and very fluid and subjective in content, poses significant problems to decision makers. However, if this data is geolocated (much of it can be), if there are thousands of tweets from one area, and if all are about the same event (told through the eyes of the tweeter) - surely this is significant?


Verifying data from unknown sources - essentially collecting data from those who shout loudest or longest - does fill most GIS professionals with dread. It isn't empirical enough. Yet in many cases the inability to verify the data, in terms of accuracy, known standards and comprehensiveness, isn't an issue. The data occurred; its value is that it is being generated, and it is therefore an indication of some event at a specific time and place, and so worthy of inclusion as part of the overall decision making.

My own thoughts:

The entire event was a success, with all participants hungry for their work to be mainstreamed and accepted into humanitarian practice. GIS and intelligent mapping were evident throughout the conference, with everyone aware of the utility of having a location or place to link up a wide variety of ephemeral crowdsourced data. Data and information remain big topics as everyone tries to grapple with the well-known issues of access rights, ownership, data standards, interoperability, quality and, in the case of social media, trust.

The International Network of Crisis Mappers is the largest and most active international community of experts, practitioners, policymakers, technologists, researchers, journalists, scholars, hackers and skilled volunteers engaged at the intersection between humanitarian crises, technology and crisis mapping. The Crisis Mappers Network was launched by 100 Crisis Mappers at the first International Conference on Crisis Mapping in 2009. This website has since been accessed from 191 different countries. As the world's premier crisis mapping hub, the Network catalyses communication and collaboration between and among crisis mappers with the purpose of advancing the study and application of crisis mapping worldwide.

The purpose of ICCM 2011 Geneva was to bring together the most engaged practitioners, scholars, software developers and policymakers at the cutting edge of crisis mapping to address and assess the role of crisis mapping and humanitarian technology in crisis response. Following the myriad of responses to the Haiti 2010 earthquake, the crisis mapper community is fanning out into new domains. A number of reports highlight a great deal of excitement over the potential of Information and Communications Technologies, new social media as well as crowd-sourcing to strengthen planning and delivery of aid.

Oh, and Willow from Geeks without Borders took notes on her iPad and, to be honest, her notes are far, far better than mine!


GIS, Crowdsourcing, Social media and Crisis Mapping, Day One.

The purpose of ICCM 2011 Geneva is to bring together the most engaged practitioners, scholars, software developers and policymakers at the cutting edge of crisis mapping to address and assess the role of crisis mapping and humanitarian technology in crisis response. Following the myriad of responses to the Haiti 2010 earthquake, the crisis mapper community is fanning out into new domains. A number of reports highlight a great deal of excitement over the potential of Information and Communications Technologies, new and social media as well as crowd-sourcing to strengthen planning and delivery of aid.
Read More

Creating Animated or Time Aware Hotspots

One of the challenges I often face is trying to show how clusters of events (e.g. crime locations) have changed over time.  ArcGIS through Spatial Analyst and/or Crime Analyst allows you to create great hot-spot maps for a period in time, but how do you create animated hot-spot maps?  Here are a couple of methods I’ve come up with:


1. Using Animated Group layers

ArcMap allows users to create animations by cycling through a series of layers within one group layer one at a time either in the order they are listed or in reverse order.  This is achieved by creating a hotspot for each of the time periods desired (in my example one per month) and grouping them together as a group layer.

Using the Animation toolbar, it is then possible to Create a Group Animation which will cycle through all the layers in the group one at a time.  Fading transitions and blending can also be set to improve the visual effect of the animation.

NOTE: If you are animating semi-transparent layers over a background map which is also set to be semi-transparent you may experience a flashing effect.  This can be resolved by setting the background layer to be completely opaque.

Additional Comments

There are many benefits to this approach of animating time data, but here are a couple I have noted:

A. It doesn’t matter what time periods we are using, so long as the layers have been created. You can easily animate years, months, weeks, times etc. in just the same way.

B. You can use this method to animate through any number of layers which in turn may be grouped.  For example, I have created a Group Layer which contains a sub group for each month comprising a hotspot and mean centre point location.


Relevant sections of help file:  Creating Group Layer Animations


2. Using Mosaic Datasets and Time Awareness (ArcGIS 10 only)

One of the downsides to the first approach is that it doesn’t leverage the new time-awareness capability released at ArcGIS 10. This makes it very easy to bring vector data with a time or date to life by playing back through time using the time slider tool. But how can you achieve the same result with raster data? I have created a number of demonstrations recently where I have created hot-spots for the same area but based on data for different time periods. Here’s how:

i. Create a raster for each time period – if your data is suitably formatted, this is a simple process to automate using an iterative model (e.g. for each unique month, select all the data, create a hot-spot and export it as a new raster using the month name as the file name).

ii. Create a new Mosaic Dataset (new at v10) and load all these rasters into it – again this could be part of the model.

iii. When you look at the Mosaic Dataset, one part of it is the raster footprints.  If you open the attribute table for the footprints, you will see that you have one record per raster that you’ve added. 

iv. Now add an additional ‘date’ field to this table and populate with a date.  For month based data, I set this to the 1st of that month.

v. Once this is set up, you can make the Mosaic Dataset ‘Time Aware’ and use the Time Slider to animate.
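
Step iv above lends itself to scripting. Here is a minimal sketch in plain Python; the 'hotspot_YYYY_MM.tif' naming convention and the sample file names are assumptions for illustration, not part of the workflow above, so adjust the parsing to match the names your model produced:

```python
from datetime import date

def month_raster_date(raster_name):
    # Derive the first-of-month date (step iv) from a raster file name.
    # Assumes a hypothetical 'hotspot_YYYY_MM.tif' naming convention.
    stem = raster_name.rsplit(".", 1)[0]          # drop the file extension
    parts = stem.split("_")
    year, month = int(parts[-2]), int(parts[-1])  # trailing YYYY and MM
    return date(year, month, 1)                   # 1st of that month

# Build a raster-name -> date lookup, which could then be used to
# populate the new 'date' field in the footprint attribute table.
rasters = ["hotspot_2011_01.tif", "hotspot_2011_02.tif"]
dates = {r: month_raster_date(r) for r in rasters}
```

The same loop could sit at the end of the iterative model from step i, so the rasters and their dates are generated in one pass.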


Additional Comments

There are many benefits to this approach of animating time data, but here are a couple I have noted:

A. Using this method, you can also publish the result as a time-aware service and play it back through a web application, but this requires the Image Server extension.

B. By having all the rasters in the same layer, you only need to set the symbology once, and you can then ensure the classification settings are consistent across all rasters. In other words, you will only get a really ‘hot’ spot for the month when the values were particularly high.

Relevant sections of help file:  What is a Mosaic Dataset

A Cautionary Tale

Esri Inc has always stated that ArcGIS service packs should be installed across the board, especially given the ubiquity of direct connect, so Desktop, ArcGIS Server and the geodatabase should all be upgraded together.

I’m sure we’ve all been very blasé about this in the past and upgraded as and when we saw fit; I know at times I have, and it never seemed to matter too much. However, it’s time to think again. On a project I’ve been working on, we’ve had our fingers burned twice by assuming that everything was service packed when it wasn’t. As it turns out, we’d been caught both ways. On the first occasion the geodatabase was at 9.3.1 SP2 and the client at SP1. We were trying to turn on archiving for layers stored in Oracle Spatial; the archive tables were getting created, but the user_sdo_geom_metadata was corrupt, so no data could be inserted. Applying SP2 to the client fixed this.

Having got correctly patched clients, we were given access to another database to update and needed to run the fix utility for re-versioning tables once they have been versioned and un-versioned. The utility would not run until we realised that in this case the new database wasn’t patched to SP2. Again, patching the database fixed the problem.

So, from now on I’ll believe what it says - though in fairness, neither of these problems was caused by intentionally not patching the whole landscape, and in both cases these were test servers. It’s worryingly easy, however, to miss patching machines in a large environment, especially where desktop software is installed directly on users’ own desktop or laptop machines. We are increasingly seeing organisations implementing desktop through Citrix or other RDP technologies, and this makes consistent patching of all machines a lot easier to manage.

Time to upgrade

I guess it's coming around to the time when customers who jumped to ArcGIS 9 or 9.1 are again thinking about moving up to the latest releases. Combined with that, the DBMS of choice back then - maybe six years ago now - may be out of support, so a new DBMS release is part of the plan too. Hence I get emails dropping in along the lines of "we are on ArcGIS 9.1 and Oracle 9i and want to move to ArcGIS 10 and Oracle 11g; what is the best upgrade path?".

Now, these questions are being asked because they will have discovered the conundrum: ArcGIS 10 doesn't support Oracle 9i, and ArcGIS 9.1 doesn't support 11g - so which way to jump?

My response to this is: hang on, let's step back a bit and look at the bigger picture.

Firstly, the hardware: presumably this is now six years old as well, and server performance has moved on and prices have come down, so you get more for your money than you did when the old system was installed.

Secondly, is the geodatabase which you want to move really that clean? What "mistakes" have happened over the six years which have been recovered from, what versioned editing/compress horrors are lurking, and what classes were created in a rush of enthusiasm and have never been referenced since? Are there flaws in the permissions model which should be rectified? Also, if the original system used shared logfiles, how much space is wasted in logfiles, maybe belonging to people long gone from the project?

Thirdly, is upgrading the database completely transparent? Reading the SQL Server 2005 documentation soon shows that a database upgraded from 2000 somehow isn't quite as clean as a fresh 2005 db - and of course we are now on 2008.

I'd therefore suggest that it is really worth considering starting again. Get a new server and install the latest DBMS - Oracle 11g, SQL Server 2008 or PostgreSQL; bear in mind that you could change DBMS at this stage. Create a new geodatabase and use the ArcGIS tools (maybe with the backward-compatible direct connect drivers) to copy the data into the new database. This will doubtless throw up new questions, like how to tidy up the versioning model, but in the long run it will be worth it. The result should be a clean database automatically created with high-precision coordinates. Versioning can then be added, along with a permissions model. The logfile settings can use session logfiles, possibly with temporary tables.

This may also be an opportunity to consider the spatial type. If the existing database uses SDE Binary, consider changing to ST_Geometry (Oracle) or Geometry (SQL Server). Also review any raster data in the old database and consider whether the new Mosaic Datasets offer improved performance and maintainability.

If archiving of edits is of interest, now is the time to switch it on; and if geodatabase replication is on the horizon, add GUIDs to the classes which will be involved.

Whilst it may seem a lot of work in the short term, and may involve making decisions that it would be easier to sweep under the carpet, I think for many sites this is the only realistic way forward and will ensure a future-proofed ArcGIS 10 solution.

So, go on then, pick up the phone and order that server. It'll make sense in the long run!

OSTN02 supported in ArcGIS desktop

Are you using GPS captured data within ArcGIS desktop?

Ever wanted a more accurate transformation for your GPS captured data when displayed on a BNG basemap?

Making this possible has been a personal challenge for me and a number of colleagues for some time, as over the past 12 months we have seen an increasing number of customers requesting this capability.

I am happy to announce that, after working in collaboration with DGC and Ordnance Survey, Esri UK have released OSTN02 support within ArcGIS desktop. For those of you not familiar with the intricacies of transformations in the UK, OSTN02 is a very accurate grid-based transformation between British National Grid and WGS84 (lat/long). It can be used when collecting data from a GPS that is to be stored in a British National Grid feature class, or for transforming British National Grid data to WGS84.

Aware that the Defence Geographic Centre (DGC) had previously carried out some work in this area, I approached them to see if they were willing to share the transformation with the rest of the GIS user community. DGC were happy for this to happen, and Ordnance Survey have provided quality assurance, ensuring the transformation works as expected.

For those that are interested, the OSTN02 NTv2 transformation file can be downloaded from myEsriUK. You will also find a full set of instructions on how to configure the files at this location. We have long supported the transformation in ArcPad, and you can download the relevant files for ArcPad from this location too.

Thanks to DGC and Ordnance Survey for helping to make this happen.  I hope you are able to make use of the new transformation in your work. 

NB: In ArcGIS desktop the OSTN02 NTv2 transformation is called OSGB_1936_To_WGS_1984_7

Shortcuts in ArcGIS Desktop

I'm currently in the midst of working on a series of four webinars entitled 'Getting the most out of ArcGIS'. These webinars are a response to customer feedback telling us that customers want to understand how to get more from ArcGIS desktop. So we thought we would share some tips and tricks on the areas we are most commonly questioned about. I would encourage you to watch it again here (and get my viewing figures up!), but if there was one top tip to remember, it would be the following document:

This is a document of shortcuts for saving you time in ArcMap. An absolute must-read for all users wanting to reduce their click count!

For those who have also been asking for access to the other links mentioned in the webinar, here they are... both excellent:

Desktop Blog:

Geodatabase Blog: