There are 4,695 digital businesses in Birmingham

Now you know me, always interested in stats. Particularly creative/digital economy stats about Birmingham. Being interested in such stats isn’t as much part of my job as it used to be, but I think anyone teaching the media and preparing students for a life in ‘the industry’ should have a fair grasp of the size and scope of said ‘industry’.

Mapping the UK’s Digital Economy
A new report is out. It argues that the way the UK usually counts digital businesses (using SIC codes) may not give us the whole picture. Using a model called ‘Growth Intelligence’, it says there are 270,000 digital companies in the UK rather than the 167,000 previously thought. They are responsible for 11% of jobs in the UK. ‘Growth Intelligence’ is a ‘big data’ approach that uses Companies House information along with other sources to create a more nuanced set of classifications for companies, and thereby seems better able to identify which are or aren’t digital. It results in a list of 1.868m companies, but even that will be an underestimate (20% of companies fail to assign themselves any SIC code).

What’s a Digital Company?
One whose outputs (the product or service it offers) are digital. So using a computer system to help you sell bananas doesn’t make you digital: “we restrict ‘digital content’ to sectors where the only or principal outputs are digital products or services. For example, we exclude large parts of the architecture sector, but include firms specialising in CAD and technical drawing. By the same token, we exclude supermarkets, but include retailers whose principle offering is digital (such as digital music stores)” (p. 9)

What about Birmingham?
I’ll cut to the chase. Under the SIC code system, Birmingham had 3,116 companies regarded as digital; the new method says there are 4,695. These companies are actually located in the wider Birmingham TTWA (Travel to Work Area), which stretches up to Tamworth and down to Redditch and contains about 600,000 people of employment age. By contrast, Manchester’s TTWA has about 700,000 people and more digital firms (7,324).

Is this good news?
Yes. There’s a section in the report noting that other areas show more ‘clustering’ of digital firms, but a city like Birmingham has too diverse an economy for digital to show up as a distinct cluster (though there are clearly significant local clusters in areas like Eastside and the Jewellery Quarter).

What the report does and doesn’t tell us.
The report doesn’t tell us much at a local level other than the number of firms. However, the information about a typical UK digital firm is interesting. Digital firms are slightly younger than other firms (about 9 years old rather than 10). There are as many start-ups as in any other sector. They tend to employ more people on average, and they tend to have lower revenues.

Will the number of digital companies grow?
Yes, says the report. Largely because of the number of firms ‘inflowing’. That is, more existing firms are becoming digital firms, moving from analogue to digital. However, the data suggests that digital firms are as susceptible to the ups and downs of the economy as any other sector.

Not just London.
What’s really useful about the report is that it goes some way to correcting the view that ‘digital’ is just something cool firms in East London do. It’s clearly not. Although there’s definitely a concentration in the South East, Birmingham and other regional cities are very much playing their part.

138,309 hyperlocal news stories

Local News twitter

Last year, as part of my work on the Creative Citizens project, I set up a twitter account to keep track of news stories that were coming out of hyperlocal news websites. I had a hunch that if you counted the collective output of such sites the figures would be moderately impressive.

My hunch was right, and the statistic of ‘one story every two minutes’ piqued the interest, to varying extents, of the BBC, Nesta and Ofcom. I’ve just had a journal article published (online, open access) that draws on the data I produced. Research colleagues of mine have produced a content analysis of some of the stories.

All I wanted to use this short post to do was point out that the @alllocalnews Twitter account I used has now come to the end of its life. It was ‘powered’ by a bundle of RSS feeds run through Google Reader. This bundle had its own RSS feed that then triggered a recipe, pushing an update to Twitter.

The death of Google Reader means the updates have stopped. No doubt I could use any number of services to restart it but I don’t really have a research reason to do so right now and none seem to easily facilitate ‘bundling’ in the way that Google Reader did. I have re-run my ‘counting’ for 2013 and I will shortly publish some new stats about the volume of news stories published by hyperlocals one year on from the original ‘count’.

So, during its life, 25 March 2012 until 2 July 2013, the @alllocalnews Twitter account published 138,309 hyperlocal news stories. That’s about 300 per day, 12 per hour. Not bad, I think. I would say that figure is way less than the number of stories produced by the local press, but perhaps way more than might be produced by the forthcoming crop of local TV stations.
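The per-day and per-hour figures above follow directly from the totals in the post; a quick back-of-envelope check:

```python
from datetime import date

# Dates and total from the post: the account ran 25 March 2012 to 2 July 2013
# and published 138,309 stories.
start = date(2012, 3, 25)
end = date(2013, 7, 2)
days = (end - start).days      # 464 days

total = 138_309
per_day = total / days         # ≈ 298 stories per day
per_hour = per_day / 24        # ≈ 12.4 stories per hour

print(days, round(per_day), round(per_hour, 1))
```

That works out at roughly one story every five minutes averaged over the whole day and night; the headline ‘one every two minutes’ figure comes from daytime publishing rates, as the earlier post explains.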

You can access the @alllocalnews archive as a searchable web page or download a .csv file of all the tweets.

Hyperlocal – one every two minutes

I’ve been wondering exactly how many news stories UK hyperlocal websites publish. Last month I had a meeting with Ofcom where I suggested that it might be a decent amount. So after a bit of research, outlined below, it looks like there is a news story published on a hyperlocal website every two minutes*.

Here’s what I did to try and find this out.

  1. I looked at all the RSS feeds listed on Chris Taggart’s Openly Local database of hyperlocals.
  2. I corrected those that weren’t resolving and omitted those pointing to now-dead sites.
  3. I took out those that were feeds for a forum as they tend to include lots of ‘stuff for sale’ messages along with their replies (plus, at this stage, I’m not too interested in ‘conversation’).
  4. I created an OPML file for the remaining 431 feeds.
  5. I created a Google Reader Bundle so that I would then have a single RSS feed.
  6. I pushed that feed to a new Twitter account, @Alllocalnews.
  7. I counted how many tweets that feed produced. I used a Google spreadsheet created by Martin Hawksey (who also gave me the ‘bundle’ advice above) to produce pretty visuals and to archive the tweets.
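Step 4 above, turning the cleaned-up list of feeds into an OPML file, can be sketched in a few lines. This is a hypothetical illustration, not the script I actually used, and the feed titles and URLs below are placeholders rather than entries from the Openly Local database:

```python
import xml.etree.ElementTree as ET

# Placeholder (title, feed URL) pairs standing in for the 431 real feeds.
feeds = [
    ("Example Hyperlocal A", "http://example-a.local/feed"),
    ("Example Hyperlocal B", "http://example-b.local/rss"),
]

# Build the minimal OPML structure a feed reader expects.
opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "UK hyperlocal feeds"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)

ET.ElementTree(opml).write("hyperlocals.opml", encoding="utf-8", xml_declaration=True)
```

The resulting file can be imported into most feed readers in one go, which is what made the single-bundle approach in steps 5 and 6 workable.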

This is all a bit rough and ready as the Openly Local database is currently being worked on to update it and I know there are other data sources out there.

In the next few weeks I’ll be refining the list of RSS feeds, adding to them where appropriate and most importantly, deciding on a sampling period so the counting can be a bit more accurate (the ‘one every three minutes’ figure comes from the two hours of me watching it this morning – update: see note* below).

My spending time on this is part of a new research project I am part of which looks at ‘Creative Citizenship’. An examination of ‘Hyperlocal’ is one of three strands in the project. Overall the project’s research methods will be more qualitative than quantitative but one of the things we said we’d do initially is get a sense of the scale of Hyperlocal publishing in the UK.

So, roughly, kind of, one every two minutes. More than I thought there’d be.

About that @Alllocalnews Twitter feed. Follow it at your own risk, as it just spits out lots of news stories with no location context. However, I like its lack of context as there’s a kind of mystery to it; the link could take you anywhere in the UK.

For more on Hyperlocal read Nesta’s new report, written by Damian Radcliffe.

When I pull my finger out I will get our research website up and running which will then be the home for these kinds of updates.

*Have updated this after spending the day monitoring it. Seems like a lot gets published from lunchtime onwards. 44 stories were published between 1pm and 2pm so the average is adjusted. As I say though, I’ll create a proper sampling period in due course once the data is cleaner.
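The adjustment in the note above is simple arithmetic: 44 stories in the sampled 1pm–2pm hour (the only figure given in the post) implies a much shorter interval at peak than the morning watch suggested:

```python
# Peak-hour publishing interval from the post's sampled hour (1pm-2pm).
peak_stories_per_hour = 44
minutes_per_story = 60 / peak_stories_per_hour
print(round(minutes_per_story, 2))  # ≈ 1.36 minutes per story at peak
```

Averaging busier afternoon hours like that against the quieter mornings is what pulls the headline figure from ‘one every three minutes’ down towards ‘one every two minutes’, pending the proper sampling period mentioned above.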