There’s a problem in the use of open source intelligence in emergencies.
Or perhaps, more accurately, there is a series of ineffectively managed risks.
I think that, at least in principle, there is widespread acceptance amongst category one and two responders and others that digital communications tools play a key role, possibly the primary role, in warning and informing the public. That was not the case even a small number of years ago. The risk that someone senior would turn up in an emergency and say
“we’re not doing any of that Facebook nonsense”
has diminished. There remain considerable differences in culture and capacity around digital between different organisations and this does create an area of some risk.
That’s not the risk I want to talk about here though.
As organisations get more used to using these tools in emergencies they are starting to notice that useful data are being shared on social networks. Increasingly, managers are asking for information about what is happening online.
This is all to the good of course and one of the reasons I am backing the plan to bring the VOST concept to the UK.
It’s also where the risks come in.
Who is watching social media?
Often when you pose these questions in multi-agency groups a police officer with plenty of stripes will bristle slightly and point out that it is self-evident that the police service does the intelligence gathering around here.
That’s fine, police forces are clearly equipped for such work and I guess the average citizen would expect police forces to be gathering data from open sources around, for example, a controversial protest march. Are they as skilled in gathering data relevant to surface water flooding? Or animal diseases?
The risk is that if we aren’t clear on whose job it is, it becomes no one’s job.
Or the person everyone looks to to do it in an emergency gets made redundant.
You lot know about Twitter
And my experience is that in many organisations communications (media / PR / digital) teams are being asked to play this sort of role. Sometimes explicitly, sometimes tacitly.
On one level, again, that’s fine. Comms teams usually have the technical skill and familiarity with social networks.
But comms teams are trained for a different task. Mining data from social networks (as I do in my voluntary role with Standby Task Force) is a skill. It requires judgements to be made based on a set of incomplete understandings. Just like other forms of intelligence gathering (or research).
I’m not saying comms professionals aren’t capable of doing this, but are they being trained to make sure they are analysing the data objectively and presenting reports with accurate confidence weighting?
And I’m not convinced we’ve got the issues around what level of data we should be mining sorted yet. Running a search for mentions of place names in case people are telling each other (but not the council) of flooding sounds reasonable (doesn’t it?).
What about when I see a report from an account I’m not familiar with? How do I work out whether I can trust that report? I’d probably have a look down their social media profile, to see if they have sent messages about that location before. I might Google their name (or their user name) to see if I can find out more about them on other social media platforms.
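To make that concrete, the kind of place-name monitoring and account-vetting just described could be sketched roughly as below. The posts, place names and the “prior reports” count are all illustrative assumptions on my part — not a real feed, and certainly not a recommended trust metric:

```python
# Illustrative sketch only: a hypothetical sample of posts standing in
# for whatever a real monitoring tool would return.
POSTS = [
    {"user": "resident_42", "text": "Water coming up through the drains on Mill Lane again"},
    {"user": "newaccount99", "text": "Huge flooding at Mill Lane, road totally gone!"},
    {"user": "resident_42", "text": "Mill Lane drains were blocked last winter too"},
]

# Hypothetical watch list of local place names.
PLACE_NAMES = ["mill lane", "bridge street"]

def mentions_place(text):
    """Return the watched place names mentioned in a post, if any."""
    lowered = text.lower()
    return [p for p in PLACE_NAMES if p in lowered]

def prior_reports(user, place, posts):
    """Count how often this account has mentioned this place --
    one crude input to the trust judgement discussed above."""
    return sum(1 for p in posts
               if p["user"] == user and place in p["text"].lower())

# Flag posts mentioning a watched location, with the account's history.
for post in POSTS:
    for place in mentions_place(post["text"]):
        history = prior_reports(post["user"], place, POSTS)
        print(f"{post['user']} mentions '{place}' ({history} post(s) about it)")
```

Even this toy version makes the point: deciding how much weight to give `newaccount99` versus `resident_42` is a judgement call, and the act of pulling an account’s history is itself the privacy question raised next.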
Is this reasonable and proportionate?
That citizen has a right to privacy but it’s not an absolute right. The state (including, presumably, local authority comms officers) can infringe people’s privacy if it is lawful, reasonable and proportionate to do so. And the fact that people have put information about themselves in places where it can be seen does not, of itself, mean it is reasonable for me to go and look for it, in this context, at this time.
These feel like risks we should be talking about. They are all highly manageable through the key tools of emergency management: planning, training and exercising.
It’s certainly something we hope to build into the VOST model for the UK.
It’s just ignoring them that is risky.