By ASHER WOLF
This article was originally published on independent news website New Matilda and has been republished with full permission.
Reporting and analysing critical incidents such as Hurricane Sandy using Twitter presents a particular challenge: filtering high volumes of information, and getting information out not only quickly but accurately.
As BuzzFeed’s Deputy Tech Editor John Herrman points out:
“There was no shark in Brigantine, and certainly no beached seal in Manhattan. The NYSE trading floor did not flood, and the 10 or more Con Edison workers trapped at a damaged plant turned out not to exist.”
There was, however, a 168-foot tanker ship that washed ashore on Staten Island, a major New York public hospital that had to be evacuated, and seven New York subway tunnels that flooded.
More than 80 houses burnt down in Queens, NY. Between 70 and 80 per cent of Atlantic City was underwater, and at last count at least 133 people were confirmed dead across the Caribbean, the US and Canada.
The claims appearing on Twitter during Hurricane Sandy beggared belief and demanded fact-checking — at least, for all of us who seek accuracy in news reporting.
Twitter in itself is not a truth machine — it is awash with crummy claims and false data. But the platform is a goldmine of open source information, waiting to be vetted and verified.
We know traditional media outlets are increasingly relying on social media. Checking ascertainable details from online sources takes time and commitment. “Horse-race” journalism is simply dangerous during critical incidents: sharing bad information can have perilous consequences, as social media users attempt to gather vital information to respond to emergencies.