The internet of things (IoT) has conquered our world. Statistics platform Statista estimates that there are over 20 billion IoT objects among us, constantly monitoring and recording us and our activities.
By 2025, Statista projects that there will be more than nine IoT devices for every human on Earth.
Data gathered by IoT devices can be of great utility in digital forensics, but the IoT world also presents serious challenges for forensic experts.
Of particular interest is where the data is actually stored, since the integrity of the chain of custody is critical in prosecution. Most IoT devices have very limited memory and do not retain data for long. Device data is normally uploaded to the cloud, making it difficult to pinpoint exactly where the evidence resides and how to secure it against attempts at modification. IoT data types also differ from the ordinary files on computers and servers that digital forensics traditionally addresses.
A typical IoT device is powered by a small battery and can usually hold its data for about ten days, so one challenge of IoT forensics is downloading and preserving the data before it vanishes. The porous security these devices demonstrate also means they can be penetrated remotely and their data rendered invalid as evidence. All of this presupposes that investigators can find the IoT devices in the first place; they can be very small and hidden inside other appliances or furniture.
In the end, however, the question is whether information gained from the forensic investigation of IoT devices will stand up in court. Can it be relied on enough to convict the accused?
The future is still cloudy. An excerpt from The Atlantic discusses the issue:
The legal system already draws on a range of technological self-tracking devices as forms of evidence. GPS devices and apps for tracking bike rides like Strava have been used in court proceedings around cycling accidents, and of course, there are multiple forms of remote tracking used by the police, like Automatic License Plate Readers (ALPR). The difference is that wearable devices are elective. And when [users] make that decision they are effectively splitting their daily record into two streams: experience and data. These may converge or diverge for reasons to do with the fallibility of human memory, or the fallibility of data-tracking systems.
This similarity—the fact that both systems can be fallible—is what courtrooms should keep in mind. Courts have experience with this. They know that eyewitnesses can’t always be trusted, even if they were there to witness the crime. They understand that doctors and other witnesses have expertise, but they aren’t all-knowing beings. There are expert witnesses for each side, and judges and juries can consider the general range of human bias and inaccuracy. When large data sets are brought to bear, they should be treated the same way.
Prioritizing data—irregular, unreliable data—over human reporting, means putting power in the hands of an algorithm. These systems are imperfect—just as human judgments can be—and it will be increasingly important for people to be able to see behind the curtain rather than accept device data as irrefutable courtroom evidence. In the meantime, users should think of wearables as partial witnesses, ones that carry their own affordances and biases.