
Kyle Rittenhouse Trial Highlights Importance of Technology Expert Witnesses

Earlier this week, a jury heard closing arguments in the trial of Kyle Rittenhouse. Rittenhouse rose to national prominence in August 2020 after allegedly shooting three people at a Wisconsin protest, killing two and injuring the third. Now he stands trial for those alleged crimes.

One unexpected curveball: whether or not the prosecution should be allowed to use an iPad to zoom in on footage that allegedly shows Rittenhouse at the scene of the crime. The defense argued that when one uses the pinch-to-zoom feature available on Apple devices, it alters the footage:

“It uses artificial intelligence, or their logarithms, to create what they believe is happening,” said defense attorney Mark Richards. “So this isn’t actually enhanced video, this is Apple’s iPad programming creating what it thinks is there, not what necessarily is there.”

NOTE: Many publications that have published this quote allege that the defense meant “algorithms” rather than “logarithms.”

The prosecution insisted such alterations don’t happen, and that zooming is no different from putting a magnifying glass over a printed photograph. Judge Bruce Schroeder initially said the prosecution would have to bring in an expert to confirm this; otherwise, they’d have to use the raw footage taken from a wider angle.
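Stripped of courtroom framing, the dispute is a question about resampling: when software enlarges an image, it has to generate pixel values that were never captured by the sensor. A minimal pure-Python sketch of bilinear interpolation (the `bilinear_zoom` helper below is purely illustrative, not a claim about how Apple’s software actually works) shows the effect:

```python
def bilinear_zoom(pixels, factor):
    """Upscale a square grayscale image (a list of rows) by bilinear
    interpolation -- the kind of resampling a digital zoom performs."""
    n = len(pixels)
    m = n * factor
    out = []
    for i in range(m):
        # Map each output coordinate back into the source grid.
        y = i * (n - 1) / (m - 1)
        row = []
        for j in range(m):
            x = j * (n - 1) / (m - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, n - 1), min(x0 + 1, n - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four nearest source pixels.
            v = (pixels[y0][x0] * (1 - dy) * (1 - dx)
                 + pixels[y0][x1] * (1 - dy) * dx
                 + pixels[y1][x0] * dy * (1 - dx)
                 + pixels[y1][x1] * dy * dx)
            row.append(v)
        out.append(row)
    return out

# A 2x2 image containing only pure black (0) and pure white (255).
src = [[0, 255], [255, 0]]
zoomed = bilinear_zoom(src, 2)
# The 4x4 result contains intermediate gray values (around 85 and 170)
# that the camera never recorded -- interpolation, not fabrication,
# but genuinely new pixel data nonetheless.
print(zoomed)
```

In other words, both sides have a point: the zoomed image does contain values the sensor never captured, but they are deterministic averages of real neighboring pixels rather than inventions.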

“I think as we see AI evolve, a new breed of validation questions may arise as the computer begins to generate life like images of events and people that do not exist,” says James Whitehead, Contact Discovery’s Associate Director of Digital Forensics, who has testified as an expert witness in other trials. “There’s an entire industry of applications… that leverage or skew the underlying digital photography [so we can] create panda face versions of ourselves.”

While the panda face apps might be an extreme example, Whitehead was also quick to clarify that the fact these apps exist doesn’t automatically mean the raw data is unreliable.

“These apps function because the underlying digital image is trustworthy as is the underlying technology,” he said. It’s perhaps paradoxical that as technology gets better, it almost becomes harder for people to trust it.

In court, Judge Schroeder admitted to not having a very good understanding of technology. “I know less than anyone in the room here I’m sure about all this stuff,” he said.

It’s easy for people who live and breathe tech to think that pinch-to-zoom is a standard feature that everyone is familiar with, but then… should that matter? How often laypeople use certain technology isn’t necessarily a good standard for whether or not that technology should be admissible in court.

If a judge knows they don’t understand a topic, as Schroeder admitted he doesn’t understand pinch-to-zoom technology, it makes some sense to err on the side of caution. Once a jury has seen evidence, the judge can’t change that if they later learn the evidence was unreliable. When judges know they’re out of their depth, deferring to experts before they make a call is quite reasonable.

“The fact finders are a diverse group of individuals,” says Whitehead.  “We must remember that education of the fact finder isn’t a factor to be lightly regarded. The rules of evidence have evolved over the years as has the evidence. It’s still good practice to explain what you are leveraging and why it matters. If evidence or story is technical in nature, [it’s best to] have an expert on standby who can assist with the explanations.”

At the end of the day, criminal trials aren’t about what the Twitterverse thinks; they’re about what judges and juries think. Whether or not the prosecution should need an expert to confirm the validity of pinch-to-zoom footage is beside the point if a judge says they do.

Later in the trial, Judge Schroeder had a change of heart and said the jury could consider the enlarged footage; however, he wasn’t shy about expressing his skepticism about it.

“You’re basing this extremely important segment of the evidence on something that I’m really queasy about,” Schroeder said. “I’m not going to give an instruction on it, but I’ve made my record on the high risk that I think it presents for the case.”

So what can other legal teams learn from this ruling?

For one, just because technology seems commonplace doesn’t mean a judge will understand its ins and outs well enough to confidently rule on its admissibility. Whitehead says people might be surprised at how often judges expect expert testimony for technology that end users might consider commonplace.

“Technology is wasted on the young, like naps and kindergarten,” he said. “Judges are often removed from the [supposedly] commonplace setting for which our matter may hinge.”

Be ready to defend any part of your case involving technology, and yes, that may even mean expert testimony when you don’t agree expert testimony is needed. In this case, Schroeder initially put the burden of proof on the party presenting the evidence, not the party trying to disqualify it. Could that open the door for other teams to cast doubt on the opposition’s evidence even if they don’t have their own experts? After this tech debate in such a highly publicized trial, some attorneys who would’ve otherwise assumed they couldn’t get evidence thrown out might try anyway. Even if evidence is ultimately admitted, a drawn-out debate about its reliability could still shape jurors’ perceptions of it. Having expert testimony at the ready can help prevent such a debate.

Another thing most legal teams know, but this case reinforces: don’t be overly reliant on one “smoking gun” if you can help it. Perhaps the prosecution will be able to get a conviction based on evidence besides their pinch-to-zoom footage. Only time will tell. The one constant of the legal world is that as much as we try to predict it, it remains unpredictable. You never know which evidence could be disqualified, so finding multiple “smoking guns” makes for a stronger case.

Signal vs. Cellebrite: What You Need to Know

If you’re part of legal investigations that involve any kind of electronic data, you need to know what’s happening between Signal and Cellebrite.

Cellebrite makes one of the industry’s most commonly used digital forensics tools, and Signal CEO Moxie Marlinspike has recently publicized alleged vulnerabilities in Cellebrite’s security measures. Continuing to use outdated versions of Cellebrite, especially without other best practices of digital forensics in place, could open the door for system hacks as well as opposing counsel questioning the integrity of your evidence.

These types of legal proceedings can cause substantial disruptions in forensic labs worldwide. Forensic extractions and analysis would have to pause for the duration of the imaging process; forensic labs would need to relocate sensitive data to other platforms; and the legal costs associated with these additional acquisitions and analyses could be significant. Luckily, there are a few relatively simple steps you can take now to avoid the astronomical time and expense of dealing with spoliation issues.

The Background

Signal and Cellebrite exist on two opposite sides of the technology spectrum: Signal is a messaging app that offers end-to-end encrypted messaging. Digital privacy is their primary selling point. Cellebrite is a digital forensics company.  When law enforcement seizes an electronic device for an investigation, there are good odds that someone, somewhere is using Cellebrite technology to unlock it and collect data. That means one of their primary selling points is the ability to circumvent privacy measures when the situation calls for it. You can understand why two such companies would end up at odds. It’s a never-ending cat and mouse game: a win in forensics is normally seen as a loss in security and vice versa.

In a blog post, Signal CEO Moxie Marlinspike made several serious allegations against Cellebrite’s security protocols:

  • That Cellebrite has not updated some of their source code files since 2012, despite hundreds of updates to these files becoming available since then.

  • That because most of the data extracted by Cellebrite comes from third-party apps rather than the device itself, it would be possible for any untrusted app developer to put files in their apps that would corrupt Cellebrite output and reporting. 

  • That if such an exploitation were to occur, not only would it undermine that particular collection, but any prior and future collections done with that same Cellebrite device.

  • That “Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present.”

  • That Cellebrite appeared to also include unlicensed iTunes software, opening the door for legal challenges from Apple to Cellebrite and its users.

Marlinspike’s blog post also concluded with some “completely unrelated news” about how new updates to Signal would feature files in app storage for “aesthetic purposes.”

Some have interpreted this to mean that not only is Marlinspike saying these vulnerabilities in Cellebrite exist, but that he intends to actively use his own Signal app to wreak havoc on Cellebrite investigations.

Of course, no one can know for sure, but if that’s true it poses a substantial threat. Signal had over 40 million users as of January 2021, so it’s only a matter of time until law enforcement ends up investigating a phone where the app is installed.

Other Important Context

While it’s not exactly wrong to say that some of these vulnerabilities look like rookie mistakes to an outsider, it’s important to recognize that unlike the consumer-facing Signal app, Cellebrite is not intended for use by laypeople. Anyone using Cellebrite to extract data from a device is most likely an expert in digital forensics who’s taking other precautions to prevent the kind of corruption that Marlinspike describes.

Cellebrite’s original customer base consisted of government and law enforcement agencies. Many of these organizations use forensic workstations that are isolated from internet-accessible devices. They also sanitize their workspaces between cases to avoid cross-contamination between different devices’ data. Assuming these best practices are in place, the risk of rogue executables coming from mobile devices the way Signal suggests is incredibly low.

However, as Cellebrite has grown, so has their number of private sector clients who use workstations that rely on the same networks as other company devices. That means that if someone were to exploit the vulnerabilities that Marlinspike mentions in his blog, the ramifications could be company-wide, not just a matter of corrupting one device.

The rise of remote collections during the pandemic further complicates things. In light of these developments, the concern that untrusted data on a mobile device could corrupt an acquisition is real; unlikely, but real nonetheless. We also have to remember that in forensics, theoretical possibilities matter. Ideally, you want to prove not just that no one tampered with your data, but that it was highly unlikely anyone could have.

The publicization of Cellebrite’s vulnerabilities is already having real-world consequences. In Maryland, a defense attorney named Ramon Razos is asking for a re-trial because law enforcement relied heavily upon Cellebrite evidence to convict his client.

So… can I keep using Cellebrite in my investigations?

The short answer is yes. You can keep using Cellebrite and significantly reduce your risk of data spoliation with just a few forensic best practices. First and foremost: run the most recent version of Cellebrite.

According to Vice, Cellebrite issued an update less than a week after Marlinspike published his blog post. While Cellebrite did not explicitly say the patches were meant to address Marlinspike’s grievances, the timing certainly makes it look that way. In the same Vice article, Cellebrite is quoted as saying, “Based on our reviews, we have not found any instance of this vulnerability being exploited in the real-life usage of our solutions.”

Again, those using Cellebrite should be forensic experts with other tricks up their sleeve. They’re not relying entirely on Cellebrite technology for effective preservation, but some combination of Cellebrite technology and their own failsafe measures. 

A forensic analyst should always spot check their work by manually reviewing the raw files to confirm the forensic software parsed out the intended artifacts. Spot checks of the data on the physical device can also reassure the investigative team that they have maintained data integrity.
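Those spot checks pair naturally with cryptographic hashing, a standard way labs demonstrate that an acquired image hasn’t changed between collection and analysis. A generic sketch (the file contents are simulated; this illustrates the general practice, not Cellebrite’s specific workflow):

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large forensic images can be
    hashed without loading them entirely into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate an acquired image with a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"simulated forensic image contents")
    path = f.name

# Hash at acquisition time, record the value, then re-hash before
# analysis or production: a change to even one byte would produce a
# completely different digest, flagging possible tampering.
acquired = sha256_of(path)
verified = sha256_of(path)
assert acquired == verified, "integrity check failed"

os.unlink(path)
```

Recording the acquisition-time hash in the chain-of-custody documentation is what lets an examiner later testify that the data analyzed is bit-for-bit identical to the data collected.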

If you’re a lawyer who’s paying someone else to handle your forensics, make sure your vendor is aware of the current Cellebrite situation and has applied the most recent patches. It’s also totally fair to ask your vendor what other non-Cellebrite measures are in place to ensure data integrity and defensibility. Are they sanitizing workstations between collections? Are they spot checking their data? You deserve to know.

While the risk of data corruption is most likely far lower than Marlinspike wants Cellebrite customers to believe, it is there, and the consequences of an exploitation are too great not to cover all your bases.

If you have any other questions about digital forensics, you can reach out to Contact at info@contactdiscoveryservices.com.

Why Great Legal Technology Still Needs Great People

Necessity is the mother of invention. Thus, the legal technology market is full of great inventions. There are so many that it can be intimidating, especially when everyone seems to be making the same claims that sound too good to be true.

There’s great legal tech coming from all corners of the market. Some solutions come from established names, others from up-and-coming players within the eDiscovery space. None of it does everything for everyone, but much of it can do something for someone. At Contact, we use all sorts of different platforms depending on what a given project calls for: Relativity, Nuix, Cellebrite, OpenText, CloudNine, ReadySuite, Magnet, and Metaspike, just to name a few.

As more great tech bursts onto the scene, many imagine a future where automation has significantly lessened dependence on service providers, if not eliminated them altogether. It’s great that tech is empowering people with less-specialized skillsets to do more than they could before. However, those who have more specialized legal technology skillsets are still a necessary part of the equation.

More Capabilities Require More Knowledge

Technological advancements usually mean that tech can now do more things than it could before. However, increased functionality can be a blessing and a curse. Oftentimes, as the list of things that tech can do gets longer, it becomes harder and harder for the average user to navigate extensive menus and solve the specific problem at hand.

For that reason, the widely prevalent and seemingly logical notion that better tech means less need for human help is actually not true. In fact, it’s the exact opposite. The more technology can do for us, the more it requires advanced knowledge of its capabilities, and the more it can do, the further true visionaries can push it. It’s the same way that almost anyone can hop in a canoe and paddle around a small pond, but if you want to get on a cruise ship and travel the world, you’re going to need a staff of people who have sailed before and already know the ropes.

The “increased functionality” that tech companies brag about doesn’t count for much if end users don’t even know it’s there. It counts for negative points if it’s cluttering an interface and making it harder to do tasks that were quite simple back when there were five options on a menu instead of 100. 


One potential workaround is to simply live without those other 95 options in favor of a simpler, streamlined, but less advanced platform. Essentially, pick the canoe in a small pond instead of the cruise ship. For some organizations, that may very well be the best option. For many more, there will come a day when they need one of those other 95 options.

Legal tech specialists who work with these advanced platforms day in and day out understand the full gamut of what they can do. They can make these platforms conform to your needs. What’s more efficient: teaching every single attorney and paralegal every capability, or letting an expert evaluate your matter and coach your team on the one or two functions that will be most useful?

Investing in great technology means all those extra tools are still in your toolbox when you need them. Having great people means you can actually make sense of all the whozits and whatsits galore and put them to use while ignoring the ones that don’t make sense for the matter at hand.

Both the law and technology are constantly changing. People can change with them.

Rushing to a new platform in an effort to eliminate human service providers may very well work in the short term. But what happens when states pass new laws or suddenly a platform that worked great six months ago is obsolete? Even the best technologists can still only adapt to changes in the law so fast. Trust us, we like to hire the best technologists so we know better than anyone.

Meanwhile, there are always new solutions coming out from various legal tech companies. Some of it comes from real advancements, some of it is repackaging existing technology to varying degrees. Innovation is great, but “new” doesn’t automatically equate to “innovation.”

We can’t undervalue the human element because humans need to be the ones who decide what changes are actually necessary. Humans need to be the ones who balance healthy caution with innovation. Humans can become aware of legal changes as they happen and start adapting discovery strategies when technology hasn’t caught up yet.


New technology is usually designed to solve a problem that already exists. It is not designed to solve problems that might potentially exist one day in the future if not mitigated now. Humans on the other hand can imagine various scenarios where things could go wrong in order to ensure that they don’t go wrong. They can not only find ways to give attorneys what they need right now, but help attorneys make improvements so future matters run more smoothly.

It’s easy to imagine a world where AI can scan a pile of documents and find relevant information for a particular litigation or investigation. Heck, we don’t even have to imagine it, it’s here! However, it’s a lot harder to imagine a world where AI can scan a document, see a loophole that others might potentially exploit, and close that loophole years before anyone gets the chance to litigate it. It’s equally hard to imagine a world where AI tells you how much easier the next litigation will be if you make some tweaks to current information governance policies.

Technology can be a beautiful thing. When done right, it empowers attorneys to do their jobs better without having to rely on a massive team of support staff. In the future, attorneys will be more independent thanks to solutions that are being developed now. It’s not an if, it’s a when. The important thing is forming long-lasting relationships with the right kinds of experts who are there to advise and support when you need them, but don’t view your independence as a threat.

Capitol Breach Investigations are Changing eDiscovery

On January 6, supporters of then-President Donald Trump breached the U.S. Capitol in an attempt to prevent Congress from certifying Joe Biden as the winner of the 2020 presidential election. As authorities look into who is responsible and what kinds of repercussions perpetrators should face, they’ll have over 140,000 pieces of digital media to aid their efforts. Throughout the Capitol Breach investigations, officials will be reliant on something much of the world knows nothing about: eDiscovery.

eDiscovery is the art and science of sorting through digital data to find the relevant pieces needed to build a legal case. Five to ten years ago, much of this data came in the form of emails and their attachments. However, many of the arrests relating to the Capitol riots cite digital evidence uploaded to social media sites.

One Connecticut man was charged because of a YouTube video. Two Massachusetts citizens were arrested because of photos on Twitter. A New Mexico County Commissioner was connected to the riots in part because of videos he posted on a “Cowboys for Trump” Facebook page. A man from Texas was arrested in part due to his posts on Parler. One such post allegedly included a threat to return to Washington, D.C. on January 19 armed and ready for insurrection: “We will come in numbers that no standing army or police agency can match,” the post allegedly states. 

That shift away from email-exclusive discovery strategies was already happening, but the Capitol riots may expedite it. Investigators are still sorting through digital data, and we likely haven’t seen the last of arrests related to this incident. Many cases will hinge on whether eDiscovery professionals can connect individuals to the scene and whether digital evidence reveals offenders’ true intentions. Either way, the Capitol breach investigations shed light on what kind of technology is available and how law enforcement is using it. Depending on the outcomes of these cases, we may see social media-based data integrated into discovery on a much larger scale.

The Value of Geolocation

Ordinary people probably know that investigators can find incriminating things people have published on the internet. However, they might be surprised to learn just how easy it is to figure out which electronic devices were actually at the Capitol on the day of the attack. Geolocation, or more specifically “geofencing,” involves drawing a virtual boundary around a specific location and then using technology such as GPS or Bluetooth to find devices within that boundary.
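The core test is simple geometry. As a rough sketch, a circular geofence reduces to a great-circle distance check; the coordinates and 500-meter radius below are illustrative, and real systems work from device-reported GPS, cell, or Bluetooth data rather than a single lookup:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, center, radius_m):
    """True if a (lat, lon) point falls within radius_m of center."""
    return haversine_m(*point, *center) <= radius_m

# Illustrative coordinates near the U.S. Capitol, with a 500 m fence.
CAPITOL = (38.8899, -77.0091)
print(inside_geofence((38.8905, -77.0085), CAPITOL, 500))  # ~85 m away: True
print(inside_geofence((38.9072, -77.0369), CAPITOL, 500))  # ~3 km away: False
```

Every device whose reported coordinates pass that check during the window of interest becomes a lead worth cross-referencing against other evidence.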

“Right now, law enforcement can pull social media information from a geolocation at will or with relatively few roadblocks,” says James Whitehead, Contact Discovery’s Associate Director of Digital Forensics. “Law enforcement agencies can capture wireless communications and pull packets off wires. This technology/capability is expanding among law enforcement departments at a rapid pace.”

This is important because many people have said hyperbolic things on the internet, and that in and of itself isn’t a crime. One of the challenges facing investigators is separating those who simply wrote inflammatory messages from those who acted on their intent. With geolocation, investigators can prove that someone who published violent threats online was actually at the Capitol at the time of the attack.

An offender’s sentence could also vary quite a bit if prosecutors can use social media posts to prove there was prior intent to attack the Capitol. That’s a very different scenario from someone who showed up for what they thought was a peaceful protest, got caught in the moment, and then showed remorse after the fact.

Social media companies are also aiding law enforcement in matching locations to other parts of a user’s profile.

“At one point Facebook had 100+ metadata fields for its site,” Whitehead says. “This includes user names, likes, names of the likers, time of the likes and/or shares, and then most if not everything is geolocated. Often these metadata records include associations to the authoring/viewing device’s unique identifiers including IP address, which further aids in geolocating.”

In the case of Twitter, investigators can collect tweets in a geolocated fence and by hashtag.

“I could essentially drill down to the Capitol and then to hashtags of interest,” says Whitehead. “If I expanded my resources, I could cross-reference known individuals and pull all their tweets and anyone who shared or viewed them within a geofenced area.”
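Whitehead’s “drill down” amounts to intersecting two filters: location and topic. A toy sketch with made-up post records and an illustrative rectangular bounding box (real collections would come from platform APIs under proper legal process, and a production fence would rarely be a simple rectangle):

```python
# Hypothetical post records; a real collection would come from a
# platform API export or a forensic acquisition.
posts = [
    {"user": "user_a", "text": "on the steps #CapitolRiot",
     "lat": 38.8901, "lon": -77.0090},
    {"user": "user_b", "text": "watching at home #CapitolRiot",
     "lat": 40.7128, "lon": -74.0060},
    {"user": "user_c", "text": "lunch downtown",
     "lat": 38.8903, "lon": -77.0087},
]

# A rectangular geofence around the Capitol grounds (illustrative bounds).
FENCE = {"lat_min": 38.8880, "lat_max": 38.8920,
         "lon_min": -77.0120, "lon_max": -77.0060}

def in_fence(post, fence):
    """True if the post's coordinates fall inside the bounding box."""
    return (fence["lat_min"] <= post["lat"] <= fence["lat_max"]
            and fence["lon_min"] <= post["lon"] <= fence["lon_max"])

# Drill down to the fenced area, then to hashtags of interest:
# user_b has the hashtag but was far away; user_c was present but
# posted nothing relevant; only user_a matches both filters.
hits = [p["user"] for p in posts
        if in_fence(p, FENCE) and "#CapitolRiot" in p["text"]]
print(hits)  # ['user_a']
```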

That combination of what people said online and their whereabouts at the time of the Capitol attacks gives investigators added insight. Suddenly they’re able to comprehend not only the “what” but the “who,” “where,” and “why” as well. Geolocation could also play an important role in providing alibis to those who published inflammatory statements, but were not physically present at the Capitol at the time of the attack.

Constructing Larger Narratives

Not only can law enforcement use social media data to pinpoint where suspects were the day of the attacks, they can also use it to show what kinds of things suspects were writing weeks before. This helps investigators tell a more complete story.

One suspect, Brendan Hunt, allegedly called for the murder of elected officials on an online video platform called BitChute. However, the charges against him also mention a Facebook post from on or about December 6, 2020, a full month before the Capitol breach. According to the affidavit, this post called for “revenge on Democrats” and a “public execution” of Senator Chuck Schumer and Representatives Nancy Pelosi and Alexandria Ocasio-Cortez.

“If you [Trump] don’t do it, the citizenry will,” reads Hunt’s post.

Another case revolves around a Utah man named John Earle Sullivan. Sullivan handed over 50 minutes of video footage to authorities. He’s also uploaded large amounts of video content regarding the riots to YouTube under the name JaydenX. The criminal complaint against Sullivan claims his voice can be heard on the tape saying celebratory things like “We accomplished this s**t. We did this together.”

At the time of this writing, JaydenX’s YouTube channel not only features footage of the Capitol riots on January 6, but other MAGA, Proud Boys, and Black Lives Matter protests dating back to June 1, 2020. If you’re the defense, you might argue this YouTube account proves that Sullivan is just an independent video journalist, attending and recording any protest he thinks will be of interest regardless of the cause. If you’re the prosecution, you might use it to establish that Sullivan is a dangerous agent of chaos and has been for some time. Either way, it’s hard to imagine that legal teams will look at what’s likely hundreds of hours of political protest footage from the last six months and think that only the January 6 footage is relevant.

General Awareness of ESI in Law Enforcement

Perhaps most importantly of all, the riots have made the general public more aware of how digital data can be helpful to law enforcement. Sometimes, public ignorance can aid investigators. People incriminate themselves largely because they don’t know their messages can be found later. The events at the Capitol have created large scale awareness of the role that social media posts and other electronic messages can play in investigations.  

That awareness is a double-edged sword. On the one hand, it could drive bad actors to alternative platforms where they’re harder to find. On a more optimistic note, well-intentioned people are more likely to be on the lookout for digital evidence in their day-to-day lives. Heck, one Twitter user even mentioned using dating apps as a way of getting perpetrators to volunteer evidence against themselves.

Only time will tell how this case shakes up the world of eDiscovery. What won’t change is the critical role that legal technology plays in finding the truth.

Subscribe to the Contact Blog to receive more updates on all things eDiscovery.