By Stacey A. Cozewith
As published by the New Jersey State Bar Association, New Jersey Lawyer
December 2024, No. 351
Artificial intelligence is technology that allows machines, particularly computers, to perform tasks that ordinarily require human intelligence. Generative AI is a form of artificial intelligence that can create new content such as text, images, music, audio, and video. One of the most common examples of generative AI is ChatGPT, a program that can answer questions, write text, create images, and more, based on a prompt entered by the user.
As many family law practitioners know, domestic violence is a pattern of abusive behavior used to gain or maintain power and control over a person within a relationship. It is codified in New Jersey under the Prevention of Domestic Violence Act of 1991, which permits the entry of a final restraining order upon the occurrence of certain criminal offenses between two people who share a relationship defined in the act.
In a domestic violence FRO hearing, the evidence presented at trial often consists of phone records, video recordings, emails, text messages, and other communication “proofs.” What happens when one party uses AI to create false communications or videos that show harassing contact or an assault? How do we, as attorneys, combat potentially false evidence created with AI in the domestic violence context? This article addresses these difficult questions.
Deepfakes
Deepfake technology uses AI to create synthesized or falsified images and videos. A deepfake is an artificial image, video, or recording that has been altered so that it appears to show a person who was not actually present, or someone doing or wearing something that they did not actually do or wear. Many people may recall the incident in March 2023 when a deepfake photo of Pope Francis wearing a modern white puffer coat and a large jeweled crucifix as he walked through St. Peter’s Square was widely circulated. This photo was a rude awakening for many across the internet, who realized that AI could generate an image that looked so authentic.
There is a wide range of deepfake technologies. Deepfakes can be used to spread misinformation, presenting false information as though it came from trusted sources. For example, a deepfake could be used to alter or fabricate a message from the government. In 2018, actor and director Jordan Peele circulated a video in which he used deepfake technology to transfer his own facial movements onto former President Barack Obama to produce a falsified public service announcement.
In the domestic violence context, then, a trusted source might be telephone company records, what appear to be text messages between the parties, or email correspondence. Attorneys proffering telephone records at a hearing, for example, typically trust in the veracity of those documents. With the advances in AI, however, these too can be altered or fabricated outright.
Deepfakes can be used to create more than just harassing or false communications; they can also be used to create videos, including pornography, without consent. There are applications that “nudify” an image, turning an ordinary photo into a fake pornographic one. AI can also be used to create a pornographic video using the image of a person who was never present for any filming. Several celebrities have been victims of deepfake pornographic images, but you do not have to be a celebrity to be a target of this practice.
As many family law practitioners will attest, it is not uncommon for perpetrators of domestic violence to threaten to send images or communications to the victim’s employer or family members. The advent of deepfake pornography makes this threat even easier to carry out.
Not all deepfakes are bad, however. In the context of entertainment, you can put your face on an elf that dances to holiday music, or your teenager can be excited to see their sports hero appear in their favorite video game. Deepfakes can be used for entertainment or even to insert into the family wedding photos the cousin who missed the ceremony.
In terms of accessibility, the technology used to create deepfakes is widely available. There are downloadable programs and applications that allow those without a technical background or programming experience to create deepfakes. The quality of the final product may vary; however, the more photos and videos of the target a person provides, and the more practice the person gains with the technology, the more convincing the results become. Given the popularity of social media, a perpetrator usually has a plethora of videos and photographs readily available from which to create a deepfake.
So What Can We Do About It?
As of this writing, New Jersey does not have any laws in place that regulate deepfakes. However, there is a bill pending in the state Assembly and Senate that aims to limit the use of AI to create deceptive digital content. The bill, S976, “prohibits deepfake pornography and imposes criminal and civil penalties for non-consensual disclosure.” If passed, this bill would amend Section 1 of P.L.2003, c.206, New Jersey’s invasion of privacy law, and N.J.S.A. 2C:24-4, New Jersey’s child endangerment law, to make disseminating or creating “deceptive audio or visual media” punishable crimes. The bill has been introduced and is pending in the Judiciary Committees.
It has always been common for perpetrators of domestic violence to disseminate, or threaten to disseminate, explicit videos, photos, or texts to a victim’s family, friends, employer, and coworkers. With the availability of deepfake technology, however, perpetrators can easily generate fake pornographic media of their victims to use as blackmail. Moreover, an alleged perpetrator of domestic violence can sometimes actually be the victim, because the “evidence” of harassment most commonly provided to a court can be manipulated or manufactured in a very realistic manner. So, how do we prove that the evidence is false?
In FRO trials, both parties have the right to enter evidence to support their case. However, because a domestic violence trial is a summary action, a hearing that is meant to be “short, concise and immediate,” discovery is limited.[1] Thus, other than the incidents set forth in the temporary restraining order, there may be no notice as to what evidence the other party plans to present. This means that the alleged victim could seek to introduce a manipulated voicemail as evidence at trial, and the other party would be seeing or hearing it for the first time in court. The party against whom the evidence is introduced would likely need to seek an adjournment of the trial in order to prepare an argument demonstrating that the evidence has been falsified.
This poses its own set of hardships. Under Rule 5:7A(e), the FRO hearing is to take place within 10 days of the entry of the TRO. Thus, under Court Rule, domestic violence matters are to be speedy and expeditious. This does not mean that all other safeguards are thrown out the window, however. The New Jersey Supreme Court has held that the short time frame of domestic violence trials should not impinge on the parties’ due process rights.
“Our courts have broad discretion to reject a request for an adjournment that is ill founded or designed only to create delay, but they should liberally grant one that is based on an expansion of factual assertions that form the heart of the complaint for relief.”[2] Moreover, when it is the defendant seeking an adjournment, granting same is seen as posing “no risk to plaintiff…[as] courts are empowered to continue temporary restraints during the pendency of an adjournment, thus fully protecting the putative victim while ensuring that defendant’s due process rights are safeguarded as well.”[3]
In addition, “even in summary actions, the trial court has the discretion to authorize discovery for good cause shown.”[4] Moreover, the Supreme Court held that “the ten-day provision does not preclude a continuance where fundamental fairness dictates allowing a defendant additional time. Indeed, to the extent that compliance with the ten-day provision precludes meaningful notice and an opportunity to defend, the provision must yield to due process requirements.”[5]
As Judge Thomas H. Dilts determined in the Depos case, discovery is permitted in a summary action, such as a domestic violence hearing, only upon a showing of good cause.[6] In fact, the Appellate Division indicated its agreement with Judge Dilts in Crespo v. Crespo[7] when it stated that “in compelling circumstances, where a party’s ability to adequately present evidence during a domestic violence action may be significantly impaired, a trial judge may, in the exercise of sound discretion, permit limited discovery in order to prevent an injustice. Judges are not required to be oblivious to a party’s claim for discovery in compelling circumstances even though the court rules do not expressly authorize relief.”
Accordingly, while there are no reported decisions (yet) in which an adjournment was requested to obtain an expert opinion on the veracity of evidence, a brief adjournment to retain such an expert should be requested and granted. When presented with two competing sets of call logs, for example, where the predicate act of domestic violence is harassment by way of repeated telephone calls at odd and inconvenient hours, how would a court be able to determine which set is accurate without an expert assisting in that determination? A court would rely in part on the credibility and testimony of the parties. However, when it comes down to two competing versions of what occurred during the incident in question, a court must be able to rely on the documentation and the veracity of same.
In conclusion, when presented with evidence from an alleged victim that your client claims is falsified or depicts something that never happened, an adjournment must, at the very least, be requested. The adjournment would allow counsel to formulate an argument and potentially retain an expert to analyze the evidence being presented. Researchers are developing new methods to determine whether an image, video, or document is a deepfake. Of course, they are using AI models to look for color, sound, or image abnormalities, digital watermarks, and other indicia of falsification or manipulation. AI is both the problem and the potential answer in these scenarios.
About Stacey A. Cozewith
Stacey A. Cozewith is a partner at Sarno da Costa D’Aniello Maceri LLC and concentrates her practice on matters related to family law. Stacey strongly advocates for her clients, offering her expertise through a unique balance of compassion and tenacity.
[1] Depos v. Depos, 307 N.J. Super. 396, 399 (Ch. Div. 1997).
[2] J.D. v. M.D.F., 207 N.J. 458, 480 (2011).
[3] Id.
[4] R.K. v. D.L., 434 N.J. Super. 113, 133 (App. Div. 2014).
[5] H.E.S. v. J.C.S., 175 N.J. 309, 323 (2003).
[6] Depos, supra, at 400.
[7] 408 N.J. Super. 25, 44-45 (App. Div. 2009).