Robert Diab

Are sexual deepfakes not a crime in Canada?

January 11, 2025

In late December, the Toronto Star ran a story about a boy in high school who had created a series of pornographic deepfakes of girls at his school using images of their faces taken from Instagram. The nude pictures were discovered on his phone inadvertently, during a sleepover, when a friend went looking for a selfie taken on his device. Once the images were discovered, the girls were alerted (with screenshots) and soon police were at his door.

Police grappled with whether creating the images was criminal. After questioning other boys believed to have seen the images and consulting with Crown counsel, police decided not to proceed. But they invited the girls and their parents to the station to explain that, without more evidence that the images had been shared, they didn't think it was a crime.

Police appear to have concluded that only one provision of the Code applied — possession of child pornography in section 163.1 — and there was a good chance the boy could rely on the "private use" exception in R v Sharpe (SCC 2001).

The story points to a larger gap or ambiguity in Canadian criminal law around sexual deepfakes — one that Professor Suzie Dunn (Dalhousie) helped explain to the Star.

As she points out in the story and details at greater length in an informative article forthcoming in the McGill Law Journal, two Criminal Code provisions are relevant to sexual deepfakes: the prohibition in section 162.1 on non-consensual distribution of intimate images (NCII) and the prohibition in section 163.1 on making, distributing, or possessing child porn.

The first offence (intimate images) applies to victims of any age. But as Dunn notes, on a plain reading, 162.1 captures only the distribution of authentic images. It prohibits sharing an "intimate image of a person", defining this as "a visual recording of a person made by any means … in which the person is nude … or is engaged in explicit sexual activity."

She notes that 162.1 does not appear to have been applied to a deepfake in any Canadian case. A search of all cases involving 162.1 turns up only a few hits.

But as Dunn also notes, a Quebec court has applied the child porn provision in the Criminal Code (s 163.1) to capture the creation of a deepfake video. There is also a BC case from early 2024 in which the court applied section 163.1 where the accused created images using an app called 'DeepNude' and shared them with the victim and her friends. (Both are sentencing cases.)

Was it private use?

To be clear, section 163.1 appears to capture deepfake porn involving persons under 18 because it defines ‘child pornography’ to mean

a photographic… or other visual representation, whether or not it was made by electronic or mechanical means … that shows a person who is or is depicted as being under the age of eighteen years and is engaged in or is depicted as engaged in explicit sexual activity.

In other words, the image doesn’t have to be of the person him or herself. It can be an image that depicts them. But they have to be under 18.

In the Toronto high school case, the boy clearly created child pornography within the meaning of 163.1. The question is whether his possession of it fell within the “private use” exception in Sharpe.

In that case, the Supreme Court held that, to avoid an unjustifiable limit on free expression under the Charter, a defence of "private use" had to be read into the child porn offence provisions in 163.1. The defence contemplates two exceptions.

The first involves “the possession of expressive material created through the efforts of a single person and held by that person alone, exclusively for his or her own personal use.” The second involves recordings of lawful sexual activity for private use “created with the consent of those persons depicted.”

In the Star article, Dunn queries whether the first exception would apply here, since the boy would not have created the images through his efforts alone: he relied on an AI app, which likely entails storage of the images on company servers.

Again, police or Crown probably concluded that it would be risky to proceed with a prosecution under 163.1 without clearer evidence that the boy had shared the images — thus taking him out of the Sharpe exception (without any debate about AI and company servers).

Are deepfakes not intimate images under the Code?

But I want to pick up another thread in Dunn’s comments on the gap in the Code on deepfakes — one that pertains to the other provision at issue, the prohibition on non-consensual sharing of intimate images of persons of any age.

I agree that on a plain reading of 162.1 of the Criminal Code, the intimate images must be of the person themselves. But the Supreme Court of Canada has endorsed departures from the principle of strict construction in criminal law where a narrow reading would give rise to arbitrariness or defeat the larger aim or purpose of the provision.

In R v Paré (SCC 1987), the accused murdered a boy two minutes after committing an indecent assault against him. A provision still found in the Code (231(5)) states that "murder is first degree murder in respect of a person when the death is caused by that person while committing" indecent assault or other offences. Paré argued that because the murder happened two minutes later, it was not caused 'while committing' the assault, and that he was entitled to a literal reading under the principle of strict construction that courts have applied in criminal law for centuries.

The Court held that it was time to update the doctrine. The original reasons for it (that many offences once carried capital punishment) have been "substantially eroded". Ambiguities should still be settled in favour of the accused, since criminal penalties are severe. But the question should now be whether "the narrow interpretation of 'while committing' is a reasonable one, given the scheme and purpose of the legislation."

The narrow reading wasn’t reasonable. We couldn’t assume Parliament meant to limit the meaning of ‘while committing’ to ‘simultaneously,’ because, as Justice Wilson held, doing so would result in drawing arbitrary lines between when the assault ended and the murder began. She also held that a wider reading (one that includes a murder immediately following an assault) would be the one that “best expresses the policy considerations that underlie the provision”, i.e., more serious punishment (first degree) for more serious conduct.

Should Paré apply here?

We have the same disconnect from larger purposes, and the same arbitrariness, if we read 162.1 strictly, as applying only to real images.

One might argue that the purpose of 162.1 is to prevent not simply the non-consensual distribution of intimate images, but violations of a person’s sexual privacy or integrity through the sharing of intimate images. If one could circumvent the application of 162.1 by merely doctoring a real image of one’s partner nude before posting it online — allowing one to say “but it isn’t actually her body” — that would make little sense.

Put another way, the question is whether 162.1 makes it an offence to share intimate pictures only of a person him or herself, or also of what looks to be him or her. If the offence doesn't include the latter, how do we distinguish between a grainy picture of you good enough to make out and a doctored picture of you that seems real enough to be convincing? Why would non-consensual distribution of the one be criminalized and not the other?

One reason might be that in the one case, a person consented to the creation of the image but not the distribution; in the other case, they consented to neither.

But the gravamen of the offence lies in the non-consensual distribution of an intimate image. Do we not find the same gravamen in the sharing of a deepfake? Is the culprit not trying to do the same thing: compromise the victim’s sexual integrity through exposure?

We might add that while section 162.1 clearly contemplates the distribution of intimate images a person consented to have taken of them, it doesn't require this. The definition in 162.1(2) does say "intimate image means a visual recording of a person made by any means including…" Those means could include AI. So why must the image be of the person themselves? After all, every digital image is doctored to some degree by our devices.

Private law remedies?

Dunn’s forthcoming McGill paper notes that various provinces (aside from Ontario) have passed tort legislation making the non-consensual distribution of intimate images actionable without proof of damages. And as she points out, all are worded in ways that clearly capture deepfakes. For example, in BC’s act, intimate image “means a visual recording or visual simultaneous representation of an individual, whether or not the individual is identifiable and whether or not the image has been altered in any way, in which the individual is or is depicted as…” engaged in sexual activity, nude or “nearly nude.”

Manitoba’s act was amended in 2024 to be more explicit about deepfakes, adding as a defined term "fake intimate image", which means “any type of visual recording … that in a reasonably convincing manner, falsely depicts an identifiable person (i) as being nude or exposing their genital organs, anal region or breasts, or (ii) engaging in explicit sexual activity.”

These provincial statutes set out various ways to try to have an image taken down or deleted once circulated: orders against platforms, third parties, and search engines. All of them are potentially helpful, but how helpful (or realistic) is unclear. The federal Online Harms Act in Bill C-63 (which just died on the order paper with the prorogation of Parliament) would have placed a host of obligations on platforms to prevent the circulation of NCII or to take them down. I expect that bill will be reprised at some point.

A cursory search on CanLII for cases applying these statutes uncovers a few dozen decisions, mostly seeking monetary damages for threats to distribute NCII or for posting them. The focus appears to be on money rather than removal of the images. And to my knowledge, none involve deepfakes.

It may be too early to assess whether tort law will be an effective tool for curbing the use of AI to create and share sexual deepfakes. But soon, I suspect, both tort and criminal law provisions will begin to be tested on this front.
