Will Canada’s new hate crime bill impact free speech online?

Last week, the Liberal government tabled Bill C-9, containing three new criminal offences targeting hate speech — as a response to the alarming and appalling rise in antisemitic violence in Canada in the past two years, along with attacks against places of worship, schools, and community centres.

The new offences primarily capture acts of intimidation of a physical sort: blocking access to a synagogue, mosque, or temple, or promoting hatred by waving flags or symbols of groups listed as terrorist entities.

But two of the offences will apply to speech online and raise questions for me about where they fit in the panoply of hate speech offences in Canada — and whether we’re likely to see further regulation of online speech this fall.

I thought I’d write this short post to help situate the new offences in the Criminal Code’s existing hate speech provisions, highlight what they add to what we already have, and remind readers about Bill C-36 in 2021, which sought to revive a human rights law that would make hate speech a form of actionable discrimination — since it may be coming back.

Existing hate crimes in the Criminal Code (and which of them capture online speech)

Briefly, the Code criminalizes hate speech in the following ways: advocating or promoting genocide against an identifiable group (s. 318); publicly inciting hatred in a way likely to lead to a breach of the peace (s. 319(1)); wilfully promoting hatred against an identifiable group (s. 319(2)); and wilfully promoting antisemitism by condoning, denying, or downplaying the Holocaust (s. 319(2.1)).

Key provisions for targeting online speech are those in sections 319(1) and (2) — public incitement and wilful promotion of hatred. They capture speech online given the way the Code defines ‘public place’ and ‘statements’ in these provisions: “any place to which the public have access as of right or by invitation” and “words spoken or written or recorded electronically” (s. 319(7)).

In at least three cases, courts have applied the promotion or incitement offence to speech online. But two of these were decisions about committal to trial (here and here), and the third was a sentencing case.

In 1990, the Supreme Court of Canada in Keegstra held that section 319(2) — wilful promotion of hatred — infringed the freedom of expression in section 2(b) of the Charter because it targets speech content, but found it to be a reasonable limit on the right under section 1.

The majority in Keegstra held the government’s aim of preventing the social harm of hate speech to be pressing and substantial. The offence minimally impaired expression for various reasons, including its being limited to ‘hatred,’ defined as the intense emotion of ‘vilification’ or ‘detestation’ rather than ‘disdain’ or ‘dislike.’ The dissent found the concept of hatred too vague and subjective, and the scope of the offence too broad given that it didn’t require statements likely to result in violence.

What Bill C-9 adds to the picture

First, C-9 will codify the Keegstra definition of ‘hatred,’ as elaborated in the Supreme Court’s 2013 decision in Whatcott.

Or does it?

The Canadian Constitution Foundation says the definition in the bill “appears to lower the bar for hate speech set by the Supreme Court of Canada in cases like R v Keegstra and R v Whatcott, which could chill speech and public debate.”

In Keegstra, Dickson CJC held: “the term ‘hatred’ [in 319(2)] connotes emotion of an intense and extreme nature that is clearly associated with vilification and detestation.”

In Whatcott, Rothstein J, for the Court, held:

[w]here the term “hatred” is used in the context of a prohibition of expression in human rights legislation, it should be applied objectively to determine whether a reasonable person, aware of the context and circumstances, would view the expression as likely to expose a person or persons to detestation and vilification on the basis of a prohibited ground of discrimination.

But to be clear, the Supreme Court had already taken an objective approach to hatred in the criminal context in Krymowski (2005). There it held that judges must “look at the totality of the evidence and draw appropriate inferences” to decide whether an accused person “intended to target” an identifiable group.

C-9 will add to 319(7): “hatred means the emotion that involves detestation or vilification and that is stronger than disdain or dislike; (haine)”

It will also add in 319(6): “For greater certainty, the communication of a statement does not incite or promote hatred, for the purposes of this section, solely because it discredits, humiliates, hurts or offends.”

I’m not convinced that C-9 lowers the bar for criminalizing hate speech by adding these provisions.

New offences

C-9 also adds three new offences. Summarizing from the DoJ’s press release, the bill will: create an intimidation offence aimed at those who block access to places of worship, schools, and community centres; create a hate crime offence of committing an offence when motivated by hatred; and create an offence of wilfully promoting hatred through the public display of symbols of listed terrorist entities.

The second and third offences will apply to speech online. I say this because the third offence (wilful promotion of hatred by publicly displaying the symbols of a terrorist group) will be slotted into 319, thus drawing on the definition of ‘public place’ noted above.

The second offence here — committing any indictable offence when “motivated by hatred based on race, national or ethnic origin, language, colour, religion, sex, age, mental or physical disability, sexual orientation or gender identity or expression” — points to the new definition of hatred to be added to 319(7).

But it will apply to speech online by virtue of the underlying offence possibly being one under 319(1) or (2). For example, if a white supremacist posts antisemitic or Islamophobic content on a blog or social media platform that meets the test for public incitement or wilful promotion under 319(1) or (2), they can be charged with this additional offence.

Anaïs Bussières McNicoll of the Canadian Civil Liberties Association believes the new hate-motivation offence may violate the presumption of innocence:

The new hate crime offence risks stigmatizing defendants throughout the entire judicial process, while they are still presumed innocent. The sentencing judge should continue to be responsible for labeling a defendant’s motivations and weighing their aggravating impact on sentencing, once a defendant has been found guilty of a criminal offence and all relevant evidence has been heard.

Is the new ‘displaying hate symbols’ offence redundant?

Richard Moon, Canada’s leading authority on the Charter right to free speech, in a post offering initial impressions of C-9, put his finger on a key issue about the wilful promotion by flag-waving offence: it doesn’t appear to capture anything new.

As Moon writes:

It is unclear what this provision adds to the existing ban [on wilful promotion of hatred] and indeed whether it will prohibit the public display of the Hezbollah or Hamas flags, which seems to be its purpose.

The public display of a Nazi flag will ordinarily be viewed as communication that wilfully promotes hatred, contrary to both the existing code provision and the new provision in Bill C-9. But… it is less clear that the display of the flags of Hezbollah, Hamas, or the Popular Front for the Liberation of Palestine can be seen, at least beyond a reasonable doubt, as “wilfully” promoting racial or religious hatred, since the formal mandate of these groups is anti-Zionist rather than antisemitic.

Put otherwise, waving a Nazi flag can only imply wilful promotion of hatred; waving the flag of a group whose meaning is ambiguous (terrorist org or, as some believe, the resistance) could only be wilful promotion if accompanied by other statements that tie the flag-waving to hatred rather than to some other belief.

Which is really just a way of saying: you can’t prove wilful promotion by flag waving unless you can prove it’s wilful promotion. And if you can do that, then you don’t need this new offence.

I agree with Moon that this new offence may be “simply performative.”

But then what did Justice Minister Sean Fraser mean when he said in the press conference introducing C-9 that these new provisions don’t ban wearing these symbols as you walk down the street — including Nazi insignia? (As the Globe reports, Fraser said that whether it’s criminal would “depend upon the person’s behaviour and the circumstances.”)

The bill contemplates a fine line between walking down the street in a Nazi t-shirt and standing on the steps of the Art Gallery in Vancouver (where large protests take place) waving a big Nazi flag. In the one case, you are merely expressing a belief; in the other, you are wilfully promoting hatred, because the public display — i.e., flag-waving rather than mere wearing — entails promotion of hatred rather than mere expression.

Again, I think we have this in 319(2) as it is. I don’t see how the new offence makes it easier for Crown to obtain a conviction for wilful promotion — with or without public display of a symbol — than it is now.

The possible return of a human rights law on hate speech?

Bill C-36, as you may recall, contained a version of the second offence here — making it a new offence to commit an indictable offence when motivated by hate — and combined it with the revival of a provision repealed from the Canadian Human Rights Act in 2013 that allowed for a human rights complaint for hate speech.

C-36 proposed to revive the old section 13 of the Act to make it a discriminatory practice to communicate “hate speech by means of the Internet… in a context in which the hate speech is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.”

The bill codified the Supreme Court’s more limited definition of hatred in Whatcott, restricting the potential scope of a human rights action against hate speech.

Briefly, in Whatcott the Court held that a provision in Saskatchewan’s human rights law banning hate speech infringed s. 2(b) and was, in part, overbroad. The Court severed the words “ridicules, belittles or otherwise affronts the dignity of,” read ‘hatred’ down to detestation and vilification, and found the remaining prohibition to be a reasonable limit under section 1.

We may see the return of this provision this fall. I’ll save a discussion of its merits for if and when a new bill is tabled — along with the question of whether reviving a human rights remedy would help curb the polarization and algorithmic amplification of hate speech that are upending so much of our politics these days.


Authorship After AI


A new article in AI Magazine draws an illuminating comparison between what AI is doing to writing and what photography did to art in the 1840s. It helps to make sense of a question many of us are thinking about more often: does increasing reliance on AI signal the end of writing?

The insights in this piece resonate with me, given the quantum leap in my own use of AI over the past few months.

I’m now making such frequent use of it — integrating it into my research, writing, and editing — that it has me wondering what’s really happening.

As I describe in a piece for the CBA’s National Magazine, I’ve been dipping in and out of Claude, ChatGPT, and Perplexity constantly — to get a quicker lay of the land on new topics, reword sentences, and tighten drafts. But the pace and intensity feel like a transformation as momentous as the shift from typewriter to computer, or from paper-based research to the internet.

To be clear, I’m not using AI to create texts. But as I use it more often to edit, I find myself thinking about my claim to authorship. At what point does a suggestion — or a rewrite of a paragraph — mean it’s no longer me?

In “Reclaiming authorship in the age of generative AI: From panic to possibility,” Mohsen Askari argues that we need to abandon the notion that “authorship is defined by the absence of tools” — that using AI contaminates the purity of writing.

He sees AI as part of a continuum of tools from the pen to the typewriter to the reference manager. His central claim is provocative: “[w]hat matters is not whether help was involved, but whether the author stands behind the final work.”

Why AI is like early photography

The sharpest part of his piece is the analogy to photography in France in 1839. The painter Paul Delaroche famously declared: “From today, painting is dead!” The camera’s ability to mechanically capture the world posed an existential threat to painting, and the reaction then was not unlike our response to AI in writing: “shock, suspicion, and widespread declarations of the end of a creative tradition.”

Early photography was dismissed as craftless. It seemed to require “no imagination, no hand, and no labour.” The prestige artists earned for mastery evaporated. Photography “democratized image-making.”

But painting didn’t die. It ceased to be about reproduction and exploded with creativity through abstraction and experimentation. Meanwhile, photography itself became an art form: “Mastery emerged not from the act of clicking a shutter, but from timing, framing, lighting, and selection. In short: from judgment.”

Askari sees the same happening with AI. Like photography, it produces results quickly and provokes fears of “fraudulence and depersonalization.” Yet using AI well involves more than typing a prompt; it requires “knowing what to ask, how to evaluate, when to refine, and when to reject.”

AI can produce fluent text, he notes, but fluency is “not the same as quality, insight, or originality.” The real work lies in “asking the right questions, rephrasing, discarding early results, and returning with a clearer intent.”

For Askari, writing with AI remains authorship when it involves real “sculpting”: “The user curates meaning. They filter the signal from the noise. Above all they remain accountable for what is kept and what is removed.”

But is it really you?

Askari may be stretching it too far. Surely authorship is more than “augmentation” or “curation.”

But he has a point: authorship can be authentic even if not every sentence is one’s own. In conversation, we often grope toward an idea only for a friend to supply the better phrasing, which we readily adopt. They give us the words; we provide the idea. The proof is that our friend doesn’t just nod but lights up with an “aha.” This is the distinction Askari seems to be after.

For people pressed for time, living with “interrupted attention spans,” or working in “linguistically diverse environments,” AI, he says, isn’t a “crutch or a cheat,” but a “tool that enables a different kind of flow.”

In the academy, the flow he describes sparks anxiety because AI suddenly makes “easier, faster, and more accessible” the skills that once took years to develop: “the ability to write well, think clearly, and publish independently.”

We’re still aiming to cultivate these skills rather than handing them off to AI. How do we do this when AI offers to do it all for us?

What about student assignments?

When students hand in work with a strong trace of AI—a paper more polished than we suspect they would have written on their own—Askari urges us to question the reflexive view that AI use entails “the absence of thought.” He suggests we see the tool not as disrupting writing, but as having “supported” it.

The question, he writes, is not “whether AI was used, but whether the author remained present, intentional, and accountable throughout the process.”

This framing helps.

When I recently used AI to revise the opening of a piece, I wondered whether it was still my writing if I adopted the suggestion. Askari’s point is that it’s yours not because you accept AI’s wording, but because what AI is rewording is your idea.

If there’s a visible trace between your draft and AI’s output, then yes, you wrote it. AI only helped.

But what about the term paper I received last spring in one of my courses that seemed written entirely by AI? Askari would say this wasn’t authorship any more than pointing a camera out a window and clicking would be art.

It fails because the student had not “remained present, intentional, and accountable throughout the process.” They simply pointed and clicked. And that’s why it felt wrong.

We might conclude that what matters is not whether AI polished a student’s prose, but whether we can still detect presence and intentionality. Original ideas, analogies, connections.

The line will often be subtle. How much originality or intent is enough? How do we measure it? How do we teach students not to over-rely on AI?

Not easy questions. But Askari’s insights remain useful.


When AI Turns Deadly: Are Model Makers Responsible?

This week, parents of Adam Raine, a California teen who committed suicide in April after a lengthy interaction with GPT-4o, filed a lawsuit against OpenAI and its CEO, Sam Altman. The case follows a suit brought in late 2024 by the parents of a Florida teen, Sewell Setzer, who took his own life after engaging with a Character.AI chatbot impersonating Daenerys Targaryen from Game of Thrones.

In early August, ChatGPT was also implicated in a murder-suicide in Connecticut involving 56-year-old tech worker Stein-Erik Soelberg, who had a history of mental illness. Although the chatbot did not suggest that he murder his mother, it appears to have fueled Soelberg’s paranoid delusions, which led him to do so.

OpenAI and other companies have been quick to respond with blog posts and press releases outlining steps they are taking to mitigate risks from misuse of their models.

This raises a larger question left unanswered in Canada after the Artificial Intelligence and Data Act died on the order paper in early 2025, when the last Parliament ended: what guardrails exist in Canadian law to govern the harmful uses of generative AI?

Like the United States, Canada has no national or provincial legislation designed to impose liability on AI companies for harms caused by their products. The European Union passed an AI Act in 2024 that does impose liability for harmful AI systems.

But in both the EU law and the Canadian bill that was abandoned, there is a notable flaw in how liability is conceived.

I explored this in a paper I wrote in late 2023, surveying early reports of harmful uses of language models (a suicide in Belgium, help with bomb-making, and other cases).

My article garnered some interest on SSRN but only recently appeared in print (it was published this month). The core argument was this:

Both [the European and Canadian AI] bills are premised on the ability to quantify in advance and to a reasonable degree the nature and extent of the risk a system poses. This paper canvases evidence that raises doubt about whether providers or auditors have this ability. It argues that while providers can take measures to mitigate risk to some degree, remaining risks are substantial, but difficult to quantify, and may persist for the foreseeable future due to the intractable problem of novel methods of jailbreaking and limits to model interpretability.

The problem remains unresolved.

The only guardrails at the moment

The only mechanisms in Canada and the US for holding AI companies liable are laws on product liability, negligence, and wrongful death.

Parents in both the California and Florida cases are suing the model makers (OpenAI and Character.AI, respectively) for wrongful death, a statutory cause of action that allows family members of the deceased to sue for damages including funeral expenses, mental anguish, loss of future financial support, and companionship. Plaintiffs must show the defendant’s negligence or intentional misconduct caused the death.

Here, parents allege that chatbot makers were negligent in product design and failed to provide adequate warnings about risks.

Canadian law works in a similar way. Provinces allow wrongful death suits for a wrongful act. Damage awards in Canada are much smaller than in the US and mostly limited to quantifiable losses. But plaintiffs can also claim that a model maker was negligent in offering a harmful product, or that it was defective or lacked adequate warnings.

At the heart of negligence and product liability is the same question: what steps should OpenAI, Anthropic, or Google reasonably have taken to avoid harm?

Put another way, in making chatbots available, companies clearly owe users a duty of care. The product carries risks, and harm to users is foreseeable.

The key question, though, is: what is the standard of care?

When can OpenAI and others be said to have done enough—or not enough—to avoid harm? If the standard is “reasonably safe” rather than “absolutely safe,” when is that threshold met? And can it even be met, given the nature of these systems?

No one knows. But OpenAI and others are taking—and publicizing—all the steps one might predict a tort lawyer would advise them to take.

OpenAI admits its risk-detection mechanisms work better in shorter conversations and degrade as conversations lengthen. It is working to improve performance in longer chats.

It is also improving detection across different types of harmful conversations, from suicidal to criminal. It has announced plans for parental controls to let parents monitor their child’s activity, and is rolling out systems to route some conversations to human overseers who can terminate the chat and lock the user out of further access.
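To make the mechanism concrete, here is a minimal Python sketch of the kind of conversation-level routing OpenAI describes: a risk score on each message plus an escalation rule for flagged or lengthy chats. The function names, thresholds, and keyword-based scoring stub are my own illustration, not OpenAI’s actual system.

    # Hypothetical sketch only: illustrative names and thresholds,
    # not a description of OpenAI's actual safety systems.
    from dataclasses import dataclass, field

    @dataclass
    class Conversation:
        messages: list[str] = field(default_factory=list)
        locked: bool = False

    def risk_score(message: str) -> float:
        """Stand-in for a trained classifier; returns a score in [0, 1]."""
        flagged = ("suicide", "self-harm", "hurt someone")
        return 1.0 if any(term in message.lower() for term in flagged) else 0.0

    def route(convo: Conversation, new_message: str,
              risk_threshold: float = 0.8, length_threshold: int = 50) -> str:
        """Respond normally, escalate to a human, or re-check a long conversation."""
        convo.messages.append(new_message)
        if max(risk_score(m) for m in convo.messages) >= risk_threshold:
            convo.locked = True  # a human overseer could end the chat and lock access
            return "escalate_to_human"
        if len(convo.messages) > length_threshold:
            return "re_evaluate_full_history"  # longer chats get extra scrutiny
        return "respond_normally"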

Whether these steps will be deemed sufficient—enough to absolve OpenAI and others of liability—remains to be seen.

Much may depend on how a model was misused, what jailbreak was employed, and whether that misuse was foreseeable.

In a broader sense, it is worth keeping perspective on AI risks. As tragic as these cases are, hundreds of millions of people use these tools daily, and many find them beneficial. But there are, inevitably, many ways to misuse them.


Bill C-2 Backgrounder - the missing manual!

Over a month later, the controversy over the Strong Borders Act continues.

Privacy experts are still sounding the alarm over the astonishing breadth of some of the new powers — a power allowing police to demand from a doctor, a lawyer, or anyone who “provides a service” information about a person’s account without a warrant; a power to compel Shaw or Google to “install equipment” that would give police or CSIS access to personal data — the list goes on.

Following my last post, which looked in some detail at parts of the bill, the government issued a Charter statement that drew criticism for being self-serving, even misleading. Along with others, I wrote opinion pieces and spoke about the bill on Law Bytes and in other venues.

But I noticed there was still some confusion and uncertainty about many aspects of the bill. Rather than wait for a Parliamentary backgrounder to appear, I decided to put together my own overview of all aspects of the bill touching on privacy — and to offer an independent assessment of them in relation to section 8 of the Charter (guaranteeing “a right to be secure against unreasonable search or seizure”).

The result is a paper I’ve posted to SSRN titled “Bill C-2 Backgrounder: New Search Powers in the Strong Borders Act and Their Charter Compliance”.

I’ve tried to provide more context than is found in the government’s Charter statement, by detailing how new powers expand on or amend those currently in force.

The paper looks at more controversial parts of the bill, including the whole new lawful access act contained in C-2, and declaratory provisions in the Criminal Code asserting that police don’t need a warrant for subscriber ID or an ‘information demand’ with voluntary compliance — and an indemnity for those who comply.

I plan to keep the paper up to date (on SSRN) as the bill moves through second and third reading — and to post those updates here. Comments are welcome!


Major new search powers in the Strong Borders Act: are they constitutional?

The Liberals’ first bill in Parliament, tabled last week, proposes a raft of new search powers to give police easier access to our private data. They may turn out to be the most consequential search powers added to the Criminal Code in the past decade.

They have little to do with the primary aim of the bill, strengthening borders by expanding powers in customs and immigration.

Tucked in the middle of Bill C-2 are measures that revive long-standing aims to pass “lawful access” legislation that will make it easier for police to obtain subscriber information attached to an ISP account (with Shaw or Telus) and give police direct access to private data held by ISPs or platforms like iCloud, Gmail, or Instagram.

I’ve written a general overview of these powers for The Conversation here, and Michael Geist has a very informative op-ed in the Globe that sets out a wider context and walks through some of the provisions in detail. If you’re new to this story, you might begin there.

In this post, I offer a few thoughts on the constitutionality of three key powers in the bill: the new production order for subscriber info; the new information demand power; and the provisions that compel service providers to assist police in gaining direct access to personal data.

This is a long post, almost 3k words. It might have been three shorter ones, but I thought I’d put it all in one post.

It’s meant for those looking for a deeper dive on the constitutional questions.

What do you mean by ‘constitutional’?

The larger issue here is whether these provisions will survive a challenge under section 8 of the Charter of Rights and Freedoms, guaranteeing “everyone has the right to be secure against unreasonable search or seizure.”

Two things to keep in mind about section 8: What is a search? And when will a search be reasonable? 

A search for the purpose of section 8 is anything done by a state agent for an investigative purpose that interferes with a reasonable expectation of privacy in a place or thing (R v Bykovets).

A search will be reasonable where it is authorized by law, the law is reasonable, and it is carried out in a reasonable manner (R v Collins).

The powers created in this new bill set out authority for a search. The issue here is whether each of them sets out a ‘reasonable law’ authorizing a search.

(In case you’re interested, I’ve co-authored an entire book on section 8, which you can check out here.)

Relevant background: production orders and the Spencer situation

In 2004, Parliament created what are called ‘production orders’ to give police the power to ask an internet or cellphone service provider to hand over data about digital communications, including the content of messages.

That power required reasonable suspicion, and it was challenged under section 8 of the Charter as being too low a standard, giving rise to an unreasonable search.

In 2014, the BC Supreme Court said it was too low and that probable grounds should be required; the Alberta Court of Appeal disagreed, holding that reasonable suspicion sufficed.

That same year, Parliament passed Bill C-13, which created a general production order requiring probable grounds (487.014) and four more specific production orders requiring only reasonable suspicion — for tracing communications (e.g., metadata attached to email or phone calls); transmission data (call or text histories); tracking data (location data); and financial data (487.015 to 487.018).

Meanwhile, in June of 2014, the Supreme Court of Canada decided R v Spencer, which held that subscriber information attached to an IP address — the name and physical address of the person linked to it — is private, because it associates a person with their online search history. Police can’t demand it from an ISP without authority in law to do so (which may or may not involve a warrant).

The Court in Spencer noted (at para 11) that police had demanded the subscriber ID from Shaw without first obtaining a production order — implicitly contemplating that a production order could be a lawful means of obtaining it.

But the Court did not address the question of what kind of search power would be reasonable to obtain subscriber info. After explaining why provisions in private sector legislation (PIPEDA) didn’t authorize the search, the Court simply concluded (in para 73) that “in the absence of exigent circumstances or a reasonable law,” police couldn’t lawfully search (i.e., demand) it.

So what remained unclear after Spencer was: what is a reasonable search law that authorizes police to make a demand for subscriber information?

The presumptive standard for a reasonable search in criminal law (i.e., what constitutes a “reasonable law” authorizing a search) is a warrant issued on “reasonable grounds to believe” (probable grounds) that an offence has been or will be committed, rather than on “reasonable suspicion.” It would seem, then, that a demand for subscriber ID should require a warrant issued on probable grounds.

Things said in Spencer support this inference. It held the privacy interest in subscriber information is high, given that it links a person to search activity that can be highly revealing. Anything less than probable grounds would not strike the right balance between law enforcement interests and personal privacy. But at least one privacy scholar disagrees.

In the wake of Spencer, to obtain subscriber info, police have been using the new general production order power added in 2014, requiring probable grounds. Again, this isn’t a power tailored specifically for obtaining subscriber ID, so it’s unclear whether anything less would suffice. Police and Crown hope so. Probable grounds is a relatively high standard; why not just a warrant on reasonable suspicion?

Privacy in a set of numbers alone?

And what about demanding an IP address? Sometimes police can’t get far without asking an ISP or an online platform like Instagram to reveal a user’s IP address. Did they need a warrant for this? Was an IP address on its own private?

In R v Bykovets, the Supreme Court of Canada held that an IP address is private because it readily links a person to their online activity. But the Court didn’t specify what kind of power would render a search (demand) for an IP address reasonable.

At para 85 of the decision, Karakatsanis J, writing for the majority, points to the production order power in section 487.015(1) of the Code (for transmission data), available on reasonable suspicion, as a possible tool police might use here. This is obiter, since the Court was not asked whether using this power to demand an IP address would constitute a reasonable law. Yet we can assume that the judges in the majority think a warrant on reasonable suspicion would suffice.

New production order in the Strong Borders Act

The new bill gives police and Crown what they want: a production order tailored to demands for subscriber info, issued on reasonable suspicion that a federal offence has been or will be committed (a new 487.0181(2) of the Criminal Code).

Will this be constitutional? More specifically, a search conducted under this power will be authorized by law, but is this law reasonable?

There is no single test for when a law authorizing a search is reasonable under section 8 of the Charter. But the Supreme Court has generally considered four factors: whether the power relates to a criminal or regulatory offence; the state or law enforcement interest at issue; the impact on personal privacy; and the oversight and accountability safeguards.

Demanding subscriber info on reasonable suspicion is, I think, likely to be found unreasonable. In this case, the privacy interest is high (i.e., the online activity linked to a person’s name). Given things said about this in Spencer, this alone could favour a finding that nothing less than probable grounds is reasonable.

Further possible support may be found in R v Tse, which held that emergency wiretap provisions of the Code were unreasonable for failing to include an after-the-fact notice requirement for persons affected. Here, there’s no requirement to advise a person that they were subject to a production order if charges do not follow. I’m not sure a court would view production order powers as sufficiently analogous to wiretap provisions, but I flag it as a potential consideration.

The new “information demand” power

Bill C-2 also creates a new power on the part of police to demand information. In some cases, police may only ask if a service provider has info about something. In other cases, they can demand the info itself.

Under a new section 487.0121 in the Code, police can ask a service provider whether they have “provided services to any subscriber or client, or to any account or identifier.” If so, police can demand to be told where and when service was provided — along with info about any other providers who may have offered the person service.

They can do this on reasonable suspicion alone, without a warrant.

Police can thus ask Shaw or Gmail things like: does this user have an account with you? Do you have an IP address or phone number associated with their account? If so, tell us where and when you provided it.

Why do police need this power? Aren’t police free to ask questions as part of their investigation? Is there not a distinction between a person describing to police what they know or have observed and police demanding to see it themselves? Can’t we assume that police only carry out a search when they ask for and receive private data itself?

Recall that a search is anything done for an investigative purpose that interferes with a reasonable expectation of privacy. Police demanding private information in the hands of a third party can constitute a search. For example, police carried out a search in Spencer by asking Shaw: whose name is attached to this IP address?

What is contemplated here differs in some ways but is similar in others. Police might ask simply: do you have a name (or an account) attaching to this IP address? Did you lease this IP address to a person? Or they might ask: when and where did you provide use of this IP address?

In some cases, depending on the question and the limited info revealed by the answer, this may not amount to a search. But in others, it can.

If police have a name, or an IP or email address and they ask a dating, gambling, or porn website whether they have a user account related to any of them, a “yes” in response could be quite revealing. If a service provider can link a person to a location, or more than one, in a window of time, this could also be invasive.

Should this too require a warrant? We’re in genuinely new terrain here.

The information demand power gives police authority to go poking around the edges of our digital lives — knocking on the doors of anywhere we’ve left a digital trace — to ask questions that could readily create a clear picture of who we are and where we’ve been. All on nothing more than reasonable suspicion.

I can see a challenge to this power leading to a deeply divided Supreme Court decision similar to that in Bykovets, with half the Court saying reasonable suspicion is enough, and the other half saying no, it should require a warrant.

I suspect it will come down to half the Court seeing this power as too preliminary to pose a real threat to privacy and police needing some leeway to act without undue hindrance, and half the Court seeing this as too close in nature to a means of circumventing the protections around subscriber ID and IP addresses. In some cases a positive answer to the question: “does this user have an account with you?” will be all the police need to know to link a person with an extensive amount of personal data.

If I were a betting man, which I’m not, I would bet that a majority of the Court will find this power reasonable. (But there will be a wonderful, eloquent dissent, probably by Karakatsanis J or Martin J or maybe both, on the importance of privacy and the need for a warrant.)

Briefly, Bill C-2 also extends to agents of the Canadian Security Intelligence Service the ability to make an information demand on no grounds at all. But they may not target a Canadian citizen or permanent resident. Given the high state interest in these cases and the limited privacy interest engaged, this power is likely to be found reasonable.

The lawful access provisions

Bill C-2 contains a whole new statute called the “Supporting Authorized Access to Information Act,” which brings about a “lawful access” regime for private data that police and Crown have long been seeking.

(See Professor Geist’s Globe article on the history of this.)

The Criminal Code has long had something called an assistance order, which compels third parties to assist police in executing a warrant. (Open that storage locker please.) The lawful access provisions do the same but on a larger scale.

They impose obligations on “electronic service providers,” meaning anyone providing a digital service (storage, creation, or transmission of data) to people in Canada or from within Canada, and more onerous obligations on a class called “core providers,” who can be added to a schedule to the Act.

An ESP can be ordered to “provide all reasonable assistance, in any prescribed time and manner, to permit the assessment or testing of any device, equipment or other thing that may enable an authorized person to access information”.

But core providers will be subject to regulations that mandate the “installation… of any device, equipment or other thing that may enable an authorized person to access information”.

A core provider might be Google or Meta, Shaw or Telus. And the equipment at issue could enable direct access to accounts, stored files, data logs, and so on.

There are two important limits on this.

One is that police (or an authorized person, such as a CSIS agent) can only go ahead and access data or demand it if they have authority to do so under law — which may or may not involve a warrant, reasonable grounds, and so on.

The other limit applies to both ESPs and core providers: they do not have to follow an order “if compliance… would require the provider to introduce a systemic vulnerability in electronic protections related”. I take this to mean that they cannot be compelled to install a backdoor to encryption.

Are these powers immune to challenge under section 8 of the Charter?

They do not contemplate a search directly. But depending on how an assistance order is used, it could result in an unreasonable search.

For example, a while ago, there was a debate about whether using an assistance order to compel a person to provide police their password might amount to an unreasonable search.

The companies subject to a requirement under this new lawful access statute could challenge it in court — either in response to an order made to them specifically or under a regulation that applies to them as a core provider (on administrative law principles).

But it’s harder to imagine a case where police conduct a search on lawful grounds, or with a valid warrant, and the search is nonetheless found unreasonable under section 8 because the technical means of access made possible under this new statute let police reach private data more readily.

I can, however, envision two possible exceptions.

One is if the means of access mandated under the new Act amounts to an interception: real-time access to communications as they occur, which police use to obtain the data at issue. An accused person would need to show, however, that police came into possession of their private data in real time and without a warrant under the wiretap (interception) provisions in Part VI of the Criminal Code. (See the Telus case for more on this distinction.)

The other is simply that police gained quick access through these technical means but without a lawful basis (a warrant, etc.).

But it isn’t inconceivable that the Supreme Court might eventually say that mandating certain measures, means, or forms of access amounts to an unreasonable search even when they are used with lawful authority such as a warrant. These might include means that somehow give police with a warrant access to data being created now and in the future, in addition to data already created.

If you’re still with me, thanks for reading! I’ll continue to follow the bill as it makes its way through Parliament and try to post about it here.