In an effort to tackle a range of harmful online content, Canada has recently introduced the Online Harms Act. The Liberal government of Prime Minister Justin Trudeau has unveiled draft legislation that would make digital platforms responsible for quickly removing content that falls under one of seven predefined categories of harm.
Intimate content shared without consent, hate speech, violent extremism or terrorism, bullying of children, and content that involves injury to a child or encourages self-harm are among the seven categories that are outlined.
On February 26, 2024, the Canadian government tabled the landmark Online Harms Bill (Bill C-63) in Parliament. The bill comprises the Online Harms Act along with amendments to the Criminal Code and human rights legislation. The Act introduces significant reforms to platform regulation in Canada and establishes a strong new regulatory body. If passed, it would place Canada among the nations that have enacted comprehensive online-safety legislation, including the United Kingdom, Australia, and the European Union.
During the 2021 federal election, the Liberal Party promised to introduce legislation "within its first 100 days" in government. It proposed an approach modeled on Germany's Network Enforcement Act (NetzDG), which mandates the swift, obligatory removal of hate speech and several other types of harmful communication.
After experts criticized the earlier proposal, the government rethought its approach, with the result that the present bill took nearly eight times longer to complete than anticipated.
Online Harms Act
The proposed law would require tech companies to implement a screening procedure and remove harmful content within 24 hours of receiving a request. In addition, Canadians could pursue legal action against hate speech distributors by filing complaints with a human rights tribunal. To enforce these rules, the bill proposes establishing a new Digital Safety Commission of Canada.
The strictest obligation is to make content inaccessible. It applies to two categories of content: intimate content shared without consent and content that victimizes a child. If the regulator issued an order, platforms would have to block access to that content.
Beyond addressing harmful online content, the law also aims to deter major offences. Businesses that break the rules risk fines of up to 6% of their global revenue. The bill also proposes a substantial increase in the penalties for advocating or inciting genocide, with a maximum sentence of life imprisonment.
Social media companies would first have a duty to act responsibly with respect to each of the seven categories of harm mentioned above. They would need to develop strategies to identify and mitigate risks, file a digital safety plan with the regulator, and undergo compliance investigations. This duty is complemented by important new transparency rules requiring platforms to share data with the government, accredited independent researchers, and civil society, and to publish information on how they handle harmful content.
Content that is intended for, or likely to be viewed by, minors (under 18) is subject to the Act's duty to protect children. This is a more stringent obligation, requiring platforms to implement age- and safety-appropriate design features. The protection of children is clearly the central goal of the Act and of the government's case for it.
The Online Harms Act portion of Bill C-63 attempts to strike a balance between safety and freedom of expression rather than focusing solely on takedowns. Although there are concerns about the lack of clarity on matters such as the precise powers of the Digital Safety Commission and the accreditation of independent researchers, most expert commentary in Canada appears optimistic about this approach.
Criticism
The Act has also drawn criticism. Conservatives, including Conservative Party leader Pierre Poilievre, have strongly opposed the government's proposals. He has expressed concern about how the government defines hate speech and argues that the Act could restrict freedom of speech.
Canada's proposal follows similar legal measures in other Western nations, including the European Union's Digital Services Act, the United Kingdom's Online Safety Act, and content moderation regulations in several U.S. states. After a thorough review by a parliamentary committee, the proposed bill will be considered by the Senate. These steps allow for possible amendments before the legislation becomes law.