Google Translate by Jon Russell CC BY 2.0 https://flic.kr/p/S4BPDz
News
October 5, 2023
The Canadian government plans to regulate the use of artificial intelligence in search results and when AI is used to prioritize the display of content on search engines and social media services. AI is widely used by both search and social media for a range of purposes that do not involve ChatGPT-style generative AI. For example, Google has identified multiple ways it uses AI to generate search results, provide translations, and power other features, while TikTok uses AI to identify the interests of its users through recommendation engines. The regulation plans are revealed in a letter from ISED Minister François-Philippe Champagne to the Industry committee studying Bill C-27, the privacy reform and AI regulation bill. The government is refusing to disclose the actual text of the planned amendments to the bill.
The current approach in Bill C-27 leaves the question of which AI systems should be viewed as high impact to a future regulatory process. The letter says the government now plans to identify the high impact systems within the bill itself and drop that future regulatory process. While many of the proposed high impact systems are unsurprising and largely mirror similar rules in the European Union, the inclusion of search and social media is a key exception. The government is targeting the following classes of AI systems:
1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.
2. The use of an artificial intelligence system in matters relating to (a) the determination of whether to provide services to an individual; (b) the determination of the type or cost of services to be provided to an individual; or (c) the prioritization of the services to be provided to individuals.
3. The use of an artificial intelligence system to process biometric information in matters relating to (a) the identification of an individual, other than if the biometric information is processed with the individual’s consent to authenticate their identity; or (b) an individual’s behaviour or state of mind.
4. The use of an artificial intelligence system in matters relating to (a) the moderation of content that is found on an online communications platform, including a search engine and a social media service; or (b) the prioritization of the presentation of such content.
5. The use of an artificial intelligence system in matters relating to health care or to emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition of “device” in section 2 of the Food and Drugs Act that is in relation to humans.
6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.
7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.
The identification of AI use for hiring, biometric information, health care, administrative decisions, and law enforcement is similar to some of the “high risk” systems in the EU, which has tended to focus on sectors such as education and law enforcement. However, the inclusion of a category for content moderation or the prioritization of the presentation of content is not found in the EU. More comparative study is needed, but it does appear that China’s extensive AI regulations cover search and social media. Further, the issue of regulating algorithms and discoverability was a major issue during the Bill C-11 debate, with the government insisting it would not do so. This approach would involve far more extensive regulation.
By including search and social media results as “high impact” systems, Bill C-27 establishes a range of regulations and new powers, including risk mitigation, record keeping, and public disclosures. The Minister can order disclosure of records, require an audit, and order virtually any measures arising out of the audit. Failure to abide by the regulations can result in penalties as high as 3% of gross global revenues. Moreover, the government plans to more closely align the regulatory powers with those found in the EU, which could establish a host of additional regulatory requirements. Given that the government is not releasing the actual text of the amendments, the specific obligations remain somewhat uncertain.
Many Canadians have been calling for rules to prevent bias and other harms that may arise from AI. However, the inclusion of content moderation and discoverability/prioritization comes as a surprise, as does equating AI search and discoverability with issues such as bias in hiring or uses by law enforcement. While the government says it is more closely aligning its rules with the EU, it appears Canada would be an outlier compared to both the EU and the U.S. on this issue.