The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiation of pre-rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” This follows a similar indication from Fall 2021, when the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will publish an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the related public comment period to close in August, whereas it was silent on a specific timeline when it made its initial indication back in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.
Additionally, on June 16, 2022, the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online harms such as scams, deepfakes, and fake reviews, as well as other more serious harms, such as child sexual exploitation and incitement of violence. While the Report is limited in its purview (addressing the use of AI to combat online harms, as we discuss further below), the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.
Background on Congress’s Request & the FTC’s Report
The Report was issued by the FTC at the request of Congress, which, through the 2021 Appropriations Act, had directed the FTC to research and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” The Report itself, while spending a significant amount of time addressing the prescribed online harms and offering recommendations regarding the use of AI to combat them, as well as caveats against over-reliance on these tools, also devotes considerable attention to signaling the Agency’s thinking on AI more broadly. Specifically, in light of particular concerns that have been raised by the FTC and other policymakers, thought leaders, consumer advocates, and others, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can also incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in forthcoming rulemaking.
While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. Thus, it is reasonable to posit that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under the FTC’s jurisdiction. Companies employing AI technologies more broadly should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of the issues.
The FTC’s Recommendations Regarding the Use of AI
Another major takeaway of the Report relates to the set of “related considerations” that the FTC has cautioned will require the exercise of great care and focused attention when operating AI tools. These considerations include (among others) the following:
Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.
Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.
Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable for both their data practices and their results.
Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools, as well as the companies procuring and deploying them, should be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and work undertaken to create these tools is critical in this regard. Consideration should also be given to potential impact and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority focus, such as in the treatment of training data.
Of note, the Report identifies transparency and accountability as the most valuable path in this area, at least as an initial step, since the ability to see and evaluate what lies behind platforms’ opaque screens (in a manner that takes user privacy into account) may prove essential for determining the best courses for further public and private action, especially considering the difficulty of crafting appropriate solutions when key aspects of the problems are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”
In addition, Congress instructed the FTC to recommend laws that would advance the use of AI to address online harms. The Report, however, finds that, given that major tech platforms and others are already using AI tools to address online harms, lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.
Taken together, companies should expect the FTC to pay particularly close attention to these issues as it begins to take a more active approach to policing the use of AI.
FTC: Our Work on AI “Will Likely Deepen”
In addition to signaling what its areas of focus may be moving forward when addressing Congress’s mandate, the FTC veered outside its purview to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and offer commentary on its intentions as to AI moving forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” Moreover, the FTC specifically calls out its recent staffing enhancements as they relate to AI, highlighting the hiring of technologists and additional staff with expertise in, and specifically dedicated to, the subject matter area.
The Report also highlights the FTC’s major AI-related initiatives to date, including:
two recent FTC cases, one against Everalbum (covered at CPW by Kristin Bryan and by SPB team member David Oberly in a Bloomberg Law article) and the other against Facebook, that have dealt with facial recognition technology;
the FTC’s 2016 report, Big Data: A Tool for Inclusion or Exclusion?, which discusses algorithmic bias in depth; and
the range of public events the FTC has held on AI issues, including workshops on dark patterns and voice cloning, sessions on AI and algorithmic bias at PrivacyCon in 2020 and 2021, a hearing on competition and consumer issues with algorithms and AI, a FinTech Forum on AI and blockchain, and an early forum on facial recognition technology that produced the FTC report, Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies (analyzed by David Oberly in this Biometric Update article).
The recent Report to Congress strongly signals the FTC’s overall apprehension and mistrust as it relates to the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation of the use of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of initiating rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”
At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, it is also likely that the FTC will increase its efforts in investigating and pursuing enforcement actions against improper AI practices more generally, particularly as they relate to the Agency’s concerns regarding inaccuracy, bias, and discrimination.
Taken together, companies should consult with experienced AI counsel for advice on proactive measures that can be implemented today to get ahead of the compliance curve and put themselves in the best position to mitigate legal risks moving forward, as it is only a matter of time before regulation governing the use of AI is enacted, likely sooner rather than later.