Regulation of Software Using AI

Before we get into the dull regulatory details, let us explore the technologies that provide AI processing capacity and how they differ from the processing capacity of traditional computers. To fully grasp the growing importance of companies such as NVIDIA and QScale here in Quebec, it is essential to understand the fundamental difference between a CPU (central processing unit) and a GPU (graphics processing unit). We hear these terms often, but what exactly do they mean?

The general nature of the CPU

Let us take a moment to admire the wonder of the modern computer. At the heart of every machine is a piece of silicon, etched with billions of nanoscale circuits. The electrical operating regime of these circuits lies at the boundary between classical and quantum physics. They enable the computer to run arbitrary sets of instructions, called programs, at billions of operations per second. This engineering miracle is called a CPU.

Despite the complexity of its design, the CPU’s operation relies on relatively simple principles. The CPU executes a sequence of instructions that define the operations to perform on the data. Given a series of multiplications, for example, the CPU takes each operation individually and executes them one after the other.
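To make this concrete, here is a minimal sketch in Python of that sequential model (the numbers are made up for illustration): a series of multiplications handled the way a CPU handles them, one operation at a time.

    # A series of multiplications processed sequentially, one at a time,
    # mirroring how a CPU steps through its instruction stream.
    pairs = [(3, 4), (7, 2), (5, 6), (9, 8)]  # hypothetical workload

    results = []
    for a, b in pairs:          # pick up one operation...
        results.append(a * b)   # ...execute it, then move to the next

    print(results)  # [12, 14, 30, 72]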

This architecture makes the CPU a generalist tool, capable of performing virtually any computer operation imaginable. So what is the GPU’s role? Do we not have enough power with our modern CPUs to perform all the additions and multiplications we need? Apparently not!

GPU, the specialist

The limitations inherent in the CPU architecture became apparent with the development of video games. When a computer simulates a 3D scene, it must perform many vector geometry calculations, and these geometric transformations boil down to matrix multiplications. To run the simulation in real time, as a video game requires, the CPU must perform all of these matrix multiplications to generate the next image of the scene within milliseconds, all while continuing to manage the computer’s other functions.
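To see what such a transformation looks like, here is a sketch in Python using NumPy: a single vertex rotated around the z-axis (the angle and coordinates are arbitrary illustrative values). A game engine repeats this kind of multiplication for millions of vertices, every frame.

    import numpy as np

    # Rotate one vertex of a 3D scene 45 degrees around the z-axis.
    theta = np.pi / 4
    rotation = np.array([
        [np.cos(theta), -np.sin(theta), 0],
        [np.sin(theta),  np.cos(theta), 0],
        [0,              0,             1],
    ])

    vertex = np.array([1.0, 0.0, 0.0])
    print(rotation @ vertex)  # new position, approximately [0.707, 0.707, 0.]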

Fortunately for video game fans who love fluid graphics, these vector geometry algorithms have a special feature that allows for important optimizations: they are parallelizable. In other words, operation 2 does not depend on the outcome of operation 1. In theory, you could run tens of thousands of these operations in parallel instead of sending them one by one to the CPU!
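To illustrate the idea (with a handful of CPU processes standing in for thousands of GPU cores), here is a small Python sketch: the transform function below is a hypothetical stand-in for one geometry operation, and since no call depends on another’s result, all of them can be dispatched at once.

    from concurrent.futures import ProcessPoolExecutor

    def transform(vertex):
        """Stand-in for one independent geometry operation."""
        x, y, z = vertex
        return (-y, x, z)  # a 90-degree rotation; needs no other vertex

    vertices = [(1, 0, 0), (0, 1, 0), (2, 3, 0), (4, 5, 0)]

    # Because no operation depends on another's outcome, all of them
    # can be dispatched simultaneously instead of queued one by one.
    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(transform, vertices)))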

The first GPUs were created to take advantage of this optimization opportunity. The term “GPU” was first used in 1999 by NVIDIA with the GeForce 256 graphics card. Unlike the CPU, which executes instructions sequentially, the GPU runs a single instruction on thousands of cores at once, processing many batches of data in parallel and achieving high levels of parallelization.
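As a rough sketch of what this offloading looks like in practice, the example below assumes the CuPy library (a GPU-backed equivalent of NumPy) is installed and an NVIDIA GPU is available; the array sizes are arbitrary.

    import cupy as cp  # assumes CuPy and a CUDA-capable NVIDIA GPU

    # A single instruction (an element-wise multiply) applied to a large
    # batch of data; the GPU spreads the work across its many cores.
    a = cp.random.random((4096, 4096))
    b = cp.random.random((4096, 4096))

    c = a * b       # executed in parallel on the GPU
    print(c.sum())  # bring a single summary value back to the host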

Over time, many other sectors exploited this ability to perform massive numbers of parallel calculations. For example, the blockchain networks at the heart of the cryptocurrency explosion turned to GPUs for their mining computations.

Then came the rise of generative artificial intelligence. The neural networks essential to this innovation also benefit from parallelization: their training and inference processes are based on matrix multiplications similar to those used for video game rendering.
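As a final illustrative sketch (the layer and batch sizes are arbitrary), a single dense layer of a neural network reduces to exactly this kind of matrix multiplication:

    import numpy as np

    # One dense neural-network layer: a matrix multiply plus a ReLU,
    # structurally the same workload as a batch of vertex transforms.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(128, 784))  # learned parameters
    inputs = rng.normal(size=(784, 32))    # a batch of 32 input vectors

    activations = np.maximum(0, weights @ inputs)
    print(activations.shape)  # (128, 32)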

This development was particularly favourable to NVIDIA, which initially focused on the gaming hardware market. Today, NVIDIA is a leading provider of parallel computing power, which explains why it is currently one of the most valuable companies in the world.

Other terms, such as TPU (Tensor Processing Unit) or NPU (Neural Processing Unit), are also emerging. These are other implementations of the same idea: a processor specializing in parallel calculations! Will the regulations governing AI slow down investment and, by extension, innovation?

The European Union adopts stringent regulations

The competent authorities will likely approve and sign the EU Artificial Intelligence Act in 2024. This regulatory framework governs the use of AI by organizations that sell and operate in the EU, and it will affect all companies, regardless of nationality or size.

The Act establishes regulations for four risk levels associated with the sale and use of AI in software products.

Complying with these new regulations could result in significant, if not prohibitive, costs for CEOs of technology companies developing products based on “high-risk” AI for the European market.

In addition, CEOs of technology companies that have developed or are developing products considered to present “unacceptable risks” will be prohibited from selling their AI products in the EU within six months of the official entry into force of the regulations – expected around October 2024.

Category 1: “Unacceptable Risk”

Organizations selling products deemed “unacceptable risk” can expect these products to be banned from EU markets within the next six months. These AI-based products include:

  • Systems capable of providing real-time biometric identification.
  • Tools that collect facial images from the Internet or public video surveillance systems to create facial recognition databases.
  • Tools that can infer people’s emotions in the workplace. This includes, for example, software that can capture and interpret people’s facial expressions.

Category 2: “High Risk”

Organizations selling products in the “high-risk” category can expect strict and costly requirements. These products are considered to have the potential to harm health, safety, fundamental rights, the environment and democracy. They include biometrics, critical infrastructure, education, employment and law enforcement products.

Companies selling or using products based on “high-risk” AI must adjust their systems to meet the EU’s prescribed operational requirements. Responsibility for demonstrating compliance will rest with software producers, who will have to do so through technical and legal assessments that will not be cheap.

Technology producers will also have to develop risk management measures that can be costly. Finally, these regulations may concentrate the supply of innovative AI solutions in the United States and Canada, where the frameworks are less prescriptive; in the EU, the added caution they impose on AI investments could hinder innovation.

Category 3: “Limited Risk”

Organizations selling products in the “limited risk” category — including AI software such as chatbots and deepfake generators — will have less stringent obligations than those in previous categories. Technology providers with products in this category must inform users that they are interacting with an AI system and ensure that all AI-generated audio, video and photo recordings are labelled as such.

Category 4: “Minimal Risk”

Organizations selling AI products in the “minimal risk” category, such as anti-spam filters and recommender systems, will have no obligations under the AI Act.

In conclusion, we are at an inflexion point where the technology and the regulators will brush up against each other. It will be interesting to see how the markets react.

Note: This article was written in collaboration with Nicolas Berthiaume, Software Architect at Mondata.