In 2024, business leaders, especially those in small and medium-sized enterprises, face a flood of innovative products built on artificial intelligence (AI). Some of these products deliver little more than promises. Others have the potential to radically transform your business by increasing the productivity and quality of the products and services you offer. However, these transformational technologies also carry their share of risk, and every leader must fully understand those risks before adopting AI solutions.
Understanding AI in the SME context
Artificial intelligence is not a domain reserved for large corporations with abundant resources. Several use cases could quickly bring benefits to companies with smaller means. For example:
- Inventory management: intelligent systems can forecast inventory needs from historical and current trends, helping to optimize stock levels and reduce holding costs.
- Dashboards and reporting: AI tools can generate real-time analysis and reporting, giving decision-makers valuable insights without requiring in-depth data analytics expertise.
Many other use cases exist, not least support for cybersecurity teams through platforms that accelerate incident detection and response.
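To make the inventory example concrete, here is a minimal sketch (with hypothetical sales figures) of the kind of trend-based demand estimate an AI inventory tool automates at scale, here reduced to a simple moving average:

```python
# Minimal sketch: forecast next-period demand from recent history.
# The sales figures below are hypothetical, for illustration only.

def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Six months of (hypothetical) unit sales, trending upward.
monthly_units_sold = [120, 135, 150, 160, 155, 170]

forecast = moving_average_forecast(monthly_units_sold)
print(f"Forecast for next month: {forecast:.1f} units")
```

Real forecasting systems add seasonality, promotions, and supplier lead times on top of this basic idea, but the principle is the same: the forecast is only as good as the historical data feeding it.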
Risks associated with using AI
Integrating AI can be a successful initiative that propels your organization toward meaningful growth. However, it is essential to remain vigilant about the associated risks to prevent them from compromising your earnings and turning what could be a profit into a disastrous loss.
Here, then, is an overview of the risks to monitor closely to maximize your chances of success.
- Inaccuracies: AI relies heavily on data for its machine learning processes. The principle of “garbage in, garbage out” is particularly relevant here: the quality of AI results is directly related to the quality of the incoming data. If the data contains errors or biases, or does not accurately represent reality, AI models will incorporate these flaws into their predictions or decisions.
- Systemic bias: AI is not immune to human bias because it learns from data that may be biased. For example, a recruitment algorithm can continue perpetuating discrimination if it learns from discriminatory historical data. Recently, a case of systemic bias attracted media attention, with a credit pre-approval system disproportionately rejecting women’s applications. This occurred because the system primarily relied on data from male borrower profiles.
- Privacy: as data collection increases, it becomes essential to manage this information securely. Companies must ensure compliance with strict regulations such as the GDPR in Europe or Law 25 in Quebec. There have been instances where some systems have accidentally exposed training data, including confidential information and personal data. In this context, the model alignment principle is crucial. It refers to the need to ensure that the system acts in a manner consistent with the intentions of human operators.
- Regulatory compliance: AI laws are changing rapidly. Organizations must stay informed of the latest legislation. Canada is making active progress in implementing comprehensive legislation to regulate AI use and strengthening privacy laws through Bill C-27. This bill proposes Canada-wide requirements for AI system design, development, use and supply.
- Opt-out of model training: customers must have the option to opt out of having their data used to train a product’s AI models. This minimizes the risk of public disclosure of their data should a malicious entity compromise the training data.
- Implementation costs: although AI is becoming less expensive to implement, the effort required to configure it properly should not be underestimated. Results must be accurate and precise enough to generate real benefits, and errors can have significant impacts.
- Staff skills: without necessarily building a team of AI experts, it is essential to have staff who understand the risks, benefits and key AI concepts, so they can interact with and supervise suppliers effectively.
- Ethical considerations: AI system decisions must be fair, transparent and understandable, especially when they affect vital aspects of daily life. In this context, the importance of the alignment principle and appropriate configuration becomes critical.
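The inaccuracy and bias risks above can be partially caught before training ever starts. As an illustration only (the data set, field names and threshold are hypothetical), a simple pre-training audit can flag the kind of representation imbalance that caused the credit pre-approval system to disadvantage women:

```python
# Hypothetical pre-training audit: flag groups that are under-represented
# in a training set before their absence is baked into a model.

from collections import Counter

def audit_representation(records, attribute, min_share=0.3):
    """Return each group's share of the data and whether it falls
    below `min_share`, for the given attribute (e.g. "gender")."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: (n / total, n / total < min_share)
            for group, n in counts.items()}

# Toy data set heavily skewed toward one group, as in the credit example.
training = [{"gender": "M"}] * 85 + [{"gender": "F"}] * 15

report = audit_representation(training, "gender")
for group, (share, flagged) in report.items():
    status = "UNDER-REPRESENTED" if flagged else "ok"
    print(f"{group}: {share:.0%} ({status})")
```

An audit like this does not fix bias on its own, but it turns a silent data-quality problem into a visible decision point before the model learns from it.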
Cyberattacks against your AI
Cyberattacks targeting AI systems and insider threats pose complex security challenges. These attacks can be difficult to detect because they often involve subtle manipulations of training data or algorithms that are not immediately obvious. Insider threats in particular come from people with legitimate access to the system, which makes them especially insidious: they can discreetly manipulate or sabotage the AI, corrupting its results and remaining undetected for long periods, potentially causing significant damage before discovery.
- Poisoning attacks: a poisoning attack is a technique by which attackers knowingly inject false data into the training data set. The goal is to corrupt the model by training it on incorrect information, leading the AI to make incorrect decisions.
- Evasion attacks: these attacks use cleverly modified data that appears normal but is designed to mislead the AI model. For example, slightly altering the image of a traffic sign could deceive a model and lead an autonomous vehicle to make a serious navigation error.
- Model inversion attacks: here, the attacker uses the AI model’s outputs to infer information about the data used during training. This can be exploited to extract sensitive or confidential information from the model, for example by reconstructing training images from the model’s predictions.
There are many other types of attacks; online repositories such as MITRE ATLAS and IBM’s security documentation catalogue them extensively.
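To show why poisoning is so effective, here is a deliberately tiny sketch (all numbers hypothetical): a one-dimensional spam filter whose decision threshold sits midway between the average score of each class. A handful of mislabeled points injected by an attacker is enough to shift the boundary and let real spam through:

```python
# Illustrative poisoning sketch: label-flipped points shift the decision
# boundary of a simple centroid-based classifier. Data is hypothetical.

def centroid_threshold(samples):
    """Train a 1-D nearest-centroid classifier on (score, label) pairs.
    Returns the decision threshold: the midpoint of the class centroids."""
    ham = [v for v, label in samples if label == "ham"]
    spam = [v for v, label in samples if label == "spam"]
    return (sum(ham) / len(ham) + sum(spam) / len(spam)) / 2

# Clean training data: ham scores cluster low, spam scores high.
clean = [(0.1, "ham"), (0.2, "ham"), (0.3, "ham"),
         (0.7, "spam"), (0.8, "spam"), (0.9, "spam")]
threshold_clean = centroid_threshold(clean)

# The attacker injects high-scoring messages falsely labeled "ham".
poisoned = clean + [(0.95, "ham")] * 3
threshold_poisoned = centroid_threshold(poisoned)

# The boundary moves upward, so spam scoring around 0.6 now passes as ham.
print(f"clean threshold:    {threshold_clean:.3f}")
print(f"poisoned threshold: {threshold_poisoned:.3f}")
```

Production models are far more complex, but the mechanism is the same: a small, targeted corruption of the training set quietly degrades every future decision, which is why the integrity of training data deserves the same protection as the model itself.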
Contractual arrangements are increasingly complex
Given the complexity of AI technologies and legal implications, it is advisable to seek legal counsel to review or draft your contracts. Contracts with AI suppliers are crucial to protect your company’s interests. Here are some recommendations:
- Clarify ownership of AI data and models: determine who owns the data used and models developed during collaboration. Ensure you can opt out of training the model with your data.
- Liability: include clauses that assign liability for system failure or compliance issues.
- Include performance clauses: ensure that AI solution performance commitments are clearly defined and measurable.
In conclusion, adopting artificial intelligence can offer transformational benefits to your company, helping you stay competitive and accelerating innovation. However, it is crucial to approach this technology with a clear understanding of the associated risks. By engaging with AI in an informed way and taking concrete steps to mitigate these risks, companies can maximize the benefits while minimizing the drawbacks.