Risk Is Different With AI. Here’s How to Think About It

Posted By: Tom Morrison Community

Most people in the current workforce had no exposure to AI technology when they attended school. This makes it challenging to identify, manage and mitigate its risks.

AI is expected to be a truly transformational technology. It is showing value in a wide variety of applications, ranging from analyzing medical images to writing legal briefs.

In the manufacturing sector, AI is already beginning to play an important role in defect detection, automated production using robots, predictive maintenance, inventory control, supply chain management, and worker training to boost human productivity and improve the efficiency of manufacturing operations.

However, AI is not a technology without deficiencies, and its deployment may pose several risks that need to be carefully managed. Because most people in the current workforce had no exposure to AI when they attended school, identifying, managing and mitigating these risks can be especially challenging.

This article provides a high-level framework for thinking about these risks and the factors that must be considered to make sound decisions about deploying AI tools and using them effectively.

Auditing training data to understand vulnerabilities: Recent advances in AI are fueled by data-driven approaches. An AI system ingests a vast amount of data, learns patterns in that data and then uses those patterns to perform classification, make predictions or generate new content. How well an AI tool performs, for instance a defect-detection system, depends on the quality of the data used to train it.

People using AI tools should be able to audit the data used to train them. They should understand how AI tools can perpetuate biases present in the training data, leading to erroneous results. They should be able to ask the right questions during an audit, assess the resulting risks and develop a plan to address any problems that are discovered.

A few useful steps in this process are:

  • Understand where the data used in training came from
  • Conduct data quality assessments to identify any issues or anomalies in the dataset
  • Identify irrelevant or redundant features that may not contribute to the system's performance

Often, it is necessary to use data visualization tools to analyze vast amounts of data. Therefore, familiarity with different data formats and visualization tools is a useful skill for performing training-data audits.   
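
As a minimal sketch of what such an audit can look like in practice, the Python snippet below checks a tabular dataset for missing values, duplicate records, constant columns and highly correlated feature pairs. The file name and the 0.95 correlation cutoff are illustrative assumptions, not prescriptions.

    # Training-data audit sketch using pandas.
    # "training_data.csv" is a hypothetical file name.
    import pandas as pd

    df = pd.read_csv("training_data.csv")

    # Record basic provenance facts alongside the audit results.
    print(f"Rows: {len(df)}, Columns: {list(df.columns)}")

    # Data quality: missing values and exact duplicate records.
    print("Missing values per column:")
    print(df.isna().sum())
    print("Duplicate rows:", df.duplicated().sum())

    # Redundant features: constant columns and highly correlated pairs.
    numeric = df.select_dtypes("number")
    print("Constant columns:", list(numeric.columns[numeric.nunique() <= 1]))

    corr = numeric.corr().abs()
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if corr.loc[a, b] > 0.95:
                print(f"Highly correlated pair: {a} / {b} ({corr.loc[a, b]:.2f})")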

Understanding potential sources of errors: People using AI tools need a fundamental understanding of AI algorithms and models to recognize how these tools can produce erroneous results. Comprehending the logic behind common algorithms enhances awareness of the associated risk factors and builds trust. Additionally, knowing which algorithms are suitable for different applications is beneficial. A basic knowledge of statistical methods and mathematical concepts, such as probability, linear algebra and calculus, is essential for understanding algorithms and interpreting their results.
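
To see why the probability background matters, consider a deliberately simple calculation: a defect detector with 95% sensitivity and 95% specificity sounds dependable, yet if only 1% of parts are actually defective, most of its alarms will be false. All numbers below are assumptions chosen for illustration.

    # Why base rates matter when interpreting a classifier's alarms.
    # All numbers are illustrative assumptions, not measurements.
    sensitivity = 0.95   # P(alarm | part is defective)
    specificity = 0.95   # P(no alarm | part is good)
    defect_rate = 0.01   # P(part is defective); defects are rare

    p_alarm = sensitivity * defect_rate + (1 - specificity) * (1 - defect_rate)
    p_defective_given_alarm = sensitivity * defect_rate / p_alarm

    print(f"P(alarm) = {p_alarm:.3f}")                              # ~0.059
    print(f"P(defective | alarm) = {p_defective_given_alarm:.3f}")  # ~0.161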

Verification and validation of AI tools: To manage risks and avoid surprises, AI tools need to be rigorously validated and verified before large-scale deployments. This requires selecting the appropriate verification & validation (V&V) methodology based on the task performed by the AI tool.

Users should be able to:

  • Assess system performance and identify potential failure modes
  • Assess the transparency and explainability capabilities of the AI system
  • Evaluate the effectiveness of existing failsafe mechanisms used in the AI tool to help mitigate risks associated with AI system failures

If adequate failsafe mechanisms are not in place, users should work with the tool developers to have them implemented.
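
One concrete piece of such a V&V process is measuring held-out performance per class, so that weak spots surface before deployment. The sketch below uses scikit-learn with synthetic, imbalanced data as a stand-in for a real tool and its validation set.

    # V&V sketch: held-out evaluation that surfaces failure modes.
    # Synthetic, imbalanced data stands in for a real validation set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report, confusion_matrix
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_classes=3, n_informative=5,
                               weights=[0.80, 0.15, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    # Per-class precision and recall expose classes the tool handles
    # poorly, e.g. a rare defect type hidden behind high overall accuracy.
    print(classification_report(y_test, y_pred))

    # The confusion matrix shows which classes are mistaken for which,
    # a direct map of likely failure modes in the field.
    print(confusion_matrix(y_test, y_pred))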

The V&V process may reveal that the training data lacked certain kinds of examples. The user should be prepared to work with the tool developers to ensure that the missing data is added, and the tool should then go through the training process again so it can take advantage of the newly added data.

For example, the data used to train a tool-state monitoring system might lack examples of a particular defect type. The user should be able to implement mechanisms for continuous monitoring of data quality and model performance, and may need to regularly update and retrain AI models with new data to adapt to changing conditions. They will also need to document the data collection process, including sources, preprocessing steps and any transformations applied to the data, in order to assess and manage risks.
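
As one simple way to operationalize such monitoring, the sketch below compares a summary statistic of live production data against the training distribution and flags drift. The sensor values and the three-standard-deviation threshold are illustrative assumptions.

    # Drift-monitoring sketch: flag a feature whose live values have
    # shifted away from the training data. The threshold is illustrative.
    import numpy as np

    def has_drifted(train_values, live_values, z_threshold=3.0):
        """Flag drift when the live mean sits several training standard
        deviations away from the training mean."""
        mu, sigma = np.mean(train_values), np.std(train_values)
        if sigma == 0:
            return False
        return abs(np.mean(live_values) - mu) / sigma > z_threshold

    # Hypothetical spindle temperatures at training time and in production.
    rng = np.random.default_rng(0)
    train_temps = rng.normal(70.0, 2.0, size=10_000)
    live_temps = rng.normal(78.0, 2.0, size=500)   # the process has shifted

    if has_drifted(train_temps, live_temps):
        print("Drift detected: review new data and schedule retraining.")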

Customizing AI tools based on risk-management requirements: Many AI tools must be properly configured to ensure they produce results that meet risk-management requirements. This involves specifying user preferences for result content and format. For instance, if the user requires a certain level of accuracy, the system should only report results that meet those standards. Additionally, users may need to supply extra data and perform further training to tailor the tool to meet risk-management requirements for specialized applications.
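
For instance, one common way to implement such a requirement is to report a prediction only when the model's confidence clears a configured threshold and to route everything else to human review. A minimal sketch follows; the threshold value is an assumption to be set from the actual risk-management requirement.

    # Confidence-threshold sketch: report only sufficiently confident
    # predictions; defer the rest to human review.
    import numpy as np

    CONFIDENCE_THRESHOLD = 0.90   # set from the risk-management requirement

    def report_or_defer(class_probs):
        """class_probs: per-class probabilities from any classifier."""
        best = int(np.argmax(class_probs))
        confidence = float(class_probs[best])
        if confidence >= CONFIDENCE_THRESHOLD:
            return {"prediction": best, "confidence": confidence}
        return {"prediction": None, "reason": "low confidence: human review"}

    print(report_or_defer([0.02, 0.95, 0.03]))   # confident, so reported
    print(report_or_defer([0.40, 0.35, 0.25]))   # uncertain, so deferred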

Identifying and mitigating risks associated with data privacy concerns: An AI tool might inadvertently use personal or proprietary data, raising privacy concerns. For instance, a system used to recognize human activities could unintentionally disclose private information about individuals. Those working with AI tools need to understand, identify and address these privacy issues. If the AI tool continuously collects data about individuals and customers, team members need adequate training in obtaining informed consent for data collection and usage. They should also be proficient in using tools to clean data in ways that preserve privacy and to securely purge data that could compromise it.
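
As a small illustration of the kind of cleaning involved, the sketch below drops a direct identifier and replaces an operator ID with a salted hash before records are retained. The column names and salt handling are hypothetical; in practice the salt would be stored and rotated securely.

    # Privacy-cleaning sketch: drop direct identifiers and pseudonymize
    # an operator ID before retaining inspection records.
    # Column names are hypothetical.
    import hashlib
    import pandas as pd

    SALT = "store-and-rotate-this-secret-separately"

    def pseudonymize(value: str) -> str:
        return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

    records = pd.DataFrame({
        "operator_name": ["A. Smith", "B. Jones"],
        "operator_id": ["E1001", "E1002"],
        "station": ["weld-3", "weld-7"],
        "defect_found": [True, False],
    })

    cleaned = records.drop(columns=["operator_name"])
    cleaned["operator_id"] = cleaned["operator_id"].map(pseudonymize)
    print(cleaned)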

Identifying and mitigating risks associated with data security concerns: Data stored in conjunction with AI tools poses a risk of breaches, as unauthorized access can lead to significant issues. Users of AI tools that store data should be well-versed in proper encryption, access controls and data segregation to mitigate these risks.

Stored data is also vulnerable to adversarial attacks, in which input data is deliberately manipulated to cause AI models to make incorrect predictions or classifications. Implementing data validation and anomaly detection techniques can help identify and counteract such attacks.

AI models themselves represent valuable intellectual property and can be targets for theft or unauthorized use. Understanding measures such as model watermarking, encryption and access controls is crucial for protecting against intellectual property theft.
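
As one hedged illustration of the anomaly-detection idea above, the sketch below fits an isolation forest to trusted training inputs and flags incoming samples that fall far outside that distribution. All data here is synthetic, and the approach is a basic guard, not a complete defense.

    # Input-validation sketch: flag samples that look nothing like the
    # trusted training data, a basic guard against manipulated inputs.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    trusted_inputs = rng.normal(0.0, 1.0, size=(5_000, 4))

    detector = IsolationForest(random_state=0).fit(trusted_inputs)

    normal_sample = rng.normal(0.0, 1.0, size=(1, 4))
    tampered_sample = np.full((1, 4), 8.0)   # far outside the training range

    # predict() returns 1 for inliers and -1 for anomalies.
    print(detector.predict(normal_sample))    # expected: [ 1]
    print(detector.predict(tampered_sample))  # expected: [-1]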

Compliance with legal and regulatory standards and guidelines: Many modern AI tools are “black boxes,” and this lack of transparency can make regulatory compliance difficult to demonstrate. Moreover, new legislation regulating AI is expected, so AI tools need to be evaluated and updated frequently to ensure that they meet new guidelines and rules. Noncompliance can create significant risks.
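
One modest step toward the transparency that compliance work requires is quantifying which inputs drive a model's decisions. The sketch below uses permutation importance from scikit-learn on synthetic data; it is one generic probing technique among many, not a regulatory standard.

    # Transparency sketch: permutation importance estimates how much
    # each input feature drives predictions, which supports audits of
    # an otherwise opaque tool. Data is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")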

Risks associated with unintended consequences: The use of AI also has the potential to cause unintended consequences. Characterizing these consequences and mitigating potential negative impacts on people and the environment is an important part of the ethical considerations in deploying AI tools. This type of risk is the most challenging to characterize. However, we should anticipate that as we deploy AI tools, new risk factors will materialize, and we need to be ready to manage them.

Workforce skill-mismatch risks: There appears to be a growing gap between the skills required by new AI-driven manufacturing processes and those possessed by the existing workforce. AI-based automation in manufacturing can also lead to job displacement as AI takes over tasks previously performed by human workers, resulting in unemployment and a shift in the labor market. New AI literacy programs are needed to support displaced workers and address these skill-mismatch risks.

We need to ensure that the manufacturing workforce is ready to manage and mitigate the risks associated with using AI tools so that it can harness the power of this new revolution. Familiarity with the factors above helps in identifying failure modes and the consequences of those failures. This knowledge can then be used to design features into the system that prevent failures. When failure prevention is not possible, the system should be designed to mitigate the consequences of potential failures and to recover from them quickly.

Written by: Dr. Satyandra K. Gupta, co-founder and chief scientist at GrayMatter Robotics, for IndustryWeek.