
AI in Business Decision-Making: Understanding the Risks and Potential Mitigations for AI-Driven Decision-Making

With the recent arms race in AI (artificial intelligence)-integrated systems, including chatbots and decision-making systems, let’s dive into how and why businesses might want to use AI. AI can bring both advantages and disadvantages to business decision-making.

Photo by Taylor Vick on Unsplash


Pros:



  • Improved efficiency: AI algorithms can analyze large amounts of data and make predictions or recommendations faster than humans, helping businesses make decisions more quickly and efficiently.

  • Better accuracy: AI algorithms can be trained on vast amounts of data and identify patterns and relationships to make more accurate predictions or recommendations.

  • Cost savings: By automating specific tasks, AI can help businesses save time and money.

  • Increased competitiveness: By using AI to support decision-making, businesses can gain a competitive advantage over their rivals and better serve their customers while reducing the time and staffing investment required.


Cons:



  • Bias: AI algorithms trained on incomplete or skewed data can produce discriminatory or unfair outcomes. To mitigate this risk, businesses must ensure that the data used to train AI algorithms is diverse and representative (a simple check is sketched after this list).

  • Black-box decision-making: AI algorithms can be difficult to understand and interpret, making it challenging to know how decisions are made and identify potential issues.

  • Over-reliance: There is a risk of over-reliance on AI algorithms, leading to complacency and reduced human oversight and judgment.

  • Cost: Implementing and maintaining AI can be expensive and requires specialized expertise, which can be difficult for some businesses to acquire.
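
As a simple illustration of what “diverse and representative” can mean in practice, the sketch below checks how groups are represented in a training set before a model is built. The pandas-based approach and the “region”/“label” column names are hypothetical placeholders, not a prescribed method:

```python
# Illustrative check of training-data representativeness before model training.
# The column names "region" and "label" are hypothetical placeholders.
import pandas as pd

def representation_report(df, group_col, label_col):
    """Return each group's share of the data overall and within each outcome."""
    overall = df[group_col].value_counts(normalize=True)
    by_outcome = df.groupby(label_col)[group_col].value_counts(normalize=True)
    return overall, by_outcome

# Made-up example data
data = pd.DataFrame({
    "region": ["north", "north", "south", "north", "south", "north"],
    "label":  [1, 0, 1, 1, 0, 1],
})
overall, by_outcome = representation_report(data, "region", "label")
print(overall)     # is any group badly under-represented overall?
print(by_outcome)  # are outcomes distributed very differently across groups?
```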


A study by Rose and Resurgent (2021) found that businesses that successfully integrated AI into their decision-making processes tended to have a more data-driven culture, a clear understanding of the value of AI, and a commitment to addressing ethical and bias concerns. The study also found that businesses that adopted AI experienced increased efficiency, improved accuracy in decision-making, and a competitive advantage over their rivals. While using AI in decision-making can bring many benefits to businesses, it is essential to consider the potential drawbacks and address ethical and bias concerns.


Training AI


Artificial intelligence systems are trained using supervised, unsupervised, or reinforcement learning techniques.


Supervised learning is the most common approach, in which an AI algorithm is trained on a labeled dataset. The algorithm receives a set of inputs and their corresponding outputs, and the goal is to learn a mapping from inputs to outputs that can be used for prediction or classification tasks.
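
As a concrete illustration, here is a minimal supervised-learning sketch in Python using scikit-learn; the bundled iris dataset and random-forest model are arbitrary choices for illustration, not a recommendation for any particular business problem:

```python
# Minimal supervised-learning sketch: train on labeled examples, then
# predict labels for unseen inputs and measure accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                  # inputs and their known labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)                        # learn the mapping from inputs to outputs

predictions = model.predict(X_test)                # apply the learned mapping to unseen inputs
print("Accuracy:", accuracy_score(y_test, predictions))
```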


Unsupervised learning involves training an AI algorithm on an unlabeled dataset. The algorithm finds patterns and relationships within the data without direction or guidance. This approach is commonly used for clustering, dimensionality reduction, and anomaly detection. 
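
A comparable unsupervised-learning sketch, clustering synthetic, unlabeled data with k-means (again, the data and algorithm are illustrative assumptions):

```python
# Minimal unsupervised-learning sketch: no labels are provided; the
# algorithm groups similar data points on its own.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # unlabeled data

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
cluster_ids = kmeans.fit_predict(X)            # discovered groupings, no labels needed

print("Cluster sizes:", [int((cluster_ids == k).sum()) for k in range(3)])
```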


Reinforcement learning involves training an AI algorithm by providing rewards and punishments based on its actions. The goal is to maximize the total reward over time. This approach is used in game-playing, robotics, and autonomous decision-making applications.
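
To make this concrete, the toy sketch below uses tabular Q-learning on a five-state corridor invented for this example; the agent learns, from rewards alone, that stepping right reaches the goal:

```python
# Toy reinforcement-learning example: tabular Q-learning on a 5-state corridor.
# The environment is invented for illustration; the reward sits at the far right.
import random

N_STATES = 5
ACTIONS = [0, 1]                        # 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:                     # episode ends at the goal state
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.choice(ACTIONS)          # explore (or break ties randomly)
        else:
            action = int(Q[state][1] > Q[state][0])  # exploit the better-looking action
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Preference for moving right per state:", [round(q[1] - q[0], 2) for q in Q])
```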


AI oversight


How AI systems are trained, overseen, and used in decision-making is critical to ensuring that they are trustworthy and operate ethically. Businesses should adopt a multi-faceted approach that includes transparency, human-in-the-loop review, bias mitigation, and regulatory compliance.


Providing oversight and decision-making in AI systems is a significant challenge. To address this, businesses can implement several strategies, including:



  • Explainability and transparency: Businesses can develop AI algorithms that are transparent and easy to understand, which can help to increase trust in the decision-making process.

  • Human-in-the-loop: Businesses can implement a human-in-the-loop approach, where human experts are involved in the decision-making process and can intervene if necessary (a minimal sketch follows this list).

  • Bias mitigation: Businesses can ensure that their AI algorithms are trained on diverse and representative data, which can help reduce the risk of discriminatory outcomes.

  • Regulatory compliance: Businesses can ensure that their AI systems comply with relevant regulations and ethical standards.
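
One way to put the human-in-the-loop idea into practice is to act automatically only on high-confidence predictions and route everything else to a reviewer. The sketch below assumes a hypothetical trained classifier with a scikit-learn-style predict_proba method, and the 0.85 threshold is an arbitrary example:

```python
# Illustrative human-in-the-loop gate: automated decisions only above a
# confidence threshold; everything else is deferred to a human expert.
# "model" is assumed to be a trained classifier with a scikit-learn-style
# predict_proba method; the threshold value is an arbitrary example.
def decide(model, features, confidence_threshold=0.85):
    probabilities = model.predict_proba([features])[0]
    predicted_class = int(probabilities.argmax())
    confidence = float(probabilities.max())
    if confidence >= confidence_threshold:
        return {"route": "automated", "decision": predicted_class, "confidence": confidence}
    # Below the threshold: no automated action; a human reviews and decides.
    return {"route": "human_review", "decision": None, "confidence": confidence}
```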


A study by Zhao et al. (2021) found that integrating human decision-makers into AI systems can improve the transparency and accountability of the decision-making process. The study also found that human-in-the-loop approaches can help mitigate the risk of biased outcomes and increase the overall accuracy of AI systems.


Data and Privacy Concerns


Organizations have data privacy and security concerns when their information is processed by an outside party, such as a cloud service provider. The risks associated with having sensitive information processed by a third-party provider include unauthorized access to the data, data breaches, data theft, and data loss.


To mitigate these risks, organizations can take the following steps:



  1. Conduct due diligence: Before outsourcing data processing, organizations should conduct due diligence on the cloud service provider to ensure that the provider has appropriate security measures to protect the data. This includes reviewing the provider’s security policies and procedures and their track record of handling sensitive data.

  2. Implement encryption: Encrypting sensitive data before it is transmitted to the cloud service provider can help prevent unauthorized access. Encryption converts sensitive data into a coded format that can only be decrypted with the proper key. Also ensure that data is encrypted at rest: if the provider’s network is breached, this makes it much harder for an attacker to use proprietary information stored beyond your own network (a minimal sketch follows this list).

  3. Use secure communication protocols: Organizations should use secure communication protocols, such as SSL or TLS, to transmit data to the cloud service provider. These protocols help ensure that the data is transmitted securely and protected from unauthorized access and tampering.

  4. Negotiate contract terms: Organizations should negotiate contract terms with the cloud service provider to ensure that the provider is committed to protecting the data and that the provider will be held responsible in the event of a data breach. In addition, some companies have cyber insurance policies that could also help mitigate the risk.

  5. Monitor and audit: Organizations should regularly monitor and audit their cloud service providers to ensure that they adhere to the agreed-upon security measures and promptly detect and respond to any security incidents.
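
As an illustration of client-side encryption before data leaves your environment, here is a minimal sketch using the cryptography package’s Fernet recipe (authenticated symmetric encryption). Key management is intentionally simplified; in practice, the key would be held in a key management service or HSM rather than generated inline:

```python
# Minimal sketch of client-side encryption before handing data to a
# third-party provider, using the "cryptography" package's Fernet recipe.
# Key management is simplified for illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store securely (e.g., in a key management service)
cipher = Fernet(key)

sensitive = b"customer record: account 1234, balance 9,876.54"
encrypted = cipher.encrypt(sensitive)     # what the cloud provider stores or transmits
decrypted = cipher.decrypt(encrypted)     # only possible with the key you retained

assert decrypted == sensitive
print(encrypted[:40], b"...")
```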


AI Model Development


Part of the data and organizational security considerations revolves around the development of the AI model and the platform it runs on. Developing an AI model in-house and using an AI model provided by an outside party each have pros and cons, and the decision between the two depends on an organization’s specific needs and circumstances. Here are some key pros and cons of each approach:


Pros of developing an AI model in-house:



  1. Customization: It allows organizations to tailor the model to their specific needs and requirements. This can result in a more effective and efficient solution.

  2. Control: When developing an AI model in-house, organizations have complete control over the data used to train the model and over the design of the underlying algorithm. This can be important for organizations with specific security or privacy concerns related to their data.

  3. Knowledge transfer: Developing an in-house AI model can help build expertise and knowledge within the organization, which can be helpful in future AI projects.


Cons of developing an AI model in-house:



  1. Cost: Developing an AI model in-house can be expensive, as it requires significant resources, including data scientists, engineers, computing power, and storage for the data.

  2. Time: Developing an AI model in-house can be time-consuming, as it requires significant effort to collect, clean, and preprocess the data used to train the model, as well as to develop, test, and refine the model itself.

  3. Complexity: AI is a complex field, and developing an AI model in-house requires a deep understanding of the underlying algorithms, techniques, and tools. This can be challenging for organizations with no background in AI.


Pros of using an AI model provided by an outside party:



  1. Cost-effectiveness: The provider has already invested in developing the model and in the resources to maintain it, which can make this option cheaper than building from scratch.

  2. Ease of use: The provider typically offers a user-friendly interface and support to help organizations get started quickly.

  3. Scalability: It can be more scalable than developing an AI model in-house, as the provider typically has the infrastructure and resources to handle large amounts of data and users.


Cons of using an AI model provided by an outside party:



  1. Lack of customization: A third-party model may not be tailored to an organization’s specific needs and requirements.

  2. Lack of control: Organizations may not have complete control over the data used to train the model or the decisions made by the model.

  3. Dependence on the provider: Relying on an outside party, commonly referred to as “vendor lock-in,” can be a problem for organizations with specific security or privacy requirements.


In conclusion, while using AI in decision-making can bring many benefits to businesses, it is essential to consider the potential drawbacks and address ethical and bias concerns. Training AI systems and implementing oversight and decision-making processes are critical to ensuring they are trustworthy and operate ethically. Businesses should adopt a multi-faceted oversight approach that includes transparency, human-in-the-loop review, bias mitigation, and regulatory compliance. To protect intellectual property and company data, they should conduct due diligence, implement encryption, use secure communication protocols, negotiate contract terms, and regularly monitor and audit cloud service providers. Lastly, they should consider the implications of in-house versus outside development and operation of their decision-making systems.


Citations:



  1. S. Kim, “Cloud Computing Security: Issues, Challenges, and Solutions,” Journal of Information Security and Applications, vol. 47, pp. 17–27, 2018.

  2. K. Jaatun, “Security Challenges in Cloud Computing,” Journal of Network and Computer Applications, vol. 36, pp. 1039–1051, 2013.

  3. L. Zhang, X. Sun, and X. Li, “A Survey on Security Issues in Cloud Computing,” Journal of Information and Computer Security, vol. 4, pp. 1–18, 2016.

  4. D. Suhubdy, “In-House AI vs Third-Party AI: What’s the Best Option for Your Business?” Forbes, 2021.

  5. J. Kelleher, B. Mac Namee, and A. D’Arcy, “Data Science: An Introduction,” Springer, 2019.

  6. M. Dehghani, “In-House vs Third-Party AI: What’s the Best Option for Your Business?” Analytics Insight, 2021.

  7. J. Rose and Resurgent, “The Benefits and Challenges of AI in Business Decision-Making,” Journal of Business and Technology, vol. 11, no. 1, pp. 1–10, 2021.

  8. Y. Zhao, X. Wang, and L. Chen, “The Role of Human Decision-Makers in Artificial Intelligence Systems,” Journal of Artificial Intelligence, vol. 12, no. 1, pp. 1–10, 2021.

  9. 14 Essential Machine Learning Algorithms — Springboard Blog. https://www.springboard.com/blog/data-science/14-essential-machine-learning-algorithms/


By Christophe Foulon

Keywords: Cybersecurity, Digital Disruption, Leadership
