Artificial Intelligence Has A Big-Time Trust Issue


Can an enterprise even be put on autopilot? Photo: Joe McKendrick


There is plenty of concern about the repercussions of AI on society and human activity, suggesting that we are turning over the keys of reason to machines. Forbes contributor Lauren deLisa Coleman explored the dark side of AI presented in the documentary Do You Trust This Computer? But trust needs to be established at many levels.

Trust always needs to be earned, and AI is still working on earning its cred, not only in popular perceptions, but also in the minds of the people who will be paying gobs of money for it. That's the gist of a recent Deloitte survey and analysis of 1,100 companies. "Executives are concerned about a host of risks associated with AI technologies," the co-authors, Jeff Loucks, Tom Davenport and David Schatsky, observe. "Some of the risks are typical of those associated with any information technology; others are as unique as AI technology itself."

Typically with IT systems over the years, if an installation didn't go right, the system could be rolled back to its starting point until the bugs were worked out. Usually, the people hurt the most were IT staff members. With AI, however, issues may be hard to detect, and there is blind reliance on machine-made decisions that may not be audited right away, if ever.

The need for trust emerges in the list of top risks executives associate with AI, including the following:

  1. Cybersecurity vulnerabilities
  2. Making the wrong strategic decisions based on AI
  3. Legal responsibilities for decisions made by AI
  4. AI failure in a life-or-death context

As AI creeps higher into the enterprise — going from mapping out delivery routes to chatting with customers to playing a greater role in managerial decisions — the need for trust is only getting more acute.

What can be done to build trust in AI? Loucks and Davenport have the following suggestions, based on their observations on the best practices of early adopters:

Pursue execution excellence: "To drive change across lines of business, companies should focus on project management and change management," Loucks and Davenport state. "The fundamentals of fostering organizational change can get lost amid excitement around pilots, grassroots experiments, and vendor-driven hype."

Measure and track progress: "Ensure that costs and impacts are tracked carefully, and that successes are incontrovertible. This will help CFOs make the investments required as projects—and budgets—get bigger."

Address cybersecurity risks: Stay on top of the latest tools and developments that address security in AI environments and datasets. "No cybersecurity efforts can prevent every attack, but early adopters can improve their defenses by incorporating security from the beginning of the process, and making it a higher priority."


Apply AI beyond the IT function: "It makes sense that complex technologies, which require heavy involvement from the IT department, would be applied there first. But the transformative potential of AI will likely be reached only when it permeates an entire company and enables change in multiple business functions and units."

Take advantage of cloud resources: "Cloud can play a pivotal role in achieving those objectives via services that provide broad ranges of users with easy access to AI-based capabilities. Many big cloud providers are developing subscription-based AI services aimed at specific business functions. This may be the easiest path to getting the benefits of AI into functions such as product design and sales and marketing."

Buy off the shelf: "Although cognitive technologies are still evolving, this evolution is happening at a breakneck pace. Cloud-based CRM and ERP software with cognitive capabilities are widely available, as are chatbots. Surely, companies need expertise within their “four walls,” but they should examine which capabilities they can get from enterprise software and cloud-based platforms. This can lead to quick wins, lower initial investment, and momentum."

Staff wisely: "Focusing only upon the hardest talent to attract and retain—AI researchers and programmers, and data scientists—may not be the best strategy, especially for companies just starting out.  Focusing too much on acquiring high-cost, scarce talent that the tech giants are scooping up in a bitter arms race may lead to frustration and disappointment. Companies also likely need business executives who can speak AI to data scientists and understand the uses and limitations of data analytics."


Decide where to automate and where to augment: "There are clear use cases where automation is simply better and more efficient than humans. In many more instances, machines will surface information, make predictions, and offer alternatives. Humans, using judgment, empathy, and business skill, should apply this information to best effect. This is a matter not simply of placing humans in the loop but of the loop being built to augment human decision-making."
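The automate-versus-augment split can be made concrete as a simple routing rule. The sketch below is purely illustrative: the class, function names, and thresholds are invented for this example and do not come from the Deloitte report. It shows one plausible way a system might act directly only on low-impact, high-confidence recommendations, while surfacing everything else to a human as advice.

```python
# Illustrative sketch only: a hypothetical "automate vs. augment" routing
# rule. All names and thresholds here are invented assumptions, not part
# of any real system or of the Deloitte study quoted above.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "reorder part #123"
    confidence: float  # model's confidence in the recommendation, 0..1
    impact_usd: float  # estimated financial impact of acting on it

def route(rec: Recommendation,
          confidence_floor: float = 0.95,
          impact_ceiling_usd: float = 10_000.0) -> str:
    """Automate only high-confidence, low-impact decisions;
    everything else becomes a suggestion for a human to review."""
    if rec.confidence >= confidence_floor and rec.impact_usd <= impact_ceiling_usd:
        return "automate"  # machine acts directly
    return "augment"       # machine advises, human decides

print(route(Recommendation("reorder part #123", 0.99, 500.0)))    # automate
print(route(Recommendation("switch supplier", 0.80, 250_000.0)))  # augment
```

The point of a rule like this is that the loop is designed around human judgment from the start: the thresholds define where the machine's autonomy ends, rather than bolting a human review step onto a fully automated pipeline afterward.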


We're reaching the point at which an AI-driven purchasing system can collect sales and shipping orders, look at historical trends, and pre-order and prepay for batches of materials for production before the demand is even apparent, without active human oversight. Would you trust an AI system with your corporate wallet?
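One common answer to the corporate-wallet question is a budget guardrail: let the system pre-order autonomously only within a spending envelope, and escalate anything beyond it for human sign-off. The sketch below is a hypothetical illustration of that idea; the budget figure and function names are assumptions made up for this example.

```python
# Hypothetical guardrail for an autonomous purchasing agent: pre-orders
# within the budget envelope go through automatically; anything that
# would exceed it is held for human approval. The envelope amount and
# all names here are illustrative assumptions, not a real system.

BUDGET_ENVELOPE_USD = 50_000.0

def place_preorder(amount_usd: float, spent_so_far_usd: float) -> dict:
    """Approve or escalate a pre-order against the remaining budget."""
    remaining = BUDGET_ENVELOPE_USD - spent_so_far_usd
    if amount_usd <= remaining:
        return {"status": "placed", "amount": amount_usd}
    return {"status": "needs_human_approval",
            "amount": amount_usd,
            "remaining_budget": remaining}

print(place_preorder(10_000.0, 30_000.0))  # placed: within the envelope
print(place_preorder(30_000.0, 30_000.0))  # escalated: would exceed it
```

A cap like this doesn't make the AI's forecasts more trustworthy, but it bounds the cost of being wrong, which is often what executives actually mean when they ask whether they can trust the system.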

(Excerpt) | 2018-10-30 08:28:00