Artificial intelligence is explaining itself to humans, and it's paying off

Microsoft Corporation's LinkedIn boosted subscription revenue by 8% after equipping its sales team with artificial intelligence software that not only predicts which clients are at risk of cancelling, but also explains how it reached its conclusion.

The system, launched last July and described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting AI to “show its work” in a helpful way.

Although AI scientists have no problem designing systems that accurately predict all kinds of business outcomes, they are discovering that to make these tools more effective for human operators, the AI may need to explain itself through yet another algorithm.

The emerging field of “explainable AI,” or XAI, has spurred large investments in Silicon Valley as startups and cloud giants compete to make opaque software more understandable, and has stoked discussion in Washington and Brussels, where regulators want to ensure that automated decision-making is done fairly and transparently.

AI technology can perpetuate societal biases around race, gender and culture. Some AI scientists see explanations as a crucial part of mitigating those problematic outcomes.

U.S. consumer protection regulators, including the Federal Trade Commission, have warned over the past two years that AI that cannot be explained could be investigated. The European Union could pass an artificial intelligence law next year, a comprehensive set of requirements that includes enabling users to interpret automated predictions.

Proponents of explainable AI say it has helped increase the effectiveness of AI in areas such as healthcare and sales. Google Cloud, for example, sells explainable AI services that tell clients trying to sharpen their systems which pixels, and soon which training examples, mattered most in predicting the subject of a photo.
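Neither the article nor this passage describes the exact method such services use. As a loose, hypothetical illustration of pixel-level attribution, a minimal sketch using plain input gradients (a simpler relative of attribution techniques such as integrated gradients) might look like the following; the tiny model and random image are stand-ins, not anything Google Cloud ships:

```python
# Hypothetical sketch of pixel attribution via plain input gradients, a
# simpler relative of the attribution methods that managed explainable-AI
# services expose. The model and image below are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                 # tiny stand-in image classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy 32x32 RGB image
logits = model(image)
predicted = logits.argmax(dim=1).item()

# Gradient of the predicted class's score with respect to the input pixels:
# large magnitudes flag the pixels that most influence the prediction.
logits[0, predicted].backward()
saliency = image.grad.abs().max(dim=1).values          # shape (1, 32, 32)
top_pixels = saliency.flatten().topk(5).indices
print("most influential pixel positions (flattened):", top_pixels.tolist())
```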

But critics say the explanations of why AI predicted what it did are too unreliable, because the AI technology for interpreting the machines is not yet good enough.

LinkedIn and other developers of explainable AI acknowledge that there is still room for improvement at every step of the process: analyzing predictions, generating explanations, verifying their accuracy and making them actionable for users.

But after two years of trial and error in relatively low-stakes applications, LinkedIn says its technology has paid off. Its evidence is a 7 per cent increase in renewal bookings in the current financial year above normally expected growth. LinkedIn declined to specify the benefit in dollars, but described it as sizeable.

Previously, LinkedIn salespeople relied on their own intuition and some patchy automated alerts about clients’ use of its services.

Now, the AI handles the research and analysis quickly. Dubbed CrystalCandle by LinkedIn, it calls out unnoticed trends, and its reasoning helps salespeople hone their strategies to keep at-risk customers on board and pitch others on upgrades.

LinkedIn says the explanation-based recommendations have been rolled out to more than 5,000 sales employees, spanning its recruitment, advertising, marketing and education offerings.

“It has armed experienced salespeople with specific insights to help them navigate conversations with prospects. It has also helped new salespeople dive in right away,” said Parvez Ahmed, LinkedIn’s director of machine learning and head of data science applied research.

To explain or not to explain?

In 2020, LinkedIn first provided predictions without explanations. A score with roughly 80 percent accuracy indicated whether a client soon due for renewal would upgrade, hold steady or cancel.

Salespeople were not fully won over. The teams selling LinkedIn’s Talent Solutions recruiting and hiring software were unclear on how to adapt their strategy, especially when the odds of a client not renewing were no better than a coin toss.

Last July, they began seeing a short, auto-generated paragraph highlighting the factors influencing the score.

For example, the AI concluded that a customer was likely to upgrade because it had grown by 240 employees over the past year and candidates had become 146 percent more responsive in the last month.

In addition, an index that measures a client’s overall success with LinkedIn recruitment tools has grown 25 percent in the last three months.
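The article does not describe CrystalCandle’s internals. As a rough illustration of the general pattern, a score for each account plus a short paragraph naming the factors behind it, a minimal sketch might look like the following; the feature names, data and simple logistic model are hypothetical stand-ins, not LinkedIn’s actual system:

```python
# Hypothetical sketch: score an account's renewal likelihood, then turn
# per-feature contributions into a short explanation paragraph.
# Feature names, data and the simple logistic model are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = ["headcount_growth_1y", "candidate_response_change_1m",
            "success_index_change_3m"]

# Toy training data: one row per account, label 1 = renewed or upgraded.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal([50, 0, 0], [80, 60, 15], size=(500, 3)),
                 columns=FEATURES)
y = (X.to_numpy().sum(axis=1) + rng.normal(0, 50, 500) > 60).astype(int)

pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]

def explain_account(account: pd.DataFrame, top_k: int = 3) -> str:
    """Return a renewal score plus a sentence naming the biggest factors."""
    score = pipe.predict_proba(account)[0, 1]
    # For a linear model, each feature's contribution to the log-odds is
    # simply coefficient * standardized feature value.
    contributions = clf.coef_[0] * scaler.transform(account)[0]
    order = np.argsort(-np.abs(contributions))[:top_k]
    factors = "; ".join(
        f"{FEATURES[i]} = {account.iloc[0, i]:.0f} "
        f"({'pushes the score up' if contributions[i] > 0 else 'pushes the score down'})"
        for i in order
    )
    return f"Estimated renewal likelihood: {score:.0%}. Key factors: {factors}."

# Example account echoing the figures mentioned in the article.
sample = pd.DataFrame([[240, 146, 25]], columns=FEATURES)
print(explain_account(sample))
```

A production system would use a stronger model and a richer attribution method, but the output shape is the same: a probability and a few named factors a salesperson can act on.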

Lekha Doshi, LinkedIn’s vice president of global operations, said that based on the explanations, sales representatives now direct clients toward the training, support and services that improve their experience and keep them spending.

But some AI experts question whether explanations are necessary at all. They could even do harm, engendering a false sense of security in AI or prompting design sacrifices that make predictions less accurate, researchers say.

“People use products like Tylenol and Google Maps whose inner workings are not neatly understood,” said Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence. In such cases, rigorous testing and monitoring have dispelled most doubts about their efficacy.

Similarly, AI systems overall could be deemed fair even if individual decisions are inscrutable, said Daniel Roy, an associate professor of statistics at the University of Toronto.

LinkedIn says that the integrity of an algorithm cannot be assessed without understanding its thinking.

It maintains that tools like its CrystalCandle could help AI users in other fields. Doctors could learn why AI predicts that someone is at higher risk of a disease, or people could be told why AI recommended they be denied a credit card.

The hope is that explanations reveal whether a system aligns with the concepts and values one wants to promote, said Been Kim, an AI researcher at Google.

“I view interpretability as ultimately enabling a conversation between machines and humans,” she said. “If we truly want to enable human-machine collaboration, we need that.”

Thomson Reuters 2022
