The economic potential of AI is undisputed, yet it remains largely untapped by organizations, with a staggering 87% of AI projects failing to succeed.
Some consider it a technology problem, others a business problem, a culture problem or an industry problem – but the latest evidence reveals that it is, above all, a trust problem.
According to recent research, nearly two-thirds of senior executives say that trust in AI drives revenue, competitiveness and customer success.
Trust is a complicated word to unpack when it comes to AI. Can you trust an AI system? If so, how? We don't trust humans immediately, and we're even less likely to trust AI systems immediately.
But a lack of trust in AI hampers its economic potential, and many of the recommendations aimed at building trust in AI systems have been criticized as too abstract or too ambitious to be practical.
It is time to develop a new “AI trust equation” focused on practical application.
The AI Trust Equation
The trust equation, a concept aimed at building trust between people, was first proposed in The Trusted Advisor by David Maister, Charles Green and Robert Galford. The equation is Trust = Credibility + Reliability + Intimacy, divided by Self-Orientation.
![](https://venturebeat.com/wp-content/uploads/2024/05/image3_cbad8d.png?w=800)
It's clear at first glance why this is an ideal equation for building trust between humans, but it doesn't translate to building trust between humans and machines.
To build trust between humans and machines, the new AI trust equation is Trust = Security + Ethics + Accuracy, divided by Control.
![](https://venturebeat.com/wp-content/uploads/2024/05/image2_32f717.png?w=800)
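To make the arithmetic concrete, here is a minimal sketch that treats the equation as a scoring rubric, assuming each dimension is rated on a 1–10 scale. The scale, the class name and the example scores are illustrative assumptions, not part of the original equation.

```python
from dataclasses import dataclass

@dataclass
class AITrustAssessment:
    """Scores an AI system on the four dimensions (1-10 scale assumed)."""
    security: float  # "Will my information be secure if I share it with this system?"
    ethics: float    # labor practices, explainability, bias, economic model, stated values
    accuracy: float  # how reliably the system answers questions in your workflow
    control: float   # how much control your organization must retain

    def trust_score(self) -> float:
        # Trust = (Security + Ethics + Accuracy) / Control, as the equation states
        return (self.security + self.ethics + self.accuracy) / self.control

# Hypothetical example: strong security and accuracy, middling ethics,
# and a high control requirement pull the overall score down.
candidate = AITrustAssessment(security=8, ethics=5, accuracy=7, control=6)
print(f"Trust score: {candidate.trust_score():.2f}")  # Trust score: 3.33
```

As in the formula itself, Control sits in the denominator, so how your organization scores control strongly shapes the result.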
Security is the first step on the path to trust and consists of several key principles that are well described elsewhere. For the exercise of building trust between humans and machines, it boils down to the question: “Will my information be secure if I share it with this AI system?”
Ethics is more complicated than security because it is a moral rather than a technical issue. Before investing in an AI system, leaders should consider:
- How were the people involved in building this model treated, such as the Kenyan workers who helped prepare ChatGPT? Is that something we feel comfortable supporting by building our solutions on it?
- Is the model explainable? If it produces a harmful result, can I understand why? And is there anything I can do about it (see Control)?
- Are there any implicit or explicit biases in the model? This is a thoroughly documented issue, as in the Gender Shades research by Joy Buolamwini and Timnit Gebru, and in Google's recent attempt to eliminate bias in its models, which resulted in ahistorical biases of its own.
- What is the economic model of this AI system? Do those whose information and life's work trained the model get compensated when the model built on their work generates revenue?
- What are the stated values of the company that created this AI system, and how well do the actions of the company and its leaders align with those values? OpenAI's recent choice to imitate Scarlett Johansson's voice after she declined to lend hers to ChatGPT, for example, shows a significant gap between the company's stated values and Altman's decision.
Accuracy can be defined as how reliably the AI system provides an accurate answer to a series of questions throughout a workflow. This can be simplified as follows: "When I ask this AI a question based on my context, how useful is its answer?" The answer is directly related to 1) the sophistication of the model and 2) the data it was trained on.
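As a rough illustration of measuring accuracy in this workflow sense (a sketch of mine, not a method from the article), you can track the fraction of representative workflow questions whose answers meet your bar. The questions and judgments below are hypothetical.

```python
# Minimal sketch: accuracy as the fraction of workflow questions whose answers
# were judged acceptable. The questions and verdicts are hypothetical placeholders
# for your own test set and review process.
workflow_results = [
    # (question, system_answer_was_acceptable)
    ("What is our refund window for enterprise contracts?", True),
    ("Which region hosts our EU customer data?", True),
    ("Summarize the Q3 incident postmortem.", False),
]

accuracy = sum(ok for _, ok in workflow_results) / len(workflow_results)
print(f"Workflow accuracy: {accuracy:.0%}")  # Workflow accuracy: 67%
```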
Control is at the heart of the conversation about trusting AI, and it ranges from the most tactical question – "Will this AI system do what I want it to do, or will it make a mistake?" – to one of the most pressing questions of our time: "Are we ever going to lose control of intelligent systems?" In both cases, the ability to control the actions, decisions and outcomes of AI systems underpins the notions of trust and implementation.
5 Steps to Using the AI Trust Equation
- Determine if the system is useful: Before investing time and resources in assessing whether an AI platform is trustworthy, organizations would benefit from determining whether it actually helps them create more value.
- Check if the platform is secure: What happens to your data if you upload it to the platform? Does any information leave your firewall? Working closely with your security team or hiring security advisors is essential to ensure that you can rely on the security of an AI system.
- Define your ethical threshold and evaluate all systems and organizations against it: If any of the models you invest in must be explainable, define, with absolute precision, a common, empirical definition of explainability across your organization, with upper and lower tolerable limits, and measure proposed systems against those limits (a minimal sketch of such a check appears after this list). Do the same for each ethical principle your organization deems non-negotiable when it comes to leveraging AI.
- Define your accuracy goals and don't deviate from them: It can be tempting to adopt a system that doesn't perform well because its output is only a first pass that humans will refine. But if it performs below the accuracy goal you have defined as acceptable for your organization, you risk low-quality work and a greater burden on your employees. More often than not, low accuracy is a model problem or a data problem, both of which can be solved with the appropriate level of investment and focus.
- Decide how much control your organization needs and how it is defined: The degree of control you want decision-makers and operators to have over AI systems will determine whether you want a fully autonomous system, a semi-autonomous one, or AI-assisted workflows – or whether your organization's tolerance for sharing control with AI systems sets a higher bar than any current AI system can clear.
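The threshold-setting steps above call for explicit limits and measurement against them. Here is a minimal sketch of such a rubric, assuming your organization has already scored candidate systems; every dimension name, threshold and score is a hypothetical placeholder, not a standard.

```python
# Hypothetical evaluation rubric: define your own limits per the steps above.
THRESHOLDS = {
    "explainability": 0.7,  # lower tolerable limit your organization defined
    "accuracy": 0.95,       # accuracy goal you refuse to deviate from
    "security": 0.9,        # bar set with your security team
}

def evaluate(candidate: dict[str, float]) -> list[str]:
    """Return the dimensions where a candidate AI system misses your limits."""
    return [dim for dim, floor in THRESHOLDS.items()
            if candidate.get(dim, 0.0) < floor]

# Example: a system with strong security but sub-goal accuracy fails the rubric.
failures = evaluate({"explainability": 0.8, "accuracy": 0.91, "security": 0.95})
print("Rejected on:", failures or "none")  # Rejected on: ['accuracy']
```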
In the age of AI, it can be easy to look for best practices or quick wins, but the truth is that no one has figured all of this out yet – and by the time someone does, it will no longer be a differentiator for you and your organization.
So, rather than waiting for the perfect solution or following trends set by others, take the lead. Assemble a team of champions and sponsors within your organization, tailor the AI trust equation to your specific needs, and begin evaluating AI systems against it. The rewards of such an effort are not only economic; they are fundamental to the future of technology and its role in society.
Some technology companies see market forces moving in this direction and are working to develop appropriate commitments, controls and visibility into how their AI systems operate – Salesforce's Einstein Trust Layer, for example – while others argue that any level of visibility would cede a competitive advantage. You and your organization will need to determine how much trust you want to place in both the outputs of AI systems and the organizations that build and maintain them.
The potential of AI is immense, but it will only be realized when AI systems and the people who make them can achieve and maintain trust within our organizations and society. The future of AI depends on it.
Brian Evergreen is the author of “Autonomous Transformation: Creating a More Human Future in the Age of Artificial Intelligence.”