Scott Zoldi is an expert in developing ethical AI solutions, with 25 years at FICO and nearly nine as the company’s Chief Analytics Officer. During his talk at the London Blockchain Conference 2024 (21–23 May), he emphasised the growing importance of trust and governance in AI.
Zoldi presented blockchain as a crucial tool for maintaining consistent standards and easing regulatory interactions. He opened his talk by stating the problem plainly: AI is increasingly used to solve business problems whose decisions directly affect people. FICO, which pioneered credit scoring in the 1950s, monitors two-thirds of the world’s payment card transactions for fraud.
Zoldi emphasised the responsibility that comes with developing AI, given its significant impact on individuals. Any AI that affects consumers is high-risk, he argued, and requires serious accountability. He criticised the industry’s tolerance for bias and opacity in generative AI, stressing that responsible AI is essential for decisions that shape people’s lives, such as loans and education.
Concerns and mistrust towards AI are widespread
Zoldi shared survey results highlighting widespread concerns about AI. A KPMG survey reveals that 61% of consumers are wary of trusting AI, and 73% perceive significant risks in its use. IBM surveyed 7,502 businesses globally, finding that 61% are concerned about their ability to explain AI-powered decisions.
According to a survey of American bankers, 75% want stronger AI guardrails, 80% are worried about inaccurate information from generative AI, and 57% have significant ethical concerns. Zoldi concluded that these findings indicate we are not yet well-positioned to use AI responsibly.
Zoldi noted that regulators are increasingly stepping in because of consumers’ low trust in AI. Regulations such as GDPR and the recent EU AI Act classify AI systems by risk, placing areas like credit scoring in the high-risk category, and require human oversight.
He mentioned that the US is recognising the potential for widespread discrimination from AI models and warned of a potential “AI winter”.
Responsible AI
Zoldi then introduced the concept of responsible AI, a term he coined eight years ago. Responsible AI involves building models with care and seriousness, adhering to pillars that ensure proper development.
It’s essential for meeting regulations and establishing trust in AI. However, Zoldi noted that while many people claim to support responsible AI, the industry lacks clear standards for its implementation.
He highlighted the inadequacy of current approaches, such as explainable AI, and emphasised the need for the industry to address this issue by establishing pillars for responsible AI:
- Robust AI: This involves the careful, thorough development of AI models, including extensive data collection, testing, and understanding of the relationships within the model. It’s a rigorous process that can take months or even a year.
- Interpretable AI: This pillar focuses on creating AI models that can be understood and explained by humans. It addresses concerns about “black box” models by ensuring that the relationships and equations in the AI can be examined and justified.
- Ethical AI: Zoldi emphasised the importance of addressing bias in AI models. Since all data is inherently biased, it’s crucial to test AI models to ensure they don’t propagate that bias, especially towards protected groups (a minimal example of such a test follows after this list).
- Auditable AI: This pillar is about accountability and governance in AI development. Auditable AI involves tracking the development process, ensuring proper decisions are made, and maintaining a record of who made those decisions. It is crucial to demonstrate that a model was developed responsibly.
Zoldi stressed that these pillars are essential for establishing trust and ensuring that AI is used responsibly in organisations.
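To make the ethical-AI pillar concrete, here is a minimal sketch of the kind of bias test Zoldi describes: comparing a model’s approval rates across groups. The data, the group labels, and the 0.8 (four-fifths rule) threshold are illustrative assumptions for this article, not FICO’s actual methodology.

```python
# A toy check for one ethical-AI test: comparing model approval rates
# across groups. Data, group labels, and the 0.8 threshold are
# illustrative assumptions, not FICO's methodology.

def approval_rate(decisions):
    """Share of positive (1 = approved) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two groups of applicants.
reference_group = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
protected_group = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = approval_rate(protected_group) / approval_rate(reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")

# The four-fifths rule of thumb flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Potential disparate impact: investigate the features driving the gap.")
```

In practice such checks would run across many protected attributes and model versions, and the results themselves would become part of the audit record described below.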
Establishing governance for responsible AI
Zoldi explained that responsible AI development requires bringing together key stakeholders, such as analytics leaders, legal, compliance, product managers, and customer service teams, to define and standardise the model development process within an organisation. Today, many organisations struggle with AI regulation: 43% are unsure how to meet regulatory requirements, and consistent standards are lacking. As a result, model development practices vary from person to person within the same organisation, and governance teams often fall back on verbal audits, which are error-prone and inconsistent.
To address this, Zoldi advocated for a unified model governance standard within organisations. This standard would ensure consistent use of algorithms, disallow certain practices and establish clear guidelines for explainability and ethics.
The goal is for an organisation to own this standard and be able to defend it to regulatory bodies, ensuring that AI models are built correctly and responsibly. That includes proper problem formulation, representative data, and thorough record-keeping.
Blockchain’s role in responsible AI development
Zoldi emphasised blockchain’s crucial role in ensuring transparency and accountability in AI development. Blockchain provides an immutable, cryptographically secure record of every decision made during the development process, including mistakes. This transparency creates a trustworthy and auditable approach to AI, where the entire development journey is documented and can be reviewed.
Zoldi identified three key roles in auditing responsible AI:
- Data Scientist: Responsible for fulfilling specific requirements and understanding success criteria to ensure the model meets these standards.
- Tester: Independently verifies the model’s outcomes against the success criteria, providing an objective assessment of the data scientist’s work.
- Reviewer: A higher authority who reviews the model and testing results to ensure that all requirements have been met correctly.
Blockchain underpins every hand-off between these roles, recording requirements, test results, and sign-offs, and in doing so serves as the tool that makes AI’s current “black box” transparent.
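To illustrate the mechanism, here is a minimal sketch of a hash-chained audit ledger in the spirit of what Zoldi describes. It is a toy built on assumptions, not FICO’s system: the `AuditLedger` class, its fields, and the example entries are all hypothetical, and a real deployment would use an actual blockchain rather than an in-memory list.

```python
import hashlib
import json
import time


class AuditLedger:
    """Append-only, hash-chained log of model-development decisions.

    Each entry embeds the hash of the previous one, so tampering with
    any recorded decision breaks the chain, mimicking the immutability
    a real blockchain would provide.
    """

    def __init__(self):
        self.entries = []

    def record(self, role, requirement, decision):
        """Append one decision made by a data scientist, tester, or reviewer."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "role": role,                # who acted: data_scientist, tester, reviewer
            "requirement": requirement,  # which success criterion this concerns
            "decision": decision,        # what was done, found, or approved
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the whole chain; False means some entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


ledger = AuditLedger()
ledger.record("data_scientist", "bias-test-v1", "trained model; ran group fairness checks")
ledger.record("tester", "bias-test-v1", "independently reproduced the fairness results")
ledger.record("reviewer", "bias-test-v1", "approved: success criteria met")
print(ledger.verify())  # True, until any past entry is modified
```

Because each entry’s hash covers the previous entry’s hash, altering any recorded decision invalidates everything after it, which is what makes the development record auditable after the fact.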