Generative AI is introducing a new risk vector to the technology domain, including issues such as ‘hallucinations’, ‘jailbreaks’ and ‘adversarial prompting’ that can be used to manipulate data. This is according to KPMG’s Global Tech Report 2024, which examined how emerging technologies such as AI and blockchain are affecting the workforce. The report is based on a survey of 2,450 executives across 26 countries.
‘The unexpected consequences of generated content have appeared seemingly frequently in the news this year,’ the group said. ‘In light of this, when scaling emerging tech, organisations will need to have the governance and processes in place to control emerging risks.’
Survey respondents highlighted the importance of continually developing AI governance policies for ethical and fair use in line with the evolving regulatory landscape.
‘The process of establishing trusted guardrails and controls requires a multi-lens end-to-end secure approach to designing, building and deployment that incorporates coding best practices, code reviews, staff training and red teaming exercises that test the performance and reliability of AI models,’ KPMG said.
‘The ongoing AI hype may tempt organisations to make hurried decisions. However, successful AI adoption requires an enterprisewide multidisciplinary effort, a clear alignment across the enterprise, and a shared understanding of AI’s role and its potential.
‘Organisations that can clearly define and communicate the value of AI to all stakeholders within the context of their business objectives, and execute through thoughtful collaboration, will have a greater chance of maximising its impact across the enterprise.’
Blockchain and AI
Blockchain complements AI so effectively largely because of its core attributes of transparency and immutability. By harnessing these capabilities, AI algorithms can securely access and share data without relying on intermediaries, thereby ensuring data integrity and maintaining transparency throughout the process.
Blockchain also bolsters the credibility of AI systems by offering a verifiable and auditable record of data transactions and model outputs. Since AI algorithms require extensive datasets for training, blockchain facilitates the secure and transparent exchange of such data among diverse entities, including individuals, organisations, and Internet of Things (IoT) devices.
The use of blockchain enables end-to-end tracking of data provenance, from its origin to its implementation. This ensures accountability and allows stakeholders to verify the authenticity and quality of the data used by AI models.
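The provenance tracking described above can be illustrated with a minimal sketch: each processing step of a dataset is recorded as an entry whose hash incorporates the previous entry's hash, so any later tampering breaks the chain. This is a simplified in-memory stand-in, not an actual on-chain implementation; the class and field names are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a provenance record together with the previous entry's hash,
    chaining entries so later tampering is detectable."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ProvenanceLedger:
    """Append-only ledger of dataset provenance entries (a toy stand-in
    for on-chain storage)."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain from the start; any altered record
        invalidates every subsequent hash."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

A stakeholder auditing a model's training data would append an entry at each stage (collection, cleaning, labelling) and later call `verify()` to confirm the recorded history is intact.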
Moreover, blockchain addresses significant challenges in AI, such as privacy concerns and data ownership issues. With blockchain-based identity management and authentication systems, individuals can retain ownership and control over their personal data while permitting AI algorithms to securely access and analyse it.
This approach empowers users to selectively share their data with AI applications, maintaining privacy and control while fostering greater trust and collaboration between users and AI systems. Additionally, blockchain enables users to monetise their data through microtransactions, ensuring fair compensation for their contributions.
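The selective-sharing idea above can be sketched in a few lines: the owner keeps the raw data private, publishes only a hash commitment, and grants or revokes access per application; a consumer verifies fetched data against the public commitment. All names here are illustrative assumptions, and real systems would use on-chain identity and cryptographic access control rather than this in-process toy.

```python
import hashlib

class DataVault:
    """Toy sketch of user-controlled data sharing: the owner publishes only
    a hash commitment and grants access per application (hypothetical IDs)."""

    def __init__(self, data: bytes):
        self._data = data  # stays private with the owner
        self.commitment = hashlib.sha256(data).hexdigest()  # publicly shareable
        self._granted = set()

    def grant(self, app_id: str):
        self._granted.add(app_id)

    def revoke(self, app_id: str):
        self._granted.discard(app_id)

    def fetch(self, app_id: str) -> bytes:
        if app_id not in self._granted:
            raise PermissionError(f"{app_id} has no access grant")
        return self._data

def verify_against_commitment(data: bytes, commitment: str) -> bool:
    """Consumer-side check that fetched data matches the published hash."""
    return hashlib.sha256(data).hexdigest() == commitment
```

An AI application that has been granted access can fetch the data and confirm its integrity against the commitment; revoking the grant cuts off further access without the owner ever surrendering the raw data.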
Using the BSV blockchain to keep AI on track
The BSV blockchain’s scalability is a crucial factor when considering its compatibility with AI. AI algorithms often demand significant computational resources and generate vast amounts of data during training and inference.
The BSV blockchain’s high scalability and transaction throughput enable it to handle the immense volume of data transactions that AI systems require, ensuring smooth and uninterrupted operation even under heavy workloads.
Moreover, the BSV blockchain’s stability and immutability make it an ideal platform for ensuring the integrity and security of AI-related data and transactions.