How organisations build AI they can trust
At FTT Fintech Festival, we had the opportunity to speak with Paul Dongha, Head of Responsible AI and AI Strategy at NatWest, about what it takes to establish robust AI governance in modern organisations. Paul explained that governance is ultimately about enabling organisations to deploy AI systems with confidence, ensuring they behave as intended while reducing familiar risks such as hallucinations and data leakage. He emphasised that effective governance rests on three pillars working together: people, process, and technology.
The people and process pillars come down to how the organisation structures accountability across risk, ethics, privacy, legal, and data science. These teams must work together rather than treating AI governance as a purely technical issue or the remit of a single department. When governance is bolted on at the end or treated as a simple compliance exercise, its value is lost; it has to be embedded as AI is developed and deployed.
Technology completes the picture. Paul discussed the importance of integrating tooling directly into the development lifecycle so that data scientists and machine learning engineers can identify, mitigate, and demonstrate how they have managed the risks in their models. This practical foundation sits at the heart of his new book, Governing the Machine, written for organisations and senior managers who may not be AI specialists. The book offers a clear, non-technical blueprint for managing AI responsibly, supported by interviews and case studies from major organisations, including two from NatWest, showing how governance is implemented in real environments.
As organisations adopt AI at growing scale, building strong governance frameworks is essential. Watch the full video below to hear more about why it is so important to establish AI governance proactively.