Stories by Philip Hardy and Chris Baker
Philip Hardy and Chris Baker explore the intersection of artificial intelligence and regulatory compliance, highlighting the challenges and risks that come with integrating AI into decision-making processes. Their stories focus on the consequences of relying too heavily on AI without adequate human oversight.
By reading their insights, readers can gain a clearer understanding of why precision and thoroughness remain critical when deploying AI solutions, especially in highly regulated environments. Their work encourages a balanced approach to AI adoption that prioritizes both efficiency and accountability.
Incremental intelligence: The case for AI as an intern, not a partner
Mon, 22nd Sep 2025
Treat AI like an intern, not a partner: start it on small tasks, verify its accuracy, and expand its role only once it has proven reliable, keeping risk under control.
Beyond ‘black box’ mode - Proving AI’s controls actually work
Wed, 30th Jul 2025
Compliance leaders must prove AI outputs are reliable, using validation and human checks to avoid costly regulatory errors in complex legal tasks.
When ‘good enough’ AI gets you fined (or fired!)
Tue, 15th Jul 2025
Rushing AI into compliance work risks hefty fines or job losses; speed without precision fails regulators, making solid human oversight essential.