
HCLTech explores strategic importance of responsible AI
HCLTech's Head of Responsible AI & Governance, Dr Heather Domin, has provided insights into the complexities and strategic importance of responsible artificial intelligence (AI) and generative AI (GenAI) within large organisations.
Dr Domin, an expert in AI ethics with professional certifications in AI Governance, remarked, "There is a huge misconception that governance and responsible AI/GenAI are the same thing. Governance is about compliance, ensuring AI/GenAI aligns with regulatory requirements, while responsible AI/GenAI goes further, embedding ethical principles into how AI/GenAI is built and deployed. Progress in AI/GenAI shouldn't just be about meeting legal standards but about building technology that aligns with human values."
She further emphasised the emerging need for leadership roles within companies that focus on ethical AI development. "We may soon start to see CEOs emerge from responsible AI backgrounds who understand not only AI/GenAI's technical capabilities but also how to balance innovation with ethics, risk and long-term impact. Likewise, the Chief Responsible AI Officer (or a similar role) will soon become a core part of the C-suite. Companies that recognise this shift early and embed responsible AI/GenAI leadership at the highest levels will be the ones that succeed," she stated.
Dr Domin underscored the significance of integrating ethical principles from the outset of technology development. "When responsible AI/GenAI sits in a corporate legal or regulatory function, it becomes an afterthought. To make AI/GenAI truly responsible, it needs to be integrated into technology development from the start, ensuring ethical considerations are fundamental," she explained.
Addressing the challenge of bias in AI models, Dr Domin noted that effective mitigation demands care and nuance. "Bias mitigation is a problem that requires a careful and nuanced approach. Rather than simply blocking certain outputs, AI/GenAI models need to understand the context of historical versus present-day representations, requiring diverse datasets, rigorous testing, and human oversight," she said.
Dr Domin also highlighted the risks of over-reliance on AI technology without adequate human oversight. She observed, "It is dangerous for organisations to over-rely on AI; they should also consider approaches that allow AI models to pull from verified sources rather than relying solely on pre-trained data."