Anthology launches AI policy framework for universities
Education technology provider Anthology today unveiled its AI Policy Framework, designed to help higher education institutions develop and adopt robust policies and programmes for the ethical use of artificial intelligence (AI). The framework offers support and guidance on assessing the broad implications of AI, along with advice on drafting and establishing effective policies and governance structures.
The newly launched AI Policy Framework by Anthology is based on seven principles: fairness; reliability; humans remaining in charge; transparency and explainability; privacy, security, and safety; value alignment; and accountability. It draws significantly on international standards, aligning with the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles.
Bruce Dahlgren, CEO of Anthology, reflected on the transformative impact of generative AI on the higher education sector. "Many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power," said Dahlgren. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."
The implications of AI use stretch across every aspect of an institution, from academics, administration, and governance to operational processes. In this context, the AI Policy Framework urges institutions to adopt a comprehensive AI policy that reflects their overall stance on AI and provides a clear roadmap for adapting that general policy to specific academic departments, administrative units, operations, and colleges. Reflecting this holistic approach, the framework addresses governance, teaching and learning, copyright and intellectual property, research, academic dishonesty, operational and administrative uses, policy updates, and the consequences of non-compliance.
In a recent survey of university leaders on AI's impact on higher education and university operations, 34% of respondents said they were worried about the ethical use of AI. However, an almost equal share (33%) acknowledged the value of AI in providing personalised learning experiences, and nearly one in five leaders believed AI could help create effective enrolment or admission campaigns.
Anthology has put its own learning into practice by establishing a Trustworthy AI programme in 2023, led by its Global Privacy Officer. Central to this initiative is the commitment to implementing the same seven principles that underpin its AI Policy Framework for higher education. The education technology provider emphasises that while its AI Policy Framework offers a starting point for institutions in developing their own ethical AI policies, these should be customised to fit differing needs and circumstances.
The Anthology AI Policy Framework thus offers higher education institutions a blueprint for formulating their own ethical AI policies and procedures, drawing on insights gained from the company's implementation of its Trustworthy AI programme.