Section 1 - Purpose
(1) This statement sets out the principles (the Charles Sturt University AI Principles) that guide the ethical and responsible use of artificial intelligence tools at Charles Sturt University (the University).
Scope
(2) This statement applies to all University staff and students and to all members of the University's governing bodies.
Section 2 - Statement
Overview
(3) Artificial intelligence (AI) tools present both opportunities and challenges to higher education. They will significantly affect the way we teach, research, learn and work, and the whole University community must proactively ensure these tools are used for the benefit of students, staff and the wider community. It is imperative that we guide the ethical and responsible use of AI at the University.
(4) The following principles have been developed collectively. Taken together, they set out the need for a coordinated, whole-of-organisation approach to the use of AI tools, including adequate support for effective engagement, and acknowledge that these principles will require ongoing revision to keep pace with the rapidly changing AI environment.
Charles Sturt University AI Principles
(5) The Charles Sturt University AI Principles are:
- Human-AI Partnership: Prioritise the development and deployment of human-centred AI systems that augment human capabilities and decision-making, rather than replace them. Encourage the co-evolution of staff, students and AI, where each benefits from, and enriches, the others' contributions, and where ultimate responsibility rests with humans.
- Transparency: Clearly communicate to all the purpose, scope, limitations, and methodologies that underpin AI applications. This includes sharing information about data sources and about where, when, and how AI outputs are used in decision-making, so that staff and students can question, challenge and engage with AI-enabled outcomes.
- Accountability: Establish clear lines of responsibility for AI applications and their use, including who is responsible for communications, training, development, deployment, review and oversight. AI systems should be auditable and traceable. Implement robust monitoring and reporting mechanisms to track impact. AI-informed functions and decisions must be subject to human review, oversight, and intervention.
- Privacy and Data Protection: Ensure compliance with data protection laws and safeguard the privacy of individuals whose data is used in AI systems. Anonymise and secure data, and practise data minimisation wherever possible. AI systems must be safe and perform as intended. Individuals must confirm that appropriate controls exist before entering data into an AI tool; otherwise, any inputs should be treated as public and uncontrolled.
- Fairness, Autonomy, and Inclusivity: Ensure equitable access for all staff and students, regardless of abilities or backgrounds, while implementing measures to identify, mitigate (where necessary), and monitor potential biases. The best outcomes from AI will depend on data quality, the use of relevant data and careful data management. The use of AI should not adversely impact social justice, fairness, and impartiality.
- Education and Skills Development: Cultivate AI literacy and ethical engagement across the University. Provide targeted training on AI and embed ethical considerations in AI-related curricula to foster a culture of informed and responsible AI use. The focus should be not just on the affordances of the technology, but also on ensuring clarity about the underpinning processes that are executed by or with AI.
- Academic Integrity: Set stringent ethical standards for AI in academia and research. Implement clear policies and ethical frameworks to safeguard individual and societal well-being. These must align with the use of AI in University administration to ensure whole-of-organisation consistency.
- Ethical Considerations: Formulate and adhere to ethical guidelines that address the moral implications of AI, such as potential harm, consent, and the welfare of individuals and communities affected by AI applications.
- Collaboration: Foster interdisciplinary collaboration to address the ethical, legal, social, and technical challenges of AI. Engage with external stakeholders, such as industry partners, policymakers, accrediting bodies and community members, to share best practices and promote responsible AI use and to ensure inclusive approaches to AI governance.
- Continuous Improvement and Governance: Regularly assess and evaluate the impact of AI applications on staff, students, communities, and society. Use the findings to update policies, processes, and systems so that responsible AI use at the University continues to evolve alongside generative AI and other emerging forms of the technology.
- Environmental Sustainability: Implement practices to minimise the environmental footprint of AI systems, including energy-efficient algorithms and hardware, and regularly assess the ecological impact of AI operations within the university.
Section 3 - Procedure
(6) Nil.
Section 4 - Guidelines and supporting documents
(7) Nil.
Section 5 - Glossary
(8) In this document, the following definition applies:
- Artificial intelligence (AI) – Aligned with the definition provided by the New South Wales Government’s Artificial Intelligence Ethics Policy, artificial intelligence is defined as intelligent technology, including programs and the application of advanced computing algorithms, that can augment decision-making by identifying meaningful patterns in data.
Section 6 - Document context