Section 1 - Purpose
(1) This statement sets out the principles on the use of artificial intelligence (the Charles Sturt University AI Principles) that will guide the ethical and responsible use of artificial intelligence tools at Charles Sturt University (the University).
(2) Consistent with the Australian Code for the Responsible Conduct of Research 2018 (Research Code), this statement also includes guidance material for the use of artificial intelligence tools in the context of research activities.
Scope
(3) This statement applies to all members of the University community.
Section 2 - Statement
Overview
(4) Artificial intelligence (AI) tools present both opportunities and challenges for higher education. They will significantly change the way we teach, research, learn and work, and the whole University community must proactively ensure these tools are used for the benefit of students, staff and the wider community. It is imperative that we guide the ethical and responsible use of AI at the University.
(5) The following principles have been developed collectively. Considered in their entirety, they set out the need for a coordinated, whole-of-organisation approach to the use of AI tools, including adequate support for effective engagement, and acknowledge that these principles will require ongoing revision to keep pace with the rapidly changing AI environment.
Artificial intelligence principles for use
(6) The Charles Sturt University AI Principles for use are:
- Principle 1: Human-AI Partnership: Prioritise the development and deployment of human-centred AI systems that augment human capabilities and decision-making, rather than replace them. Encourage the co-evolution of staff, students and AI, where each benefits from, and enriches, the others' contributions, and where ultimate responsibility rests with humans.
- Principle 2: Transparency: Clearly communicate to all the purpose, scope, limitations, and methodologies that underpin AI applications. This includes sharing information about data sources, decision-making processes, and where, when, and how outputs are used, so that staff and students can question, challenge and engage with AI-enabled outcomes.
- Principle 3: Accountability: Establish clear lines of responsibility for AI applications and their use, including who is responsible for communications, training, development, deployment, review and oversight. AI systems should be auditable and traceable. Implement robust monitoring and reporting mechanisms to track impact. AI-informed functions and decisions must be subject to human review, oversight, and intervention.
- Principle 4: Privacy and Data Protection: Ensure compliance with data protection laws and safeguard the privacy of individuals whose data is used in AI systems. Anonymise and secure data, and practise data minimisation wherever possible. AI systems must be safe and perform as intended. It is incumbent on individuals to confirm that such controls exist before entering data; otherwise, any inputs should be treated as public and uncontrolled.
- Principle 5: Fairness, Autonomy, and Inclusivity: Ensure equitable access for all staff and students, regardless of abilities or backgrounds, while implementing measures to identify, mitigate (where necessary), and monitor potential biases. The best outcomes from AI will depend on data quality, the use of relevant data and careful data management. The use of AI should not adversely impact social justice, fairness, and impartiality.
- Principle 6: Education and Skills Development: Cultivate AI literacy and ethical engagement across the University. Provide targeted training on AI and embed ethical considerations in AI-related curricula to foster a culture of informed and responsible AI use. The focus should be not only on the affordances of the technology but also on ensuring clarity about the underpinning processes executed by or with AI.
- Principle 7: Academic Integrity: Set stringent ethical standards for AI in academia and research. Implement clear policies and ethical frameworks to safeguard individual and societal well-being. These must align with the use of AI in University administration to ensure whole-of-organisation consistency.
- Principle 8: Ethical Considerations: Formulate and adhere to ethical guidelines that address moral implications of AI, such as potential harm, consent, and the welfare of individuals and communities affected by AI applications.
- Principle 9: Collaboration: Foster interdisciplinary collaboration to address the ethical, legal, social, and technical challenges of AI. Engage with external stakeholders, such as industry partners, policymakers, accrediting bodies and community members, to share best practices and promote responsible AI use and to ensure inclusive approaches to AI governance.
- Principle 10: Continuous Improvement and Governance: Regularly assess and evaluate the impact of AI applications on staff, students, communities, and society. Use the findings to update policies, processes, and systems so that responsible AI use in universities continues to evolve in line with generative AI and other emerging forms of this technology.
- Principle 11: Environmental Sustainability: Implement practices to minimise the environmental footprint of AI systems, including energy-efficient algorithms and hardware, and regularly assess the ecological impact of AI operations within the University.
Guidance specific to the use of artificial intelligence tools in research
(7) This guidance provides specific context for applying the AI Principles in research settings. It does not modify the core principles but offers additional considerations for research applications.
(8) Research activities and outputs remain the ultimate responsibility of the human researcher(s). The onus is on the researcher(s) to ensure they adhere to the Research Code at all times when using AI technology. Based on the Research Code, the following guidelines should be followed:
- Guideline R1: AI technology may be used to support researchers’ intellectual and scholarly work, not as a substitute for it.
- Guideline R2: Researchers are responsible for what is entered into AI technology and must have the appropriate consent and authorisation, such as copyright or intellectual property, for any content they input.
- Guideline R3: Researchers are responsible for the careful interpretation, acknowledgement and representation of outputs generated by AI technology (including citation).
- Guideline R4: Researchers are responsible for maintaining contemporary knowledge of AI technology, including its benefits and risks, to ensure the context of its use aligns with the Research Code.
Section 3 - Procedure
(9) Nil.
Section 4 - Guidelines and supporting documents
(10) Nil.
Section 5 - Glossary
(11) In this document, the following definition applies:
- Artificial intelligence (AI) – Aligned with the definition provided by the New South Wales Government’s Artificial Intelligence Ethics Policy, artificial intelligence is defined as intelligent technology, including programs and the application of advanced computing algorithms, that can augment decision-making by identifying meaningful patterns in data.
Section 6 - Document context