AI Governance and Responsible Use Policy

Section 1 - Summary
(1) This Policy defines the governance and responsible use of Artificial Intelligence (AI) at Victoria University (VU or the University), ensuring data integrity, privacy and security, communicating obligations when using AI technologies and protecting stakeholders from the unintended consequences of AI implementations and use.

Section 2 - Scope

Section 3 - Policy Statement
(3) This Policy provides a clear framework and principles to ensure the responsible and ethical use of AI technologies, their governance and application, and to:

Part A - AI Governance Principles
Core AI Governance Principles
(4) VU's core AI Governance Principles have been developed to guide the responsible use, development and management of AI technologies across the University, in alignment with Australia's AI Ethics Principles.

Part B - AI Governance Framework
AI Steering Committee
(5) The AI Steering Committee will be established as the authoritative management committee overseeing the strategic enablement, responsible use and enterprise coordination of AI at Victoria University. It will report through the Vice-Chancellor's Group (VCG) and will be added to the VCG workplan to ensure executive visibility and alignment.
(6) The AI Steering Committee will work closely with the Data Governance Committee to:

AI Risk Classification
(7) The AI Risk Classification Model is used to classify AI technologies based on their potential impact, data sensitivity, level of automation, and associated legal, ethical and reputational risks.

Provisions for Human Oversight and Contestability of AI Decisions
(8) All AI systems must have appropriate human oversight, proportionate to the Risk Level (see AI Risk Classification Model) assigned to the AI system.
(9) All AI systems that support, influence or make decisions affecting individuals must:

Part C - Use of AI Technologies
Responsible Use
(10) VU promotes a flexible, principles-based approach (see Part A - AI Governance Principles) to the responsible use of AI technologies across academic, research and business contexts that enables innovation while safeguarding individuals, communities and VU values.
(11) VU provides a suite of AI tools that are University-licensed, supported and assessed for alignment with VU policies and standards. These AI tools are recommended for use where possible and can be used in teaching, research and administration activities without prior approval, on the condition that users comply with this Policy.
(12) Users who choose to explore alternative AI tools (including open-source and publicly available platforms) for individual use are requested to assess these tools using the AI Risk Classification Model and to follow the relevant guidelines provided by the AI Steering Committee.
(13) Users of alternative AI tools must ensure they comply with the principles of responsible use (see Part A - AI Governance Principles), applicable legislation, and University policies and procedures, including the AI Governance and Responsible Use Procedure (to be established).
(14) VU does not seek to regulate all AI technologies exhaustively; however, it may prohibit certain AI tools for University use based on strategic alignment, legislation and regulations, risk assessment, and ethical evaluation, to protect the integrity, privacy, transparency and security of VU data.
(15) Users are required to use AI technologies in an appropriate, responsible and ethical manner and in line with this Policy.
(16) Users are expected to engage safely with AI technologies when using AI to enhance productivity, decision-making and service delivery for the benefit of staff and students.
(17) Users must be aware that generative AI can produce realistic content that may be difficult to distinguish from human-created work and may contain inaccuracies or 'hallucinations'. Such content, while appearing credible, can unintentionally mislead, influence important decisions or contribute to the spread of misinformation if accepted without critical evaluation.
(18) Users are encouraged to explicitly disclose when content has been generated using AI, to support transparency and mitigate the risk of misinformation.
(19) Users must critically evaluate AI-generated content for accuracy, impartiality and limitations, and are responsible for verifying the information to ensure the reliability of the content before use.
(20) The use of AI in research activities must comply with data security frameworks, this Policy, the Research Integrity Policy, the Research Integrity - Authorship Procedure and the Research Integrity - Research Data Management Procedure.
(21) VU reserves the right to deploy AI system monitoring and web filtering capabilities, in line with the Information Security Policy, to ensure that the use of AI technologies is lawful and complies with University values and policies.
(22) VU will provide education, training and awareness programs to support users to engage with AI technologies ethically, responsibly and effectively.
(23) Only authorised users with a legitimate business purpose may access training data used for AI technologies, or AI outputs involving sensitive or identifiable data.

Unacceptable Use
(24) The following is inappropriate and prohibited when accessing or using AI technologies:

Learning and Teaching
(25) Staff may use AI technologies to support the assessment of, or feedback on, students' work, provided they comply with this Policy and the Assessment for Learning Policy and retain responsibility for making evaluative judgements regarding students' submitted work or any feedback given.
(26) Student use of AI technologies must comply with this Policy, the Academic Integrity Policy and the Academic Integrity Guidelines.
(27) Specific rules on student use of AI technologies may be set by the subject educator. Depending on the course, unit or discipline, and the intended learning outcomes, AI tool usage may be restricted, prohibited or actively encouraged. Users must comply with these rules as outlined in their course requirements.
(28) Students are generally permitted to use University-approved AI tools to enhance their personalised learning and to assist in preparing assessed work; however, they must appropriately disclose the use of any AI-generated content in line with the Academic Integrity Policy and ensure that their contributions, whether individual or part of a group project, are original.

Intellectual Property and Copyright Considerations
(29) Under current Australian copyright law, only works created by humans can be protected. This means that output generated entirely by AI (including but not limited to content, data, code, tools, models or recommendations) is not eligible for copyright protection. For a work to be protected by copyright, it must involve a human who has made a meaningful intellectual contribution.
(30) The following principles guide VU's position on intellectual property (IP) and copyright as they relate to AI, and apply in conjunction with the Copyright Policy, Research Integrity Policy and Learning and Teaching Quality and Standards Policy:
(31) Enterprise AI tools, data pipelines or models developed by staff, contractors or vendors within the scope of their work for the University are considered University-owned IP, unless otherwise negotiated. This includes tools built for internal automation, research, learning analytics or administrative use.
(32) Uploading or training AI systems using University-owned data or content (including research data, student records, teaching materials or internal documents) must comply with the AI Governance and Responsible Use Procedure (to be established), Data Governance, the Information Security Policy and the Intellectual Property Regulations 2013.

Part D - AI System Management
(33) AI systems used for business, research and academic purposes must be selected, managed and utilised in a manner that achieves the objectives of the University and according to the principles outlined in the IT Asset - Business Application Procedure, and must comply with the IT Asset Policy, Purchasing Policy, Risk Management Policy, Third Party Arrangements Policy, Contracts Policy and Contracts Procedure.
(34) The procurement of AI systems, including processes for assessment, approval and contractual arrangements with third-party service providers of AI systems, must include appropriate consultation and AI governance measures (see AI Risk Classification Model) prior to entering contractual commitments.
(35) All AI implementations must adhere to relevant international regulations when handling data across international borders, including when using third-party AI systems or services hosted or provided by external vendors.
(36) When deploying AI systems that handle data across international borders, VU must consider:
(37) AI systems will be subject to a full lifecycle management process proportionate to the Risk Level (see AI Risk Classification Model) assigned to the AI system.
(38) AI system lifecycle management procedures will be established and implemented in alignment with the IT Asset - Business Application Procedure, the Information Security Policy and the AI Governance and Responsible Use Procedure (to be established).
(39) VU maintains procedures for identifying, reporting and responding to AI-related incidents (including but not limited to system failures, data breaches and bias) that align with existing incident management processes, the Information Security Policy, the Critical Incident, Emergency Planning and Business Continuity Policy, and the Privacy Policy.
(40) VU will maintain a dedicated AI Incident Register aligned with the University's broader incident reporting systems.
(41) Academic integrity incidents relating to the unacceptable use of AI-generated content in student assessed materials are recorded in accordance with the Academic Integrity Policy.

Part E - AI Data Practices
(42) AI implementations must adhere to the University's Data Governance Framework and Data Governance Standards, with additional AI-specific data governance requirements based on the Risk Level (see AI Risk Classification Model) assigned to the AI system, including:

Part F - Breach of Policy
(43) Any actual or suspected breaches of this Policy and related Procedures should be reported immediately to the relevant line manager and/or the Digital and Campus Services Desk.
(44) All breaches of this Policy will be treated seriously and may be subject to disciplinary action in accordance with the relevant enterprise agreement (for employees) or the Student Misconduct Regulations 2019 (for students).

Section 4 - Procedures
(45) AI Governance and Responsible Use Procedure (to be established)
(46) AI Systems Lifecycle Procedure (to be established)
(47) AI Risk Classification Model (to be established)

Section 5 - HESF/ASQA/ESOS Alignment
(48) HESF: Standards 2.1 Facilities and Infrastructure; 3.3 Learning Resources and Support; 7.3 Information Management.
(49) Outcome Standards for NVR Registered Training Organisations 2025: Standards 1.3-1.5 Assessment; 1.8 Facilities, Equipment and Resources; 2.7, 2.8 Feedback, Complaints and Appeals; 4.3 Risk Management; 4.4 Continuous Improvement; 2.1-2.2 Information.

Section 6 - Definitions
(50) Artificial Intelligence (AI): The simulation of human intelligence in machines, enabling them to learn, reason, solve problems and make decisions. AI describes IT systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. An AI system is a machine-based system that infers, from the input it receives, how to generate outputs such as language processing content, recommendations or decisions.
(51) Bias in AI: Unfair advantages or disadvantages in AI outputs due to biased training data.
(52) Data Privacy: The protection of personal or sensitive information used by AI systems, ensuring compliance with laws and regulations.
(53) Ethical AI: The practice of developing and deploying AI systems that align with human values and moral principles, ensuring fairness, transparency, accountability and the prevention of harm to users.
(54) Hallucination: The phenomenon where a generative AI system, such as a large language model (LLM), produces content that is incorrect, misleading, not factual or entirely fabricated, including made-up references or citations that appear plausible but do not exist.
(55) Large Language Model (LLM): A specific class of machine learning model trained on large volumes of sample text in order to perform natural language processing tasks, such as language generation. Examples include OpenAI ChatGPT, Gemini, Llama and xAI Grok.
(56) Users: Staff, contractors, consultants, third-party service providers, volunteers and students (when working within the University's virtual and physical environments).

Section 7 - Supporting Documents and Information