Ethical Artificial Intelligence (AI) Policy
Version - 0.1
Effective Date: 28/11/2025
1 Introduction
This Artificial Intelligence (AI) Policy encompasses the principles, guidelines, and best practices that THE LUPUS FOUNDATION OF AUSTRALASIA LTD has established to govern the ethical development, deployment, and management of AI technologies. As AI continues to evolve and integrate into various aspects of our operations, this policy aims to ensure that AI systems are used responsibly and transparently, fostering trust among stakeholders. The purpose of this policy is to provide a comprehensive framework that will:
- Promote ethical and fair use of AI technologies.
- Ensure transparency and accountability in AI decision-making processes.
- Safeguard the privacy and security of data processed by AI systems.
- Mitigate the risks associated with AI use while harnessing its benefits.
- Encourage innovation in AI that aligns with THE LUPUS FOUNDATION OF AUSTRALASIA LTD's values and mission.
By adhering to this policy, THE LUPUS FOUNDATION OF AUSTRALASIA LTD seeks to leverage AI technologies to enhance operational efficiency, improve customer experiences, and drive organisational growth in a manner that is ethical, secure, and respectful of individual rights.
2 Who this Policy applies to
This policy applies to all employees, volunteers, board members, contractors, partners, temporary staff, consultants and authorised agents of THE LUPUS FOUNDATION OF AUSTRALASIA LTD who leverage AI in their day-to-day business activities. This includes the development, deployment, management, and use of AI systems.
3 Guiding Principles
In line with the Australian Government’s AI Ethics Principles, the following principles outline the ethical, transparent, and accountable use of AI, ensuring data protection and providing the education necessary for responsible AI use. Further resources are listed in the endnotes.
3.1 Ethical Use
- Follow ethical guidelines that reflect our community’s priorities and core values1 (e.g., fairness and non-discrimination, respect for privacy, inclusiveness and sustainability).
- Review AI outputs for biases. AI systems are only as good as the data they are trained on; AI can perpetuate biases present in its training data, leading to unfair outcomes, especially against marginalised groups.
- Before adopting AI, ensure it will be used only in human-centred ways: staff oversee the technology and make the final decisions on its use, ensuring it does not create or worsen biases.
3.2 Data Protection
- Ensure that non-public information (i.e. confidential or private) is not shared with public AI models (e.g. ChatGPT or Otter.ai) or the features these services offer.
- Regularly check AI systems for compliance with data protection laws and sector standards. Handling large amounts of personal data necessitates stringent measures to protect the privacy and security of individuals, particularly the vulnerable populations who are often stakeholders.
- Ensure that data is stored and processed in accordance with the organisation’s data protection policies and legal requirements, including the Privacy Act 1988 and the Australian Privacy Principles, which govern Data Sharing Agreements.
3.3 Transparency
- Clearly communicate to all stakeholders how AI systems are used and the decisions they assist with.
- Ensure that AI-generated content is fact-checked and verified before dissemination. Many AI systems lack clarity in how their decision-making processes occur, posing challenges for accountability in critical NGO operations.
3.4 Accountability
- Define who is accountable for AI-related decisions and ensure they understand the legal implications.
- Create guidelines for ethical decision-making that include human oversight and ethical considerations.
- Regularly assess the impact of AI on staff and external stakeholders, and adjust policies as needed.
3.5 Education and Training
- Offer training sessions to ensure all staff understand how to use AI tools responsibly and ethically.
- Develop a shared playbook of best practices, including how to craft prompts and fact-check AI responses.
- Promote awareness of the benefits and challenges of AI through educational initiatives.
4 Staff Guidelines for Responsible AI Use
- Understand AI Capabilities and Limitations: Know what AI can and cannot do. Use AI for its intended tasks and avoid over-reliance on AI without human oversight.
- Data Privacy and Security: Ensure data inputted into AI is anonymised and secure. Follow data security policies.
- Ethical Considerations: Be aware of potential biases in AI responses and use AI in ways that align with our ethical standards.
- Review and Validate AI Outputs: Always check AI-generated outputs for accuracy and relevance before using them.
- Report Issues: Report any AI-related concerns to management or IT staff. Participate in training sessions to stay updated on best practices.
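The anonymisation step above can be sketched in a few lines. This is an illustrative example only: the `redact` helper and its patterns are hypothetical, not an approved organisational tool, and real de-identification must follow the organisation's data protection policies.

```python
import re

# Illustrative sketch only: these patterns are examples of obvious
# identifiers (emails, Australian-style phone numbers), not a complete
# or approved de-identification scheme.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"(?:\+61|0)[\d \-]{8,12}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text is pasted
    into a public AI tool (e.g. a chatbot)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane@example.org or 0412 345 678."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Even with such a filter in place, staff should still review text manually before sharing it, since names, addresses, and health details are not reliably caught by simple patterns.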
5 Management Guidelines for Building a Positive AI Culture
- Create a Supportive Environment: Encourage staff to explore AI applications with the necessary support and resources. AI solutions should be accessible and inclusive, catering to diverse populations, including those with disabilities or limited technology literacy.
- Address Anxiety and Fears: Have open and honest conversations about the use of technology and ensure its use is aligned with THE LUPUS FOUNDATION OF AUSTRALASIA LTD's values and is human-centred.
- Establish Clear Policies and Procedures: Develop and communicate clear AI use policies, including ethical guidelines and data protection measures.
- Provide Training and Resources: Offer regular training on ethical AI use covering fairness, accountability, transparency, and privacy; use real-world case studies to illustrate successes and challenges, and maintain a repository of these examples for ongoing reference.
- Monitor and Evaluate AI Use: Implement mechanisms to monitor AI use and collect staff feedback to improve policies.
- Promote Transparency and Accountability: Ensure AI operations are transparent and assign clear roles and responsibilities.
- Safeguard Data: Implement robust data security measures and regularly audit data practices. Train staff on data security best practices.
- Use Cases: Begin by using AI to address critical pain points and bottlenecks, such as time-consuming tasks like prospect research, document searching, and repetitive inquiries, to enhance efficiency and free staff capacity for other activities.
- Pilot AI: Start with small, time-limited AI prototypes assessed by staff and external stakeholders, rigorously check accuracy and bias, and iteratively refine based on feedback to ensure ethical and effective deployment.
- Legal Compliance: Ensure all AI applications comply with relevant Australian laws and regulations, including the Privacy Act 1988 and the Australian Human Rights Commission guidelines on AI.
6 Additional Guidelines
This optional section addresses requirements that apply when THE LUPUS FOUNDATION OF AUSTRALASIA LTD is using, or looking to use, any application that has AI capabilities or features.
AI System Assessment and Approval
- Criteria for Assessment of AI Systems: Ensure AI systems generate accurate outputs through regular testing and validation against real-world data, maintaining and improving accuracy in line with an appropriate risk/reward balance for the business (e.g. NIST AI RMF 1.0 Trustworthiness).
- Approving AI Systems and Use Cases: Establish a clear process for approving AI systems and use cases that considers the usage, development and procurement of AI technology. Adopt AI systems that are trained on robust, AI-optimised infrastructure to ensure they are reliable, scalable, and efficient.
- Built with Responsibility and Safety: Ensure that AI deployment is guided by principles of responsibility and safety. Conduct thorough bias and toxicity assessments and collaborate with external experts to identify safety issues. Commit to AI technology that values privacy, security, inclusion, trust, and safety. Actively work to mitigate limitations and continuously improve performance and safety.
Resource and Risk Management
- Ensure Content Complies with Intellectual Property (IP) Rights: When generating images, videos, or voice content, ensure that the solution provider offers legal coverage, including fair treatment of artists and respect for intellectual property rights.
- Compliance Posture Assessment: Regularly assess the compliance posture of all generative AI usage to ensure alignment with internal policies, applicable laws, and regulations.
- Risk-based Classification of AI: Use a risk-based approach to classify AI systems and their use cases based on the organisation's risk profile. Implement fit-for-purpose controls and oversight for high-risk AI systems or use cases.
- Hybrid AI Approach: Adopt a hybrid approach to AI that dynamically incorporates both proprietary and third-party AI models. Evaluate and update models periodically to ensure high-quality results, while ensuring that third-party AI providers do not use user data to improve or train their models without permission.
- Incident Response: Implement an incident response procedure to address any issues or breaches related to AI use, with clearly defined roles and timely resolution procedures, supported by detailed documentation for future risk mitigation.
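A risk-based classification of AI use cases can be as simple as a short screening checklist mapped to controls. The sketch below is illustrative only: the tiers, screening questions, and controls are hypothetical examples, not the organisation's approved scheme.

```python
# Illustrative sketch of a risk-based classification of AI use cases.
# The tiers, questions, and controls are hypothetical examples, not
# the organisation's approved classification scheme.

CONTROLS = {
    "high": "Formal approval, human review of every output, regular audit",
    "medium": "Manager approval and spot-checks of outputs",
    "low": "Standard acceptable-use rules apply",
}

def classify_use_case(handles_personal_data: bool,
                      informs_decisions_about_people: bool) -> str:
    """Assign a risk tier from two example screening questions."""
    if handles_personal_data and informs_decisions_about_people:
        return "high"
    if handles_personal_data or informs_decisions_about_people:
        return "medium"
    return "low"

# Example: a document-search tool over donor records touches personal
# data but does not make decisions about individuals.
tier = classify_use_case(handles_personal_data=True,
                         informs_decisions_about_people=False)
print(tier, "->", CONTROLS[tier])
```

In practice, the screening questions would be agreed by the policy owner and extended (e.g. to cover public-facing use or vulnerable populations), with each tier's controls documented alongside the approval process above.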
Staff Training
- Feedback Mechanisms on AI: For continuous improvement, conduct regular surveys and feedback sessions to gather insights and experiences from staff using AI tools, and hold periodic check-ins with the technology vendor to derive actionable insights that enhance AI performance and user satisfaction.
7 Policy governance
7.1 Roles and responsibilities
- Roles and responsibilities relating to AI use must be clearly defined and documented, including internal and external decision-making capabilities, functions, and roles.
- All team members involved in the deployment and management of AI systems should have a clear understanding of their duties and be accountable for ensuring that AI technologies are used ethically and in alignment with THE LUPUS FOUNDATION OF AUSTRALASIA LTD's purpose and values.
7.2 Review of AI Policy
This policy must be reviewed annually and updated as required, to ensure it remains current and continues to meet the requirements of THE LUPUS FOUNDATION OF AUSTRALASIA LTD.