Trust Culture
AI Usage Policy
1. Policy Statement
At Trust Culture, we are committed to maintaining the highest standards of professionalism, expertise, and transparency in our consultancy and training services. As part of our ongoing commitment to innovation and quality, we utilise Artificial Intelligence (AI) tools to support our work. This policy outlines how AI tools are used to enhance our services and ensures transparency in their application.
2. Purpose
The purpose of this policy is to provide clear guidelines on how AI tools are integrated into our work processes. This ensures that clients understand the role these tools play alongside our 25 years of expertise in risk management, leadership, and work health and safety, while we uphold the integrity and quality of our deliverables.
3. Scope
This policy applies to all Trust Culture employees, contractors, and representatives who use AI tools in the course of delivering training programs, providing consulting services, or creating resources for clients.
4. Guidelines for Using AI Tools
Supplementing Expertise
AI tools are used to enhance, not replace, the extensive knowledge, experience, and expertise developed over more than two decades of professional practice. Integrating AI makes our processes more efficient while maintaining the depth and quality of our professional expertise.
These tools act as valuable supports in various capacities, including but not limited to:
- Idea Generation and Content Creation: Assisting with brainstorming and drafting content.
- Preliminary Research: Streamlining the gathering of essential information.
- Administrative Efficiency: Simplifying tasks like document formatting and information organisation.
- Data Analysis for Risk Assessments: Identifying trends, predicting risks, and suggesting preventative measures.
- Customised Training Programs: Tailoring content to address specific hazards or compliance requirements.
- Audit Preparation and Reporting: Streamlining data organisation and highlighting areas for improvement.
- Feedback and Improvement Analysis: Collecting and analysing training feedback to enhance future sessions.
5. Transparency in Use
Trust Culture is transparent about the use of AI tools in client work. Deliverables influenced by AI tools will be reviewed and finalised by our experts to ensure they meet our high standards of accuracy, relevance, and professional integrity.
Where appropriate, materials will indicate the use of AI with a label or statement in line with our branding. Documents and training resources created using AI will carry our custom-designed “Enhanced by AI” logo to inform clients of its use.
6. Quality Assurance
All outputs from AI tools undergo thorough review by qualified professionals to:
- Verify accuracy and relevance.
- Ensure alignment with current legislation, regulations, and best practices in health and safety.
- Tailor content to the specific needs of our clients.
7. Ethical and Confidential Use
AI tools are used in compliance with ethical standards and confidentiality requirements. To protect data privacy and security, sensitive client information is not shared with AI tools.
8. Legal Compliance
All use of AI tools by Trust Culture shall comply with applicable privacy, data protection, and workplace regulations, including the Privacy Act 1988 (Cth) and, where relevant, international laws.
9. Risk Mitigation and Error Rectification
10. Continuous Improvement
11. Training and Bias Awareness
12. Stakeholder Engagement
13. Documentation and Explainability
14. Alignment with International Standards
15. Regular Policy Updates
16. Dispute Resolution
17. Review and Amendments
Approved by: Victoria Beresford, Director, on 20.12.2024
Next review: 01.07.2025