This policy outlines how WRS uses and governs Artificial Intelligence (AI) tools responsibly in our day-to-day operations. Our goal is to ensure that AI is used ethically, transparently, and in line with UK Government guidance, specifically the Responsible AI in Recruitment guidance published by the Department for Science, Innovation and Technology (DSIT).
Use of AI within WRS Systems
WRS has integrated AI tools within our Applicant Tracking System (ATS), which securely stores and manages candidate data. These AI tools are used solely to assist our team with administrative and communication tasks, such as generating outreach messages and producing summary profiles from candidate CVs.
AI is not used to match candidates to job opportunities, make recruitment decisions or influence the selection process in any way. All hiring and candidate assessments are carried out exclusively by WRS staff, using their professional expertise and judgement.
Safety and Robustness
WRS ensures that all AI tools used are reliable, secure and operated within their known limits. We carefully assess each tool before implementation to understand its capabilities, limitations and potential risks. AI outputs are monitored for accuracy, consistency, and unexpected behaviour, and any issues are reported and addressed promptly. WRS maintains controls to prevent misuse, ensures secure system access, and regularly reviews AI systems to confirm they continue to operate safely over time. Human oversight is embedded at every stage, guaranteeing that AI assists but never replaces human judgement or responsibility.
Transparency and Explainability
WRS is committed to ensuring that employees, candidates and clients know when AI is being used in connection with their application or data and, in broad terms, how it operates. Requests for this information can be directed to data@worldwide-rs.com.
Fairness
WRS is committed to preventing AI from creating or reinforcing bias or discrimination. All AI outputs are critically reviewed by staff trained to identify and mitigate potential unfairness. We ensure that the use of AI does not disadvantage any individual or group, particularly in relation to protected characteristics such as age, gender, race, disability, religion or sexual orientation. WRS monitors AI outputs for unintended bias and regularly evaluates systems for fairness, accuracy and equitable outcomes. Staff are empowered to challenge AI-generated suggestions and make decisions grounded in professional judgement and human oversight.
Accountability
WRS recognises that accountability for all decisions rests with our people. While AI supports administrative and operational tasks, it does not make or influence hiring or screening decisions. Human oversight and responsibility remain central to all aspects of our work.
Human Oversight and Training
AI is a tool to assist staff, not replace their judgement. All WRS staff are trained to ensure that every AI-assisted task is carefully reviewed and approved by a human before being shared or actioned. Employees must critically assess AI outputs, correcting any errors or misleading content and ensuring accuracy before use.
All staff using AI receive regular training and guidance on responsible use, bias awareness, data protection, and this policy, ensuring that human oversight and accountability remain central to all AI-supported activities within WRS.
Privacy and Data Protection
All use of AI within WRS must comply with UK data protection law, including the UK GDPR and the Data Protection Act 2018, and uphold the highest standards of confidentiality. AI tools are used only in ways that respect privacy and data minimisation principles; personal data is not shared with AI systems and is always stored securely.
WRS requires all third-party AI providers to demonstrate how they protect data, manage security risks, and comply with relevant legislation. Where applicable, WRS seeks documentation outlining how AI systems are built, trained, and tested to ensure responsible and lawful use.