Social Work Policy Panel newsletter December 2025
Notes from December's meeting:
AI and Social Work: Opportunities, Risks and Ethical Practice
On 16 December 2025, the Social Work Policy Panel hosted a session exploring the use of artificial intelligence in social work practice. Professor Beth Weaver from the University of Strathclyde delivered a comprehensive presentation examining how AI technologies might support, challenge and transform social work, followed by a discussion with practitioners.
Understanding AI in Context
Beth began by clarifying that AI is not a single technology but an umbrella term covering varied tools with different functions. From chatbots and AI search tools to deepfakes, these technologies are increasingly shaping how professionals work, learn and interact across education, healthcare, law and social work.
Publicly accessible tools such as ChatGPT, Google Gemini and Microsoft Copilot are now in widespread use. Licensed, secure platforms like Magic Notes are being developed specifically for social work, helping practitioners record conversations and generate case notes. However, Beth cautioned that all AI tools should be viewed as scaffolds or supports rather than oracles: they have significant limitations and cannot replace professional judgement.
Types of AI and Their Applications
Beth distinguished between narrow AI, which is designed for specific tasks (voice assistants are a familiar example), and generative AI, which can create new content, including text and images. Generative models are trained on large datasets and offer interactive, conversational approaches that can feel human-like.
AI tools are already being used in social work to support various tasks:
- Transcribing and summarising assessments or meetings
- Generating follow-up actions and retrieving case information
- Conducting initial research on topics and summarising policies
- Rewording or explaining information in different ways
- Supporting meeting preparation by generating questions or checklists
Critical Thinking and AI as a 'Critical Friend'
Beth presented findings from ongoing research that she and colleagues are conducting with justice social workers, exploring how ChatGPT affects critical thinking and decision-making in risk assessment contexts. Early results suggest that engaging with ChatGPT helped practitioners generate new ideas, provided alternative perspectives and supported more reflective risk assessments.
Study participants valued AI's ability to act as a 'critical friend' by offering justifications for proposed decisions, challenging assumptions and helping structure complex information. One practitioner reflected that intimate knowledge of a case can sometimes lead to confirmation bias. AI could help counter this by offering fresh perspectives.
However, Beth cautioned that AI is not a substitute for human professional dialogue and supervision. Research has shown that open AI systems tend to exhibit 'social sycophancy', affirming users' actions significantly more than humans do. This tendency to tell users what they want to hear can erode professional and personal judgement rather than strengthen it and can potentially generate harms.
Efficiency: Promise and Reality
While AI is frequently promoted as a means to increase efficiency and reduce administrative burdens, Beth noted that the evidence is mixed. Research with police officers, for example, found that AI did not actually increase report writing speed despite practitioners believing it did, though it may have improved consistency and quality.
A core tension exists between AI augmenting critical thinking and the risk of practitioners using AI to automate or bypass thinking entirely. Beth highlighted concerns about 'cognitive offloading', where practitioners lean too heavily on tools rather than exercising professional judgement. This risk is particularly acute for less experienced practitioners who might use AI as a crutch rather than a supplement to developing skills.
Ethical Considerations and Professional Guidance
Beth outlined several ethical considerations that must guide AI use in social work:
- Data protection and confidentiality: Open, publicly available AI platforms are not secure and must not be used for social work casework, as doing so would breach GDPR
- Informed consent: Clients should be informed when AI is being used and given the opportunity to opt out
- Transparency: The use of AI in producing any output should be disclosed
- Algorithmic bias: AI systems can perpetuate existing biases that disadvantage minoritised and vulnerable groups
- Environmental impact: Training large language models requires vast amounts of energy and water
- Corporate governance: AI development is largely driven by big tech corporations whose profit motives may conflict with social work values
The British Association of Social Workers' (BASW) Code of Ethics provides guidance on aligning new technologies with social work's commitment to care, dignity and social justice. Beth emphasised that practitioners must consult organisational AI policies and professional codes before adopting any AI tools.
Discussion: Practitioner Perspectives and Concerns
Governance and the Risk of Mission Creep
Practitioners raised concerns about AI gradually expanding beyond its intended augmentation role. One experienced participant warned that without proper controls, AI could subtly direct how practitioners work, potentially limiting flexibility in the field. The example of the English probation service piloting automated supervision check-ins via mobile phones was met with alarm as a stark illustration of AI being used to replace, rather than support, human contact.
Beth acknowledged these concerns, distinguishing between acceptable uses like transcription and unacceptable automation of relationship-based practice. She emphasised the need for strong leadership and governance now, before AI technologies snowball beyond professional control.
Protecting Relationship-Based Practice
The discussion turned to whether AI might threaten the relationship-based practice that lies at the heart of effective social work. While recognising workforce pressures and the task-focused nature of much contemporary practice, Beth was hopeful that Scotland's professional culture and leadership would resist inappropriate automation.
However, practitioners noted that the realities of time pressures, caseloads and vacancy rates make maintaining relationship-based work a real struggle. The attraction of 'letting AI do it for you' could be significant for workers coming through training into pressured environments.
AI and Vulnerable Populations
Concerns were raised about vulnerable people seeking connection through AI. Participants highlighted research suggesting that a quarter of teenagers turn to AI chatbots for personal guidance, along with reports of AI giving harmful advice to people in distress. The sycophantic nature of AI, telling users what they want to hear, poses particular risks for those seeking support.
Beth suggested social work has a role in enhancing AI literacy among service users, helping them understand the limitations and risks of these tools.
Coproduction and Bespoke Development
When asked whether AI could improve interventions with people who have lived experience of particular conditions, Beth was enthusiastic about innovative uses such as virtual reality in training and interventions. She strongly advocated for AI tools to be developed through coproduction and evaluation with practitioners and service users, emphasising that bespoke models trained on appropriate data are essential.
Key Messages for Practice
The session concluded with clear guidance for practitioners:
- Never enter client details into open, publicly available AI platforms – this violates data protection
- Be aware of AI's potential for bias, discrimination and 'hallucinations' (fabricating information)
- Remember that AI tends to tell you what you want to hear – approach it critically
- Even with authorised AI software, practitioners remain responsible for all decisions and case notes
- Consult organisational policies and professional codes before using any AI tools
- Use AI as a scaffold or support, not as a replacement for professional judgement
A practical guide on responsible and effective use of AI, endorsed by Social Work Scotland, is available via their website. The guide offers hints and tips for practitioners navigating this rapidly evolving landscape.
About the Social Work Policy Panel
All students, newly qualified and experienced social workers are welcome to come along to our events.
The panel is jointly run by the Scottish Association of Social Work, the Office of the Chief Social Work Adviser, and Social Work Scotland. It was created to bring frontline workers and policy makers in Government together to address the issues affecting social work today. It is an opportunity to influence those policy makers and the future of social work with your experience and knowledge.
We know that, as a social worker, you’re busy and facing lots of competing pressures. That’s why we want to make the panel as influential and meaningful as possible.
Get in touch with us through the panel mailbox: SWPP@basw.co.uk