North-West HR consultancy works with Cumbria’s top colleges to boost apprenticeship employment

Cumbria’s largest HR consultancy has launched a new service in collaboration with some of the region’s leading training providers to help businesses employ apprentices.

Supported by Kendal College, Lakes College and Gen2, Realise HR is working to address many of the challenges faced by apprenticeship employers including employment law, employee relations, and providing positive people experiences to help apprentices get the most from their learning and thrive.

The company hopes that by providing important complementary services alongside the support already received from education partners, it can help businesses better engage with apprenticeships and ensure more apprentices complete their training programmes.

Martin Norris, Head of Recruitment at Realise HR, said:

“According to a recent think tank report, almost half of apprentices (47%) drop out of their training programmes. When you look at the detail, much of the reasoning comes down to their experiences at work. For example, 41% of apprentices feel that their apprenticeship is poorly organised, and only 14% believe that their mentorship is appropriately managed. If you’re an SME or employing apprentices for the first time, it can be difficult to know the ins and outs of how to structure apprenticeships within your business and make sure they’re legally compliant. Through consultation with local training providers, we’ve been able to design structured resources to make apprenticeship employment a much easier process.”

As part of the design phase, the firm brought together a steering group of leading apprenticeship stakeholders from across the county, including representatives from Cumbria Local Enterprise Partnership, All Together Cumbria, and the Construction Industry Training Board, as well as apprenticeship employers and training providers. Kelli Horner, Head of Business Engagement at Kendal College, said:

“Businesses are realising how valuable apprentices are to their sustainability and growth and want to know their approach is going to offer fantastic employment opportunities. Working with Realise HR and other training providers, we identified areas where employers are requesting more support, often in meeting necessary legal requirements and in structuring apprenticeship experiences within their respective businesses. Partnering with Realise HR, we are able to offer valuable support from an experienced and reliable source to meet their individual needs.”

Realise HR is excited to be offering a way for businesses to easily get to grips with apprenticeships and enhance the experience of apprentices within the workplace. The resource suite is available for all apprenticeship employers and can be accessed by contacting Realise HR.

Do you need an AI Policy?

AI is a big deal. The global adoption rate of Artificial Intelligence now stands at 35%, and this is only set to grow, with IBM reporting that 44% of organisations are working to embed AI into their processes and applications. Whereas the public’s interface with AI was once largely restricted to search and recommendation engines, advanced AI tools are now becoming widely accessible to the mass market. So this raises the question: does your business need an AI policy?

If you joined us for our most recent event with in-Cumbria last month, you’ll know we took a deep dive into the ethical considerations of AI with thought leaders from across the region. Among the topics explored, we spoke about how most companies understand the importance of responsible AI practices, but many organisations simply don’t feel equipped to regulate how AI is used. The release of ChatGPT in November 2022 seemingly came out of nowhere for the uninitiated, and the plethora of platforms that followed has left many of us struggling to keep up. It is unsurprising, then, that two thirds of companies report that they lack the skills and knowledge to accountably manage the use and trustworthiness of AI within their business. For this reason alone, the very least we can do is implement an AI policy to mitigate some of the risks. These include:

Bias and Discrimination

AI can perpetuate human bias because it mirrors the leanings within the data it interrogates. Worse, it can sometimes intensify that bias by drawing on historical inequities and outdated modes of thought. Essentially, rather than improving upon human decision making, AI can scale up some of the more problematic and discriminatory decisions that we’d rather weed out in 2023. For example, Reuters reported that Amazon scrapped its hiring algorithm after finding it was favouring applicants based on language predominantly found in male CVs. By observing patterns of bias (in this instance, a decade in which men dominated the tech industry and consequently submitted the majority of job applications), the algorithm taught itself that male candidates were to be preferred over their female counterparts. It became inherently gender biased.

Transparency and the Bottom Line

Of the IT professionals polled by IBM, 85% agreed that consumers were more likely to choose or purchase from a company that is transparent about its AI technology. However, 61% of organisations stated that, at present, they wouldn’t be able to fully explain AI-powered decisions. If you can’t translate and understand the decisions your AI is making, not only is it difficult to be sure it’s making the correct call (AI will sometimes get it wrong, don’t forget), but it also directly impacts consumer trust and ultimately damages your bottom line.

Privacy and Confidentiality

AI databases are often filled with confidential employee and/or customer data. You need to know that this data is secure and that it won’t be inappropriately leaked by an AI algorithm. There are also ethical considerations around how appropriate it is for AI to use personal information and where we draw the line on intrusion of privacy. In 2019, several US states banned the use of facial recognition software and biometric surveillance technology in law enforcement body cameras. The legislation, partly intended to prevent ‘police states’, was also informed by the number of misidentifications made by the software, particularly of women and people from ethnic minorities. Experimentation with comparable software by police in the UK has come under similar scrutiny, with University of Cambridge researchers reporting that, in sample studies, deployments failed to meet minimum ethical and legal standards. As with all technology, the question is how to use it appropriately without slowing its development and the benefits it brings to society.

Values & Culture

Does AI-generated content truly reflect your company’s values? It might sound a lot like you, but is it you, or just a poor copy? We all bang on about company culture and how important it is to us, our businesses, and our colleagues, but could all our hard work be undone by our reliance on AI? AI-produced material can lack authenticity; it can feel robotic and impersonal, and that is a major turn-off for customers, internal and external.

So, do you need an AI policy? In short, yes. Absolutely. But it’s sometimes difficult to know where to start. If you need the fundamentals, we can send you our AI policy for free; just drop us a message in the usual places.