The Productivity Paradox


Cultivating Capability in the Shadow of AI.


Vanessa Piper, Principal Explorer
24 April 2024

AI and Human Over-Reliance

Every now and then, our research leads us to unexpected finds that might matter in the near future. They don’t always fit with our main focus on emerging technologies, but we believe they’re interesting enough to share.

While everyone can agree that we’re still in the very early stages of Large Language Model (LLM) adoption, one concern that keeps coming up is the “risk of human over-reliance on LLMs”.

It’s hard to picture “over-reliance on LLMs” in these early days, but given the leaps in both the availability of LLMs and our awareness of them, this is a risk I suspect we’ll need to balance in the coming years.

The Concern Is… The Risk of Cognitive Atrophy

In other domains, such as physical fitness or language skills, there is a principle known as “use it or lose it”, and it means exactly what it says: if you are not actively working to maintain a level of skill or activity, those skills will atrophy over time.

As applied to Artificial Intelligence (AI) for augmentation, the concern is that as humans rely more on LLMs for tasks that previously required deep analysis, synthesis of ideas, or creative problem-solving, there's a valid risk that their inherent skills in these areas will collectively degrade.

This is what we’ve heard referred to as the dulling of the “sharp edges” of human judgement and critical thinking. Over time, this could lead to a workforce that is less capable of independent strategic thinking, problem-solving, and innovation.

The Counterargument and The Paradox of Augmentation

The power and, some might say, the very purpose of Large Language Models lie in their ability to augment human capabilities and elevate performance.

For example, LLMs can process and analyse data at scales and speeds unreachable by humans alone, leading to richer insights and, in many instances, more informed decision-making.

The challenge is balancing this augmentation with the need to maintain and develop intrinsic human skills. This is the paradox we’re digging into here: where the use of the tool to elevate standards could inadvertently prevent individuals from reaching those standards unaided.

The Case Study

In a study first released in April 2023 and revised that November, researchers Erik Brynjolfsson et al. (NBER) examined how new AI technologies are shaping workplace performance. They focused on an LLM-based conversational assistant designed for customer support teams. Drawing on data from 3 million chats by 5,179 customer support agents, the researchers found a notable 14% boost in productivity, measured by the number of issues resolved per hour, after the tool was introduced.

The study highlighted that this AI tool was particularly advantageous for novice and lower-skilled workers, who experienced a 34% improvement in productivity. In contrast, more experienced and highly skilled workers saw only a minimal impact. This suggests that the AI tool serves as a conduit for sharing best practices across the workplace, thereby speeding up the learning process for newer employees.
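
To make those percentages concrete, here is a trivial worked example. The baseline rate of 2.0 issues per hour is invented purely for illustration; only the percentage uplifts come from the study:

    # Hypothetical baseline: an agent resolving 2.0 issues per hour.
    # The 2.0 figure is made up; only the uplifts (14% overall,
    # 34% for novices) are reported in the study.
    baseline = 2.0
    print(f"average agent: {baseline * 1.14:.2f} issues/hour (+14%)")
    print(f"novice agent:  {baseline * 1.34:.2f} issues/hour (+34%)")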

The study also noted positive shifts in customer sentiment, an uptick in employee retention, and enhanced opportunities for worker learning, underscoring the broader benefits of integrating LLMs into workplace practices.

It’s worth noting that studies such as this one are being used by businesses and vendors alike to justify accelerating LLM augmentation in the workplace. It doesn’t help that the hype over AI has reached all corners of the digital world, and the message is all but blatant now: in order to remain competitive, it just makes sense to boost your workplace with an LLM or three.

Whether it’s a cloud-based LLM subscription for the cautious, Copilot for a Microsoft-heavy workplace, or Gemini for a Google Workspace one, we are being promised a strong lifting of standards and improved staff happiness and productivity, all at once.

This does leave us with several slight ethical dilemmas.

  1. It’s not great publicity to replace humans with AI solutions, and the current response to this seems to be “well, upskill your staff into more meaningful roles, with LLMs to take the brunt of repetitive work”.

  2. If we hired staff for specific purposes, there is nothing to suggest they would be suited for a more “meaningful” role, or be interested in one. The suggested solution there is to augment their existing skills with an LLM.

  3. If we augment staff with an LLM, we risk the aforementioned cognitive atrophy where they will over-depend on LLMs to do the thinking for them.

So how do we enhance our workplace with LLMs, while avoiding teaching staff to over-rely on the same technology created to help them?

The Usual Mitigation Suspects

The strongest suggested strategy for avoiding over-reliance on AI, and the subsequent cognitive atrophy, is education.

Shifting the responsibility to staff training rather than addressing issues with technical solutions is quite common in several areas. Turning “LLM Cognitive Atrophy Awareness” into another checkbox exercise alongside phishing training, cybersecurity awareness, data privacy, workplace harassment, and health and safety seems like wasting an opportunity to reject the status quo and create something more meaningful.

But if we are to do this by the classic manual, we should be responsible for educating our employees on the strengths AND limitations of LLMs. This includes making sure staff are well aware that while LLMs can generate useful outputs, they are also prone to biases, errors, and limitations rooted in the data they were trained on. This awareness encourages users to critically assess LLM-generated content rather than accept it wholesale.

Naturally, this critical assessment relies on either a trained analytical skill set or the experience to double-check information, which renders the original toil savings next to useless.

We should also establish clear guidelines on when and how to use LLM tools effectively, including examples of tasks that should ideally remain human-led. We also need to encourage the development of skills that complement LLM capabilities. For instance, while an LLM can analyse data, human employees can focus on learning advanced interpretative, relational, and ethical decision-making skills.

Further to the prior guidelines and training, we should also implement ongoing training programs that challenge employees to engage in tasks without the aid of an LLM, ensuring that they retain and refine their critical thinking and problem-solving skills.

As for who is going to check that the guidelines are being followed? Human-in-the-loop systems and human feedback are required: for example, supervisory roles where humans oversee and intervene in LLM processes, ensuring that outputs are not just taken at face value but are evaluated and contextualised by human judgement. We should also use errors or failures in LLM outputs as learning opportunities for teams, reinforcing the importance of human oversight and identifying the limits of automation.
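
As a minimal sketch of what such a supervisory checkpoint could look like in practice, consider the routing rule below. Every name in it (the Draft class, the confidence score, the 0.8 threshold) is a hypothetical assumption for illustration, not any real product’s API:

    from dataclasses import dataclass

    REVIEW_THRESHOLD = 0.8  # assumed cut-off: drafts below this need a human first

    @dataclass
    class Draft:
        ticket_id: str
        text: str
        confidence: float  # an assumed quality score attached to the draft

    def route(draft: Draft) -> str:
        """Decide whether a human must rework this reply before it is sent."""
        if draft.confidence < REVIEW_THRESHOLD:
            # The agent edits or rejects the draft; corrections can be logged
            # as feedback, turning LLM failures into learning opportunities.
            return "human_review"
        # The agent still signs off, so nothing ships purely at face value.
        return "agent_signoff"

    for d in (Draft("T-1", "Try resetting your router.", 0.95),
              Draft("T-2", "Refund approved per policy 7(b).", 0.55)):
        print(d.ticket_id, "->", route(d))

The point isn’t the threshold itself but the shape of the workflow: every output passes a human gate, and low-confidence ones demand active judgement rather than a rubber stamp.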

If you’re shaking your head reading this and thinking “uh, infantilising and micromanaging behaviour isn’t gonna fly where I work”, you’re likely right.

Introducing a tool to help, but at the cost of human autonomy, won’t sit well at most workplaces, particularly those where staff have previously been trusted to just get the work done.

Can We Educate and Micromanage the AI Genie Back into the Bottle?

Right now, regardless of whether workplaces have AI policies or LLM subscriptions, and whether they encourage or discourage the use of generative AI, staff can and will access LLMs.

And access to LLMs will only get easier over the coming months. It is likely that by the end of 2024, most people will have usable LLMs on their phones or baked into their browsers, which will further complicate the question of where the line sits between skill and augmentation.

Furthermore, it will blur the line as to whether skill is required at all. If, after all, we are all augmented to the same standard with LLMs, what is the point of paying for experience when all you need is a novice with the ability to double-check their work?

As large language models and generative AI advance, they could start to replace some jobs. So for ethical employers interested in keeping and developing their teams while benefiting from LLMs, it’s important to start thinking now about how to integrate these technologies into our workplaces responsibly and transparently: making decisions that give the company competitive advantages while allowing us to sleep at night, knowing we’ve done right by our employees.

The Augmented Apprenticeship Model

We know that no good comes from attempting to start a revolution, nor from speaking authoritatively about subjects in which one is not an expert.

That said, hold my beer.

At some point we have to consider that, on a long enough timeline, every option still leaves us with the same cyclic descent into “LLMs augment our jobs, we lose cognitive capacity, then LLMs take our jobs, as we have AI’d ourselves into obsolescence”.

Maybe it’s time to consider a new way of approaching entry-level work, which will inevitably, and increasingly, be enhanced with LLMs as they evolve?

Potential Structure of an Augmented Apprenticeship

What if all roles that currently are, or are intended to be, enhanced with LLMs started off as apprenticeships?

We typically think of apprenticeships as being the sole province of trades. A traditional trade apprenticeship is a structured program where an apprentice learns a skilled trade through a combination of on-the-job training and classroom learning.

There is often a formal agreement between the apprentice and a master craftsman or employer, detailing the duration of training, the skills to be acquired, and naturally, the salary the apprentice is to receive for the different levels of apprenticeship.

Typically, these programs last between two and five years, and apprentices are often required to pass examinations or assessments to demonstrate their competency in the trade.

How would this look if we applied a similar structure to AI-augmented roles? A new graduate, entry-level employee or career-changer would start by accepting an apprenticeship with the knowledge that they would be supported, trained and mentored for the duration of their journey from apprentice to specialist.

It might look something like this (with a rough sketch in code after the list):

  • Employees start at the first level, in a role where LLM tools handle repetitive tasks, allowing them to focus on learning the foundational skills of their profession. Training at this stage focuses on understanding the role of an LLM, basic job skills, and the industry landscape.

    Performance and aptitude assessments determine when they move forward.

  • Employees who show competence move to a second level, where the use of LLMs shifts from doing tasks for them to assisting them. Here, the focus is on developing analytical skills, problem-solving, and decision-making. Employees might begin to specialise in specific areas based on their interests and skills.

  • At the third stage, employees are expected to perform complex tasks with minimal AI assistance, moving into roles that require deeper expertise or leadership skills. Paths could diverge here towards management, technical specialisations, or strategic roles.
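
For teams that want those gates to be explicit rather than folklore, the progression can even be written down as data. The sketch below encodes the three levels; the stage names, skill lists, and exit criteria are all invented for illustration and would need defining with HR and team leads:

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        llm_role: str            # what the LLM does at this stage
        human_focus: list[str]   # skills the employee is expected to build
        exit_gate: str           # what permits promotion to the next stage

    # Hypothetical three-stage program mirroring the list above.
    PROGRAM = [
        Stage("Apprentice", "handles repetitive tasks outright",
              ["foundational job skills", "what LLMs can and cannot do"],
              "performance and aptitude assessment"),
        Stage("Practitioner", "assists rather than does",
              ["analysis", "problem-solving", "decision-making"],
              "demonstrated competence in a chosen specialism"),
        Stage("Specialist", "minimal assistance on complex tasks",
              ["leadership", "deep technical or strategic expertise"],
              "ongoing; paths diverge toward management or specialisation"),
    ]

    for number, stage in enumerate(PROGRAM, start=1):
        print(f"{number}. {stage.name}: LLM {stage.llm_role} | gate: {stage.exit_gate}")

Nothing about this needs to be code, of course; the point is that promotion criteria are stated up front, so augmentation has a defined off-ramp into demonstrated, unaided skill.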

Considerations for Implementation

Transparent Communication: From the outset, it’s crucial to be clear about the expectations and potential career paths within the company, including the possibility that those who choose not to advance may face limitations in their roles.

Support Structures: Implementing mentorship programs, peer learning groups, and regular check-ins can support the growth and engagement of employees at all levels.

Adjustability and Adaptability: The model should be flexible enough to adapt as job roles evolve with technological advances and changing industry needs.

Conclusion

At the beginning of this post, my focus was on the paradox of LLMs potentially leading to cognitive decline among those who over-rely on them. However, there was little consideration of how companies planning to implement AI augmentation could proactively address its impact on staff.

This is not a new concept. There are already companies and industry sectors using many elements of the model we’ve suggested. Large companies such as Google, IBM and Microsoft offer apprenticeship programs incorporating AI and machine learning tools, specifically designed to bring individuals up to speed while fostering critical thinking.

Similarly, forward-thinking banks and financial institutions are starting to use AI-driven simulations and data analysis tools to train employees in complex modelling and risk assessment, the idea being to augment their employees’ skills while preparing them for higher-level analytical tasks. These kinds of augmented training are already in use, but usually at scale, and aren’t yet considered necessary for smaller workplaces.

Regardless of the scale of your organisation, AI is here to stay. The decision to integrate LLMs into the workplace still lies ahead for many organisations, and while the risk of staff cognitive atrophy might not be a top concern, it shouldn't be overlooked.

It’s up to each workplace to weigh the pros and cons and plan accordingly; however, given the availability of LLMs, our best advice is not to wait too long to start thinking about it.

