In an ideal world, AI will augment our working lives and make them more productive, removing the mundane and allowing us to concentrate on the interesting work. Sound like the brochure for a utopian paradise?
You’d be right to be sceptical. The reality is that organisations and start-ups alike are trying to automate tasks ranging from copywriting to representing someone in court. This is job replacement, not augmentation, and it threatens white-collar jobs.
What is apparent is that organisations taking the more cautious approach, keeping a human in the loop to perform an editorial role over the generated content, are faring better than those trying to fully replace roles.
For example, CNET opted for a riskier approach and created whole articles with ChatGPT. Plagued by convincing falsehoods and questions around plagiarism, it had to issue corrections and apologies, ultimately losing confidence among its readers.
Others fared better by taking a more cautious approach, such as Buzzfeed, which used ChatGPT to generate quiz answers guided by the journalists who created the topics and questions.
More worrying, though, is the admission from several journalistic circles that a lot of articles are derivatives of competitors’ work. You can easily appreciate how a model such as ChatGPT could build a convincing alternative to a current news story given appropriate sources, needing only some editorial oversight.
Does this mean journalists are freed up to do the actual investigative work? Or does it mean fewer are required to do the same job?
In the legal world, DoNotPay was attempting to use ChatGPT to defend people in court against speeding fines, although it retired its robot lawyer before it had its first case due to numerous threats from bar associations to sue for “unauthorized practice of law”.
However, it is also using the GPT-3 API to chat with an internet provider’s support centre and negotiate better deals on behalf of an individual, which, let’s be honest, everybody would be happy about. Just think of car insurance!
But if we can automate something to talk to a representative in chat and negotiate a deal on our behalf, doesn’t that imply the opposite is also true: an automated bot operator could fill the other side of the negotiating table (effectively bot-to-bot communication)?
In software engineering, could business analysts generate code purely by chatting to a bot, freeing developers to concentrate on complex low-level aspects, or on producing the building blocks that high-level features depend on?
Although this suggests developers get to concentrate on the more exciting aspects, doesn’t it also imply we need fewer specialists? Is there a real danger that specialist knowledge is lost as things become more abstract?
Fortunately, the immediate answer to all these questions is no. ChatGPT does not pose an immediate threat to our jobs, so long as we embrace the changes that are coming.
However, it would be naive to believe that efficiency cuts are not on the way, even in the best-case scenario where our work is augmented with AI-based tooling. There is a clear need for legislation to catch up with the technology and protect workers’ rights by setting boundaries where AI augmentation ends and human oversight begins.
Otherwise, in the long term, as models improve, jobs will be at risk.
We asked ChatGPT our question and this is the response:
As an AI language model, ChatGPT has the potential to automate certain tasks that currently require human input, but it's not designed to replace jobs. Instead, it's meant to assist people in tasks such as customer service, data analysis, and content creation. AI technology like ChatGPT can free up time and resources for people to focus on more complex and creative work, leading to job growth in different fields.
Next time, we’ll be taking a look at a darker side of ChatGPT that I’m not seeing a lot of people talk about...