The more popular something becomes, the more criticism it will inevitably attract. Nothing new there, and nothing about last year's ChatGPT launch has been different. As humans we are naturally wary of things that challenge what we think we know. That instinct stretches back to the discovery that the earth isn't flat, and even further back than that, I'm sure. What has changed is that there are now people amongst us who seek to resolve those fears proactively rather than simply waiting for acceptance, which can take generations.

In recent years:

The topic of "ethical AI" is one that I have discussed in previous posts, but it is a subject I feel strongly about. As a professional technologist it is my job to stay on top of as many new technological advancements as possible, but it is also in my nature to be inquisitive about them. What I have seen in the past few years is exponential growth, and in the community in which I sit, growing acceptance too. Whether that be Machine Learning in the form of TAR/CAL in document reviews, or the broader AI sphere, such as ChatGPT, most of the peers I have spoken to accept that this technology is here to stay and, if understood and harnessed, can be a hugely powerful tool for us to have in our kitbags.

Where I have seen less uptake is in industries with black and white, correct or incorrect outcomes. I suspect this is because the nature of technology, with its ever-shifting goalposts, means I am pre-trained to expect change - whereas £1,000 incoming and £1,000 outgoing has always resulted in £0 left in the pot.

What is coming next?

With all of this said, there are several colleagues, contacts and friends from non-technology focused professions who are beginning to think more like me - and this is incredibly refreshing.

The way we work is going to change. It might take a month, a year, or a decade, but the advancements in technology we have seen all point towards a large shift in how we function within our jobs.

I believe that AI will first help us with the parts of our jobs that we find tedious, such as data entry (timesheets, CRMs, etc.) and diary organisation. These are the parts of our day-to-day lives that we are most likely to give up - after all, who would choose to enter contact information into a CRM system if "the machine" can do it for them!?

From there, after the trust of the masses is earned it will be far easier to embrace the bigger changes, whatever they may be.

What do we all need to be cautious of?

Whilst it might appear that I am full steam ahead with the adoption of technology and AI, this isn't always the case. We need to be sure that what we adopt is suitable for use. We've all heard horror stories of unsupervised AI being let loose on the internet and becoming "inappropriate" within a very short amount of time, so it is vital that we continue to test and refine as we develop.

People are currently using AI for quirky things like "explain the Third Law of Thermodynamics to me like I am a child", and what is given back is a very easy-to-digest paragraph that makes some sense to the layperson. But how do we know that paragraph isn't a recipe for gazpacho if we aren't validating the output? How would we know the responses being given aren't becoming inappropriate without doing the same?

The obvious answer is to validate the input - we all know ChatGPT has had the spotlight shone on its practices in that area - and then continue to monitor and review the output. This process is a never-ending cycle, but it is essential.

Ethical AI will hopefully be a magnificent tool that makes all of our lives better, but we have a duty to ensure that we are responsible with its application as we embrace it.