Artificial Intelligence – how is your team applying it? 

AI converts are shouting their frustration from every rooftop, LinkedIn post and networking event.

Why can’t we see the light?  

‘Surely the efficiency, productivity, service improvements and analytical capabilities are so obvious that we should just sign the cheque, subscribe to the SaaS (software as a service) offering, and let it do all the heavy lifting for us, forever.’

The benefits seem endless, and the possibilities across the health sector alone make it a very attractive addition to an overloaded system. AI lets us analyse at scale, make connections we wouldn’t otherwise see, and introduce process efficiencies that somehow we mere humans missed.

We are all for those accelerated drug developments, earlier disease detection across greater numbers, the potential for its predictive capabilities – who wants another pandemic – and the promise of personalised medicine. 

It is hard to resist – but there are a few reasons to think carefully about how you will apply it. Here are the first three that come to mind: 

  • There’s a good reason we don’t marry our cousins – the genetic pool gets smaller, and then it gets problematic. AI’s creativity lifts us to new heights right now, and can provide us with a wealth of everything made possible by multiple new connections our human brains can’t see.  

However, we need to remember that it’s building on what we created – with our human, erratic, imaginative, flexible, diverse minds – across eons, continents, cultures… of course it’s good, look at its source material!    

So, what happens when every AI connection has been drawn from the existing pool of Human Intelligence data, knowledge and creativity? Where does the new and joyfully absurd idea come from?

How do we ensure that human intelligence, original ideas and creativity remain valued, encouraged, supported, incentivised and rewarded?

How do we ensure that they remain recognisable as human works of creativity, before they are reduced to one more data input for multiple generated images or paragraphs?

  • Who’s filling the expert pipeline? If we take the mundane tasks out of every profession – we’ve all done them; I’m looking at you, auditors, copywriters, designers, drafters, call centre operators, data analysts, logisticians, accountants, and anyone else whose work is repetitive or requires analysis or forecasting – then who is learning how to become an expert in these fields? Can you leap over the AI capability straight into a senior analyst role? And who will be keeping an eye on the accuracy and quality of what AI gives us, if we don’t train new people into the professions?

  • Bias – in 2024 we are all fully aware of the biases that exist in the record of human history, and we think critically about what AI might generate for us – but that might only last one generation (and in tech the generations can be very short). How critical will our thinking be in 2045?   

Maybe AI will solve these problems for us, but for now, we have a few Golden Rules in our own Artemis AI policy, to ensure that our employees take an informed, consistent and responsible approach to the way we use and apply AI for ourselves and our clients.  

In developing our Golden Rules, we discussed the opportunities and limitations we might be triggering, the ethics around transparency and fairness, the legalities, how accountability must always rest with us, and how it’s important for us to stay abreast of AI’s risks and potential, again, for ourselves and our client work. 

AI, and our understanding of how to leverage it, will evolve. We’ll keep pace with these changes, but for now, these are the rules we apply to the way we use it:

  • We never upload anything to an AI system that is not already publicly available via an internet search. This applies to our own content and that of our clients. 

  • We never use written content or analysis generated by AI in the format it is provided to us. We only use it to inform our thinking and decision-making, to enable us to prepare our own content. We must always prepare our own written content. 

  • We never assume that content provided to us by AI is correct. When using or relying on information provided, we verify it with other reliable sources. Our human oversight, creativity, judgement and decision-making are paramount.

  • We check for subtle, unavoidable bias that might influence our thinking. 

Please, feel free to copy and paste these into your own policy if you don’t already have one.  

In full disclosure, AI, with the help of several refinement instructions, generated the banner for this article. We like her.

