Discover your Generative AI Voice

Look to voice and vision to ‘stay ahead’ of generative AI

I’ve been talking to a lot of clients about generative AI voice capabilities this week. So when the team at Comms Leaders and The Academy (theacademylearning.org) asked me to guest blog on “how to stay ahead of AI”, I went straight to the mode that may become the way many people experience generative AI in the future: voice.

My answer, as always, is to start by properly understanding the generative AI capabilities that we have now.

The Rise of Voice, Vision and Generative AI Agents

While many of us are now familiar with text-based interactions with large language models like ChatGPT, generative AI capabilities now extend far beyond the written word. Voice and vision are set to revolutionise how we interact with AI, offering more intuitive, human-like experiences, alongside great uncertainty about what they mean for human connection and wider society.

We have already been shown some of the immediate next developments by the likes of Google and OpenAI, including:

  1. Advanced voice interactions that feel much more like talking to a person, and are starting to be rolled out;

  2. Vision capabilities that enable generative AI tools to ‘see’ what is going on and infer appropriate actions;

  3. The connection of capabilities like these into human-like reasoning.

We also know that significant money is going into AI Agents and AI Assistants. These are AI tools that can collaborate with humans and other AI Agents, automating tasks and achieving goals with minimal human intervention. The potential for these tools is astounding. But there is also huge scope for both deliberate misuse and accidental harms. A very good analysis of some of these risks was set out in this paper from the team at DeepMind.

Even current voice interactions, with all their clunkiness, can be disturbingly engaging. What widespread human-level voice interaction means for human relationships is one of the near-term risks I find myself discussing a lot. And the coming step-change in voice capability could see generative AI tools creep even further into our everyday lives. The latest Apple and Android phones are already embedding generative AI voice capabilities, preparing devices to become all-round AI assistants. 

While voice-to-voice is under-used at the moment, it could quickly become the way many people experience generative AI. 

Properly understanding the tools we have

The good news is that the capabilities that underpin these ‘coming next’ AI technologies already exist and are freely accessible. So if you want to ‘stay ahead of AI’, the best thing you can do is take the opportunity we have now to properly understand the current tools.

By ‘properly’ I mean bringing AI tools into most things you do. Trying out different modes of interaction, using voice and vision as well as text. Building your understanding of what these tools are good at, and what they’re not. Most important of all, talking to colleagues about what you discover, and sharing any ethical concerns that you come up against.

Putting it into practice

Ethan Mollick, who writes a lot about co-intelligence with AI, talks about real understanding coming from 10 hours of practice. But that’s a lot of time for busy professionals. At AIConfident we like to encourage people to practise with a purpose. Consider something that you’re not looking forward to this week, or would love to do but don’t feel equipped to. Timebox 30 minutes to bring AI to the table to solve that task:

  • Is it a research task? Try searching with Perplexity;

  • Got an important decision you’re wrestling with? Go for a walk and discuss it with Pi.ai or the ChatGPT Voice app on your phone;

  • Need to analyse some data? Give Anthropic’s Claude some (safe) data and manipulate it in the Artifacts window;

  • Something broken? Upload a picture of it to an AI tool and ask for suggested fixes;

  • Got a repetitive process that you’d like to automate? Create a GPT in ChatGPT, or a Project in Anthropic’s Claude.

All of these (with the exception of creating a GPT) can be done with free generative AI tools. Experimentation is essential for helping people get to grips with the good and the not-so-good sides of generative AI technologies.
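
If you’re curious about what sits behind a tool like the ChatGPT Voice app, here is a minimal sketch of a voice round trip, assuming the OpenAI Python SDK and using illustrative model names, voice and file names of my own choosing (the newest ‘advanced voice’ experiences work on audio natively, but the pipeline below captures the basic idea): transcribe a spoken question, pass the text to a chat model, then turn the reply back into speech.

```python
# A minimal sketch, not a recommendation: the model names, voice and
# file names below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Speech to text: transcribe a recorded question.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Send the transcribed question to a chat model.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)
answer_text = reply.choices[0].message.content

# 3. Text to speech: turn the written answer back into audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=answer_text,
)
speech.write_to_file("answer.mp3")
```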

How can leaders help?

Creating the right environment for safe experimentation is crucial. Give people explicit permission to experiment, along with guidelines that help them do so responsibly and in line with your organisation’s values.

Double down on this by creating the space for good-quality conversations, where teams can share their positive AI use cases and surface any ethical concerns, and lead by demonstrating responsible use yourself.

Together, these steps will create the culture that leads to widespread, responsible AI use and maximises the potential for innovation.

And how does this relate back to comms?

We know there are many positive use cases for generative AI in comms. Comms is often the first team in an organisation that we end up working with, because they’re slightly ahead of others. But we need to look beyond content creation.

How people discover the information we want them to find about our organisation is already changing, with generative search being embedded in Google and new search tools like Perplexity gaining popularity. This is likely to change again in a world where people use voice commands to send their AI Assistants off around the internet to undertake tasks or gather information for them.

Equipping yourself with knowledge of the capabilities as they exist now is a great way to build your understanding of what is coming next for comms teams. The same safe experimentation rules apply as for any discipline, with the added layer that in communications you are often the guardian of your organisation’s reputation.

Your challenge for the week

‘Staying ahead’ of generative AI is difficult, particularly given the varying predictions for what ‘next’ looks like. But ‘keeping up’ is an important and very achievable step. 

Whatever industry you’re in, your challenge for this week is to choose one AI tool or mode you haven't tried before and use it to tackle a real-world problem. Share your experience with your colleagues, and reflect on how it might impact your work in the future. 

You’ll not only give yourself an insight into the next developments in AI, you’ll also be contributing to responsible and effective AI adoption in your organisation.

Want more?

At AIConfident, we help leaders to create the right cultures for safe experimentation with generative AI technologies, enabling teams to operate co-intelligently with AI and think innovatively about how these technologies could transform your organisation. If you’re ready to take your first steps towards becoming an AIConfident organisation, then get in touch.


A huge thank you to the team at Comms Leaders and The Academy for posing the question and for inviting me to write jointly for their blog.
