Almost three-quarters of consumers believe that brands should disclose the use of AI-generated content and that fully automated AI-driven marketing campaigns should be carefully regulated.
These are just some of the findings from a new survey into the ethics and etiquette of using AI, commissioned by the IPA and conducted by Opinium among 2,000 people aged 18+. For some of the findings, comparisons with 2018 IPA/Opinium survey data are available and show changing consumer sentiment.
The core findings reveal a continued strong consumer desire for AI transparency; a decline in consumers’ belief that AI should control and police them; strong support for careful regulation of fully automated AI-driven marketing campaigns; and a significant decrease in the belief that ‘the robots’ deserve rights and respect.
Core findings:
Consumer calls for AI transparency remain high, although lower than in 2018
- In response to a new question for 2023, 74% of consumers believe that brands should be transparent in their use of AI-generated content.
- Coupled with this, 75% of people want to be notified when they are not dealing with a real person. While this overall figure remains high, it is down from the 84% recorded in 2018.
- Furthermore, the report shows that two-thirds (67%) of British adults think that AI should not pretend to be human or act as if it has a personality. Again, while this figure remains high, it is lower than the 74% recorded in 2018.
Consumers are less willing to be controlled and policed by AI than they were five years ago
Comparing the 2023 data with that recorded in 2018, the survey reveals a considerable increase in consumers’ desire not to be policed, or disagreed with, by AI.
- In the latest dataset, just over half of consumers (51%) believe that AI should have the right to report them if they are engaging in illegal activity, a significant decrease from the 67% recorded in 2018.
- The same trend appears when consumers are asked whether AI should be allowed to make it known if it disagrees with them: in 2018, 51% said this was acceptable; by 2023, this had fallen to 42%.
Almost three-quarters of consumers say AI should be regulated, and most agree humans must accept liability
Regarding who holds responsibility and liability for the use of AI, there is a strong belief that AI should be regulated and that humans must be liable if its use results in an accident.
- Seventy-two percent of all adults feel that fully automated AI-driven marketing campaigns should be carefully regulated. (New question for 2023.)
- Meanwhile, three-fifths of adults (61%) think that humans must accept liability if the use of AI results in an accident, a slight decrease from the 64% measured in 2018.
Consumers show less respect for AI, and less support for robot rights, than in 2018
The survey shows that consumer manners and respect when dealing with AI have dropped considerably in recent years.
- The proportion of people who think they should be polite and exhibit good manners when interacting with virtual assistants has fallen by a quarter, from 64% in 2018 to 48% in 2023.
- In addition, less than a quarter (24%) of Brits believe that robot rights should be introduced to ensure the humane treatment of AI, a decrease from 30% in 2018.
AI provides incredible opportunities for our business. As these findings demonstrate, however, the public are understandably cautious about its use – and increasingly so in some areas. It is therefore our responsibility to be transparent and accountable when using AI to ensure the trust of our customers.