The less people know about AI, the more they like it


THIS ARTICLE IS republished from The Conversation under a Creative Commons license.

The rapid spread of artificial intelligence has people wondering: Who will embrace AI the most in their daily lives? Many assume that it is the tech-savvy—those who understand how AI works—who are most eager to adopt it.

Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy–higher receptivity” link.

This link appears across different groups, institutions and even countries. For example, our analysis of data from market research company Ipsos spanning 27 countries shows that people in countries with lower average AI literacy are more receptive to AI adoption than those in nations with higher literacy.

Similarly, our survey of US undergraduates finds that those with less understanding of AI are more likely to indicate that they use it for tasks such as academic assignments.

The reason behind this link lies in how AI now performs tasks that we once thought only humans could do. When AI creates a piece of art, writes a heartfelt answer, or plays a musical instrument, it can feel almost magical—like it’s crossing over into human territory.

Of course, AI does not actually have human characteristics. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.

They know how algorithms (sets of mathematical rules computers follow to perform specific tasks), training data (the examples used to improve how an AI system works), and computational models operate. This makes the technology less mysterious.

On the other hand, those with less understanding may view AI as magical and awe-inspiring. We suggest that this sense of magic makes them more open to using AI tools.

Our studies show that this lower literacy–higher receptivity link is strongest for using AI tools in areas that people associate with human qualities, such as providing emotional support or counseling. When it comes to tasks that do not evoke the same sense of human qualities—such as analyzing test results—the pattern reverses. People with higher AI literacy are more receptive to these uses because they focus on AI’s effectiveness, rather than any “magical” properties.

It is not about ability, fear or ethics

Interestingly, this relationship between lower literacy and higher receptivity persists, even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a little scary. Their openness to AI appears to stem from their sense of wonder at what it can do, despite these perceived drawbacks.

This finding offers new insights into why people react so differently to emerging technologies. Some studies suggest that consumers favor algorithmic judgments over human ones, a phenomenon called “algorithm appreciation,” while others document the opposite tendency, known as “algorithm aversion.” Our research points to perceptions of AI’s “magic” as a key factor shaping these responses.

These insights pose a challenge for policy makers and educators. Efforts to boost AI literacy may inadvertently dampen people’s enthusiasm for using AI by making it seem less magical. This creates a difficult balance between helping people understand AI and keeping them open to accepting it.
