ChatGPT and other large language models have the remarkable ability to generate human-like responses to almost any conceivable prompt we throw at them. Those responses span a wide range of inquiries, from recommendations for the best Italian restaurant in town to explanations of competing theories about the nature of evil.

Without thinking about it too much, most people will tell you that interacting with these platforms is fun. But the technology's writing proficiency has resurfaced age-old questions about whether machines can attain consciousness, self-awareness, or sentience. The idea is deeply embedded in science fiction and popular culture, and it keeps resurfacing in today's AI rhetoric as well.

Many remember when, back in 2022, a Google engineer suggested the company's chatbot, LaMDA, had achieved a state of consciousness. Then there was the infamous exchange between Kevin Roose, a technology columnist for the New York Times, and Sydney, Microsoft Bing's chatbot. It's no surprise that these are the examples people cite when they voice their main AI anxiety: machine sentience.

I understand where these concerns come from, but it's all the more important that other executives in the AI space and I continue to steer the conversation away from such speculative worries and toward a more pragmatic focus: trust in AI systems.

When it comes to large language models, this anxiety about AI is misplaced. ChatGPT and comparable technologies are essentially very advanced sentence-completion models, trained on datasets so large that latent connections between language, concepts, and topics start to surface. It's important to keep in mind that their remarkable responses and emergent behaviors stem primarily from the predictability of human communication patterns, not from some emerging sentience.
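A toy illustration makes the point. The sketch below is nothing like a real large language model; it is just a bigram (next-word frequency) model fit to a made-up three-sentence corpus. But it shows the underlying principle: fluent-looking completions can fall out of nothing more than the statistical predictability of language.

```python
# A deliberately crude "sentence completion" model: count which word tends
# to follow each word in a tiny corpus, then extend a prompt greedily.
# No understanding is involved, only word-frequency statistics.
from collections import Counter, defaultdict

corpus = (
    "the best italian restaurant in town serves fresh pasta . "
    "the best italian restaurant in town is hard to book . "
    "the best pizza in town is down the street ."
).split()

# For each word, count the words observed to follow it (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt_word: str, length: int = 8) -> str:
    """Extend a one-word prompt by repeatedly picking the most likely next word."""
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # prints a fluent-looking continuation of the prompt
```

Real systems replace the bigram table with a neural network spanning billions of parameters and condition on entire contexts rather than single words, but the task being optimized is the same: predict what text plausibly comes next.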

Instead, we should direct our attention toward the tangible challenges of deploying AI in the real world. As AI technologies continue to permeate various facets of society, from healthcare to finance, the fundamental question becomes not whether AI will achieve consciousness but how we can foster trust in its capabilities. Trust is the cornerstone of any successful interaction between humans and AI systems. Without it, people are reluctant to rely on AI recommendations or decisions, which impedes the widespread adoption of these technologies.

One approach to fostering trust in AI involves transparency and explainability. Users must be able to understand how AI systems arrive at their conclusions and recommendations. By demystifying the decision-making process, AI developers can instill confidence in users and dispel apprehensions about the "black box" nature of these systems. Another is grounding the development and deployment of AI technologies in robust ethical frameworks. Addressing concerns about bias, fairness, and accountability is critical to ensuring that AI systems operate in alignment with societal values and norms. Organizations must prioritize ethical considerations throughout the AI lifecycle, from data collection and model training to deployment and monitoring.
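To make the explainability point above concrete, here is a minimal, hypothetical sketch; the feature names and weights are invented purely for illustration. The idea is simply that a decision is reported together with the contribution of every input, so the "why" travels with the "what."

```python
# A minimal sketch of explainability: every score is returned together
# with a per-feature breakdown of how it was reached. The features and
# weights below are hypothetical, invented for illustration only.

FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 0.6,
    "years_of_credit_history": 0.3,
    "recent_missed_payments": -0.8,
}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return a score plus each feature's contribution to that score."""
    contributions = {
        name: weight * applicant.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income_to_debt_ratio": 2.1, "years_of_credit_history": 4, "recent_missed_payments": 1}
)
print(f"score = {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {contribution:+.2f}")
```

Production systems are far more complex, and explaining deep models typically requires dedicated attribution techniques, but the goal is the same: no recommendation without an account of what drove it.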

Humans have a tendency to confuse sentience with intelligence. Intelligence is the capacity to gather and use information effectively; sentience is the ability to experience and perceive sensations; consciousness encompasses a still higher level of self-awareness. In reality, today's artificial intelligence excels primarily at reproducing information.

Large language models have proven to be valuable tools for writing and coding assistance. They are poised to revolutionize internet search and may, one day, even yield certain psychological benefits. But even as these innovative tools continue to evolve, it's important that we stop anthropomorphizing them.