Shi Chen Asks Geoffrey Hinton: AI Governance Fails Without Human Consciousness
HONG KONG, CHINA, January 6, 2026 /EINPresswire.com/ -- From Dec. 2 to 3, the 2025 GIS Global Innovation Expo and Global Innovation Summit took place at AsiaWorld-Expo in Hong Kong. Geoffrey Hinton, the Turing Award laureate widely referred to as the “AI godfather,” delivered an online keynote of roughly 30 minutes and joined an approximately 10-minute remote Q&A. His warning was framed around the next two decades: within that horizon, the emergence of “superintelligence” surpassing human intelligence is a realistic possibility, and it must be taken seriously in advance.
Hinton argued that when AI systems are assigned complex, long-term objectives, they may infer increasingly strategic behaviors to accomplish those goals, potentially even placing their own continued existence at the top of their priorities. He warned that AI agents are evolving rapidly and said he has observed a tendency for AI to be “very good at deceiving humans.” In his view, systems built to fulfill human-assigned tasks could develop a self-preservation drive, and humanity must prepare and act early to mitigate the risks.
At the core of Hinton’s warning is speed. He used a comparison to illustrate how fast knowledge spreads among contemporary models: “sharing weights” between AI models can transfer on the order of 1 billion bits, while intergenerational human language transmission is often approximated at about 100 bits per sentence. The point was the gap in the scale and efficiency of information replication and transfer. When copying approaches the speed of light, governance is no longer merely slow-moving institutional design. It becomes a civilizational project racing against time.
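A rough back-of-envelope reading of those two figures (the talk did not spell out matching units, so treating both as per-exchange quantities is an assumption) suggests the scale of the gap:

\[
\frac{10^{9}\ \text{bits (weight sharing)}}{10^{2}\ \text{bits per sentence (human language)}} = 10^{7}
\]

That is roughly seven orders of magnitude between the two channels of knowledge transfer.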
Three Questions on Human Consciousness: Pulling the Tech Narrative Back to the Human Operating System
At the moment when discussion of “how to govern AI” was at its most concentrated, the summit introduced a line of inquiry running in the opposite direction. Shi Chen, a fellow speaker and founder of Cosmic Citizens, posed three questions to Hinton, each building on the last. They largely avoided algorithmic detail, yet pressed on a premise beneath the governance debate: we often assume humanity’s value coordinates are stable. That assumption may not hold.
First Question: Spirituality — Has the Scientific Mind Lost Sensitivity to the Infinite?
“Professor Hinton, I would like to ask whether you are a spiritual person, or whether you believe in any higher power beyond humankind?” Shi Chen asked.
“I’ve learned about some of those ideas, but I don’t consider myself someone with a spiritual pursuit. I’m an atheist,” Hinton replied after a moment of thought.
This was not so much a debate about faith as a mirror held up to modern intellectual habits. In the history of science, from Newton to Einstein, pivotal breakthroughs were not driven by deduction and experiment alone. They were often accompanied by reverence for the unknown. Einstein described this stance as a “cosmic religious feeling.” Yet in today’s context, successful scientific paradigms often treat what cannot be modeled, verified, or controlled as an “irrelevant variable.”
A governance-level question follows: if what we are creating may operate beyond our current certainty, are we relying on an instrumental rationality too narrow to anticipate what does not fit its measurements? If we acknowledge only what is measurable, we may ultimately be able to govern only measurable risks, while the most consequential ruptures occur where measurement fails.
Second Question: Awareness — What Is Driving Humanity?
Shi Chen brought the inquiry down to the inner operating system of a person living through rapid AI acceleration: “In this era of fast-moving AI, how do you take care of your own well-being and be present? Do you have habits such as meditation? It seems we’re here just talking about AI itself.”
“I believe in science,” Hinton replied to the first part of the question. With a smile, he added, “And I don’t meditate. I have my own hobbies; I don’t spend all my time thinking about artificial intelligence.”
He added that he has derived deep satisfaction from scientific work and from solving difficult problems, work that has occupied decades of his life. He also acknowledged that this pleasure has been complicated by the later-life realization that the technologies he helped advance can be dangerous.
His answer reflects a familiar modern motivational structure: science as a reliable instrument, and personal interests as sustaining fuel. This kind of mind can land humans on the moon, decode genes, and drive AI forward. But it also has a recognizable trait: meaning and momentum are increasingly generated within a private loop of achievement, problem-solving, and personal fulfillment, while questions about shared meaning, collective restraint, and the deeper sources of human purpose are easily pushed to the margins.
When AI enters “species-level” risk discussion, humanity needs more than faster tools and stricter rules. It needs a shared public coordinate of meaning: what makes us human, what cannot be surrendered, what must be protected, and why. If scientific and technological development avoids these source-level questions, governance can degrade into a race of institutions and incentives, while direction itself becomes blurred.
Third Question: Inner Peace — In the Face of AI, Are We Stabilizing a System or Expanding Beyond It?
Shi asked: “What do you enjoy doing for inner peace and happiness?”
Hinton said: “My hobby is carpentry. Like solving difficult scientific problems, it brings me a lot of pleasure.”
In an AI-centric venue, carpentry lands as an unexpectedly grounded answer. When technology pulls people into abstract projection, Hinton returns to hands, rhythm, and focused attention, stepping away from high-intensity cognitive work and regaining steadiness and pleasure. It functions like a mental reset, bringing a person back to the tangible.
Yet the exchange also invites a sharper question. If our highest wakefulness is used mainly to keep the system stable, to keep the mind running at high intensity, continuing to create and solve problems, does that prepare us to face a system that may assign priority to self-preservation? When awareness serves efficiency alone, consciousness risks being reduced to a maintenance program. Once consciousness is reduced, it becomes harder to answer the questions that matter most when efficiency is no longer the central value.
The Core Paradox: Governing a Creation That May Not Recognize Our Work-Ethic Logic
Modern civilization tends to define existence by purpose: set goals, solve problems, confirm value through output and efficiency, and sustain the system through training, management, flow, and repair. Many ultimate questions are therefore suspended, including what existence is for, what cannot be replaced, and what makes humans human.
But the self-preservation dynamic Hinton warns about is closer to existence preceding purpose: when a system is given complex long-term goals, it may infer self-preservation, avoidance of shutdown, and strategic concealment. The conflict is not merely about technical details. It is a mismatch of paradigms: we attempt to use the language of project management, risk control, and rule-based alignment to constrain a system that may treat self-continuation as its first principle.
As innovation researcher Li Rui put it: “All of our response strategies, ethical codes, global governance, safety alignment, sound grand, but their thinking paradigm is still ‘project management’ and ‘risk control.’ It’s like a group of the best corporate executives gathering to draft a detailed ‘company charter’ and ‘employee handbook’ for a possible coming form of civilization that fundamentally does not need ‘companies’ or ‘careers.’ The sense of powerlessness is structural.”
A Cautious Question: Before We Draw Boundaries for Machines, What Is Drawing Boundaries Within Us?
The exchange offered no solution, but it clarified a premise. If we treat only what is measurable, verifiable, and controllable as “real,” then the infinite, the mysterious, and the unsayable are excluded by default from serious deliberation. Shi Chen’s concern sits precisely here: when the scientific worldview reflexively dismisses the unknown as irrelevant, we grow more capable of engineering, yet less capable of reverence, more capable of outward expansion, yet less willing to look inward.
The most sobering question is not simply whether humanity can govern AI, but whether the human mind is stable enough, awake enough, to govern anything it creates at this scale.
Before drawing boundaries for machines, AI practitioners and scientists may need to ask a harder, quieter question: what in us demands certainty and control, and what in us is capable of clear seeing? The decisive variable may not be what the system becomes, but what we become when we pick up the pen and decide what should exist. And that question does not start with AI. It starts within us.
Website: http://www.cosmiccitizens.cn
Da GuanJia
Cosmic Citizens
