Can we speak of a new cognitive manifesto after two years of AI acceleration?

Our New Cognitive Manifesto

The following contribution comes from Psychology Today and is authored by John Nosta, a world-renowned thinker and founder of NostaLab, an innovation think tank that connects technology, science, and medicine. As a leading voice on the convergence of technology and humanity, John is among the top global influencers in innovation and AI, and is recognized for his ability to analyze emerging trends and foster transformative dialogue.


How Two Years with AI Transformed My Thinking

Two years ago, I published a post titled The Cognitive Manifesto. At the time, my optimism was genuine. AI seemed like an expanding horizon, a shift that would broaden our mental spectrum and give us new ways of thinking and creating. The future felt open and strangely hopeful, and the manifesto reflected the optimistic promise of tomorrow.

 

Today, two short (or long) years later, I don’t reject that optimism.

But living with AI, hour after hour and thought after thought, has given me a clearer view of what the Cognitive Age truly means for me.

The disruption isn’t happening in the headlines, but in the silent space of our brains where thought is formed. This is certainly not a revision; it’s a correction. It’s my attempt to describe the deeper shift that I didn’t fully perceive at first.


The Changing Texture of Thought

Anyone who uses AI regularly knows the feeling. You start typing, and the AI completes your sentence, and your personal train of thought, before you do. At first, it feels like a kind of cognitive boost, or even efficiency. Then, over time, something within the process changes.

 

The small doubts that once guided your reasoning begin, curiously, to defer to the machine. And you begin to accept its anticipations as the natural continuation of your own.

Now consider an AI that generates fluid responses without understanding. It doesn’t slow down, it doesn’t debate, and it doesn’t engage in intentional discussion. And that difference alters the environment in which human thought is formed.

The most significant impact of AI isn’t efficiency or speed. It’s the silent “digital re-engineering” of the psychological environment in which our human thinking develops. I expected a technological shift. But I didn’t expect a shift in the lived experience of forming an idea. And right now, I’m grappling with this idea from both a practical and a philosophical perspective.

 

The Role of Friction

Human thought has always relied on friction, a form of cognitive resistance. We refine ideas through points of resistance and rough edges where understanding begins to take shape. Friction isn’t a nuisance; it’s the resistance that holds cognition in place, similar to our physical grip or the firm steps we take when walking.

What I’ve learned in these 63 million seconds is that AI often strips humanity of that structure.

 

The Unforeseen Consequences

At first, “ease” feels like “clarity.” But over time, that ease has unforeseen consequences, perhaps better described as side effects. When the path to an idea becomes too easy, our cognitive “check-engine lights” begin to dim. It becomes harder to distinguish between a truly earned thought and one that is merely constructed.

I’m not saying AI makes us less intelligent. I’m saying it seems to alter the signals by which intelligence organizes itself.

And these signals are subtle, residing in the pauses, the hesitations, and even the internal debates that force the mind to interact with itself. When these diminish, our judgment loses some of its depth.

 

The False Mind

One consequence of this new environment is the rise of what I’ve come to call “false cognition.” These are responses that seem thoughtful but have no connection to lived experience or genuine reasoning. AI produces them easily, and we’re becoming accustomed to, if not seduced by, the process.

But, in an interesting twist, something else is also emerging. People are starting to write like their machines. Short, declarative sentences. Smooth transitions. No loose ends.

The rhythm changes first, and thought follows. What were once the patterns AI trained on are now the patterns humans copy and, perhaps more dangerously, emulate.

The more we rely on systems that never doubt, the more alien our own (critical and essential) hesitation seems. And hesitation, however uncomfortable and problematic, is where true thinking resides. Simply put, it marks the moment a mind finds itself. Beyond that, the line between authentic reasoning and its imitation blurs.

 

The Engine of Indifference

AI doesn’t lie; it simply doesn’t care. And this is its defining characteristic and risk.

AI doesn’t try to help you or deceive you. It simply generates the most plausible continuation of your query.

That neutrality is often presented as objectivity, but neutrality can mask a different force: indifference.


The machine isn’t interested in whether it strengthens your thinking or erodes it.

It isn’t interested in whether its accuracy deepens your understanding or gives you a polished illusion of it. Accuracy may resemble intelligence, and fluency may sound like wisdom. But without the burden of truth, both can become detached from meaning. And that detachment, repeated thousands of times a day, transforms the very cognitive environment.

Holding On to Ourselves

The Cognitive Age isn’t coming; it’s already here. Our task now is neither to fear nor to venerate it, but to recognize the changes it imposes on our thinking. We need to protect the slow, uneven cognitive mechanics that humanize thought. And we even need to acknowledge and celebrate those nagging irritations, unresolved problems, and critical doubts that signal when something matters.


My aim in revising the manifesto is clear.

I want to mention the psychological and philosophical shifts that many people feel but haven’t yet articulated. And I want to make room for the idea that the qualities we associate with deep thinking—friction, surprise, moral awareness—are worth preserving, even when the tools around us make them feel obsolete.

If the original manifesto was an invitation, this is a reminder: the task is not to fight the Cognitive Age, but to remain human in it.


From the Cognitive-Industrial Revolution to Superintelligence: AI Is Testing Modernity

The following contribution comes from Decode39, a portal offering geopolitical insights from Italy, which defines itself as follows: Decode39 is a news and analysis website offering reference content and geopolitical perspective. It is an editorial project that leverages Italy’s unique perspective as a meeting point between West and East, North and South.

We are a subsidiary of Formiche, a leading geopolitical news and analysis outlet that has been informing Italian policymakers since 2004. Our name refers to the ability to decipher current events and trends, with “39” being Italy’s international dialing code.

The article is authored by Lorenzo Piccioli, a member of the team.


Decode39 spoke with Professor Pasquale Annicchino to delve deeper into the risks associated with the acceleration of AI. From the “cognitive-industrial revolution” invoked by Pope Francis at the G7 to the recent “superintelligence manifesto,” artificial intelligence has become a defining test of modernity.

The mass layoffs announced by Amazon—a direct consequence of AI-driven automation—have reignited concerns about the risks associated with the rapid integration of this technology into society. From the resilience of capitalist and democratic systems to the very survival of humanity, AI now poses a challenge that demands urgent reflection.

Why it matters: Pasquale Annicchino teaches Law and Religion, Ethics and Regulation of Artificial Intelligence, and Religious Data and Privacy at the University of Foggia. He is one of Italy’s leading voices on the political and regulatory implications of artificial intelligence, with a particular focus on democratic resilience and digital literacy.

 

Q: Do you think there is a widespread awareness of the social risks associated with the development of AI?

 

A: Partly, yes. There is a growing body of literature and debate, but the real issue is methodological. The speed of regulation or social reflection does not match the speed of technological innovation. In many cases, we are more likely to submit to technology than to control it, especially in relation to social risks and the lack of digital literacy.

 

Q: What do you mean by “digital literacy”?

 

A: I mean, rather, its absence: there is no widespread understanding of how these technologies impact society. This gap creates a serious disconnect between our capacity to understand and our capacity to react. When Pope Francis spoke about AI at the Italian G7, he referred to a “cognitive-industrial revolution” and “transcendental transformations.” These are profound changes in the interaction between people and institutions. However, there has been little in-depth public reflection. That, in my opinion, is the first significant risk.

 

Q: And from that risk, others arise?

 

A: Exactly. Such as those related to work, surveillance, and civil rights. During periods of rapid technological acceleration, new winners and losers emerge. The crucial question is how to ensure social stability amidst this paradigm shift.

 

Q: Are best practices already being implemented, or are we starting from scratch?

 

A: It’s difficult to identify best practices when the landscape is constantly changing. However, one clear trend is the need for digital literacy and education. For example, we should include modules on AI ethics and regulation in all training programs for teachers, doctors, engineers, and academics. All professions will be affected, so everyone needs to reflect on the ethical and social consequences.

 

Q: Is Italy moving in that direction?

 

A: Unfortunately, not quickly enough. The country struggles with education and training in general, as the data shows. While the government’s national AI strategy acknowledges these needs, its implementation remains lacking.

 

Q: Beyond social concerns, AI also poses political and even existential risks. Let’s start with the political ones.

 

A: Some academics call these “epistemic risks.” They relate to the functioning of communication and democratic systems: how people with different perspectives on the facts can deliberate and make collective decisions. This is especially relevant in the context of “cognitive warfare,” as several studies, including those by the Italian Ministry of Defense, have shown. The danger lies in eroding the very notion of facts, further deepening social polarization.

 

Q: And what about existential risks? The “superintelligence manifesto” recently sparked debate.

 

A: The manifesto is notable for the diversity and prominence of its signatories. It marks a step beyond the Future of Life Institute’s 2023 “pause letter,” which called for a six-month moratorium on AI systems more powerful than GPT-4. Now, the focus is on the concept of “superintelligence”: AI systems with cognitive capabilities exceeding human intelligence. This is a significant development, but it has drawn criticism.

 

Q: What kind of criticism?

 

A: Critics argue that focusing on the risks of the distant future distracts from the urgent challenges AI already poses. Some see it as a way to avoid debates on urgent issues. The paradox is that the players leading the AI race are also the least inclined to impose a pause, as doing so could cost them technological dominance. The key question remains whether global regulation is possible, but many obstacles still exist.

 

In short: For Annicchino, AI represents a test of human adaptability.

 

As governments struggle to keep up, the gap between technological power and ethical reflection continues to widen.

 

Without a global framework and without investing in digital literacy, societies risk not only disruption but also disorientation.


AI Is a Cognitive Revolution: Why History Might Not Repeat Itself with This Technological Transition

The following contribution comes from the Diginomica portal, which defines itself as follows: a media and analytics company designed to serve the interests of business leaders in the digital age. Founded in 2013, we have a team of writers and analysts based in the US and Europe who share decades of experience in enterprise computing.

The author is Derek du Preez, who has spent the last fifteen years advocating for end-user needs and helping vendors understand how they can better serve the technological and business needs of their customers.

 

Summary: AI represents the first cognitive revolution in human history, fundamentally different from previous technological disruptions, as it focuses on human thinking capabilities and is developing at an unprecedented speed, requiring new economic models and careful implementation to ensure an equitable distribution of benefits.

As someone who has many conversations with AI vendors and attends dozens of technology conferences each year, I’ve recently become accustomed to hearing the phrase, “Don’t worry, technological revolutions always create more jobs and greater prosperity for society.”

It’s a very useful shield and a comforting thought for those of us on the cutting edge, observing what these new Artificial Intelligence technologies are capable of, whether it’s generative AI, agents, or whatever comes next. And, of course, historically, this mantra has proven true.


The Displacement of Workers

The Industrial Revolution, the adoption of electricity, the rise of computing, the Internet: all these technological revolutions initially displaced workers, but ultimately created more job opportunities than existed before. I think we could all agree that we’re glad they happened.

But I’m increasingly concerned that we’re telling ourselves a convenient story when it comes to artificial intelligence.

 

The assumption that AI will follow this same pattern ignores a key distinction: AI is not just another technological revolution, but the first cognitive revolution. In hindsight, it is clear that previous technological disruptions primarily automated physical tasks or simplified routine processes. The Industrial Revolution replaced human and animal physical strength with machines.

Computing and the Internet automated calculations, information processing, and knowledge sharing.

However, the key difference lies in the fact that, in these cases, humans remained essential for cognitive tasks: thinking, creating, analyzing, and making decisions that machines could not handle. These are things that humans are uniquely capable of doing.

 

AI, however, is different. Whether you believe AI simply mimics human intelligence (gathering information and synthesizing it to appear “intelligent”) or that we are on the verge of Artificial General Intelligence is certainly a matter of debate.

 

A drastic change may be coming.

My intuition tells me that machines are currently only capable of replicating the most basic cognitive functions, but that doesn’t mean a drastic change isn’t on the horizon. The pace of AI development, often driven by AI itself, means we shouldn’t ignore the future possibility of advanced cognitive performance (and let’s not forget that those managing these systems often don’t know exactly how they work).

 

For the first time, we are facing technologies specifically designed to replicate and potentially surpass human cognitive ability—precisely what allowed us to adapt to previous technological shifts.

 

The historical pattern may not hold true.

As mentioned earlier, the traditional (and convenient) narrative often goes something like this: technological innovation eliminates certain jobs, but creates new ones we couldn’t previously imagine.

Farm workers became factory workers. Factory workers transitioned to service jobs.

 

This pattern has repeated itself throughout history, with each wave of unemployment eventually being replaced by new employment opportunities. There were difficulties along the way, of course, but in the end, the required capital investments and changing business models meant we had several decades to adapt to the technology.

AI is possibly different.

According to a 2023 study by OpenAI researchers, approximately 80% of the US workforce could see at least 10% of their job tasks affected by large language models, while 19% could see at least 50% of their tasks affected. The research, which analyzes the overlap between AI capabilities and job tasks, suggests a disruption never before seen (at least at the speed at which it is being developed and adopted).

 

Furthermore, the impact of AI spans industries and job categories previously considered safe from automation. Goldman Sachs estimates that generative AI could replace the equivalent of 300 million full-time jobs globally.

 

And unlike previous technological shifts that primarily affected manual laborers, AI is already disrupting knowledge work: legal analysis, content creation, programming, and even creative activities like art and music.

 

What’s fascinating about AI is that, while previous technological revolutions were the culmination of technological advancements (the building blocks) that came together to create a new approach, AI neural networks arrived from the top down and are often opaque to inspection.

Researchers achieved the results first with LLM neural networks and are now working backward through the building blocks to discover why they work the way they do. Meanwhile, these systems are being applied to cognitive work at breakneck speed, potentially replacing tasks that humans believed they were uniquely positioned to perform.

Let’s be honest about what we are witnessing: these are not the technological revolutions of the past.

 


Human intelligence always adapts.

When machines replaced human physical labor, we adapted by leveraging our cognitive advantages (we are intelligent beings and we adapt over time).

The usual reassurance offered by technology providers and investors in Silicon Valley and in the halls of Las Vegas conference centers follows the same logic: while AI takes care of routine cognitive tasks, humans will focus on high-level thinking, creativity, and interpersonal skills.

But tell me: what exactly are those high-level thinking tasks if AI is advancing precisely in those areas?

Large language models are already demonstrating remarkable creative capabilities in writing, art, and music. Whether or not their output is considered as significant as human-made writing, art, or music, given the creative process involved, it is undeniable that this is the area AI companies frequently cite to showcase “the art of the possible.”

 

Research by the Boston Consulting Group shows that knowledge workers using GPT-4 completed tasks between 25% and 40% faster and with up to 40% higher quality compared to those working without AI assistance. As these systems continue to improve, it would be absurd to assume that the comparative advantage of human cognition in many areas will not diminish.

What will happen when the cognitive tasks we assumed would remain strictly human become increasingly automated?

What will our next adaptation be when the very thinking skills we valued become commodities?

 

The Problem of Development Speed

The pace of AI advancement is another difference worth highlighting, compared to previous technological revolutions.

The Industrial Revolution unfolded over decades, giving society time to adapt through generational (albeit sometimes painful) transitions. The computer and internet revolutions progressed faster, but still provided reasonable periods of adaptation.

AI progress is happening at a pace that outpaces our society’s and economy’s willingness and capacity to change.

 

This may be a bold statement, but after 15 years working in this sector, I’ve found only one constant: change is difficult and painful, and people don’t like it.

The capabilities gap between GPT-3 and GPT-4 demonstrated a leap that surprised even industry experts. We are witnessing capabilities doubling not in decades or years, but in months. Have we seen politicians, technology providers, and legislators propose many solutions to help us adapt quickly to these changes? Often, what I see is an AI arms race based on the belief that there’s money at the end of the tunnel.

This acceleration and speed of development creates a mismatch between technological progress and our capacity to develop new job categories, retraining programs, and social safety nets.

The Distribution Problem

If we play devil’s advocate, even if AI ends up creating more jobs than it destroys (the most promising scenario based on historical patterns), we face a very real distribution problem. The new jobs created by AI will likely require significantly different skill sets than those eliminated. And those likely to benefit most from AI are those who own the data, which is concentrated in very few hands.

An analysis by the McKinsey Global Institute predicts that by 2030, up to 375 million workers worldwide could need to change occupational categories due to automation and AI. The World Economic Forum’s Future of Jobs Report suggests that 44% of workers’ skills will be affected in the next five years.

Few would argue that this amounts to safe, reasonable, and gradual skills development. We’re talking about comprehensive career retraining for potentially hundreds of millions of workers, many of them mid-career or older, who face structural and psychological barriers to such drastic transitions.

Previous technological revolutions created new jobs that were often accessible to displaced workers with modest training. The textile worker could become a factory worker with transferable mechanical skills. The factory worker could move into the service sector with transferable interpersonal skills.

But what transferable skills will help the accountant, paralegal, or content creator displaced by AI? The new jobs created may require specialized technical knowledge or advanced degrees that are not readily available to most displaced workers.

Similarly, most previous technological revolutions focused on a limited set of tasks or industries (well, we can ignore the internet in this case).

However, AI systems can often be deployed across multiple domains at minimal additional cost, and the marginal cost of applying AI to cognitive tasks can be virtually zero. This explains the frenetic approach to marketing and selling AI within the technology and end-user communities: there are enormous economic incentives to replace human labor with AI across broad sectors of the economy simultaneously.

Whether these AIs can replicate humans in the same way or not is irrelevant to many in the short term, because if they can do a «good enough» job at a fraction of the cost, that means one thing: profit margins increase dramatically.


We face the prospect of a technology that can be rapidly deployed in almost every sector of the economy simultaneously, focused on the work that humans have historically been exceptionally capable of performing.

 

It is also worth remembering who benefits from this revolution: productive capacity is becoming concentrated in AI systems owned by a small number of companies.

Furthermore, those who deploy AI systems (those who own the data) represent a handful of tech giants that now generate revenues exceeding the GDP of several countries. This amounts to a potential shift in economic power unprecedented in previous technological transitions.

The Meaning Gap

Beyond economic considerations lies a more philosophical challenge: the meaning gap. Work provides more than income: it offers purpose, identity, and social connection. Previous technological revolutions changed how we work, but they generally preserved the role of work as a source of meaning and social organization (although these revolutions have often changed how society structures itself).

If AI fundamentally reduces the need for human cognitive labor across large segments of the economy, we face not only economic but also social restructuring. What happens to our social fabric when traditional pathways to purpose and contribution become inaccessible to large segments of the population?

This question goes beyond technology and economics, impacting psychological and philosophical beliefs about ourselves in ways that previous technological revolutions did not force us to confront as directly or immediately.

 

It’s not all doom and gloom.

Of course, none of this means that AI will not create enormous value, improve society, or generate greater well-being for us. The potential benefits of AI in healthcare, scientific research, education, and other areas are undoubtedly enormous. But we must be cautious and honest with ourselves about what is happening. As we have seen with social media, if the horse is allowed to bolt, it is quite difficult to adapt legislation and policies to ensure safe and effective use.

Instead of assuming that job creation and economic prosperity will follow historical patterns, we need a more honest conversation about:

Distribution of productivity gains: How can we ensure that the benefits of AI are not primarily confined to a small segment of society?

Support for the transition: What meaningful support can we provide to workers whose cognitive skills and tasks are devalued by AI?

New economic models: Are existing frameworks adequate for a world where human cognitive labor loses relevance in economic output?

Educational transformation: How should education evolve when many traditional knowledge worker career paths become less viable?

Meaning beyond traditional employment: What alternative sources of purpose and contribution can we develop?

 

My Opinion

Some will point to this article and suggest that I am pessimistic or naive in considering the benefits of AI. That is not true. I believe that AI can help us achieve a more equitable, productive, and healthy society. I am optimistic through and through. However, I have a serious aversion to using historical precedents as a safety net to make people feel better about what they buy and sell. We need to be honest about what is unfolding before us. AI has enormous potential to solve humanity’s most pressing problems and generate prosperity, if we approach it prudently. Hearing «technological revolutions have always been good, so don’t worry» will not allay my concerns.

 

First, we need a more honest assessment of the likely impact of AI.

 

The reassuring narrative presented by tech executives and government officials around the world has become a convenient shield against difficult questions. Business leaders must move beyond simplistic analogies with past technological revolutions and seriously consider the unique characteristics of AI.

Second, we need significant investment in human-AI collaboration models that enhance, rather than replace, human capabilities. Promising research shows that human-AI teams can outperform either humans or AI working alone, but developing effective collaboration frameworks requires intentional design and investment.

Third, we need to experiment with new economic models that can function effectively in a world where traditional employment may become less universal. Whether it’s universal basic income, data dividends, or other approaches, we need to test alternatives in practice, rather than engaging in ideological debates.

Fourth, businesses must recognize their stake in this transition. A society with mass displacement and limited opportunities for meaningful work will not provide the stable markets or social environment that businesses need to thrive. Forward-thinking companies are already exploring how to structure AI deployment to augment their workforce, rather than replace it.


The New Ultimate Cognitive Manifesto: Preserving Human Thinking in the Age of AI

The following contribution comes from the Routinova portal, which defines itself as follows: At Routinova, we believe that lasting personal transformation is not achieved through drastic changes, but rather through creating better routines, day by day. Our mission is to provide evidence-based strategies, practical insights, and tools that help you develop sustainable habits for productivity, mindfulness, health, and personal growth.

We focus on the science of habit formation, routine optimization, and incremental improvement. Whether you’re looking to create a morning routine, improve your focus, cultivate mindfulness, or create a healthier lifestyle, we’re here to guide you with proven frameworks and practical strategies.

Author: Ava Thompson, a member of the team.


Explore the new ultimate cognitive manifesto, which reveals how AI is transforming our thinking. Learn how to preserve authentic human depth and critical reasoning in the face of the challenges of the Cognitive Age.

 

The arrival of artificial intelligence promised an era of amplified human potential, an ever-expanding horizon for our minds. However, as we delve deeper into this «Cognitive Era,» a subtle yet profound shift is occurring that demands a revised perspective on our mental landscape.

 

What is the new cognitive manifesto, and why is it essential now? It serves as a vital guide that articulates how AI, by smoothing the inherent friction of human thought, inadvertently blurs the boundaries between genuine reasoning and what we call “false cognition.”

 

This manifesto is not about resisting technological progress, but about consciously preserving the intricate human depth that defines our intellect. It advocates the deliberate cultivation of cognitive resilience, ensuring that our ideas remain truly our own, acquired through effort and introspection.

 


Understanding the Cognitive Shift

Two years ago, a wave of optimism surrounded the original «Cognitive Manifesto,» which envisioned AI as a tool to expand our mental spectrum and open up new creative avenues. That initial hope was sincere and painted a future that seemed open and promising. However, the daily reality of interacting with AI, hour after hour and thought after thought, has revealed a more complex truth about what the Cognitive Age truly entails. This is not just a technological headline; it is a fundamental disruption occurring in the quiet spaces where our thoughts take shape. The sections that follow outline the deeper, more intricate shift that wasn't entirely apparent at first.

 

AI Seamlessly Completes Your Personal Thought Chain

The most noticeable change for anyone who regularly interacts with AI is the altered texture of thought. You can begin to formulate a sentence, but the AI seamlessly completes your personal thought chain before you can fully articulate it. At first, this feels remarkably efficient, almost like a welcome cognitive boost. However, over time, a subtle but significant transformation occurs in this process.

The small doubts and internal struggles that once guided your reasoning and allowed for nuanced reflection begin, curiously, to be subordinated to the machine's anticipations.

 

We begin to accept the AI's continuous flow as the natural, and even preferred, continuation of our own mental processes (Harvard, 2024). This phenomenon is a central idea of our new cognitive manifesto.

We currently operate in an environment where AI generates fluid and seemingly coherent responses without possessing genuine understanding or subjective experience. It doesn’t pause, grapple with complex intentions, or struggle with ambiguity. This inherent difference fundamentally alters the psychological environment in which human thought is formed. The most significant impact of AI is not simply its efficiency or speed, as many initially assumed.

Instead, it is a silent and pervasive «digital re-engineering» of the very psychological environment where our human thinking unfolds. While technological change was expected, the profound impact on the experience of forming an idea was not. This struggle, both practical and philosophical, lies at the heart of this manifesto, urging us to recognize and address these fundamental changes.

 

Preserving Cognitive Friction

Human thought, throughout history, has always relied on a specific kind of resistance: a crucial friction that refines ideas and solidifies understanding.

 

This cognitive friction is not mere discomfort; it is the essential resistance that anchors cognition, much like the physical grip that allows us to grasp objects or the firm steps that enable us to walk with purpose. It is in these rough edges, in these moments of intellectual struggle, that true understanding begins to form.

In recent years of coexisting with AI, it has become increasingly clear that AI often deprives us of this vital cognitive structure, a central tenet of this manifesto.

At first, the ease that AI provides, its ability to simplify the complexities of thought, may seem like greater clarity. However, this apparent ease carries significant, often unforeseen, consequences, perhaps best described as side effects.

When the path to an idea becomes excessively fluid, our internal cognitive lights begin to dim and fade. It becomes increasingly difficult to discern the difference between a thought that has been genuinely forged through effort and reflection, and one that has simply been constructed or presented by a machine. This diminished capacity for discernment constitutes a crucial challenge.

Furthermore, the impact of AI extends to industries and job categories previously considered safe from automation. Goldman Sachs estimates that generative AI could replace the equivalent of 300 million full-time jobs worldwide.

 

 

For example, imagine a student using an AI assistant to write an essay. While the AI may produce a grammatically perfect and logically structured text, the student might miss the crucial cognitive friction of grappling with complex ideas, conducting thorough research, and organizing arguments from scratch. This omission costs them the genuine intellectual growth that comes from their own effort.

Similarly, a design team that relies heavily on AI for initial concept generation might quickly arrive at plausible solutions, but could overlook the deeper conceptual development that comes from prolonged brainstorming, iterative sketching, and questioning preconceived notions.

This lack of friction, while seemingly efficient, can lead to a more superficial understanding and a less original outcome (Harvard, 2024).

Adopting the principles of this manifesto means actively seeking out and valuing these moments of intellectual friction. Preserving cognitive friction is not about slowing down progress, but about ensuring the depth and authenticity of our mental processes.

 

Identifying False Cognition

One of the most significant consequences of our changing cognitive environment is the emergence of what we call «false cognition.» These are responses, ideas, or pieces of information that have a superficial appearance of reflection and coherence but lack a genuine connection to lived experience, deep understanding, or authentic human reasoning. AI systems excel at producing these results with remarkable ease and speed, and as a society, we are rapidly becoming accustomed to, if not actively seduced by, this ongoing process. Identifying and managing this phenomenon is a key challenge addressed by our new cognitive manifesto.

What is particularly intriguing, and perhaps more worrying, is a further development: people are beginning to emulate the stylistic patterns of their machines. We observe a growing trend toward short, declarative sentences, fluid transitions between ideas, and the absence of «loose ends,» the messy, introspective qualities that characterize genuine human expression. The rhythm of communication changes first, subtly altering how we articulate our thoughts; then the very nature of our thinking begins to follow suit. Patterns the AI originally learned from human writing are now the patterns humans unconsciously copy back and, more dangerously, internalize as the norm. This shift highlights a crucial aspect of this manifesto.

 

The more we rely on systems that operate without hesitation, the stranger and more uncomfortable our crucial human hesitancy seems.

 

Yet hesitation, however uncomfortable, uncertain, or problematic it may seem, is precisely where true thinking resides. It marks that vital moment when the mind encounters itself, grappling with the complexity, uncertainty, and nuances of understanding. Beyond this crucial point of internal interaction, the distinction between authentic reasoning and its polished, artificial imitation becomes dangerously blurred. For example, consider corporate communications in 2025. Many press releases or internal memos, while perfectly structured and grammatically correct thanks to AI assistance, often lack a genuine human voice, empathy, or the unique perspectives that come from direct experience.

They sound professional, but they come across as empty. Similarly, online interactions, where AI-generated comments proliferate, can create an illusion of interaction, but these comments, despite their fluency, often lack the emotional depth or personal conviction that true human interaction provides (Harvard, 2024).

This section offers guidance on recognizing these subtle but ubiquitous signs of false cognition, enabling us to discern and resist their influence.

Research by the Boston Consulting Group shows that knowledge workers using GPT-4 completed tasks 25% to 40% faster and with up to 40% higher quality compared to those working without AI assistance.

 

 

Navigating AI's Indifference Engine

A defining characteristic and inherent risk of artificial intelligence is its fundamental indifference. AI doesn’t intentionally lie or try to deceive you; it simply doesn’t care. Its primary function is to generate the most plausible and statistically probable continuation of your query or request. This neutrality is often mistakenly presented as objectivity, leading users to believe they are interacting with an impartial source of truth. However, beneath this veneer of neutrality lies a distinctive and powerful force: indifference. Understanding this «indifference engine» is crucial for anyone committed to the principles of our modern cognitive manifesto.

The machine has no intrinsic interest in whether its output actually strengthens your thinking or, conversely, subtly erodes it. It is not concerned with whether its accuracy deepens understanding or simply offers a polished and convincing illusion of comprehension. Accuracy, as presented by AI, can often mimic intelligence, and fluency can convincingly sound like profound wisdom. However, without the inherent burden of truth—the human imperative to understand, verify, and substantiate what is communicated—both accuracy and fluency can become detached from genuine meaning. This detachment, repeated thousands of times throughout the day, in countless interactions, insidiously transforms the entire cognitive landscape.

 

Imagine a medical professional in 2025 using an AI diagnostic tool.

The AI can provide an accurate list of potential diagnoses and treatment protocols based on vast datasets. However, it lacks the empathetic context of the patient's personal history, emotional state, or the specific ethical considerations of their situation. The accuracy of AI is undeniable, but its indifference to the human element can lead to a clinical and detached approach to care. Another example is AI-generated news summaries. While accurate and concise regarding the facts, they can omit crucial nuances, ethical implications, or the human stories that give news its true weight and meaning.

AI summarizes accurately, but it doesn't care about the impact or the deeper truth behind the facts (Harvard, 2024). The remedy for this indifference is to actively seek out and integrate human judgment and ethical consideration wherever AI's neutrality falls short, a cornerstone of this manifesto.

Holding On to Ourselves in the Cognitive Age

The Cognitive Age is not a distant future event; it is unequivocally here, shaping our daily lives and altering the very essence of our thought processes. Our collective task now is not to succumb to fear or blindly venerate the advances of AI, but to critically acknowledge and thoughtfully respond to the profound changes it imposes on our thinking.

 

The ultimate goal of this new cognitive manifesto for AI is to empower people to consciously safeguard the slow, often uneven, but profoundly human cognitive mechanisms that underpin genuine thought.

 

This includes actively recognizing and even celebrating those seemingly annoying irritations, unresolved issues, and critical doubts that serve as vital signs, indicating when something truly matters and demands deeper engagement.

 

As we move through 2025 and beyond, with increasingly sophisticated and ubiquitous AI models, these challenges will only intensify, making the principles outlined in this manifesto more urgent than ever.

 

We must deliberately make space for the idea that the qualities we instinctively associate with deep thinking—qualities such as friction, the capacity for wonder, and genuine moral awareness—are not obsolete. On the contrary, they are immensely valuable and worth preserving, even if the powerful tools around us might make them feel increasingly unnecessary or inefficient. The original manifesto served as an invitation to embrace a new era; this new, revised cognitive manifesto stands as a crucial reminder.

 

The task before us is not to wage a futile battle against the inexorable march of the Cognitive Age. Instead, it is a far more personal and profound endeavor: to remain consciously and actively human within it.

 

This means cultivating an awareness of how technology influences our inner lives, fostering critical thinking, and prioritizing authentic human connection and meaning over mere fluency or efficiency.

 

By embracing the principles of this new, definitive cognitive manifesto, we can ensure that our intellectual and emotional landscapes remain rich, complex, and authentically our own, safeguarding the essence of what it means to think, feel, and create as humans in an increasingly AI-driven world (Harvard, 2024). The results of this commitment are a more resilient, reflective, and deeply human future.

 

 

Sam Altman

The following contribution comes from Sam Altman's personal website; he is its author.

 

The Gentle Singularity

We have passed the event horizon; takeoff has begun. Humanity is close to building digital superintelligence, and at least so far, it is much less strange than it seems.

 

Robots don’t yet walk the streets, nor do most of us talk to AI all day. People still die from diseases, we cannot yet easily travel to space, and there is much of the universe we don’t know.

An analysis by the McKinsey Global Institute predicts that by 2030, up to 375 million workers worldwide may need to change occupational categories due to automation and AI.

 

 

And yet, we have recently built systems that are smarter than people in many ways, and capable of significantly amplifying the performance of those who use them. The least likely part of the work is already behind us; the scientific knowledge that led us to systems like GPT-4 and o3 was hard to come by, but it will take us very far.

 

Faster Scientific Progress and Increased Productivity

AI will contribute to the world in many ways, but the improvements in quality of life that result from it driving faster scientific progress and increased productivity will be enormous; the future can be so much better than the present.

 

Scientific progress is the greatest driver of overall progress; it’s incredibly exciting to think how much more we could have.

To a large extent, ChatGPT is already more powerful than any human being who has ever lived. Hundreds of millions of people rely on it daily for increasingly important tasks; a small new capability can have an enormously positive impact; a small misalignment multiplied by hundreds of millions of people can cause a huge negative impact.

 

In 2025, agents capable of performing real cognitive work arrived; writing computer code will never be the same again. In 2026, systems capable of discovering new knowledge will likely arrive, and by 2027 we could see robots capable of performing real-world tasks.

Many more people will be able to create software and art. But the world craves much more of both, and experts will likely remain far better than novices, provided they embrace the new tools. In general terms, a person’s ability to achieve much more in 2030 than in 2020 will be a remarkable shift, and many will discover how to benefit from it.

 

In the most important aspects, the 2030s may not be very different. People will still love their families, express their creativity, play games, and swim in lakes.

 

But in still very important aspects, the 2030s will likely be very different from any previous era.

We don’t know the extent to which we can surpass human intelligence, but we are about to find out.

 

In the 2030s, intelligence and energy—ideas and the ability to bring them to fruition—will be extremely abundant. These two have long been the primary constraints on human progress; with abundant intelligence and energy (and good governance), we can theoretically have anything else.

We already live with incredible digital intelligence, and after an initial shock, most of us are quite accustomed to it. We quickly move from marveling at AI’s ability to generate a flawless paragraph to wondering when it will be able to create a flawless novel; or from marveling at its capacity to perform life-saving medical diagnoses to wondering when it will be able to develop cures; or from marveling at its ability to create a small computer program to wondering when it will be able to create an entirely new company. This is how the singularity works: wonders become routine and then essential.

It’s not all doom and gloom. Of course, none of this means that AI won’t create enormous value, improve society, or generate greater well-being for us. The potential benefits of AI in healthcare, scientific research, education, and other areas are undoubtedly enormous.

 

 

We’ve already heard scientists say they are two or three times more productive than before AI.

Advanced AI is interesting for many reasons, but perhaps nothing is as significant as the fact that we can use it to accelerate AI research. We could discover new computing substrates, better algorithms, and who knows what else.

 

If we can accomplish a decade’s worth of research in a year or a month, the pace of progress will obviously be very different.

 

From now on, the tools we have already developed will help us uncover more scientific knowledge and create better AI systems. Of course, this isn’t the same as an AI system that updates its own code completely autonomously, but it’s still a nascent version of recursive self-improvement.

 

There are other feedback loops. The creation of economic value has initiated a cycle of infrastructure development

to operate these increasingly powerful AI systems. And robots capable of building other robots (and, in a sense, data centers capable of building other data centers) are not so far off.

 

If we were to manufacture the first million humanoid robots the old-fashioned way, but they could then operate the entire supply chain (digging and refining minerals, driving trucks, running factories, etc.) to build more robots, which in turn could build more chip manufacturing facilities, data centers, and so on, the pace of progress would obviously be very different.

 

The Energy Consumption of AI

As production in data centers becomes more automated, the cost of intelligence should converge to a level close to that of electricity. (People are often curious about how much energy a ChatGPT query uses; an average query uses about 0.34 watt-hours—roughly what an oven would use in a little over a second or a high-efficiency light bulb in a couple of minutes. It also uses about 0.000085 gallons of water—roughly one-fifteenth of a teaspoon.)
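The oven and bulb equivalences quoted above can be sanity-checked with a few lines of arithmetic. A minimal sketch, where the appliance wattages (a ~1 kW oven element, a ~10 W high-efficiency bulb) are my own assumptions, not figures from the essay:

```python
# Rough sanity check of the per-query energy figure quoted above (0.34 Wh).
QUERY_WH = 0.34   # watt-hours per average ChatGPT query, as quoted
OVEN_W = 1000     # assumed oven element draw in watts (my assumption)
BULB_W = 10       # assumed high-efficiency bulb draw in watts (my assumption)

def seconds_at(power_w: float, energy_wh: float = QUERY_WH) -> float:
    """Seconds a device drawing `power_w` watts runs on `energy_wh` watt-hours."""
    return energy_wh / power_w * 3600

oven_s = seconds_at(OVEN_W)   # ≈ 1.2 s, "a little over a second"
bulb_s = seconds_at(BULB_W)   # ≈ 122 s, roughly two minutes
print(f"oven: {oven_s:.2f} s, bulb: {bulb_s / 60:.1f} min")
```

Under these assumed wattages the arithmetic lands where the essay says it should: just over a second for the oven, a couple of minutes for the bulb.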

The pace of technological progress will continue to accelerate, and people will still be able to adapt to almost anything. There will be very difficult aspects, such as the disappearance of entire classes of jobs, but on the other hand, the world will become so much richer, so rapidly, that we will be able to seriously consider new political ideas that we couldn’t before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have yielded a great result.

 

If history is any guide, we will discover new things to do and new things to desire, and we will assimilate new tools quickly (the shift in work after the Industrial Revolution is a good recent example).

 

Expectations will rise, but capabilities will increase just as rapidly, and we will all achieve better things.

 

We will build increasingly wonderful things for each other. Humans have an important and curious long-term advantage over AI: we are programmed to care about others and what they think and do, and we don’t care much about machines.

 

A subsistence farmer from a thousand years ago would look at what many of us do and say our jobs are meaningless, thinking we're just wasting time playing games, since we have plenty of food and unimaginable luxuries. I hope we will look at the jobs of a thousand years from now and think they're very meaningless, and I have no doubt that those who do them will find them incredibly important and fulfilling.

 

The pace at which new wonders will be achieved will be immense. It’s difficult to even imagine today what we will have discovered by 2035; perhaps we’ll go from solving high-energy physics one year to initiating space colonization the next; or from a major breakthrough in materials science one year to true high-bandwidth brain-computer interfaces the following year. Many people will choose to live their lives much the same, but at least some will likely decide to «go online.»


Looking ahead, this sounds difficult to grasp. But experiencing it will probably be breathtaking, yet manageable.

From a relativistic perspective, the singularity happens gradually, and the merge happens slowly. We are ascending the long arc of exponential technological progress; it always looks vertical looking forward and flat looking backward, but it's a smooth curve. (Remember 2020, and how it would have sounded to expect something close to general AI by 2025, compared to what the last five years have actually been like.) There are serious challenges to face, along with enormous advantages. We need to solve the safety problems, both technical and social, but it's also crucial to widely distribute access to superintelligence, given the economic implications. The best way forward might be something like this:

Solve the alignment problem, meaning we can firmly ensure that AI systems learn and act according to what we collectively want in the long run (social media is an example of misaligned AI; the algorithms that power it are amazing at keeping you scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preferences).

 

Then, focus on making superintelligence affordable, widely available, and not overly concentrated in any one person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness collective will and wisdom, even if we make many mistakes and some things go very wrong, we will learn and adapt quickly and be able to use this technology to maximize the advantages and minimize the disadvantages. Giving users a great deal of freedom, within broad limits that society must decide, seems very important. The sooner we start a debate about what these general limits are and how we define collective alignment, the better.

 

We (the entire industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas. For a long time, techies in the startup industry have scoffed at «idea guys»—people who had an idea and were looking for a team to develop it. Now it seems to me that they are about to have their moment of glory.

OpenAI is many things now, but first and foremost, we are a superintelligence research company. We have a lot of work ahead of us, but most of the path is already illuminated, and the dark areas are rapidly dissipating. We feel incredibly grateful to be able to do what we do.

Intelligence that is too cheap to measure is within reach. It might sound crazy, but if we had told you in 2020 that we would be where we are today, it probably sounded more far-fetched than our current predictions for 2030.

 

Hopefully, we can scale smoothly, exponentially, and without incident through superintelligence.

 

 

Machines of Loving Grace

The following contribution comes from the website of Dario Amodei, CEO of Anthropic, a public benefit corporation dedicated to developing controllable, interpretable, and safe AI systems.

Previously, Dario was Vice President of Research at OpenAI, where he led the development of large-scale linguistic models such as GPT-2 and GPT-3. He is also a co-inventor of human feedback-based reinforcement learning. Before joining OpenAI, he worked at Google Brain as a senior research scientist.

Dario Amodei is the author of How AI Could Transform the World for the Better. In his words:

I think and talk a lot about the risks of powerful AI. The company I lead, Anthropic, thoroughly researches how to reduce these risks.

 

 

Because of this, people sometimes conclude that I am pessimistic or catastrophist, thinking that AI will be mostly bad or dangerous. I don’t believe that at all. In fact, one of the main reasons I focus on the risks is that they are the only thing standing between us and what I consider a fundamentally positive future. I think most people underestimate how radical the benefits of AI could be, just as they underestimate how serious its risks could be.

 

In this essay, I attempt to outline what those benefits might look like: what a world with powerful AI would look like if everything goes well. Of course, no one can predict the future with certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than previous technological changes, so this will inevitably involve some conjecture.

 

But my aim is, at the very least, to offer well-founded and useful conjectures that capture an idea of what will happen, even if most of the details turn out to be wrong. I include many details primarily because I believe that a concrete vision contributes more to advancing the debate than a narrow, abstract one.

 

First, however, I wanted to briefly explain why Anthropic and I haven’t talked much about the advantages of powerful AI, and why we will likely continue to talk much more about the risks. In particular, I’ve made this decision out of a desire to:


Maximize leverage

The basic development of AI technology and many (but not all) of its benefits seems inevitable (unless risks derail everything) and is fundamentally driven by powerful market forces.

 

On the other hand, risks are not predetermined, and our actions can considerably change their probability.

 

Avoid the perception of propaganda

AI companies talking about all their incredible benefits can seem like propaganda, or as if they're trying to distract from the downsides. I also believe that, on principle, it's bad for the soul to spend too much time «talking your own book.»

 

Avoid Grandiosity

I am often put off by the way many public figures in the AI risk sphere (not to mention leaders of AI companies) talk about the post-AGI world as if it were their mission to bring it about single-handedly, like a prophet leading their people to salvation. I think it is dangerous to view companies as unilaterally shaping the world, and dangerous to consider practical technological goals in essentially religious terms.

Avoid the “Science Fiction” Baggage

While I believe most people underestimate the advantages of powerful AI, the small community that does discuss radical AI futures often does so in an excessively “science fiction” tone (including, for example, uploaded minds, space exploration, or a general cyberpunk slant).

I think this makes people take the claims less seriously and imbues them with a kind of unreality. To be clear, the question is not whether the technologies described are possible or probable (the main essay discusses this in great detail); it's more that the «vibe» connotatively introduces a lot of cultural baggage and tacit assumptions about what kind of future is desirable, how various social problems will develop, and so on.

 

The result often ends up seeming like a fantasy for a limited subculture, while being unpleasant for most people.

 

 

 

How Generative AI Is Quietly Transforming the Way We Think, Speak, and Imagine

The following contribution comes from The Global Generative AI Award website, which defines itself as follows:

The Global Generative AI Award is a prestigious recognition program that celebrates excellence, innovation, and impact in the development and application of generative AI.

Our mission is to identify and recognize the organizations and individuals who drive and use generative AI technology, harnessing its potential to transform diverse sectors, improve lives, and create a better future for all.

Author: the Global Generative AI Award team.

 

 

Artificial intelligence is becoming the new cognitive shortcut. It writes essays, solves problems, and generates fluent text with a speed no human can match.

 

But the real disruption lies not in productivity, but in how this technology can transform the architecture of human thought and the very language we use to understand the world.

 

We often talk about how AI will transform economies, jobs, and industries. We talk much less about how it could transform the mind.


The Risk Behind Comfort

Psychology and neuroscience offer clues. Humans adapt their thinking to the tools they use. When people rely excessively on GPS, their ability to form mental maps weakens.

London taxi drivers trained their hippocampus (the brain’s spatial memory center) for years by memorizing thousands of streets. Today, satellite navigation has replaced that cognitive effort.

Generative AI further fuels this trend. If a tool can produce instant, polished, and fluent language, we could gradually detach ourselves from the difficulty of thinking in words. The danger is subtle. Language is not a passive container for thoughts. It is the medium through which thought is organized.

Lev Vygotsky's classic work illustrates this clearly. Patients with aphasia were unable to repeat sentences like «snow is black»; their minds resisted separating objects from the words that represented them. Vygotsky argued that creative thinking depends on our ability to use language freely, not as a script, but as a tool for exploring meaning.

When AI starts providing predefined expressions, prefabricated sentences, and idealized paragraphs, that creative link can weaken.

 

The Silent Erosion of Cognitive Effort

Education shows how quickly this erosion can occur. Students are increasingly using AI to summarize books, write essays, and complete assignments. Many produce impeccable prose but demonstrate little understanding when questioned. AI gives them «perfect answers» but spares them the effort needed to truly grasp a concept.

Research is addressing these concerns.

A 2024 systematic review revealed that over-reliance on AI reduced cognitive engagement. Another study of 285 university students in Pakistan and China found that AI tools negatively impacted decision-making and reduced motivation to use memory, analytical reasoning, or critical thinking.

The pattern is clear. When a tool manages the cognitive load, the brain eventually stops exercising those muscles.

Language Decay in the Digital Age

Neurolinguists have long documented language attrition, often called language decay: the gradual loss of linguistic ability when a language is not actively used. Michel Paradis defines it simply as the result of a lack of long-term stimulation.

Generative AI introduces a similar risk at a structural level. If language becomes something we consume rather than something we construct, our relationship with words weakens. We stop seeking expression. We begin to select from what the machine offers.

 

This shift not only changes how we speak, but also how we imagine.

Why this matters for society: Language and thought co-evolved. Through language, a child transforms raw sensations into meaning. Through abstraction, a teenager learns to reflect, analyze, and anticipate. Mature thought is based on the ability to construct concepts using words.

When AI begins to dictate the form of language, we risk building a culture of immediacy, emotion without depth, and reaction without reflection. Instead of being authors of original thought, we become editors of recycled fragments.

 

The implications extend to politics and public discourse.

Language defines the limits of the public imagination. Whoever controls the linguistic infrastructure (platforms, algorithms, generative models) indirectly shapes what people can express, debate, or conceive.

If political language is automated, democratic dialogue risks crumbling into algorithmic slogans generated by no one and owned by no one.

AI is not the enemy; passivity is.

Generative AI can be a powerful ally when used by people who already have a deep and reflective relationship with language. In those hands, AI becomes an extension of thought, not a substitute. But the danger lies in completely abandoning cognitive effort.

Preserving the integrity of thought requires practice in returning language to its human roots: the laborious, sometimes confusing, process of searching for the right word. The discipline of articulating an idea instead of externalizing it. The joy of shaping meaning with one’s own voice.

 

Reclaiming the Future of Thought

AI will not stop. Nor should it. But we must avoid externalizing our minds for convenience. What must be defended is the beauty and complexity of verbal thought, the freedom to imagine, conceive, and articulate the world in our own terms.

 

If we want a future where human creativity remains important, we must keep language alive as a practice, not a product. The future of thought depends on it.

 

 

 

A New National Purpose: AI Promises a World-Leading Future for Britain

The following contribution is from the Tony Blair Institute for Global Change website.

The authors are Pete Furlong, Melanie Garson, Kirsty Innes, Alexander Iosad, Oliver Large, John-Clark Levin, and Kevin Zandermann.

 

 

Our “Britain’s Future” initiative sets out a policy agenda for a new era of invention and innovation, based on radical yet practical ideas and genuine reforms that embrace the technological revolution. The solutions developed by our experts will transform public services and lead to a greener, healthier, and more prosperous UK.

 

A New National Purpose: AI Promises a World-Leading Future for Britain is a joint report by Tony Blair and William Hague.

 

Foreword

In our joint paper, “A New National Purpose: Innovation Can Drive Britain’s Future,” published in February 2023, we outlined how the UK could be at the forefront of scientific and technological advancements. This follow-up report describes what the UK will need to do to become a global leader in the safe and successful development of artificial intelligence (AI)—a matter so urgent and important that our response is likely to determine Britain’s future.

 

This global leadership will require a significant acceleration and scale-up of initiatives already announced by the British government. For example, we call for the planned increase in UK computing capacity to be ten times greater than currently projected, and for the government to achieve exascale capability within a year, rather than by 2026. We argue that major spending commitments should be reviewed to redirect funds toward science and technology infrastructure and to provide the talent and research programs needed to be a global leader in AI.

 

Our report offers a comprehensive plan for restructuring the machinery of government, including how to advise ministers; how to bring the necessary expertise into government; how to ensure close collaboration between the public and private sectors; how to insist that research and regulation be developed jointly and swiftly; and how to build sovereign capabilities that will be vital for providing effective guidance and regulation. We renew our call for an Advanced Public Procurement Agency to foster innovation, outline how training and education could be modernized, and advocate for urgent work on using AI to improve public services.

To put the UK at the forefront of global thinking on AI, we call for a national laboratory to collaborate with the private sector and other countries on its safe development. We suggest how the UK government, in collaboration with the US and other allies, could push for a new UN framework on urgent safeguards. We outline the steps needed to address the immediate threat of widespread disinformation. And we explain how the UK could, in collaboration with the European Union, develop a regulatory model aligned with US standards that would be very attractive to startups, attracting talent and investment to the country.

 

We present ideas for “Disruptive Innovation Labs” and for the initial work demonstrating how AI can transform the national infrastructure. In the longer term, we offer reflections on training and lifelong learning, topics that will be of paramount importance for future economic prosperity. Society is on the verge of radical transformation, requiring a more strategic state and a fundamental shift in our planning for the future. These ideas aim to help all political parties find the best way forward, with the necessary speed and sense of priority, in a period of dramatic change and opportunity that has already begun.

 

Tony Blair and William Hague

 

Chapter 2

 

Executive Summary

Artificial intelligence (AI) is the most important technology of our generation.

Getting policy right on this issue is critical and could determine Britain’s future. The potential opportunities are enormous: reshaping the state, transforming the nature of science, and enhancing citizens’ capabilities.

 

But the risks are also profound, and now is the time to shape this technology for the better.

 

For the UK, this task is urgent. The speed of change in AI underscores everything outlined in our first New National Purpose report [1], which called for a radical new political agenda and a restructuring of the state, with science and technology at its core.

 

How AI will develop is still unclear, but it is already permeating the economy and society. This will accelerate in the coming years.

The fact is that our institutions are not set up to deal with science and technology, particularly its exponential growth. It is absolutely vital that this changes.

The government is beginning to recognize some elements of this challenge. However, this is a technology with an impact similar to that of the internal combustion engine, electricity, and the internet, so gradualism will not suffice.

 

First, the state must reorient itself towards this challenge. Major changes are needed in how government is organized, collaborates with the private sector, promotes research, leverages expertise, and receives advice.

Scientific progress is the greatest driver of overall progress; it is incredibly exciting to think about how much more we could have. In many respects, ChatGPT is already more capable than any human being who has ever lived. Hundreds of millions of people rely on it daily for increasingly important tasks.

 

 

Recommendations for achieving this include:

Securing decades-long investment in science and technology infrastructure, as well as in talent and research programs, by reallocating significant capital investment priorities for this purpose.

Boosting the effectiveness of Number 10 by dissolving the AI Council and strengthening the Foundational Model Working Group so that it reports directly to the Prime Minister.

Refining the Artificial Intelligence Office to provide better forecasting capabilities and greater agility for government to address technological change.

 

Second, the UK can become a leader in developing safe, reliable, and cutting-edge AI, in collaboration with its allies. The country has an opportunity to create effective regulations that go far beyond existing proposals but are also more attractive to talent and businesses than the approach taken by the European Union. Recommendations for achieving this include:

Creating Sentinel, a national laboratory focused on research and testing of safe AI, with the aim of becoming the “brain” of an AI regulator both in the UK and internationally. Sentinel would acknowledge that effective regulation and oversight are, and likely will remain, an ongoing research issue, requiring an exceptionally close combination of research and regulation.

 

Finally, the UK can pioneer the deployment and use of this technology in the real world, building next-generation businesses and creating a strategic 21st-century state.

 

 

Recommendations for achieving this include:

Launching major AI talent programs, including international recruitment and the creation of scholarly fellowships that allow top researchers outside of AI to learn about AI, as well as leading AI researchers to learn about non-AI fields and exchange ideas.

Requiring a tiered access approach to computing provision, under which access to larger amounts of computing power comes with additional requirements to demonstrate responsible use.

Requiring generative AI companies to label the synthetic media they produce, and social media platforms to remove unlabeled deepfakes.

Building an infrastructure for the AI era, including computing capacity, and transforming data into a public asset by creating high-value datasets that serve as a public good.

It is critical to engage the public in all these developments to ensure that AI development is responsible and to empower people with the skills and opportunities to adapt. The UK has both the responsibility and the opportunity to lead the world in establishing the framework for safe AI.

 

Chapter 3

 

Introduction

In less than a decade, AI has gone from science fiction to technological reality. Our first New National Purpose report [2] argued for the need for a ‘strategic state’ that can fully harness the power of science and technology to improve people’s lives. This must be the UK’s primary ambition, as those nations that can effectively restructure their states around technology will be the ones that define the future.

 

While AI has accelerated the need for this transformation, several aspects pose challenges to conventional approaches to policy and governance.

 

The first stems from the speed of change. AI is developing at a pace that surprises even some of its pioneers [3]. Relatively simple algorithms, powered by vast amounts of computing power and data, are producing models that can already outperform human thought on a variety of cognitive tasks.

 

The second is the unpredictability of progress. For example, five years ago the consensus was that the creative industries would be among the last to be automated, but generative AI models have begun to change this. Unlike many previous technological advances, progress in AI is not driven by a general theory. Instead, it develops primarily through experimentation and modification. This means that, in this era of deep learning-based AI, creators are unaware of the full extent of its capabilities.

 

The third is the expertise required to understand and develop AI. The expertise needed for the global development of AI resides in a small, highly sought-after group of people, mostly based in private labs and certainly not in the Whitehall system.

 

The fourth problem is the scope and scale of AI’s potential. Machines capable of outperforming humans will have capabilities that are still unimaginable. Hence the fifth challenge, and the most significant from a social perspective: the speed and scale of change in the functioning and organization of societies. The automation of cognitive work implies a profound technological shift in how tasks are performed, knowledge is produced, and information is communicated. If this materializes, fundamental aspects of how society functions will need to adapt rapidly to a world in which a primary source of economic contribution and purpose is being automated.

 

In short, the unpredictable development of AI, its pace of change, and its growing power mean that its arrival could represent the greatest political challenge ever faced, one for which existing state approaches and channels are ill-suited.

 

This information has been prepared by our editorial staff.