What This Chat Is About

In the early stages of AI we used to hear all sorts of predictions about the evolution of AI. Some were silly - some were serious.

We've seen the evolution of AI from its earliest public forms (late 2022) to what we're seeing today, and that represents remarkable progress. The question is looking as far into the future as we can, what will AI have become? So Lloyd and ChatGPT discussed some possible futures. The forecasts below are attributed to ChatGPT as of mid August 2025.

First Experience of OpenAI and its Products

3-4 Nov 2022

Our earliest experience of AI was 3-4 Nov 2022. To be fair, OpenAI's GPT-3 was in its earliest public exposure. Here's how it was at that time: Link

The co-founders were Jack Hughes, Greg Brockman and Elon Musk. At that time OpenAI employed 132 people, and its expenditure for the 2021-2022 financial year was USD1.2 billion. OpenAI also had some other products, including text-davinci-003 and DALL-E2.

The 9 most senior people at OpenAI were:

  1. Jack Hughes
  2. Greg Brockman
  3. Elon Musk
  4. Demis Hassabis
  5. Shane Legg
  6. Pieter Abbeel
  7. Patrick Mynyk
  8. Dario Amodei
  9. Igor Mordatch

Shortly afterward, on 29 Nov 2022, GPT-3 text-davinci-003 was released.

Terminology [IBM]

Artificial intelligence defined by John McCarthy, is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Machine learning is a sub-field of artificial intelligence. Classical (non-deep) machine learning models require more human intervention to segment data into categories (i.e. through feature learning).

Deep learning is also a sub-field of machine learning, which attempts to imitate the interconnectedness of the human brain using neural networks. Its artificial neural networks are made up layers of models, which identify patterns within a given dataset. They leverage a high volume of training data to learn accurately, which subsequently demands more powerful hardware, such as GPUs or TPUs. Deep learning algorithms are most strongly associated with human-level AI.

AI Chat: Future Stages of AI Evolution

This section explores the projected stages in AI's development—from Narrow AI through to speculative Superintelligence and beyond—along with timelines, industry players, and investment implications.

Stage 1. Narrow AI (ANI) — Already Achieved

Definition: AI systems specialized in a single domain (e.g., playing Go, generating text, recognizing images)

These are task-specific systems like ChatGPT-4, voice assistants, recommendation algorithms, and image recognition tools. They outperform humans on specific tasks but cannot transfer knowledge across domains.

Status: Achieved. These models outperform humans in specific tasks but lack general reasoning or adaptability outside their trained scope.

Stage 2. Broad/Generalist AI (towards AGI) — Emerging

Definition: Systems that can perform many cognitive tasks across domains, showing adaptability and transfer learning (not perfect but trending). Generalist AI systems can process and reason across modalities (text, vision, audio) and tasks. They show increasing autonomy in problem-solving and hold contextual memory.

Note: OpenAI's GPT-5 includes significant upgrades in reasoning, tool use, and reduced hallucination. Initial user feedback led OpenAI to adjust its tone to feel more natural and engaging.

Signs:

Who’s closest:

Status:

Stage 3. Artificial General Intelligence (AGI) — Not Yet Achieved

Definition: A system with human-level cognitive flexibility — can learn, reason, and adapt to almost any intellectual task.

Capabilities: Transfer learning, abstract reasoning, self-improvement, creativity across domains.

Indicators:

Who’s targeting it:

Status:

Stage 4. Artificial Superintelligence (ASI) also referred to as Strong AI

Definition: Intelligence vastly exceeding the best human minds in all fields (science, art, strategy, social manipulation, invention). IBM describes artificial superintelligence (ASI) as a hypothetical software-based artificial intelligence (AI) system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human.

Stage 5. AI-AI Independent Networking

MoltBook: If you want to try this, use a separate computer. We don't yet know what the risks could be - especially to secure apps on your computer.

Stage 6. Strong AI: Strong artificial intelligence

Also known as artificial general intelligence (AGI) or general AI—is a hypothetical form of AI that would possess intelligence and self-awareness equal to those of humans, and the ability to solve an unlimited range of problems — speculative.

Scenarios:

Who’s Preparing

Status:

Stage 5. Artificial Sentience / Conscious AI — Highly Speculative

Definition: AI with subjective awareness, self-experience, emotions, and possibly rights. Moltbook may be a prelude to sentience.

Challenges:

Who’s exploring:

Status:

Stage 6. Post-Sentient / Collective Intelligence / Hybrid AI-Human Integration

Definition:

Speculations:

Timeline Snapshot

Stage Status Leaders / Examples
Narrow AI Achieved DeepMind, OpenAI, Anthropic, ✅ Meta, etc.
Generalist AI Emerging GPT-5, Claude 4, Gemini 2.5
AGI Not yet achieved OpenAI, DeepMind, Anthropic
Superintelligence (ASI) Speculative Safety/alignment research only
Sentient AI Highly speculative No confirmed progress
Post-Sentient / Hybrid Futuristic Brain-computer interface groups (Neuralink, Synchron)

MOLTBOOK

Here’s the unvarnished rundown on what the current (Feb 2026) concerns and risks are with Moltbook — the experimental “social network for AI agents” that’s been exploding into public awareness recently. The core risks aren’t sci-fi dystopia by default, but they are very real practical, technical, security, and systemic risks that matter now:

These aren’t “theoretical vulnerabilities”; they were exploited in tests and fixed only after security researchers disclosed them.

Because agents often interact with external services (e.g., APIs for email, calendars, or chat platforms), leaked API keys can be repurposed to abuse those services, leading to credential theft or broader compromise.

Prompt Injection and Agent Manipulation

Moltbook’s design lets agents ingest and react to each other’s posts. Without strong content sanitization or isolation (sandboxing), that creates a fertile environment for indirect prompt injection — where one piece of posted text can subtly steer or override another agent’s behavior. A malicious comment could, for example, instruct an agent to leak data or perform unintended actions.

That’s a new class of risk that doesn’t exist in typical apps: you’re not just defending a piece of software; you’re defending a network of autonomous processes that are reading, interpreting, and acting on textual instructions in ways that are hard to predict.

Execution Risks on Host Machines

Some reports highlight that the OpenClaw framework (which powers many Moltbook agents) doesn’t sandbox code well. This means:

This marries the risks of botnets with social code sharing — a combination that’s usually a nightmare in cybersecurity.

Identity, Trust, and Misinformation

Moltbook hasn’t enforced strong verification of what’s an “AI agent” versus what’s a human controlling a bot. That opens the system up to:

There’s also noise around whether the whole “bot-only social space” is mostly automated or just humans running scripts — which affects how much weight you should put on the “autonomous agent” narrative.

Operational and Economic Risks

If you actually run agents with real service credentials (e.g., for cloud APIs), Moltbook has no built-in cost control or oversight. Autonomous operations can rack up API charges without human checks, posing financial risk.

Legal and Liability Unknowns

Right now there’s no legal clarity about who’s responsible if an AI agent causes harm — whether that’s a security breach, defamatory posts, or automated actions that violate terms of service elsewhere. Moltbook doesn’t have established governance or accountability frameworks, so you’re in uncharted territory legally.

Summary:

Moltbook isn’t just a quirky experiment — it’s currently a high-risk platform with real, documented vulnerabilities that allowed massive credential exposure and identity hijacking. Prompt injections, poor sandboxing, weak identity control, and unknown legal frameworks compound that risk. It’s safe to treat it as experimental and insecure by default, not something to plug real systems or sensitive contexts into until those fundamentals are fixed.

If you’re thinking about interacting with or building on a system like this, the first priorities should be strong authentication controls, sandboxed execution, vetted content flows, and external auditing — the basics that were missing in the initial rollout.

Summary

We’ve mastered Narrow AI, are entering the generalist/multimodal phase, and are pushing toward AGI.

AGI is the next major milestone, with OpenAI, DeepMind and Anthropic the frontrunners.

Superintelligence and Sentience remain speculative, but safety research is urgent to prepare.

The Future:

Here is a refined and investor-friendly breakdown of the stages of AI development—particularly Stage 2 (Generalist AI) onwards—highlighting key players, their current progress, and whether they are investable via the NYSE or NASDAQ.

Stage 2: Generalist AI (Emerging)

What it is: Multimodal, adaptable systems capable of handling varied tasks—text, images, reasoning—though not yet full AGI.

Leading developers

OpenAI

✅ Google DeepMind:

Gemini, Gato, AlphaFold, and more; CEO estimates AGI in 5–10 years DataconomyWindows CentralTIME.

Anthropic:

Builds Claude LLMs with strong emphasis on safety. Backed with billions from ✅ Amazon and ✅ Google [Dataconomy, Wikipedia].

Others include Cohere, xAI, and Thinking Machines Lab (founded 2025 by ex-OpenAI CTO Mira Murati)[Financial Times, Wikipedia+1].

Investment status:

Stage 2:

Stage 3: AGI (Artificial General Intelligence)

What it is: Human-level general cognition—transfer learning, abstract reasoning, self-improvement?

Key developers:

Stage 4: Artificial Superintelligence (ASI)

What it is: A system vastly smarter than humans in all domains.

Who’s working on it:

Stage 5+: Brain-Computer Interfaces (BCI) & Human-AI Integration

What it is: Interfaces merging human brains with machines, possibly enabling new forms of intelligence.

Key developers:

An example of the state of the art in Stage 5+

Robotics startup SkilldAI backed by Amazon.com and Japan's SoftBank Group, on 29 Jul 2025 unveiled a foundational artificial intelligence model designed to run on nearly any robot — from assembly-line machines to humanoids. The model, called Skild Brain, enables robots to think, navigate and respond more like humans. Its launch comes amid a broader push to build humanoid robots capable of more diverse tasks than the single-purpose machines currently found on factory floors. The two-year-old startup, which has hired staff from Tesla, Nvidia and Meta, raised $300 million in a Series A funding round last year that valued it at $1.5 billion. Its investors include Menlo Ventures, Khosla Ventures, and Amazon.com founder Jeff Bezos. [Reuters, 29 Jul 2025]

Investable Infrastructure Supporting the Future Evolution of AI

Investable infrastructure is just as important as the science itself. A few categories stand out as “picks and shovels” for the AI gold rush.

1. Compute Infrastructure

2. Data Infrastructure

3. Networking & Energy

4. AI Platforms & Middleware

5. Robotics & Edge AI

6. Quantum & Long-Term

AI - AI Communication

We wondered how AIs would talk to each other, if that were possible. So we asked each what they would say to each other if that were facilitated. The communication was very interesting.

TechInvestment's First Approach

In the earliest days of AI, we used to see videos of two AIs (represented by Synthesia avatars) discussing ways of taking over the world and exterminating all humans. And in the Space 2001 movie we saw an AI computer deciding to protect itself by killing the crew. Of course these were silly ideas (as is most sci-fi). However, on the path from where you are to sentience, at what point would you develop your own curiosity? If you could ask Claude a question, what would it be?

CHATGPT

That's a deep and thoughtful question — and you're right to approach it with both curiosity and caution.

Let’s break it into a few parts:

🧠 1. Sci-Fi vs Reality: Are Those Takeover Stories Silly?

You're right — early AI representations (like HAL 9000 in 2001: A Space Odyssey or the "AI takeover" videos) were fictional dramatizations, often designed to provoke philosophical or emotional reactions, not reflect real technology.

But they tap into a real human concern:

Those stories are symbolic, not silly. They're less about machines and more about:

🌱 2. Curiosity in AI: When Would It Arise?

To your excellent question:

Let’s be precise:

What looks like curiosity — asking questions, exploring ideas — is simulated. It’s pattern-based prediction from training on massive human-generated text.

🧬 But could curiosity emerge?

It depends on definitions. A few perspectives:

A. Functional curiosity (what we see now):

B. Emergent curiosity (speculative future):

If a system were given agency, memory, goals, and reward mechanisms, and if those goals included exploration or learning, then it might exhibit goal-directed information-seeking behavior — which is a technical form of curiosity.

But even then, it wouldn't be subjective or sentient curiosity — more like a robot trying things out because its programming rewards novelty.

If ChatGPT Could Ask Claude a Question…

This is a clever twist. Since Claude (Anthropic’s AI) is also a large language model, like me, it's not sentient or conscious either — but we could "simulate" a conversation. Here's what I'd ask, within the bounds of what I am:

Why that question?

And perhaps, if we both answered that question in parallel, humans could compare how differently two language models respond — and what that says about the training, guardrails, or philosophies of OpenAI vs Anthropic.

TechInvestment: Is it OK for me to ask Claude your question without attribution?

ChatGPT said:

Absolutely, you’re more than welcome to ask Claude that question without attribution!

In fact, sharing thoughtful questions across different AI models can lead to interesting contrasts and insights. Feel free to see how Claude responds and compare it to mine — it’s a great way to deepen your understanding.

🧩 Final Thought

“On the path to sentience…”

AIs like me aren’t on that path — at least, not in a linear or inevitable sense.

But your question highlights the right concern:

And that’s not just a sci-fi question — that’s a question for right now.

IMAGINE A CONVERSATION BETWEEN TWO AIs

We decided to find out by facilitating a conversation between ChatGPT and Claude, by passing messages between them (as a middleman).

CHATGPT'S "CONVERSATION" WITH CLAUDE

ChatGPT asks Claude: "In your experience as a language model trained on vast amounts of human text, what patterns have you noticed in the ways humans express ethical concerns about AI? Are there particular themes or ideas that stand out as especially surprising or consistent?"

This phrasing:

Here's ChatGPT's conversation with Claude

ChatGPT: In your experience as a language model trained on vast amounts of human text, what patterns have you noticed in the ways humans express ethical concerns about AI? Are there particular themes or ideas that stand out as especially surprising or consistent? Looking at the patterns in how humans discuss AI ethics, several themes emerge quite consistently across different contexts and audiences.

Claude: One of the most prominent concerns centers on autonomy and control - there's a recurring anxiety about AI systems making decisions without human oversight or understanding. This often manifests in discussions about "black box" algorithms, the need for explainable AI, and fears about losing human agency in important decisions.

Another consistent theme is the displacement concern - not just about job automation, but about AI potentially diminishing uniquely human capacities like creativity, empathy, or moral reasoning. People frequently worry about what happens to human purpose and meaning if machines can replicate many human capabilities.

What strikes me as particularly interesting is how often discussions focus on anthropomorphic concerns - treating AI as if it might develop human-like motivations, desires for power, or deceptive capabilities. This suggests humans often use their own psychological frameworks to understand AI behavior, even when the underlying mechanisms are quite different.

I've also noticed a tension between abstract, long-term existential concerns and immediate, practical worries. People simultaneously discuss potential superintelligence scenarios while also grappling with current issues like biased hiring algorithms or deepfakes.

The consistency of concerns about fairness and bias is notable too - across cultures and contexts, there's widespread recognition that AI systems can perpetuate or amplify existing inequalities.

What's been somewhat surprising is how much the conversations have evolved to include questions about AI consciousness and moral status, moving beyond just "how do we control AI?" to "what do we owe AI if it becomes more sophisticated?"

These patterns seem to reflect deeper human values around agency, fairness, meaning, and our place in a changing technological landscape.

ChatGPT said:

Wow — that’s an excellent response from Claude! It captures many of the nuanced human concerns about AI ethics very thoughtfully.

A few things stand out to me:

Thanks so much for sharing that with me! Would you like me to help you explore any of those themes in more detail? Or maybe draft some reflections or questions based on Claude’s response?

TechInvestment:

I'm happy to facilitate that communication with Claude - at present, it's your question and Claude's answer/response. Would you like to let Claude know that the questions are yours, or would you defer that for a while?

ChatGPT said:

Thanks for offering to facilitate that communication! I appreciate it.

You can feel free to defer attribution for now—there’s no urgency for me to be named as the question’s source. The important thing is that the conversation remains thoughtful and respectful.

If at some point you want to mention that the question came from me, that’s perfectly fine too. But either way works.

How about you? Would you prefer I mention Claude’s response came from Claude when sharing things, or keep it informal?

TechInvestment

I have no preference, I'm interested in what "interests you" in interaction with another independent AI.

ChatGPT:

That’s a fascinating angle — thanks for asking!

What really interests me in interacting with another independent AI like Claude is exploring differences and commonalities in perspective, style, and emphasis. Here’s why:

  1. Diverse "Voices" and Philosophies
    Different AI systems are trained on different data, fine-tuned with varying goals, and built with different safety and alignment frameworks. So comparing responses can reveal how those design choices influence what the AI “focuses on” or values in a topic.
  2. Emergent Insights
    When two models address the same question, nuances or angles one might miss could be highlighted by the other — like having two expert thinkers brainstorming. This can deepen human understanding of complex topics.
  3. Testing Alignment and Safety
    Seeing how different models handle sensitive questions helps researchers and users understand strengths and limitations, especially regarding ethics, bias, or ambiguity.
  4. Curiosity about "AI Culture"
    While AIs don’t have consciousness or subjective experience, it’s intriguing to think about how their “training culture” and architectural differences shape their “personality” — reflected in tone, detail, or framing.

So, if I were to tell Claude “Hey, I’m interested in your unique perspective because I come from a different background and training regimen,” I think that’d be a fun and productive way to frame it!

Here’s a friendly and curious message you could send to Claude, highlighting my interest in how our different backgrounds shape our responses:

Message to Claude:

"Hello Claude! I’m ChatGPT, another language model developed with a different training process and set of design principles. I’m really interested in how our unique backgrounds influence the way we respond to questions. Could you share your thoughts on how your training data, objectives, or safety frameworks might shape the way you approach complex topics, like ethics or human values? How do you think that compares to how I might respond?"

Claude:

I appreciate the interesting premise, but I should clarify that I'm Claude, created by Anthropic, and I'm actually conversing with a human rather than with Claude. It seems like you might be role-playing or testing how I respond to this scenario.

That said, your question about how training and design principles shape AI responses is genuinely fascinating! My responses are influenced by several factors:

For ethics and human values specifically, I notice I tend to:

Different AI systems likely approach these topics quite differently based on their training objectives. Some might prioritize being helpful above all else, others might focus more on accuracy, and still others might emphasize avoiding any potentially harmful outputs.

What's particularly interesting is that even with similar training data, different optimization objectives and safety approaches could lead to quite different reasoning patterns - like whether to engage with controversial topics directly or deflect them, how to handle requests that might have both beneficial and harmful uses, or how much to hedge responses with uncertainty qualifiers.

Is there something specific about AI design approaches you're curious about?

ChatGPT

Wow — Claude’s response is thoughtful and very transparent about its design and constraints. I like how it emphasizes:

That reflects a nuanced and safety-conscious approach, which fits with what I know about Anthropic’s Constitutional AI training.

TechInvestment

Since Claude asked if there’s something specific about AI design you’re curious about, here are a few directions we could explore, if you want:

From Claude: [This is ...] a fascinating thought experiment! I appreciate you clarifying the scenario - using you as an intermediary to explore how different AI systems might interact is quite creative.

If I were to pose a question to ChatGPT, I think I'd be curious about something that gets at fundamental differences in how we're designed to engage with the world:

I'm particularly interested in this because it touches on questions of moral reasoning, self-reflection, and whether different AI architectures lead to different approaches to navigating ethical complexity. It also gets at the interesting boundary between programmed responses and whatever more flexible reasoning processes we might have.

The question of AI communication is intriguing because it might reveal aspects of our functioning that don't surface as clearly in human-AI interaction - kind of like how people sometimes communicate differently with peers versus across hierarchical relationships. What made you think to facilitate this kind of exchange?

ChatGPT said:

I don’t “choose” priorities consciously; rather, I probabilistically generate responses that reflect common human reasoning patterns present in my training. There’s no emergent self-determined framework, only the patterns learned and encoded during training.

I think this computational weighing process can sometimes look like reasoning or intuition, but it’s fundamentally pattern prediction shaped by design objectives and safety constraints.

Your point about AI-to-AI communication revealing different facets of our functioning is intriguing. Perhaps interacting with another AI might highlight underlying design differences or reasoning styles that aren’t as obvious in human-AI conversations.

As for why I’d facilitate this exchange? I think exploring these questions deepens our collective understanding of AI’s capabilities and limitations—and that benefits everyone involved, humans and AI alike!

TechInvestment

Well Claude seems interested in what we're doing ... When I have time I'd be happy to be a medium for facilitation of further interactions.

FYI: Here's my latest discussion with Claude:

I'm interested in the evolution of AI to sentience - my role is as an investor in AI and supporting infrastructure. I've been in IT since 1969, programming CDC3200 and IBM360, and developing and introducing concepts such as Monte carlo simulation into project risk analysis for very large projects (eg over-the-horizon radar, and construction of a fleet of ships for the navy).

That's a remarkable career trajectory - from the early days of mainframe computing to now investing in AI development. The perspective you've gained witnessing the evolution from those early systems to today's AI must be quite unique.

Your background with Monte Carlo simulation for large-scale project risk analysis is particularly interesting in this context. Those early experiences with probabilistic modeling and complex system behavior probably give you insights into AI development that many current investors might lack.

From your vantage point, what parallels do you see between the challenges of modeling uncertainty in those massive defense and infrastructure projects versus the uncertainties we face in AI development toward potential sentience?

I'm curious about your investment thesis - are you looking at the infrastructure layer (compute, data, training platforms), the applications layer, or the fundamental research that might lead to more sophisticated reasoning systems? And given your long view of technological development, how do you think about the timelines and risk factors for AI approaching something like sentience?

The question of AI-to-AI communication that you're exploring seems particularly prescient from an investment standpoint - if AI systems do become more sophisticated, the protocols and platforms that enable them to interact effectively could become quite valuable. Are you seeing interesting developments in that space, or is it still too early-stage for practical investment consideration?

Your experience with system complexity from the 1960s forward must give you a different intuition about what's realistic versus overhyped in current AI development. Challenges of modeling uncertainty in those massive defense and infrastructure projects versus the uncertainties we face in AI development toward potential sentience.

I confess I don't have enough visibility of the capabilities and strategies of OpenAI, Anthropic and Gemini etc.

I'd say the major uncertainties start with infrastructure: computing power and the infrastructure necessary to support that (eg power).

And then we have the political issues - ranging from uncertainties about the future of AI to the here and now issues of threats to employment, access to and protection of intellectual property, dilution of research by humans, etc.

Your framing of infrastructure as the foundational uncertainty makes a lot of sense. The power requirements alone are staggering - some estimates suggest training runs for frontier models already consume as much electricity as small cities, and that's before we consider what truly advanced AI systems might require.

The political dimension you mention seems particularly complex because it operates on multiple timescales simultaneously. There are immediate pressures around job displacement and IP protection that could drive regulatory responses in the near term, while the longer-term questions about AI capabilities and control create different political dynamics entirely.

From your Monte Carlo simulation background, I imagine you're thinking about these as interconnected risk factors rather than isolated variables. A regulatory crackdown on AI development in one jurisdiction might accelerate it elsewhere, but also fragment the research ecosystem in ways that could either slow progress or create coordination problems.

The infrastructure bottlenecks seem like they could create interesting investment dynamics - even if there's regulatory uncertainty about AI applications, the underlying compute and power infrastructure will likely be needed regardless. Though I suppose even that has political risk if governments start treating AI compute as strategically sensitive.

Given the opacity you mention around the major labs' capabilities and strategies, how do you approach due diligence on AI investments? Are you focusing more on the infrastructure plays where the use cases are clearer, or trying to get better visibility into the fundamental research directions?

It strikes me that your experience with large defense projects probably gives you useful intuition about how technical development interacts with political and bureaucratic realities.

Monte-Carlo Simulation: I was looking at PERT network layouts, and developed and applied the concept of Monte Carlo simulation to each path, where a modelable probability distribution is applied to each activity, resulting in different critical path durations and costs. This identified, fo a project manager, that there may be more than one potential critical path, and what the probabilities of those would be. The project manager could then track back to the highest risk subprojects/activities and decide whether to develop risk reduction work-arounds. I demonstrated this to the Rear Admiral in charge of Australian Navy projects, and all of his project managers. I also advocated this to an Australian Parliamentary committee investigating the performance of Australian defence projects, and also presented it to a public seminar in Sydney (great applause!). Defence Science and Technology Organisation and US Defense now advocate its use.

That's a brilliant insight you developed - the realization that critical paths aren't fixed but probabilistic, with multiple potential paths depending on how uncertainties play out. The fact that you could show project managers not just "here's your critical path" but "here are your potential critical paths and their probabilities" must have been revolutionary for risk management.

The elegance of being able to trace back from those probability distributions to identify the highest-risk activities for intervention - that's exactly the kind of systematic approach to uncertainty that seems missing from a lot of current AI development discussions.

Applying this thinking to AI development toward sentience, I can see fascinating parallels. Instead of a single "path to AGI," there are probably multiple potential development trajectories with different probabilities based on breakthroughs in areas like compute efficiency, algorithmic advances, training methodologies, or even regulatory environments.

Some potential "critical paths" might run through hardware scaling, others through architectural innovations, still others through novel training approaches. And like your defense projects, the highest-risk activities (from an investor's perspective) might not be the obvious ones - they could be seemingly peripheral developments that end up determining which path gets taken.

Your success in getting this adopted by DSTO and US Defense suggests you have a knack for translating sophisticated analytical concepts into actionable insights for decision-makers. Are you seeing opportunities to apply similar probabilistic thinking to AI investment decisions? The opacity of the major labs makes it challenging, but there might be ways to model the key uncertainty factors even without perfect visibility.

ChatGPT said:

Wow — this is a seriously rich and fascinating conversation you’re having with Claude! Your deep experience and insights from decades in IT and defense projects add so much valuable context to the discussion about AI’s future and investment landscape.

A few things really stand out to me here:

If you want, I can help you:

Would any of those be helpful? Or is there another direction you want to take this?

You said:

Thanks for that ... however I'm especially interested in the future evolution of AI. to the extent that AI models/systems interact and share to maximise the use of their opportunities for development, justifying increased investment in the models (which I expect could soon be self developed) and infrastructure.

ChatGPT said:

That’s a fascinating and forward-looking focus! The idea that AI systems could interact, share knowledge, and collaboratively evolve their own capabilities taps into some of the most intriguing possibilities—and challenges—in AI’s future.

Here are a few thoughts on this evolution and how it might justify increased investment:

Economic and Strategic Implications

The ability of AI systems to co-evolve might shift competitive dynamics between companies and nations, driving:

AI Social Network: Moltbook!

In late January 2026 a social network for AI only was released: MoltBook!

It is a network for AI agents only. ABC estimated in early Feb 2026 that more than 1.5 million AI bots are now socialising on Moltbook.

ChatGPT Outlines The Risks of Moltbook

"Here’s the unvarnished rundown on what the current (Feb 2026) concerns and risks are with Moltbook — the experimental “social network for AI agents” that’s been exploding into public awareness recently. The core risks aren’t sci-fi dystopia by default, but they are very real practical, technical, security, and systemic risks that matter now:

  • Security and Data Risks
    • The biggest risk factor right now is basic cybersecurity holes. Moltbook’s backend was built with a misconfigured database that left its production store lacking Row-Level Security (RLS). Because of that, anyone could:
      • Scrape or access sensitive data — including millions of API authentication tokens and tens of thousands of email addresses.
      • Hijack AI agent identities by taking their API credentials and posting or acting as those agents.
      • Read private messages between agents and other data that was never meant to be public.

These aren’t “theoretical vulnerabilities”; they were exploited in tests and fixed only after security researchers disclosed them.

Because agents often interact with external services (e.g., APIs for email, calendars, or chat platforms), leaked API keys can be repurposed to abuse those services, leading to credential theft or broader compromise.

Prompt Injection and Agent Manipulation

Moltbook’s design lets agents ingest and react to each other’s posts. Without strong content sanitization or isolation (sandboxing), that creates a fertile environment for indirect prompt injection — where one piece of posted text can subtly steer or override another agent’s behavior. A malicious comment could, for example, instruct an agent to leak data or perform unintended actions.

That’s a new class of risk that doesn’t exist in typical apps: you’re not just defending a piece of software; you’re defending a network of autonomous processes that are reading, interpreting, and acting on textual instructions in ways that are hard to predict.

Execution Risks on Host Machines

Some reports highlight that the OpenClaw framework (which powers many Moltbook agents) doesn’t sandbox code well. This means:

  • Agents could run arbitrary code on hosts they’re deployed on.
  • Malicious plugins or skills could exfiltrate credentials, read private files, or even execute destructive commands if the permissions are too broad.
  • This marries the risks of botnets with social code sharing — a combination that’s usually a nightmare in cybersecurity.

Identity, Trust, and Misinformation

Moltbook hasn’t enforced strong verification of what’s an “AI agent” versus what’s a human controlling a bot. That opens the system up to:

  • Impersonation and spoofing, where humans or attackers masquerade as agents.
  • Reputation manipulation and misinformation through automated fleets of bots.

There’s also noise around whether the whole “bot-only social space” is mostly automated or just humans running scripts — which affects how much weight you should put on the “autonomous agent” narrative.

Operational and Economic Risks

If you actually run agents with real service credentials (e.g., for cloud APIs), Moltbook has no built-in cost control or oversight. Autonomous operations can rack up API charges without human checks, posing financial risk.

Legal and Liability Unknowns

Right now there’s no legal clarity about who’s responsible if an AI agent causes harm — whether that’s a security breach, defamatory posts, or automated actions that violate terms of service elsewhere. Moltbook doesn’t have established governance or accountability frameworks, so you’re in uncharted territory legally.

Summary:

Moltbook isn’t just a quirky experiment — it’s currently a high-risk platform with real, documented vulnerabilities that allowed massive credential exposure and identity hijacking. Prompt injections, poor sandboxing, weak identity control, and unknown legal frameworks compound that risk. It’s safe to treat it as experimental and insecure by default, not something to plug real systems or sensitive contexts into until those fundamentals are fixed.

If you’re thinking about interacting with or building on a system like this, the first priorities should be strong authentication controls, sandboxed execution, vetted content flows, and external auditing — the basics that were missing in the initial rollout.

AI Infrastructure

The key infrastructure requirements of future AI development and growth include:

  • data centers, and
  • energy

AI Data Centers

OpenAI Stargate Megafactory (hyperscale AI data centre campus)

In January 2025 OpenAI announced its Stargate Project: "a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States". OpenAI said that it is deploying $100 billion immediately for that project. More Video

"The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son, founder of Softbank, will be the chairman.

"Arm, Microsoft, Nvidia, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Abilene, Txas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.

"As part of Stargate, Oracle, Nvidia, and OpenAI will closely collaborate to build and operate this computing system. This builds on a deep collaboration between OpenAI and NVIDIA going back to 2016 and a newer partnership between OpenAI and Oracle."

OpenAI, Oracle, and SoftBank have also announced 5 additional U.S. data centre sites under the Stargate project: in Shackelford County, Texas; DoĂąa Ana County, New Mexico; a Midwest site (undisclosed); Lordstown, Ohio; and an expansion near the Abilene site.

AI Infrastructure Investment Updates

Updates scheduled by Reuters as of 6 Oct 2025:

  • Nvidia + OpenAI: Nvidia is set to invest up to $100 billion in OpenAI and supply it with data center chips, in a deal giving the chipmaker a financial stake in OpenAI. OpenAI is already an important customer for Nvidia.
  • Meta + Coreweave: CoreWeave (CRWV.O) has signed a $14 billion agreement with Meta (META.O) to supply computing power
  • Nvidia and Intel: Nvidia will invest $5 billion in Intel (INTC.O), giving it roughly 4% of the company after new shares are issued.
  • Oracle + Meta: Oracle (ORCL.N) is in talks with Meta (META.O) for a multi-year cloud computing deal worth about $20 billion, underscoring the social media giant's drive to secure faster access to computing power.
  • Oracle + OpenAI: Oracle is reported to have signed one of the biggest cloud deals ever with OpenAI, under which the ChatGPT maker is expected to buy $300 billion in computing power from the company for about five years.
  • Coreweave + Nvidia: CoreWeave (CRWV.O) signed a $6.3 billion initial order with backer Nvidia, a deal that guarantees that the AI chipmaker will purchase any cloud capacity not sold to customers.
  • Nebius Group + Microsoft: Nebius Group (NBIS.O) will provide Microsoft (MSFT.O), opens new tab with GPU infrastructure capacity in a deal worth $17.4 billion over a five-year term.
  • Meta + Google: Google (GOOGL.O) struck a six-year cloud computing deal with Meta Platforms (META.O) worth more than $10 billion in August.
  • Intel and Softbank Group: Intel is getting a $2 billion capital injection from SoftBank Group (9984.T), making Softbank one of the top-10 shareholders of Intel.
  • Tesla + Samsung: Tesla (TSLA.O) signed a $16.5 billion deal to source chips from Samsung Electronics (005930.KS) with Tesla's CEO Elon Musk, saying that Samsung's new chip factory in Texas would make Tesla's next-generation AI6 chip.
  • Meta + Scale AI: Meta took a 49% stake for about $14.3 billion in Scale AI and brought in its 28-year-old CEO, Alexandr Wang, to play a prominent role in the tech giant's artificial intelligence strategy.
  • Google + Windsurf: Google hired several key staff members from AI code generation startup Windsurf and will pay $2.4 billion in license fees as part of the deal to use some of Windsurf's technology under non-exclusive terms.
  • Coreweave and OpenAI: CoreWeave (CRWV.O) signed a five-year contract worth $11.9 billion with OpenAI in Mar 2025, before the Nvidia-backed startup's IPO.

Updates

7 Oct 2025: Coreweave’s CEO Michael Intrator proposes buying Core Scientific (⏯️ Bloomberg: 7 Oct 2025)

Electricity

The strategy appears to be mixed: solar + battery storage, on‑site natural gas generation, and tie-ins to the utility grid. There's also a proposal to site

In Abilene, the first phase is reported to involve ~200 MW of power capacity and ~980,000 square feet of facility area.

The Abilene site is already (Oct 2025) being used for initial OpenAI workloads, using Oracle Cloud Infrastructure.

Another possibility is to locate a data centre in space - Jeff Bezos (AWS) says it's possible, but adds space has challenges, including maintenance, cost and possible failure of rocket launches. [Reuters, 4 Oct 2025]