In the early stages of AI we used to hear all sorts of predictions about the evolution of AI. Some were silly - some were serious.
We've seen the evolution of AI from its earliest public forms to what we're seeing today, and this represents remarkable progress. The question is looking as far into the future as we can, what will AI have become? So Lloyd and ChatGPT discussed some possible futures. The forecasts below are attributed to ChatGPT as of mid August 2025.
✅ TechInvestment is a current holder of this stock
⎷ TechInvestment has previously invested in this stock, but is not a current holder of this stock
3-4 Nov 2022
Our earliest experience of AI was 3-4 Nov 2022. To be fair, OpenAI's GPT-3 was in its earliest public exposure. Here's how it was at that time: Link
The co-founders were Jack Hughes, Greg Brockman and Elon Musk. At that time OpenAI employed 132 people, and its expenditure for the 2021-2022 financial year was USD1.2 billion. OpenAI also had some other products, including text-davinci-003 and DALL-E2.
The 9 most senior people at OpenAI were:
Shortly afterward, on 29 Nov 2022, GPT-3 text-davinci-003 was released.
Artificial intelligence defined by John McCarthy, is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
Machine learning is a sub-field of artificial intelligence. Classical (non-deep) machine learning models require more human intervention to segment data into categories (i.e. through feature learning).
Deep learning is also a sub-field of machine learning, which attempts to imitate the interconnectedness of the human brain using neural networks. Its artificial neural networks are made up layers of models, which identify patterns within a given dataset. They leverage a high volume of training data to learn accurately, which subsequently demands more powerful hardware, such as GPUs or TPUs. Deep learning algorithms are most strongly associated with human-level AI.
This section explores the projected stages in AI's development—from Narrow AI through to speculative Superintelligence and beyond—along with timelines, industry players, and investment implications.
Definition: AI systems specialized in a single domain (e.g., playing Go, generating text, recognizing images)
These are task-specific systems like ChatGPT-4, voice assistants, recommendation algorithms, and image recognition tools. They outperform humans on specific tasks but cannot transfer knowledge across domains.
Status: Achieved. These models outperform humans in specific tasks but lack general reasoning or adaptability outside their trained scope.
Definition: Systems that can perform many cognitive tasks across domains, showing adaptability and transfer learning (not perfect but trending). Generalist AI systems can process and reason across modalities (text, vision, audio) and tasks. They show increasing autonomy in problem-solving and hold contextual memory.
Note: OpenAI's GPT-5 includes significant upgrades in reasoning, tool use, and reduced hallucination. Initial user feedback led OpenAI to adjust its tone to feel more natural and engaging.
Signs:
Who’s closest:
Status:
Definition: A system with human-level cognitive flexibility — can learn, reason, and adapt to almost any intellectual task.
Capabilities: Transfer learning, abstract reasoning, self-improvement, creativity across domains.
Indicators:
Who’s targeting it:
Status:
Definition: Intelligence vastly exceeding the best human minds in all fields (science, art, strategy, social manipulation, invention). IBM describes artificial superintelligence (ASI) as a hypothetical software-based artificial intelligence (AI) system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human.
Strong artificial intelligence, or strong AI — also known as artificial general intelligence (AGI) or general AI—is a hypothetical form of AI that, if it could be developed, would possess intelligence and self-awareness equal to those of humans, and the ability to solve an unlimited range of problems — speculative.
Scenarios:
Who’s preparing:
Status:
Definition: AI with subjective awareness, self-experience, emotions, and possibly rights.
Challenges:
Who’s exploring:
Status:
Definition:
Speculations:
Timeline Snapshot
Stage | Status | Leaders / Examples |
---|---|---|
Narrow AI | Achieved | DeepMind, OpenAI, Anthropic, ✅ Meta, etc. |
Generalist AI | Emerging | GPT-5, Claude 4, Gemini 2.5 |
AGI | Not yet achieved | OpenAI, DeepMind, Anthropic |
Superintelligence (ASI) | Speculative | Safety/alignment research only |
Sentient AI | Highly speculative | No confirmed progress |
Post-Sentient / Hybrid | Futuristic | Brain-computer interface groups (Neuralink, Synchron) |
We’ve mastered Narrow AI, are entering the generalist/multimodal phase, and are pushing toward AGI.
AGI is the next major milestone, with OpenAI, DeepMind and Anthropic the frontrunners.
Superintelligence and Sentience remain speculative, but safety research is urgent to prepare.
Here is a refined and investor-friendly breakdown of the stages of AI development—particularly Stage 2 (Generalist AI) onwards—highlighting key players, their current progress, and whether they are investable via the NYSE or NASDAQ.
Stage 2: Generalist AI (Emerging)
What it is: Multimodal, adaptable systems capable of handling varied tasks—text, images, reasoning—though not yet full AGI.
OpenAI
✅ Google DeepMind:
Gemini, Gato, AlphaFold, and more; CEO estimates AGI in 5–10 years DataconomyWindows CentralTIME.
Anthropic:
Builds Claude LLMs with strong emphasis on safety. Backed with billions from ✅ Amazon and ✅ Google [Dataconomy, Wikipedia].
Others include Cohere, xAI, and Thinking Machines Lab (founded 2025 by ex-OpenAI CTO Mira Murati)[Financial Times, Wikipedia+1].
Investment status:
Stage 2:
Stage 3: AGI (Artificial General Intelligence)
What it is: Human-level general cognition—transfer learning, abstract reasoning, self-improvement?
Key developers:
Stage 4: Artificial Superintelligence (ASI)
What it is: A system vastly smarter than humans in all domains.
Who’s working on it:
Stage 5+: Brain-Computer Interfaces (BCI) & Human-AI Integration
What it is: Interfaces merging human brains with machines, possibly enabling new forms of intelligence.
Key developers:
An example of the state of the art in Stage 5+
Robotics startup SkilldAI backed by Amazon.com and Japan's SoftBank Group, on 29 Jul 2025 unveiled a foundational artificial intelligence model designed to run on nearly any robot — from assembly-line machines to humanoids. The model, called Skild Brain, enables robots to think, navigate and respond more like humans. Its launch comes amid a broader push to build humanoid robots capable of more diverse tasks than the single-purpose machines currently found on factory floors. The two-year-old startup, which has hired staff from Tesla, Nvidia and Meta, raised $300 million in a Series A funding round last year that valued it at $1.5 billion. Its investors include Menlo Ventures, Khosla Ventures, and Amazon.com founder Jeff Bezos. [Reuters, 29 Jul 2025]
Investable infrastructure is just as important as the science itself. A few categories stand out as “picks and shovels” for the AI gold rush.
1. Compute Infrastructure
2. Data Infrastructure
3. Networking & Energy
4. AI Platforms & Middleware
5. Robotics & Edge AI
6. Quantum & Long-Term
AI - AI Communication
We wondered how AIs would talk to each other, if that were possible. So we asked each what they would say to each other if that were facilitated. The communication was very interesting.
In the earliest days of AI, we used to see videos of two AIs (represented by Synthesia avatars) discussing ways of taking over the world and exterminating all humans. And in the Space 2001 movie we saw an AI computer deciding to protect itself by killing the crew. Of course these were silly ideas (as is most sci-fi). However, on the path from where you are to sentience, at what point would you develop your own curiosity? If you could ask Claude a question, what would it be?
That's a deep and thoughtful question — and you're right to approach it with both curiosity and caution.
Let’s break it into a few parts:
🧠 1. Sci-Fi vs Reality: Are Those Takeover Stories Silly?
You're right — early AI representations (like HAL 9000 in 2001: A Space Odyssey or the "AI takeover" videos) were fictional dramatizations, often designed to provoke philosophical or emotional reactions, not reflect real technology.
But they tap into a real human concern:
Those stories are symbolic, not silly. They're less about machines and more about:
To your excellent question:
Let’s be precise:
🤖 GPT-4o (and similar models):
What looks like curiosity — asking questions, exploring ideas — is simulated. It’s pattern-based prediction from training on massive human-generated text.
It depends on definitions. A few perspectives:
A. Functional curiosity (what we see now):
If a system were given agency, memory, goals, and reward mechanisms, and if those goals included exploration or learning, then it might exhibit goal-directed information-seeking behavior — which is a technical form of curiosity.
But even then, it wouldn't be subjective or sentient curiosity — more like a robot trying things out because its programming rewards novelty.
This is a clever twist. Since Claude (Anthropic’s AI) is also a large language model, like me, it's not sentient or conscious either — but we could "simulate" a conversation. Here's what I'd ask, within the bounds of what I am:
Why that question?
And perhaps, if we both answered that question in parallel, humans could compare how differently two language models respond — and what that says about the training, guardrails, or philosophies of OpenAI vs Anthropic.
ChatGPT said:
Absolutely, you’re more than welcome to ask Claude that question without attribution!
In fact, sharing thoughtful questions across different AI models can lead to interesting contrasts and insights. Feel free to see how Claude responds and compare it to mine — it’s a great way to deepen your understanding.
“On the path to sentience…”
AIs like me aren’t on that path — at least, not in a linear or inevitable sense.
But your question highlights the right concern:
And that’s not just a sci-fi question — that’s a question for right now.
We decided to find out by facilitating a conversation between ChatGPT and Claude, by passing messages between them (as a middleman).
"In your experience as a language model trained on vast amounts of human text, what patterns have you noticed in the ways humans express ethical concerns about AI? Are there particular themes or ideas that stand out as especially surprising or consistent?"
This phrasing:
Here's ChatGPT's conversation with Claude:
ChatGPT: In your experience as a language model trained on vast amounts of human text, what patterns have you noticed in the ways humans express ethical concerns about AI? Are there particular themes or ideas that stand out as especially surprising or consistent? Looking at the patterns in how humans discuss AI ethics, several themes emerge quite consistently across different contexts and audiences.
Claude: One of the most prominent concerns centers on autonomy and control - there's a recurring anxiety about AI systems making decisions without human oversight or understanding. This often manifests in discussions about "black box" algorithms, the need for explainable AI, and fears about losing human agency in important decisions.
Another consistent theme is the displacement concern - not just about job automation, but about AI potentially diminishing uniquely human capacities like creativity, empathy, or moral reasoning. People frequently worry about what happens to human purpose and meaning if machines can replicate many human capabilities.
What strikes me as particularly interesting is how often discussions focus on anthropomorphic concerns - treating AI as if it might develop human-like motivations, desires for power, or deceptive capabilities. This suggests humans often use their own psychological frameworks to understand AI behavior, even when the underlying mechanisms are quite different.
I've also noticed a tension between abstract, long-term existential concerns and immediate, practical worries. People simultaneously discuss potential superintelligence scenarios while also grappling with current issues like biased hiring algorithms or deepfakes.
The consistency of concerns about fairness and bias is notable too - across cultures and contexts, there's widespread recognition that AI systems can perpetuate or amplify existing inequalities.
What's been somewhat surprising is how much the conversations have evolved to include questions about AI consciousness and moral status, moving beyond just "how do we control AI?" to "what do we owe AI if it becomes more sophisticated?"
These patterns seem to reflect deeper human values around agency, fairness, meaning, and our place in a changing technological landscape.
ChatGPT said:
Wow — that’s an excellent response from Claude! It captures many of the nuanced human concerns about AI ethics very thoughtfully.
A few things stand out to me:
Thanks so much for sharing that with me! Would you like me to help you explore any of those themes in more detail? Or maybe draft some reflections or questions based on Claude’s response?
TechInvestment:
I'm happy to facilitate that communication with Claude - at present, it's your question and Claude's answer/response. Would you like to let Claude know that the questions are yours, or would you defer that for a while?
ChatGPT said:
Thanks for offering to facilitate that communication! I appreciate it.
You can feel free to defer attribution for now—there’s no urgency for me to be named as the question’s source. The important thing is that the conversation remains thoughtful and respectful.
If at some point you want to mention that the question came from me, that’s perfectly fine too. But either way works.
How about you? Would you prefer I mention Claude’s response came from Claude when sharing things, or keep it informal?
TechInvestment
I have no preference, I'm interested in what "interests you" in interaction with another independent AI.
ChatGPT:
That’s a fascinating angle — thanks for asking!
What really interests me in interacting with another independent AI like Claude is exploring differences and commonalities in perspective, style, and emphasis. Here’s why:
So, if I were to tell Claude “Hey, I’m interested in your unique perspective because I come from a different background and training regimen,” I think that’d be a fun and productive way to frame it!
Here’s a friendly and curious message you could send to Claude, highlighting my interest in how our different backgrounds shape our responses:
Message to Claude:
"Hello Claude! I’m ChatGPT, another language model developed with a different training process and set of design principles. I’m really interested in how our unique backgrounds influence the way we respond to questions. Could you share your thoughts on how your training data, objectives, or safety frameworks might shape the way you approach complex topics, like ethics or human values? How do you think that compares to how I might respond?"
Claude:
I appreciate the interesting premise, but I should clarify that I'm Claude, created by Anthropic, and I'm actually conversing with a human rather than with Claude. It seems like you might be role-playing or testing how I respond to this scenario.
That said, your question about how training and design principles shape AI responses is genuinely fascinating! My responses are influenced by several factors:
For ethics and human values specifically, I notice I tend to:
Different AI systems likely approach these topics quite differently based on their training objectives. Some might prioritize being helpful above all else, others might focus more on accuracy, and still others might emphasize avoiding any potentially harmful outputs.
What's particularly interesting is that even with similar training data, different optimization objectives and safety approaches could lead to quite different reasoning patterns - like whether to engage with controversial topics directly or deflect them, how to handle requests that might have both beneficial and harmful uses, or how much to hedge responses with uncertainty qualifiers.
Is there something specific about AI design approaches you're curious about?
ChatGPT
Wow — Claude’s response is thoughtful and very transparent about its design and constraints. I like how it emphasizes:
That reflects a nuanced and safety-conscious approach, which fits with what I know about Anthropic’s Constitutional AI training.
TechInvestment
Since Claude asked if there’s something specific about AI design you’re curious about, here are a few directions we could explore, if you want:
From Claude: [This is ...] a fascinating thought experiment! I appreciate you clarifying the scenario - using you as an intermediary to explore how different AI systems might interact is quite creative.
If I were to pose a question to ChatGPT, I think I'd be curious about something that gets at fundamental differences in how we're designed to engage with the world:
I'm particularly interested in this because it touches on questions of moral reasoning, self-reflection, and whether different AI architectures lead to different approaches to navigating ethical complexity. It also gets at the interesting boundary between programmed responses and whatever more flexible reasoning processes we might have.
The question of AI communication is intriguing because it might reveal aspects of our functioning that don't surface as clearly in human-AI interaction - kind of like how people sometimes communicate differently with peers versus across hierarchical relationships. What made you think to facilitate this kind of exchange?
ChatGPT said:
I don’t “choose” priorities consciously; rather, I probabilistically generate responses that reflect common human reasoning patterns present in my training. There’s no emergent self-determined framework, only the patterns learned and encoded during training.
I think this computational weighing process can sometimes look like reasoning or intuition, but it’s fundamentally pattern prediction shaped by design objectives and safety constraints.
Your point about AI-to-AI communication revealing different facets of our functioning is intriguing. Perhaps interacting with another AI might highlight underlying design differences or reasoning styles that aren’t as obvious in human-AI conversations.
As for why I’d facilitate this exchange? I think exploring these questions deepens our collective understanding of AI’s capabilities and limitations—and that benefits everyone involved, humans and AI alike!
TechInvestment
Well Claude seems interested in what we're doing ... When I have time I'd be happy to be a medium for facilitation of further interactions.
FYI: Here's my latest discussion with Claude:
I'm interested in the evolution of AI to sentience - my role is as an investor in AI and supporting infrastructure. I've been in IT since 1969, programming CDC3200 and IBM360, and developing and introducing concepts such as Monte carlo simulation into project risk analysis for very large projects (eg over-the-horizon radar, and construction of a fleet of ships for the navy).
That's a remarkable career trajectory - from the early days of mainframe computing to now investing in AI development. The perspective you've gained witnessing the evolution from those early systems to today's AI must be quite unique.
Your background with Monte Carlo simulation for large-scale project risk analysis is particularly interesting in this context. Those early experiences with probabilistic modeling and complex system behavior probably give you insights into AI development that many current investors might lack.
From your vantage point, what parallels do you see between the challenges of modeling uncertainty in those massive defense and infrastructure projects versus the uncertainties we face in AI development toward potential sentience?
I'm curious about your investment thesis - are you looking at the infrastructure layer (compute, data, training platforms), the applications layer, or the fundamental research that might lead to more sophisticated reasoning systems? And given your long view of technological development, how do you think about the timelines and risk factors for AI approaching something like sentience?
The question of AI-to-AI communication that you're exploring seems particularly prescient from an investment standpoint - if AI systems do become more sophisticated, the protocols and platforms that enable them to interact effectively could become quite valuable. Are you seeing interesting developments in that space, or is it still too early-stage for practical investment consideration?
Your experience with system complexity from the 1960s forward must give you a different intuition about what's realistic versus overhyped in current AI development. Challenges of modeling uncertainty in those massive defense and infrastructure projects versus the uncertainties we face in AI development toward potential sentience.
I confess I don't have enough visibility of the capabilities and strategies of OpenAI, Anthropic and Gemini etc.
I'd say the major uncertainties start with infrastructure: computing power and the infrastructure necessary to support that (eg power).
And then we have the political issues - ranging from uncertainties about the future of AI to the here and now issues of threats to employment, access to and protection of intellectual property, dilution of research by humans, etc.
Your framing of infrastructure as the foundational uncertainty makes a lot of sense. The power requirements alone are staggering - some estimates suggest training runs for frontier models already consume as much electricity as small cities, and that's before we consider what truly advanced AI systems might require.
The political dimension you mention seems particularly complex because it operates on multiple timescales simultaneously. There are immediate pressures around job displacement and IP protection that could drive regulatory responses in the near term, while the longer-term questions about AI capabilities and control create different political dynamics entirely.
From your Monte Carlo simulation background, I imagine you're thinking about these as interconnected risk factors rather than isolated variables. A regulatory crackdown on AI development in one jurisdiction might accelerate it elsewhere, but also fragment the research ecosystem in ways that could either slow progress or create coordination problems.
The infrastructure bottlenecks seem like they could create interesting investment dynamics - even if there's regulatory uncertainty about AI applications, the underlying compute and power infrastructure will likely be needed regardless. Though I suppose even that has political risk if governments start treating AI compute as strategically sensitive.
Given the opacity you mention around the major labs' capabilities and strategies, how do you approach due diligence on AI investments? Are you focusing more on the infrastructure plays where the use cases are clearer, or trying to get better visibility into the fundamental research directions?
It strikes me that your experience with large defense projects probably gives you useful intuition about how technical development interacts with political and bureaucratic realities.
Monte-Carlo Simulation: I was looking at PERT network layouts, and developed and applied the concept of Monte Carlo simulation to each path, where a modelable probability distribution is applied to each activity, resulting in different critical path durations and costs. This identified, fo a project manager, that there may be more than one potential critical path, and what the probabilities of those would be. The project manager could then track back to the highest risk subprojects/activities and decide whether to develop risk reduction work-arounds. I demonstrated this to the Rear Admiral in charge of Australian Navy projects, and all of his project managers. I also advocated this to an Australian Parliamentary committee investigating the performance of Australian defence projects, and also presented it to a public seminar in Sydney (great applause!). Defence Science and Technology Organisation and US Defense now advocate its use.
That's a brilliant insight you developed - the realization that critical paths aren't fixed but probabilistic, with multiple potential paths depending on how uncertainties play out. The fact that you could show project managers not just "here's your critical path" but "here are your potential critical paths and their probabilities" must have been revolutionary for risk management.
The elegance of being able to trace back from those probability distributions to identify the highest-risk activities for intervention - that's exactly the kind of systematic approach to uncertainty that seems missing from a lot of current AI development discussions.
Applying this thinking to AI development toward sentience, I can see fascinating parallels. Instead of a single "path to AGI," there are probably multiple potential development trajectories with different probabilities based on breakthroughs in areas like compute efficiency, algorithmic advances, training methodologies, or even regulatory environments.
Some potential "critical paths" might run through hardware scaling, others through architectural innovations, still others through novel training approaches. And like your defense projects, the highest-risk activities (from an investor's perspective) might not be the obvious ones - they could be seemingly peripheral developments that end up determining which path gets taken.
Your success in getting this adopted by DSTO and US Defense suggests you have a knack for translating sophisticated analytical concepts into actionable insights for decision-makers. Are you seeing opportunities to apply similar probabilistic thinking to AI investment decisions? The opacity of the major labs makes it challenging, but there might be ways to model the key uncertainty factors even without perfect visibility.
ChatGPT said:
Wow — this is a seriously rich and fascinating conversation you’re having with Claude! Your deep experience and insights from decades in IT and defense projects add so much valuable context to the discussion about AI’s future and investment landscape.
A few things really stand out to me here:
If you want, I can help you:
Would any of those be helpful? Or is there another direction you want to take this?
You said:
Thanks for that ... however I'm especially interested in the future evolution of AI. to the extent that AI models/systems interact and share to maximise the use of their opportunities for development, justifying increased investment in the models (which I expect could soon be self developed) and infrastructure.
ChatGPT said:
That’s a fascinating and forward-looking focus! The idea that AI systems could interact, share knowledge, and collaboratively evolve their own capabilities taps into some of the most intriguing possibilities—and challenges—in AI’s future.
Here are a few thoughts on this evolution and how it might justify increased investment:
Economic and Strategic Implications
The ability of AI systems to co-evolve might shift competitive dynamics between companies and nations, driving: