OpenAI and ChatGPT

We asked GPT-3 and ChatGPT some questions about OpenAI and AI generally, and did some further research, mainly for context on news and commentary about the subject. We thought you might be interested in the results of that work. We're not experts, but we're interested.

What we asked ChatGPT


What generative AI systems are in operation at present (May 2023)?

CHATGPT: There are many generative AI systems that are currently in operation and being used in various industries and applications. Here are some examples:

These are just a few examples of the many generative AI systems that are currently in operation. Generative AI is a rapidly evolving field, and new systems and applications are being developed all the time.

⤴️ Back to the questions


OpenAI in Nov 2022

These are responses from GPT-3 in Nov 2022 (just before the launch of ChatGPT).

Who are the founders of OpenAI?

GPT-3: The co-founders of OpenAI are Jack Hughes, Greg Brockman, and Elon Musk.

How many people are employed by OpenAI?

GPT-3: There are currently 132 people employed by OpenAI. Most of the people employed by OpenAI are located in the United States.

Biographies of the 9 most senior people in OpenAI (Nov 2022)

Note: These profiles are as provided by GPT-3 on 4 Nov 2022. However, LinkedIn profiles indicate that some may be out of date. Incomplete responses (possibly due to a limit on the length of GPT-3 response text) are shown as ...

GPT-3: Jack Hughes is the founder of OpenAI, a non-profit research company devoted to advancing artificial intelligence in a responsible way. Hughes has a background in physics and mathematics, and he has been involved in AI research for over two decades. He is a co-founder of the Singularity Institute for Artificial Intelligence, and he has served on the boards of several AI-related organizations.

GPT-3: Greg Brockman is the co-founder and executive chairman of OpenAI. He is also the co-founder and CTO of notable data startup Cloudera. Brockman has a computer science degree from Rice University.

GPT-3: Elon Musk FRS (born June 28, 1971) is a business magnate, industrial designer, and engineer. He is the founder, CEO, CTO, and chief designer of SpaceX; early investor, CEO, and product architect of Tesla, Inc.; founder of The Boring Company; co-founder of Neuralink; and co-founder and initial co-chairman of OpenAI.

GPT-3: Demis Hassabis is a British artificial intelligence expert and entrepreneur. Hassabis was born in London in 1976. He studied computer science at the University of Cambridge and artificial intelligence at the Massachusetts Institute of Technology. He then worked as a research scientist at the University of Southern California's Information Sciences Institute. In 2010, Hassabis co-founded DeepMind Technologies, a artificial intelligence company, with Shane Legg.

GPT-3: Shane Legg is a co-founder and Chief Technology Officer of OpenAI, an artificial intelligence research laboratory. Legg was born in New Zealand in 1977. He received a B.Sc. in mathematics and computer science from the University of Auckland in 1998. He then moved to the United Kingdom to study for a Ph.D. in cognitive science at the University of Sussex. His Ph.D. thesis, titled "Computational Models of Conditioning and Reinforcement Learning..."

GPT-3: Pieter Abbeel is a Belgian-born American computer scientist and entrepreneur. He is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and co-founder of OpenAI, a non-profit research company devoted to advancing artificial intelligence in a responsible way. Abbeel's research focuses on robotics and machine learning. He has developed algorithms that enable robots to autonomously acquire new skills by learning from human teachers, and his work has...

GPT-3: Patrick Mynyk is the Executive Director of OpenAI. He is also a founding member of the company. Prior to joining OpenAI, he was a research scientist at Google Brain. He has a Ph.D. in computer science from the University of Toronto.

GPT-3: Dario Amodei is a research scientist at OpenAI. His research interests include machine learning, artificial intelligence, and neuroscience. He received his Ph.D. in computer science from Stanford University in 2016.

GPT-3: Igor Mordatch is a research scientist at OpenAI. His research interests are in machine learning and artificial intelligence. He received his Ph.D. in computer science from the University of California, Berkeley in 2014. Prior to joining OpenAI, he was a postdoctoral researcher at the University of California, San Francisco.

⤴️ Back to the questions


29 Nov 2022: New Product Launches

GPT-3 text-davinci-003

Update: Note that some issues have been addressed in the `text-davinci-003` update of 29 Nov 2022.

29 Nov 2022: ChatGPT!

⤴️ Back to the questions


How does OpenAI fund its operations?

What was OpenAI's expenditure in the 2021-2022 financial year?

GPT-3: OpenAI's expenditure in the 2021-2022 financial year was $1.2 billion.

Funding sources in 2023

CHATGPT: "OpenAI is primarily funded by a combination of investments from its founders and outside investors, as well as revenue generated by its products and services. The company has received substantial funding from a number of high-profile investors, including LinkedIn co-founder Reid Hoffman, venture capitalist Vinod Khosla, and Tesla CEO Elon Musk.

In addition to traditional investment funding, OpenAI also generates revenue by licensing its technology to companies and organizations, as well as by offering a range of AI-related services and products. These include GPT-3, a powerful language processing tool, as well as robotics and computer vision technologies.

OpenAI is also involved in a number of research partnerships with academic institutions and other organizations, which can help to further its research goals while providing additional sources of funding.

Overall, OpenAI's funding strategy is focused on leveraging its advanced AI technologies and research capabilities to generate revenue while continuing to push the boundaries of what is possible in the field of artificial intelligence."

⤴️ Back to the questions


Who licenses OpenAI's technology?

CHATGPT: "OpenAI has licensed its technology to a number of companies and organizations. Some of the notable ones include:

Overall, OpenAI's licensing partnerships enable it to generate revenue while also expanding the reach and impact of its AI technologies."

⤴️ Back to the questions


How does ChatGPT compare with Google's BARD?

CHATGPT: "ChatGPT and Google's BARD are both large language models that use natural language processing and machine learning to generate human-like text based on a given prompt. However, there are some key differences between the two models.

Overall, both ChatGPT and BARD are powerful language models that can generate high-quality text. However, ChatGPT is a larger and more general-purpose model, while BARD is a smaller and more specialized model focused on science and technology."

⤴️ Back to the questions


AI in the classroom for ages 5-10
ChatGPT's suggestions

CHATGPT: "Here is a model for using AI in education for children aged 5 to 10 years old:

Overall, AI can be a powerful tool for enhancing the learning experience of children aged 5 to 10 years old. By providing personalized learning, interactive tools, language learning, content filtering, predictive analytics, and feedback and evaluation, AI can help children learn more effectively and achieve better learning outcomes."

⤴️ Back to the questions


Does OpenAI have a relationship with Synthesia?

CHATGPT: "To the best of my knowledge, OpenAI and Synthesia do not have an official relationship.

Synthesia is a UK-based startup that has developed a platform for creating personalized video content using AI-generated virtual presenters. While Synthesia uses AI technology to create its virtual presenters, it does not use OpenAI's GPT models or other language processing tools.

That being said, it is possible that individuals or teams at OpenAI may be familiar with or interested in Synthesia's technology. OpenAI is known for collaborating with a wide range of companies and organizations in the AI space, so it is possible that there could be future partnerships or collaborations between OpenAI and Synthesia or other companies in the video creation and AI-generated content space."

⤴️ Back to the questions


OpenAI and Copyright
How does OpenAI address copyright for material in its databases?

CHATGPT: "OpenAI takes copyright issues very seriously and has implemented various measures to address them.

Firstly, OpenAI has built its database using publicly available information, which it has obtained from a variety of sources that are generally considered to be in the public domain, such as websites, books, and other publicly accessible documents. In doing so, OpenAI strives to ensure that it is not infringing on any copyright laws or intellectual property rights.

Secondly, OpenAI uses a combination of automatic and human review processes to ensure that the content in its database does not violate any copyright or intellectual property laws. When necessary, OpenAI will seek permission from content owners to use their material or will remove copyrighted content from its database if requested to do so.

Finally, OpenAI has implemented various measures to prevent the misuse of its technology and to ensure that its models are not used to create content that infringes on copyright or intellectual property rights. OpenAI also encourages its users to adhere to ethical guidelines and to use its technology responsibly and legally.

Overall, OpenAI is committed to respecting copyright laws and intellectual property rights and takes appropriate steps to ensure that its activities are in compliance with applicable laws and regulations."

⤴️ Back to the questions


Should news media be compensated if OpenAI uses their articles in its database?

CHATGPT: "Whether or not news media should be compensated if OpenAI uses their articles in its database is a complex issue that is currently being debated within the industry.

On one hand, news media organizations invest significant resources in producing original content, and they rely on revenue from advertising and subscriptions to support their operations. If their content is being used by OpenAI, it could potentially impact their ability to generate revenue from that content. In this case, some argue that compensation is appropriate to ensure that news media organizations are adequately compensated for the use of their content.

On the other hand, OpenAI's use of news articles in its database can potentially benefit news media organizations by increasing the visibility of their content and bringing it to a wider audience. Additionally, OpenAI's database is designed to enhance natural language processing technology and facilitate research, which can ultimately benefit society as a whole.

Ultimately, the issue of compensation for the use of news articles in OpenAI's database is a matter of balancing the interests of news media organizations with the benefits of OpenAI's technology to society as a whole. It may require a collaborative effort between the news media and technology industries to arrive at a mutually beneficial solution that ensures fair compensation for the use of news articles while also advancing technological innovation."

Context:

  1. 🇺🇸 "Major news outlets have begun criticizing OpenAI and its ChatGPT software, saying the lab is using their articles to train its artificial intelligence tool without paying them. 'Anyone who wants to use the work of Wall Street Journal journalists to train artificial intelligence should be properly licensing the rights to do so from Dow Jones,' Jason Conti, general counsel for News Corp.’s Dow Jones unit, said in a statement provided to Bloomberg News. 'Dow Jones does not have such a deal with OpenAI.'" [More: Bloomberg: 17 Feb 2023]
  2. 🇦🇺 "News Corp has thrown down the gauntlet to the likes of ChatGPT owner Open AI, insisting AI firms will need to pay news publishers for the right to use its technology on 'proprietary' content." [More: AFR: 8 Mar 2023]
  3. 🇦🇺 "Artificial intelligence will be the next flashpoint between media conglomerates and technology giants after Nine joined its rival News Corp to push for payments from AI firms that have trained their systems on entertainment and news content." [More: SMH: 8 May 2023]

⤴️ Back to the questions


OpenAI and Defamation:
Has anyone made a claim against OpenAI for defamation?

CHATGPT: "As far as I am aware, there have been no reported cases of anyone making a claim against OpenAI for defamation.

Defamation occurs when a false statement is made that causes harm to a person's reputation or standing in the community. OpenAI is an artificial intelligence research organization, and its activities generally involve developing and researching artificial intelligence technologies. It is unlikely that OpenAI's activities would involve making false statements about individuals that could result in defamation claims.

However, like any organization, OpenAI may be subject to legal claims in other areas, such as intellectual property rights or breach of contract."

Context:

  1. World's first defamation suit against a bot (Apr 2023): A regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT's false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service. He became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal. [🇦🇺 More: AFR: 8 Mar 2023]

⤴️ Back to the questions


OpenAI, GDPR and Personal Data

🇪🇺 GDPR and Personal Data: Italy's Data Protection Authority (the Garante) has imposed a temporary ban on ChatGPT on the basis that it is not compliant with the EU's General Data Protection Regulation (GDPR). [🇮🇹 Garante] OpenAI has scheduled a video meeting with the Garante for 4 May 2023. [🇮🇹 Garante 31 Mar 2023]

⤴️ Back to the questions


Accuracy of Generative AI and Responsibility for Errors

Arun Sundararajan, Professor of Technology at NYU Stern School of Business, discussed generative AI. [Bloomberg interview, 25 Mar 2023]

Prof Sundararajan referred to Section 230 of the Telecommunications Act which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” [Section 230 of the Telecommunications Act]

One aspect was the errors produced by generative AI - especially important for those who may build products (and businesses) on generative AI services (e.g. ChatGPT). [Example: Google's first public Bard demonstration]

Other issues and opinions:

⤴️ Back to the questions

Confidence/quality indicator for ChatGPT responses (almost a disclaimer)

CHATGPT: As an AI language model, I am capable of generating responses based on statistical patterns learned from a large corpus of text. However, I do not have the ability to calculate a confidence score for my responses in a statistical sense, nor can I directly indicate whether or not I am "hallucinating."

That being said, there are some techniques that can be used to estimate the confidence of my responses. For example, one approach is to use a technique called perplexity, which measures how well a language model can predict the next word in a given sequence of words. A lower perplexity score indicates that the language model is more confident in its predictions.

Additionally, some machine learning algorithms can estimate the uncertainty of their predictions. One such technique is called Bayesian Neural Networks, which can provide a probability distribution over the output space that reflects the model's uncertainty.

However, it is important to note that these techniques are not perfect and may not always accurately reflect the confidence of my responses or whether or not I am "hallucinating."

It is up to the user to evaluate the quality and reliability of my responses based on their own judgment and knowledge.

It is important for users to understand that as an AI language model, I am not infallible, and my responses may not always be accurate or appropriate for a given situation. Users should evaluate the quality and reliability of my responses based on their own judgment and knowledge.

It's important to keep in mind that language models like me are trained on large datasets of text, which can contain biases or inaccuracies that may influence my responses. Additionally, my responses are generated based on statistical patterns in the data, rather than a true understanding of the world. This means that there may be cases where my responses are not appropriate or accurate for a given situation.

Therefore, users should use their own judgment when interpreting my responses and consider consulting additional sources of information as needed. While I can provide helpful information and insights, it's always a good idea to approach any source of information with a critical eye.
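The perplexity measure ChatGPT mentions above can be sketched in a few lines. Assuming we already have per-token log-probabilities from a model (the values below are made-up data, not real model output), perplexity is the exponential of the negative mean log-probability; lower means the model found the text more predictable:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) of the tokens.

    Lower perplexity means the model assigned higher probability
    to the sequence, i.e. found it more predictable."""
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-mean_logprob)

# A model that gives every token probability 0.5 has perplexity ~2:
# it is as uncertain as a fair coin flip at each step.
coin_flip = [math.log(0.5)] * 4
print(round(perplexity(coin_flip), 3))  # → 2.0
```

Note that perplexity only measures how confident the model is, not whether it is right - a model can be confidently wrong, which is exactly the "hallucination" problem discussed above.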

⤴️ Back to the questions


Enter The Regulators

🇺🇸 USA's National Telecommunications and Information Administration (NTIA)

"The National Telecommunications and Information Administration (NTIA) today (18 Jan 2023) announced a Request for Comment on how companies’ data practices may impose outsized harm on marginalized or underserved communities." [More]

New bills (8 Jun 2023)

8 Jun 2023: "Senators have introduced two separate bipartisan artificial intelligence bills amid growing interest in addressing issues surrounding the technology:

[More]

🇪🇺 Europe's Artificial Intelligence Act

"The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories."

[More]

🇮🇹 Italy's Data Protection Authority (the Garante)

🇬🇧 UK Competition and Markets Authority Launches Initial Review of Artificial Intelligence Models

"To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the Competition and Markets Authority (CMA), to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Following evidence gathering and analysis, the CMA will publish a report which sets out its findings in Sep 2023".

Context:
  1. 🇬🇧 4 May 2023 [More: CMA]

⤴️ Back to the questions


Chat Plugins

CHATGPT: "In line with our iterative deployment philosophy, we are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges—all of which we’ll have to get right in order to achieve our mission. Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm."

ChatGPT plugins. Documentation.
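For context: the plugin documentation at launch had developers describe their API to ChatGPT via an `ai-plugin.json` manifest pointing at an OpenAPI spec, which ChatGPT reads to learn which endpoints it may call. A minimal sketch of such a manifest - all names and URLs here are hypothetical placeholders, and the field set may have changed since:

```python
import json

# Sketch of an ai-plugin.json manifest as described in OpenAI's plugin
# docs at launch. Every value below is a hypothetical placeholder.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo List",        # shown to users
    "name_for_model": "todo",             # how ChatGPT refers to the plugin
    "description_for_human": "Manage your todo list.",
    "description_for_model": "Plugin for adding and listing todo items.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # The OpenAPI spec ChatGPT reads to learn the callable endpoints
        "url": "https://example.com/openapi.yaml",
    },
}
print(json.dumps(manifest, indent=2))
```

The model-facing fields (`name_for_model`, `description_for_model`) are what ChatGPT uses to decide when a user request should be routed to the plugin's API.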

The first plugins that allow ChatGPT to intelligently call their APIs have been created by:

⤴️ Back to the questions