FAQ ChatGPT: Functionality, opportunities and risks for companies

by | 10 February 2023 | Basics

Welcome to our ChatGPT FAQ section! This page contains the most frequently asked questions about the language model and its use that have come up in our ChatGPT webinars. We hope you will find this page helpful and informative in answering your questions about ChatGPT.

For more in-depth information, we recommend watching the recording of one of our previous webinars on ChatGPT. If you want to learn more about ChatGPT and stay up to date, we also offer staff training and consulting sessions.

Do you need support with your Generative AI projects? From our Use Case Workshop to our executive training to the development and maintenance of GenAI products, Alexander Thamm GmbH offers its customers a wide range of services in the field of Generative AI. Find out more on our services page and contact us at any time for a no-obligation consultation:

General functioning

Do I see it correctly that ChatGPT has no real (logical) knowledge, but only infers very well from input to output?

Correct: ChatGPT only predicts the words (and therefore answers) that are most likely in the given context. It does not draw on a fixed knowledge base or a genuine understanding of the content.

How can I access ChatGPT?

By registering free of charge on the OpenAI website chat.openai.com, or programmatically via the interface (API) that OpenAI offers.
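For programmatic access, a minimal sketch in Python might look like the following. It assumes the official openai Python package and an API key created in your OpenAI account; the model name, prompt and key are placeholders, not a fixed recommendation.

```python
# Minimal sketch: calling the OpenAI API from Python.
# Assumes the official "openai" package (pip install openai) and an API key
# created in your OpenAI account; model name and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, never hard-code real keys

response = openai.Completion.create(
    model="text-davinci-003",          # example GPT-3 completion model
    prompt="Explain in one sentence what a language model is.",
    max_tokens=100,
)

print(response["choices"][0]["text"].strip())
```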

Is ChatGPT a chatbot?

Yes, in terms of how it is used, ChatGPT is software that exchanges information in the form of a conversation. As with a chatbot, the dialogue format makes it possible to ask questions and to follow up on the answers.

Can I also use ChatGPT in German and other foreign languages?

ChatGPT can understand requests and generate content in German. The model can also be used in other common languages (e.g. Spanish or Italian). However, due to the lower proportion of foreign-language training data, the quality may be somewhat lower compared to English. For more complex questions and tasks, English is therefore generally recommended as the input language.

Does ChatGPT already use GPT-4?

GPT-4 is available via the paid premium version of the service, ChatGPT Plus. The free version is still based on GPT-3.

Is it possible to give such models a memory? For example, memories of preferences and past questions and answers.

Within a session, ChatGPT can already refer back to questions and statements made earlier in the conversation and thus has a kind of "memory".
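Technically, this kind of session "memory" is usually achieved by sending the previous turns of the conversation along with every new request. The following Python sketch only illustrates that principle; it is not OpenAI's actual implementation.

```python
# Illustrative sketch of session "memory": earlier turns are simply carried
# along in the prompt of the next request. This shows the principle only,
# not OpenAI's actual implementation.

history = []  # list of (speaker, text) pairs within one session

def build_prompt(history, new_user_input):
    """Concatenate all previous turns plus the new input into one prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {new_user_input}")
    lines.append("Assistant:")
    return "\n".join(lines)

def chat_turn(user_input, generate):
    """One turn: build the prompt from the history, generate an answer,
    and store both so the next turn can refer back to them."""
    prompt = build_prompt(history, user_input)
    answer = generate(prompt)  # 'generate' stands for a call to the language model
    history.append(("User", user_input))
    history.append(("Assistant", answer))
    return answer
```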

What is the reason for the lack of topicality of ChatGPT?

I have not found an official statement on this, but I suspect it is connected with the collection of the training data and the training itself. This process takes some time, and that is where the gap in topicality comes from.

How is it that ChatGPT gives a wrong output on the first prompt, then improves on the second prompt and the output is correct?

In the second answer, the model has more contextual knowledge, because the first output and its clarification are part of the conversation. More context (also in the input) usually improves the quality of the answers.

Regarding the example "5+5" and a user input claiming that the result is 11: is the model in the background being corrupted here, or do you assume that the model itself was not influenced?

The output of the model has been nudged or "biased" in another direction. The model itself is not affected.

If ChatGPT makes false statements, you can correct them. Does ChatGPT take this as feedback for its internal model, or is it only session-related?

Only within the session. But we don't know what OpenAI will do with our input later. Some see the current phase as a big beta test, for which we users provide data for free.

How does ChatGPT recognise which information or answers received good feedback (human feedback loop)?

For this, a "reward model" was trained to approximate the human feedback, i.e. to estimate how useful an answer would be for humans.
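As a rough illustration of this idea (simplified, following the pairwise ranking approach described in the InstructGPT paper linked further below, not the original code): the reward model assigns each candidate answer a score, and training pushes the score of the answer preferred by human labellers above the score of the rejected one.

```python
# Simplified illustration of the reward-model idea used in RLHF:
# answers preferred by humans should receive higher scores than rejected ones,
# which is encouraged by a pairwise ranking loss.
import math

def pairwise_ranking_loss(score_preferred, score_rejected):
    """-log(sigmoid(r_preferred - r_rejected)): small when the preferred
    answer already scores clearly higher than the rejected one."""
    diff = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Example: the reward model rates two candidate answers to the same prompt.
print(pairwise_ranking_loss(2.0, -1.0))  # ~0.05 -> ranking already correct
print(pairwise_ranking_loss(-1.0, 2.0))  # ~3.05 -> ranking wrong, high loss
```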

Will you ever get to a point where ChatGPT's answers are guaranteed to be correct?

Of course, there are never any guarantees, even with human statements; even a mathematical proof is not necessarily free of errors. But approaches for developing a reliability measure are being worked on.

Do you think that the artificial intelligence can be actively "attacked" with many false inputs? Or could different artificial intelligences even interfere with each other?

The models in the background have so far been static and thus cannot be changed from the outside.

Are you seeing steps by others (besides OpenAI) to incorporate reasoning (to support causality in statements)? That is, besides transformers, reinforcement learning etc. as techniques?

Integrating causality into the responses of machine learning models is a research area of its own. There are various papers on this on Google Scholar.

How can the risks be mitigated?

A difficult question. Above all, the "human feedback" during the training phase is meant to help avoid problematic answers. The FZI, for example, has launched a project dedicated to detecting fake news with the help of AI: https://www.fzi.de/project/defakts/

How can you limit the spread of fake news?

By independently checking the output of ChatGPT and not (accidentally) spreading false information. That's where I see a great deal of personal responsibility above all. In addition, the FZI, for example, has launched a project dedicated to the task of detecting fake news with the help of AI: https://www.fzi.de/project/defakts/


You can find more exciting information about chatbots and where you can use them in your company in our blog:

Chatbots: Explained compactly

Training and training data

Could you give more details on the training of the reward model? How much human data are we talking about as input?

The information regarding the human feedback and a hint on the selection of users can be found in the following paper (ChatGPT is a "twin" of InstructGPT): https://arxiv.org/pdf/2203.02155.pdf.

Is it known which people give the feedback? I ask because of the diversity. There are different perspectives on many questions. Simple example: What is the most delicious food?

The information regarding the human feedback and a hint on the selection of users can be found in the following paper (ChatGPT is a "twin" of InstructGPT): https://arxiv.org/pdf/2203.02155.pdf.

Are there strategies to correct the bias of the models if this cannot be done with the data itself?

One approach to reducing bias, toxic content etc. is human feedback during the training phase (reinforcement learning from human feedback, RLHF).

Use of ChatGPT in the company

Can I use the texts created with ChatGPT commercially?

The copyright question has not yet been clarified. But for reasons of authenticity of your own content, we would recommend using texts from ChatGPT as a starting point and then giving them a personal touch with your own thoughts and words.

Is ChatGPT free of charge?

Yes, there is a free version of ChatGPT, for which you only need to create a profile with OpenAI. However, it is limited in its availability and accordingly also in its stability and reliability. If the servers are overloaded due to high demand, you will not be able to access ChatGPT.
The premium version ChatGPT Plus is therefore more reliable and currently costs $20 per month. With this version, you always get access to the servers, faster response times and advance access to new functions.

What are the possibilities to use the ChatGPT API and under which restrictions?

So far, the API can be used free of charge. The only restrictions are prior registration and, in the free version, occasional server problems.

What could be concrete use cases for companies in customer service? Is it possible to fine-tune ChatGPT with your own content?

Yes, this is possible. The Azure OpenAI Service already advertises fine-tuning options.
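The exact workflow depends on the provider. As a hedged sketch: OpenAI's fine-tuning tooling at the time of writing expects training examples as JSONL prompt/completion pairs, which can be prepared with a few lines of Python. The file name and the customer-service examples below are purely illustrative.

```python
# Sketch: preparing fine-tuning data in the JSONL prompt/completion format
# expected by OpenAI's fine-tuning tooling at the time of writing.
# The example contents and the file name are purely illustrative.
import json

examples = [
    {"prompt": "Customer: Where can I see the status of my order?\nAgent:",
     "completion": " You can track your order in your customer account under 'Orders'."},
    {"prompt": "Customer: How do I reset my password?\nAgent:",
     "completion": " Click on 'Forgot password' on the login page and follow the link in the e-mail."},
]

with open("customer_service_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# The resulting file would then be uploaded and a fine-tuning job started via
# the provider's tooling (e.g. the OpenAI CLI or the Azure OpenAI Service).
```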

When will ChatGPT be available on Azure?

"Azure OpenAI Service" is already available.

Is there or will there be an in-house version for companies on their own server?

As of now, ChatGPT can be obtained via Azure OpenAI; whether there will also be an "on-premise" version is (as far as I know) not yet known.

Generating data sets, training, scripting: there is quite a lot of effort behind artificial intelligence.
Is it possible to create a company-owned, self-sufficient, self-learning model?

It is definitely possible to train your own language models, which are then also focused on the individual subject area. Please feel free to contact us if you have any questions.

Copyright and references

How is copyright ensured?

It cannot be ensured. That is why the copyright question is also being disputed for other generative models, such as Generative Adversarial Networks (GANs for short) or Stable Diffusion, and there are already first court cases: https://netzpolitik.org/2023/sammelklagestable-diffusion-bild-generator-rechtsstreit/

What is the copyright or legal basis for the resulting text? E.g. code, posts, etc.?

The copyright question has not yet been clarified. But for reasons of authenticity of your own content, we would recommend using texts from ChatGPT as a starting point and then giving them a personal touch with your own thoughts and words.

Do you support the mandatory integration of source references or author attribution for each text or the lines of text output?

From our point of view, it would be desirable. However, the technical feasibility is a complex task, as an enormous amount of memory would have to be available at runtime to manage all these references.

Sustainability and competition

Is there actually an estimate of how much COâ‚‚ is emitted per request with such large language models?

I do not have any information about an individual request. But here is an interesting paper on AI & COâ‚‚ emissions:
https://arxiv.org/ftp/arxiv/papers/2104/2104.10350.pdf

Microsoft has probably already invested more than 10 billion in ChatGPT. Microsoft is competing with companies like Alphabet and Meta. Is there any information about their "counter-offensive" yet?

Google has already launched "LaMDA". A (former) Google employee has even suggested that it has a consciousness. Meta is also already working on large language models (e.g. Galactica).
However, both companies are somewhat more cautious about "public" use, as these models (just like ChatGPT) sometimes make false statements, which could damage their image. Galactica was also withdrawn from public use relatively quickly due to false texts.

Are there alternatives to ChatGPT on the market?

Yes, numerous providers have specialised chatbots on offer; Jasper Chat, Bing Chat, YouChat, NeevaAI, Google LaMDA, Google Bard and Sparrow from DeepMind are just a few examples.

In our Data & AI Glossary entry on ChatGPT you will find a further informative overview of the functions and access options.

Author

Patrick

Pat has been responsible for Web Analysis & Web Publishing at Alexander Thamm GmbH since the end of 2021 and oversees a large part of our online presence. In doing so, he fights his way through every Google or WordPress update and is happy to give the team tips on how to make their articles or websites even easier for readers as well as search engines to understand.
