Welcome to our ChatGPT FAQ section! This page contains the most frequently asked questions about the language model and its use that have come up in our ChatGPT webinars. We hope you will find this page helpful and informative in answering your questions about ChatGPT.
For more in-depth information, we recommend viewing the recording of one of our previous webinars on ChatGPT. If you want to learn more about ChatGPT and stay up to date, we also offer staff training and consulting sessions.
General functioning
Correct, ChatGPT predicts only the words (and accordingly answers) that are most likely in the given context. It does not draw on fixed knowledge or basic understanding of the content.
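This next-word prediction principle can be illustrated with a toy example. The following is a minimal bigram sketch, not ChatGPT's actual architecture: it simply counts which word most often follows which, and "predicts" accordingly, with no understanding of content.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a simple bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word - no understanding involved."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```

Real language models work with vastly more context and learned parameters, but the underlying principle is the same: pick the most likely continuation.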
By registering free of charge on the OpenAI website chat.openai.com, or via the programming interface (API) that OpenAI offers.
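For the API route, a request to OpenAI's Chat Completions endpoint essentially consists of a model name and a list of messages. The sketch below only builds the request body; actually sending it requires an API key from your OpenAI account.

```python
import json

# Request body for the OpenAI Chat Completions endpoint
# (POST https://api.openai.com/v1/chat/completions, authenticated
# with a Bearer API key from your OpenAI account).
def build_chat_request(user_message, model="gpt-3.5-turbo"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("What is ChatGPT?")
print(json.dumps(payload, indent=2))
```

The `messages` list can also carry the previous turns of a conversation, which is how the dialogue context is passed along with each request.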
Yes, in its handling, ChatGPT is a piece of software that exchanges information in the form of conversations. The dialogue format makes it possible, as with a chatbot, to ask questions and to follow up on the answers.
ChatGPT can understand requests and generate content in German. The model can also be operated in other common languages (e.g. Spanish, Italian). However, due to the lower proportion of foreign-language training data, the quality may be somewhat lower compared to English. For more complex questions and tasks, English is therefore generally recommended as the input language.
GPT-4 is available via the paid premium version of the service, ChatGPT Plus. The free version is still based on GPT-3.5.
ChatGPT can refer back to questions and statements made earlier in the same conversation and thus has a kind of "memory".
I have not found an official statement on this, but I suspect it is connected with the collection of the training data and the training itself. This process takes some time, which is where the difference comes from.
The model has more contextual knowledge in the second answer by explaining the output. More context (also in the input) usually improves the quality of the answers.
The output of the model has been nudged or "biased" in another direction. The model itself is not affected.
Only within the session. But we don't know what OpenAI will do with our input later. Some see the current phase as a big beta test, for which we users provide data for free.
For this, the "reward model" was trained to approximate the human feedback, i.e. to estimate how useful an answer would be for humans.
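A toy illustration of what a reward model does: it assigns each candidate answer a scalar "usefulness" score learned from human preference comparisons, and the best-scoring answer wins. The scoring function below is a hypothetical stand-in, not the actual learned model.

```python
# Toy sketch of a reward model: a function that scores candidate
# answers by estimated usefulness. In reality this is a trained
# neural network; here it is a hypothetical hand-written stand-in.
def toy_reward(answer: str) -> float:
    # Pretend human raters preferred longer answers that give a reason.
    return len(answer.split()) + (2.0 if "because" in answer else 0.0)

candidates = [
    "Yes.",
    "Yes, because the model was tuned with human feedback.",
]
best = max(candidates, key=toy_reward)
print(best)
```

During training, this score is then used as the reinforcement-learning signal that nudges the language model toward answers humans rate as useful.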
Of course, there are never any guarantees. Even with human statements. Even a mathematical proof is not necessarily correct. But we are working on approaches to develop a reliability measure.
The models in the background have so far been static and thus cannot be changed from the outside.
Integrating causality into the responses of machine-learning models is a research area of its own. There are various papers on this on Google Scholar.
A difficult question. In particular, the human feedback during the training phase is intended to help avoid problematic answers. The FZI, for example, has launched a project dedicated to the task of detecting fake news with the help of AI: https://www.fzi.de/project/defakts/
By independently checking the output of ChatGPT and not (accidentally) spreading false information. That's where I see a great deal of personal responsibility above all. In addition, the FZI, for example, has launched a project dedicated to the task of detecting fake news with the help of AI: https://www.fzi.de/project/defakts/
You can find more exciting information about chatbots and where you can use them in your company in our blog:
Training and training data
The information regarding the human feedback and a hint on the selection of users can be found in the following paper (ChatGPT is a "twin" of InstructGPT): https://arxiv.org/pdf/2203.02155.pdf.
One approach to reducing bias, toxic content, etc. uses human feedback (reinforcement learning from human feedback) during the training phase.
Use of ChatGPT in the company
The copyright question has not yet been clarified. But for reasons of authenticity of your own content, we would recommend using texts from ChatGPT as a starting point and then giving it a personal touch with your own thoughts and words.
Yes, there is a free version of ChatGPT, for which only a profile at OpenAI must be created. However, its availability is limited, and with it its reliability. If the servers are overloaded due to high demand, you will not be able to access ChatGPT.
The premium version ChatGPT Plus is therefore more reliable and currently costs $20 per month. With this version, you always get access to the servers, faster response times and advance access to new functions.
So far, the API can be used free of charge. The only restrictions are prior registration and, in the free version, occasional server problems.
Yes, this is possible. Azure OpenAI Service already advertises fine-tuning possibilities.
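For such fine-tuning, training examples are typically supplied as JSONL (one JSON object per line). The sketch below uses hypothetical company Q&A pairs in the prompt/completion style; the exact field names depend on the service and API version you use.

```python
import json

# Hypothetical company Q&A pairs converted into JSONL training data
# for fine-tuning (one JSON object per line, prompt/completion style;
# field names depend on the service version).
examples = [
    {"prompt": "What are our support hours?", "completion": " Mon-Fri, 9am-5pm."},
    {"prompt": "Where is our headquarters?", "completion": " Karlsruhe, Germany."},
]
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

This file would then be uploaded to the fine-tuning service, which adapts the base model to the company's own subject area.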
"Azure OpenAI Service" is already available.
As of now, ChatGPT can be obtained via Azure OpenAI; whether there will also be an "on-premise" version is (as far as I know) not yet known.
Is it possible to create a company-owned, self-sufficient, self-learning model?
It is definitely possible to train your own language models, which are then also focused on the individual subject area. Please feel free to contact us if you have any questions.
Copyright and references
It cannot be ensured. The copyright issue is therefore also being debated for other generative models (Generative Adversarial Networks, GANs for short) such as Stable Diffusion, and there are already first court cases: https://netzpolitik.org/2023/sammelklagestable-diffusion-bild-generator-rechtsstreit/.
From our point of view, it would be desirable. However, the technical feasibility is a complex task, as an enormous amount of memory would have to be available at runtime to manage all these references.
Sustainability and competition
I do not have any figures for an individual request. But here is an interesting paper on AI & CO₂ emissions:
https://arxiv.org/ftp/arxiv/papers/2104/2104.10350.pdf
Google has already launched "LaMDA". A (former) Google employee has even suggested that it has a consciousness. Meta is also already working on large language models (e.g. Galactica).
However, both companies are somewhat more cautious about public use, as these models (just like ChatGPT) sometimes make false statements, which could damage their image. Galactica was withdrawn from public use relatively quickly due to false texts.
Yes, numerous providers have specialised chatbots in their offering; Jasper Chat, Bing Chat, YouChat, NeevaAI, Google Bard (based on LaMDA) and Sparrow from DeepMind are just a few examples.
In our Data & AI Glossary entry on ChatGPT you will find a further informative overview of the functions and access options.