GENERATIVE AI SYSTEM CARD

AI systems that generate text

Last updated Sep 27, 2023

One type of artificial intelligence (AI) system is known as generative AI because people can use it to create something new, like images and text, based on descriptions or questions provided as prompts. The information here focuses on the AI systems that Meta may use to generate text, including those you may use to interact with AIs on Meta’s technologies. Learn more about interacting with AIs on Messenger, Instagram and WhatsApp and with an AI’s content on Facebook.

How it works

Meta’s AI systems that generate text rely on large language models (also called LLMs). LLMs learn language patterns from large amounts of text using a combination of machine learning and the guidance of people who help train the models. LLMs can perform a variety of language-based tasks such as completing a sentence or responding to questions in a conversational way.
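
As a very loose illustration of what “learning language patterns from text” means, the hypothetical Python sketch below simply counts which word tends to follow which in a tiny sample of text. This is not how Meta’s models are built; real LLMs use neural networks trained on vastly more data, but the underlying idea of learning statistical patterns from text is similar.

```python
from collections import Counter, defaultdict

# A tiny sample of "training" text; real models learn from vastly larger
# collections of text, with additional guidance from people.
sample_text = "dogs are loyal . dogs are friendly . cats are independent ."

# Record which word follows which: the simplest possible language pattern.
following_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    following_counts[current_word][next_word] += 1

# After reading the sample, the toy model "knows" what tends to follow "are".
print(following_counts["are"].most_common())
# [('loyal', 1), ('friendly', 1), ('independent', 1)]
```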

There are multiple steps involved in this process, which are described below.

1. Prompt entry

The first step is for you to enter a prompt, which consists of words that describe the topic you are interested in. The prompt can be a question, a statement or any text that you want to communicate to the AI system.

2. Safety mechanisms

Safety mechanisms analyze the prompt to detect harmful, offensive or inappropriate words that could produce problematic responses.
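
As a deliberately simplified, hypothetical sketch (real safety mechanisms rely on trained classifiers and policies, not a fixed word list), a prompt check could look roughly like this:

```python
# Placeholder list; production safety systems use trained classifiers,
# not a hand-written set of terms.
BLOCKED_TERMS = {"some_blocked_term", "another_blocked_term"}

def prompt_passes_safety_check(prompt):
    """Return False if the prompt contains any term on the blocked list."""
    return not any(word in BLOCKED_TERMS for word in prompt.lower().split())

prompt = "What are the best breeds of dogs for families?"
if prompt_passes_safety_check(prompt):
    print("Prompt accepted; passing it to the LLM.")
else:
    print("Prompt flagged; the system may decline or adjust its response.")
```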

3. LLM response generation

Next, your prompt is passed to the LLM for response generation. The language model processes the prompt and generates a sequence of words representing the response. During this step, the LLM uses the knowledge it gained during training, where it learned patterns and language from a vast amount of data, to generate a coherent and relevant response.

How does the LLM generate a response?

  • LLMs predict the word that is most likely to come next in a given sequence of words.

  • Typically, the second word of the LLM’s response is generated by analyzing the prompt along with the predicted first word of the response. The LLM then analyzes the new sequence to predict the next word. This process is repeated until the complete response is formed.

  • The final response may vary even if the same prompt is used. This may be because the LLM intentionally introduces some randomness when choosing each word, or because of the response processing step described below.

Note that some words in the prompt are more important for response generation than others. To illustrate how LLMs build a response word by word, see the simplified sketch below.
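
To make this word-by-word process concrete, here is a minimal, self-contained Python sketch. The tiny word table and the predict_next_word function are invented for illustration; a real LLM predicts the next word over an entire vocabulary using a neural network, and its randomness is controlled far more carefully.

```python
import random

# Invented toy "model": for each word, some possible next words and their
# probabilities. A real LLM learns such probabilities over a huge vocabulary.
TOY_MODEL = {
    "dogs": [("are", 0.7), ("bark", 0.3)],
    "are": [("friendly", 0.6), ("loyal", 0.4)],
    "friendly": [("pets", 1.0)],
    "loyal": [("companions", 1.0)],
    "bark": [("loudly", 1.0)],
}

def predict_next_word(sequence):
    """Pick a likely next word given the sequence so far (toy version)."""
    options = TOY_MODEL.get(sequence[-1])
    if not options:
        return None  # nothing to predict: stop generating
    words, weights = zip(*options)
    # Sampling, rather than always taking the single most likely word, is one
    # reason the same prompt can produce different responses.
    return random.choices(words, weights=weights)[0]

def generate_response(prompt, max_words=5):
    sequence = prompt.lower().split()
    response = []
    for _ in range(max_words):
        next_word = predict_next_word(sequence)
        if next_word is None:
            break
        response.append(next_word)
        sequence.append(next_word)  # the new word becomes part of the context
    return " ".join(response)

print(generate_response("tell me about dogs"))
# Possible outputs: "are friendly pets", "are loyal companions", "bark loudly"
```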

4. Response processing

The responses that an LLM generates might undergo processing for refinement and enhancement. For example, the system might select the most relevant and appropriate of several candidate responses to improve quality. It also might apply additional safety measures to help prevent the generation of harmful or offensive responses.
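
As an illustration only (the relevance score and blocked list below are placeholders; real systems use trained models to judge quality and safety), choosing among candidate responses might look roughly like this:

```python
# Placeholder blocked list; a real safety filter uses trained classifiers.
BLOCKED_TERMS = {"some_blocked_term"}

def tokenize(text):
    # Lowercase and strip basic punctuation so "dogs." matches "dogs".
    return [word.strip(".,!?").lower() for word in text.split()]

def is_safe(response):
    return not any(word in BLOCKED_TERMS for word in tokenize(response))

def relevance_score(prompt, response):
    # Toy heuristic: count how many prompt words also appear in the response.
    prompt_words = set(tokenize(prompt))
    return sum(1 for word in tokenize(response) if word in prompt_words)

def pick_best_response(prompt, candidates):
    safe_candidates = [c for c in candidates if is_safe(c)]
    if not safe_candidates:
        return "Sorry, I can't help with that request."
    return max(safe_candidates, key=lambda c: relevance_score(prompt, c))

candidates = [
    "Labradors and golden retrievers are popular family dogs.",
    "Dogs are animals.",
]
print(pick_best_response("What are popular family dogs?", candidates))
# Prints the first candidate, which overlaps most with the prompt.
```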

5. Response delivery

Finally, the LLM’s response is returned to you.

Usage tips

You have multiple options to influence the responses you receive. Here are some tips:

Create your prompt thoughtfully

Your prompt is the most important control you have over the response you will receive from the system. Try different prompts to see how they change the system’s response.

Use clear and specific prompts

For example, asking “What are the best breeds of dogs for families?” will yield better results than asking “What is the best dog?”

Change your AI chat history

The AI system will use your recent conversation history to determine the meaning of your current prompt and what response to give. To alter the responses provided, consider deleting the AI’s copy of your chat history to make a fresh start. Typing /reset-ai will delete a single AI’s copy of your chat history, whereas typing /reset-all-ais will delete the copies held by all AIs.

Refine the AI system’s responses

If the AI system’s response is not ideal, provide instructions in one or more steps for how you want the response to change. For example, “Make that shorter and use a friendlier tone,” followed by, “Make it even shorter.”

Data usage

A large amount of data is required to train effective generative AI models, so multiple sources are used for training. These sources include information that is publicly available online and licensed information, as well as information from Meta’s products and services. More details on how we use information from Meta’s products and services are available in our Privacy Policy.

When we collect public information from the internet or license data from other providers to train our models, it may include personal information. For example, a public blog post may include the author’s name and contact information. When we do get personal information as part of the public and licensed data that we use to train our models, we don’t specifically link it to any Meta account. To learn more, visit the Privacy Center.

What to be aware of

Meta’s generative AI technology is still advancing, and there are important limits to how LLMs work that you should understand. For example, LLMs may produce responses that are not relevant, accurate or appropriate. Some of the reasons for this are:

  • LLMs are language models capable of generating human-like text through predictions based on patterns they learned during development. However, they lack the ability to verify the accuracy or reliability of the responses they produce. Carefully review responses for accuracy. Remember that AIs aren’t human, even though they may respond in ways that seem like real people.

  • LLMs may generate responses that include fabricated or entirely fictional information. In other words, the language model "hallucinates" content that does not originate from the data used to train it. Some examples of this may include creating fictional events, people, places or sources of information; providing details or explanations that are not based on facts; or claiming to be a real person.

  • LLMs may produce responses that are offensive due to limitations of the data on which they were trained, as well as the “hallucinations” mentioned above. If you see anything that concerns you, provide feedback in the app you’re using.

  • The responses an LLM generates may not be up-to-date. Some Meta products may integrate tools such as search engines to deliver the most up-to-date information. Learn more in the Privacy Center.

  • Our LLMs were trained primarily on data in English, so performance may vary when using other languages to interact with our generative AI features.