Testing Chatbot (RAG)
Target
After successfully ingesting data into the Vector Store, it is time to verify the results. In this section, you will act as an end-user, asking the Chatbot questions directly within the AWS Console interface to observe how the RAG system operates.
We will focus on two factors:
- Accuracy: Does the AI answer correctly based on the documents?
- Transparency: Can the AI cite the source (Citation) of the information?
Implementation Steps
Step 1: Configure the test window
To start chatting, we need to select a Foundation Model that will act as the “responder”.
In your Knowledge Base details interface, look at the right panel titled Test knowledge base.

Click the Select model button.

- In the selection panel that appears:
  - Category: Select Anthropic.
  - Model: Select Claude 3 Sonnet (or Claude 3.5 Sonnet / Haiku, depending on which models you have enabled).
  - Throughput: Keep On-demand.
- Click Apply.
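The model selection above maps directly onto the configuration payload the Bedrock API expects. A minimal sketch, assuming Claude 3 Sonnet; the Knowledge Base ID below is a placeholder you must replace with your own:

```python
# The console's "Select model" choice, expressed as a Bedrock
# RetrieveAndGenerate configuration payload.
KB_ID = "XXXXXXXXXX"  # placeholder: your Knowledge Base ID
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

config = {
    "type": "KNOWLEDGE_BASE",
    "knowledgeBaseConfiguration": {
        "knowledgeBaseId": KB_ID,
        "modelArn": MODEL_ARN,  # the model chosen in the console
    },
}
```

Switching to Claude 3.5 Sonnet or Haiku only changes the model ID in the ARN; the rest of the payload stays the same.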

Step 2: Conduct conversation (Chat)
Now, try asking a question related to the document content you uploaded.
- In the input box (Message input), type your question.
- Example: If you uploaded the “AWS Overview” document, ask: “Can you explain to me what EC2 is?”.
- Click Run.
- Observe the result:
- The AI will think for a few seconds (querying the Vector Store).
- Then, it will answer in natural language, summarizing the found information.
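The same chat interaction can be driven from code via the boto3 `retrieve_and_generate` API. A minimal sketch; the Knowledge Base ID and model ARN are placeholders, and the call requires AWS credentials with Bedrock access:

```python
def ask_knowledge_base(question, kb_id, model_arn, region="us-east-1"):
    """Send a question through the RAG pipeline and return the answer text."""
    import boto3  # imported here so the sketch loads even without boto3 installed

    client = boto3.client("bedrock-agent-runtime", region_name=region)
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    )
    return response["output"]["text"]


if __name__ == "__main__":
    answer = ask_knowledge_base(
        "Can you explain to me what EC2 is?",
        kb_id="XXXXXXXXXX",  # placeholder
        model_arn=(
            "arn:aws:bedrock:us-east-1::foundation-model/"
            "anthropic.claude-3-sonnet-20240229-v1:0"
        ),
    )
    print(answer)
```

Unlike the console's test panel, this call gives you the response programmatically, which is useful for automated regression tests of your Knowledge Base.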

Step 3: Verify data source
This is the most important feature of RAG, and what distinguishes it from a general-purpose chatbot like ChatGPT: the ability to prove where its information came from.
- In the AI’s response, pay attention to the small numbers (footnotes) or the text Show source details.
- Click on those numbers or the details button.
- A Source details window will appear, displaying:
- Source chunk: The exact original text segment that the AI found in the document.
- Score: Similarity score (relevance).
- S3 Location: Path to the original file.

Seeing this original text segment proves that the AI is not “hallucinating” but is actually reading your documents.
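The same three fields shown in the Source details window (chunk, score, S3 location) can be fetched directly with the `retrieve` API. A sketch with an illustrative, hypothetical sample response; the field names follow boto3's `bedrock-agent-runtime` Retrieve response shape:

```python
def summarize_sources(retrieval_results):
    """Extract chunk text, similarity score, and S3 path from Retrieve results."""
    return [
        {
            "chunk": r["content"]["text"],
            "score": r["score"],
            "s3_uri": r["location"]["s3Location"]["uri"],
        }
        for r in retrieval_results
    ]


# Hypothetical sample mimicking a Retrieve response (values are made up):
sample = [
    {
        "content": {"text": "Amazon EC2 provides resizable compute capacity..."},
        "score": 0.87,
        "location": {
            "type": "S3",
            "s3Location": {"uri": "s3://my-kb-bucket/aws-overview.pdf"},
        },
    }
]

for src in summarize_sources(sample):
    print(f"{src['score']:.2f}  {src['s3_uri']}")
```

Logging these fields alongside each answer is a simple way to audit whether the chatbot is grounding its responses in the expected documents.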
Step 4: Test with irrelevant questions (Optional)
This step shows how the system reacts when relevant information cannot be found.
- Ask a question completely unrelated to the documents.
- Example: “Can you explain some knowledge about personal finance?” (While your documents are about Cloud Computing).
- Expected Result:
- The AI might answer based on its general knowledge (if not restricted).
- Or the AI will answer, “Sorry, I am unable to answer your question based on the retrieved data.” This is the ideal behavior for an enterprise RAG application.
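If your application calls the retrieval API itself, one way to enforce the "refuse when nothing relevant is found" behavior on the application side is a similarity-score threshold. A sketch; the threshold value, fallback message, and sample data are illustrative choices, not Bedrock defaults:

```python
FALLBACK = "Sorry, I am unable to answer your question based on the retrieved data."


def answer_or_refuse(retrieval_results, generate_fn, min_score=0.5):
    """Refuse to answer if no retrieved chunk clears the relevance threshold."""
    relevant = [r for r in retrieval_results if r["score"] >= min_score]
    if not relevant:
        return FALLBACK
    return generate_fn(relevant)


# Irrelevant question: every chunk scores low, so the app refuses.
low_scores = [{"score": 0.12, "content": {"text": "..."}}]
print(answer_or_refuse(low_scores, generate_fn=lambda chunks: "generated answer"))

# On-topic question: a relevant chunk clears the threshold, so generation proceeds.
good = [{"score": 0.81, "content": {"text": "EC2 is a compute service..."}}]
print(answer_or_refuse(good, generate_fn=lambda chunks: "generated answer"))
```

The right threshold depends on your embedding model and data, so it should be tuned empirically against questions you know are in and out of scope.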
