Before the AI can answer, data must be ingested into the Vector Store. We will perform a “Before and After” check to see clearly how Bedrock automatically encodes and stores data in OpenSearch.
Step 1: Check Vector Store (Empty State)
We will directly access Amazon OpenSearch Serverless to confirm that no data exists yet.
In the AWS Console search bar, type Amazon OpenSearch Service and select Amazon OpenSearch Service.

In the left menu, under Serverless, select Collections.

Click on the Collection name newly created by Bedrock (usually named like bedrock-knowledge-data...).

On the Collection details page, click the Open Dashboard button (located at the top right of the screen).

Click the Menu (3 horizontal lines) icon in the top left corner.

Select Dev Tools (usually located at the bottom of the menu list).

In the Console pane (on the left), enter the following command to check data:
GET _search
{
  "query": {
    "match_all": {}
  }
}

Click the Play (Run) button (small triangle next to the command line).
Result: In the right pane, hits -> total -> value is 0, confirming the index contains no documents yet.
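The Dev Tools command above sends a `match_all` query and reads `hits -> total -> value` from the response. As a minimal sketch of that check in Python (the response shown is a trimmed sample, not live output):

```python
import json

def total_hits(search_response: dict) -> int:
    """Extract hits -> total -> value from an OpenSearch search response."""
    return search_response["hits"]["total"]["value"]

# The body sent by the Dev Tools command above:
match_all_query = {"query": {"match_all": {}}}

# Trimmed sample response for an empty index:
empty_response = {"hits": {"total": {"value": 0, "relation": "eq"}, "hits": []}}

print(json.dumps(match_all_query))
print(total_hits(empty_response))  # 0 means no documents ingested yet
```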

Step 2: Sync Data
Now we will trigger Bedrock to read the files from S3, chunk and embed them, and load them into OpenSearch.
In the AWS Console, go to Amazon Bedrock and open the Knowledge base you created.

In the Data source section, select the data source created earlier (e.g., s3-datasource).

Click Sync and wait for the status to change from Syncing to Available.
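The same sync can be triggered programmatically with the Bedrock Agent API (`StartIngestionJob` / `GetIngestionJob`). Below is a minimal polling sketch; the boto3 calls in the comment are not executed here, and the knowledge base and data source IDs are assumptions you would fill in from your own setup:

```python
import time

def wait_for_sync(get_status, poll_seconds=10, max_polls=60):
    """Poll a status callable until the ingestion (sync) job finishes.

    get_status: zero-argument callable returning the current job status,
    e.g. "STARTING", "IN_PROGRESS", "COMPLETE", or "FAILED".
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "COMPLETE":
            return status
        if status == "FAILED":
            raise RuntimeError("ingestion job failed")
        time.sleep(poll_seconds)
    raise TimeoutError("sync did not finish in time")

# With boto3 (hypothetical IDs; requires AWS credentials, so not run here),
# get_status would wrap something like:
#   client = boto3.client("bedrock-agent")
#   job = client.start_ingestion_job(knowledgeBaseId=KB_ID, dataSourceId=DS_ID)
#   client.get_ingestion_job(
#       knowledgeBaseId=KB_ID, dataSourceId=DS_ID,
#       ingestionJobId=job["ingestionJob"]["ingestionJobId"],
#   )["ingestionJob"]["status"]
```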
Step 3: Re-check Vector Store (Populated)
After Bedrock reports that the sync is complete, we return to the OpenSearch Dashboards Dev Tools console to verify the data has been successfully ingested. Run the same query again:
GET _search
{
  "query": {
    "match_all": {}
  }
}
Result: hits -> total -> value will now be greater than 0 (e.g., 10, 20… depending on the number of text chunks). You can also inspect the content of each chunk in the _source field of every hit.
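The populated response can be inspected the same way: read the total and pull the chunk text out of `_source`. A sketch follows, using a trimmed sample response; the field name `AMAZON_BEDROCK_TEXT_CHUNK` is the one Bedrock typically writes for chunk text, but treat it as an assumption and confirm it against your own `_source`:

```python
def summarize_hits(search_response: dict,
                   text_field: str = "AMAZON_BEDROCK_TEXT_CHUNK"):
    """Return (total hit count, list of chunk texts) from a search response."""
    total = search_response["hits"]["total"]["value"]
    texts = [hit["_source"].get(text_field, "")
             for hit in search_response["hits"]["hits"]]
    return total, texts

# Trimmed sample of a populated response (illustrative values):
populated = {
    "hits": {
        "total": {"value": 12, "relation": "eq"},
        "hits": [
            {"_source": {"AMAZON_BEDROCK_TEXT_CHUNK": "First chunk of the S3 document..."}},
        ],
    }
}

total, texts = summarize_hits(populated)
print(total)     # 12
print(texts[0])  # First chunk of the S3 document...
```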
Congratulations! You have completed building the “brain” for the AI. The data has been encoded and sits safely in the Vector Database, ready for retrieval.