We will use the Amazon Bedrock console wizard to set up the entire RAG architecture. This process connects the S3 data source and the embeddings model, and automatically initializes the vector store (Amazon OpenSearch Serverless).


Step 1: Configure Knowledge Base
On the first configuration screen:
Name: Enter knowledge-base-demo.
Description: Describe the data you have previously uploaded to S3 (for example, "Knowledge Base from AWS Overview").
IAM permissions: Create and use a new service role (a role named AmazonBedrockExecutionRoleForKnowledgeBase_... is generated automatically).
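If you later want to script this step instead of using the console, the Step 1 fields map to parameters of the Bedrock Agent CreateKnowledgeBase API. The sketch below only assembles that payload as a plain dictionary; the account ID and role ARN are hypothetical placeholders (the real role name ends in a suffix generated by the wizard), and the parameter names assume the bedrock-agent API shape.

```python
# Sketch only: Step 1 console fields expressed as CreateKnowledgeBase
# parameters (bedrock-agent API). No AWS call is made here.
# The roleArn is a hypothetical placeholder -- substitute the service role
# the wizard creates in your own account.
step1_params = {
    "name": "knowledge-base-demo",
    "description": "Knowledge Base from AWS Overview",
    "roleArn": "arn:aws:iam::123456789012:role/AmazonBedrockExecutionRoleForKnowledgeBase_EXAMPLE",
}
```

You would pass these keys, together with the storage and model configuration from Step 3, to `boto3.client("bedrock-agent").create_knowledge_base(...)`.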

Step 2: Configure Data Source
Connect to the S3 Bucket containing the documents:
Data source name: Enter knowledge-base-demo.

S3 URI: Browse and select the bucket rag-workshop-demo you created in the previous section.

Keep the default configurations. Click Next.
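For reference, the Step 2 settings correspond to the dataSourceConfiguration argument of the bedrock-agent CreateDataSource API. This is a minimal sketch of that payload, assuming the rag-workshop-demo bucket from the previous section; no AWS call is made.

```python
# Sketch only: Step 2 console settings as a CreateDataSource payload
# (bedrock-agent API). The bucket ARN assumes the rag-workshop-demo
# bucket created earlier in the workshop.
data_source_config = {
    "name": "knowledge-base-demo",
    "dataSourceConfiguration": {
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::rag-workshop-demo"},
    },
}
```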

Step 3: Storage & Processing
This is the most critical step: it defines the embeddings model and the vector storage location.
Embeddings model:
Click Select model.

Choose Titan Embeddings G1 - Text v2.

Vector store:
Select Quick create a new vector store - Recommended, with Amazon OpenSearch Serverless as the vector store.
Click Next.
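The Step 3 choices map to the knowledgeBaseConfiguration and storageConfiguration arguments of CreateKnowledgeBase. The sketch below only builds those dictionaries; the model ARN assumes the Titan Text Embeddings V2 model ID in us-east-1, and the collection ARN, index name, and field mapping are hypothetical placeholders that the "Quick create" option normally provisions and fills in for you.

```python
# Sketch only: Step 3 choices as CreateKnowledgeBase arguments
# (bedrock-agent API). No AWS call is made here.
knowledge_base_configuration = {
    "type": "VECTOR",
    "vectorKnowledgeBaseConfiguration": {
        # Assumes the Titan Text Embeddings V2 model in us-east-1.
        "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0",
    },
}
storage_configuration = {
    "type": "OPENSEARCH_SERVERLESS",
    "opensearchServerlessConfiguration": {
        # Hypothetical placeholders: Quick create provisions the
        # collection and index and supplies these values automatically.
        "collectionArn": "arn:aws:aoss:us-east-1:123456789012:collection/EXAMPLE",
        "vectorIndexName": "bedrock-knowledge-base-default-index",
        "fieldMapping": {
            "vectorField": "bedrock-knowledge-base-default-vector",
            "textField": "AMAZON_BEDROCK_TEXT_CHUNK",
            "metadataField": "AMAZON_BEDROCK_METADATA",
        },
    },
}
```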

Step 4: Review and Create Knowledge Base
Review the settings from the previous steps, then click Create Knowledge Base.

Step 5: Wait for Initialization
After you click Create, the system begins provisioning the Vector Store infrastructure in the background; this can take a few minutes.
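If you are scripting the setup, you can poll the Knowledge Base status instead of watching the console. This is a minimal polling sketch: the status fetcher is injected as a callable so the helper stays self-contained, and the boto3 call shown in the docstring (bedrock-agent get_knowledge_base) is an assumption to verify against your SDK version.

```python
import time


def wait_until_active(fetch_status, poll_seconds=15, timeout_seconds=900):
    """Poll until the Knowledge Base reports ACTIVE, or fail.

    fetch_status is any zero-argument callable returning a status string.
    With boto3 you would pass something like (an assumption, verify the
    client and field names in your account):
        client = boto3.client("bedrock-agent")
        fetch = lambda: client.get_knowledge_base(
            knowledgeBaseId=kb_id)["knowledgeBase"]["status"]
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "ACTIVE":
            return status
        if status == "FAILED":
            raise RuntimeError("Knowledge Base creation failed")
        time.sleep(poll_seconds)
    raise TimeoutError("Knowledge Base did not become ACTIVE in time")
```

Once the status is ACTIVE, remember to run a sync on the data source so the documents are embedded and indexed.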
