Automate document processing with Amazon Bedrock Prompt Flows (preview)


Enterprises in industries like manufacturing, finance, and healthcare are inundated with a constant flow of documents—from financial reports and contracts to patient records and supply chain documents. Historically, processing and extracting insights from these unstructured data sources has been a manual, time-consuming, and error-prone task. However, the rise of intelligent document processing (IDP), which uses artificial intelligence and machine learning (AI/ML) to automate the extraction, classification, and analysis of data from various document types, is changing the game. For manufacturers, this means streamlining processes like purchase order management, invoice processing, and supply chain documentation. Financial services firms can accelerate workflows around loan applications, account openings, and regulatory reporting. And in healthcare, IDP revolutionizes patient onboarding, claims processing, and medical record keeping.

By integrating IDP into their operations, organizations across these key industries experience transformative benefits: increased efficiency and productivity through the reduction of manual data entry, improved accuracy and compliance by reducing human errors, enhanced customer experiences due to faster document processing, greater scalability to handle growing volumes of documents, and lower operational costs associated with document management.

This post demonstrates how to build an IDP pipeline for automatically extracting and processing data from documents using Amazon Bedrock Prompt Flows, a fully managed service that enables you to build generative AI workflows using Amazon Bedrock and other services in an intuitive visual builder. Amazon Bedrock Prompt Flows allows you to quickly update your pipelines as your business changes, scaling your document processing workflows to help meet evolving demands.

Solution overview

To be scalable and cost-effective, this solution uses serverless technologies and managed services. In addition to Amazon Bedrock Prompt Flows, the solution uses the following services:

  • Amazon Textract – Automatically extracts printed text, handwriting, and data from scanned documents.
  • Amazon Simple Storage Service (Amazon S3) – Object storage built to retrieve any amount of data from anywhere.
  • Amazon Simple Notification Service (Amazon SNS) – A highly available, durable, secure, and fully managed publish-subscribe (pub/sub) messaging service to decouple microservices, distributed systems, and serverless applications.
  • AWS Lambda – A compute service that runs code in response to triggers such as changes in data, changes in application state, or user actions. Because services such as Amazon S3 and Amazon SNS can directly trigger an AWS Lambda function, you can build a variety of real-time serverless data-processing systems.
  • Amazon DynamoDB – A serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale.

Solution architecture

The proposed solution includes the following steps:

  1. Users upload a PDF for analysis to Amazon S3.
  2. The Amazon S3 upload triggers an AWS Lambda function execution.
  3. The function invokes Amazon Textract to extract text from the PDF in batch mode.
  4. Amazon Textract sends an SNS notification when the job is complete.
  5. An AWS Lambda function reads the Amazon Textract response and calls an Amazon Bedrock prompt flow to classify the document.
  6. Results of the classification are stored in Amazon S3 and sent to a destination AWS Lambda function.
  7. The destination AWS Lambda function calls an Amazon Bedrock prompt flow to extract and analyze data based on the document class provided.
  8. Results of the extraction and analysis are stored in Amazon S3.

This workflow is shown in the following diagram.
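
To make steps 2 through 4 concrete, the following is a minimal sketch of a Lambda function that starts an asynchronous Amazon Textract job and asks Textract to publish the completion message to an SNS topic. The environment variable names are assumptions for illustration; the actual function in the repository may differ.

import os
import urllib.parse

import boto3

textract = boto3.client("textract")

def lambda_handler(event, context):
    # The S3 event notification carries the bucket and key of the uploaded PDF
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    # Start an asynchronous (batch) text detection job; Textract publishes a
    # message to the SNS topic when the job completes (step 4)
    response = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
        NotificationChannel={
            "SNSTopicArn": os.environ["TEXTRACT_SNS_TOPIC_ARN"],  # assumed variable name
            "RoleArn": os.environ["TEXTRACT_PUBLISH_ROLE_ARN"],   # assumed variable name
        },
    )
    return {"JobId": response["JobId"]}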

In the following sections, we dive deep into how to build your IDP pipeline with Amazon Bedrock Prompt Flows.

Prerequisites

To complete the activities described in this post, ensure that you complete the following prerequisites in your local environment:

Implementation time and cost estimation

  • Time to complete: ~60 minutes
  • Cost to run 1,000 pages: Under $25
  • Time to clean up: ~20 minutes
  • Learning level: Advanced (300)

Deploy the solution

To deploy the solution, follow these steps:

  1. Clone the GitHub repository
  2. Use the shell script to build and deploy the solution by running the following commands from your project root directory:
chmod +x deploy.sh
./deploy.sh

  3. This deploys the AWS CloudFormation template in your AWS account.

Test the solution

Once the template is deployed successfully, follow these steps to test the solution:

  1. On the AWS CloudFormation console, select the stack that was deployed
  2. Select the Resources tab
  3. Locate the resources labeled SourceS3Bucket and DestinationS3Bucket, as shown in the following screenshot. Select the link to open the SourceS3Bucket in a new tab

CloudFormation S3 Resources

  4. Select Upload and then Add folder
  5. Under sample_files, select the folder customer123, then choose Upload

Alternatively, you can upload the folder using the following AWS CLI command from the root of the project:

aws s3 sync ./sample_files/customer123 s3://[SourceS3Bucket_NAME]/customer123

After a few minutes the uploaded files will be processed. To view the results, follow these steps:

  1. Open the DestinationS3Bucket
  2. Under customer123, you should see a folder for each processing job. Download and review the files locally using the console or with the following AWS CLI command:
aws s3 sync s3://[DestinationS3Bucket_NAME]/customer123 ./result_files/customer123

Inside the folder for customer123 you will see several subfolders, as shown in the following diagram:

customer123
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── FOR_REVIEW
│       ├── pages_0.txt
│       └── report.txt
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── URLA_1003
│       ├── pages_0.json
│       ├── pages_0.txt
│       └── report.txt
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── BANK_STATEMENT
│       ├── pages_0.json
│       ├── pages_0.txt
│       └── report.txt
└── [Long Textract Job ID]
    ├── classify_response.txt
    ├── input_doc.txt
    └── DRIVERS_LICENSE
        ├── pages_0.json
        ├── pages_0.txt
        └── report.txt

How it works

After the document text is extracted, it is sent to a classify prompt flow along with a list of classes, as shown in the following screenshot:

Classify Flow
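
Behind the scenes, the Lambda function can invoke this flow through the Amazon Bedrock agent runtime InvokeFlow API. The following is a minimal sketch of that call; the flow and alias identifiers, the input field names, and the default FlowInputNode node name are assumptions for illustration and may differ from the deployed code.

import boto3

bedrock_flows = boto3.client("bedrock-agent-runtime")

def classify_document(doc_text, class_list, flow_id, flow_alias_id):
    # Send the extracted text and the candidate classes to the classify prompt flow
    response = bedrock_flows.invoke_flow(
        flowIdentifier=flow_id,
        flowAliasIdentifier=flow_alias_id,
        inputs=[
            {
                "nodeName": "FlowInputNode",
                "nodeOutputName": "document",
                "content": {"document": {"doc_text": doc_text, "class_list": class_list}},
            }
        ],
    )

    # The result arrives as an event stream; collect the flow output event
    result = None
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            result = event["flowOutputEvent"]["content"]["document"]
    return result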

The list of classes is generated in the AWS Lambda function by using the API to identify existing prompt flows that contain class definitions in their description. This approach allows us to expand the solution to new document types by adding a new prompt flow supporting the new document class, as shown in the following screenshot:

Prompt flows

For each document type, you can implement an extract and analyze flow that is appropriate to that document type. The following screenshot shows an example from the URLA_1003 flow. In this case, a prompt is used to convert the text to a standardized JSON format, and a second prompt then analyzes that JSON document to generate a report for the processing agent.

URLA Flow
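
For illustration only, the report-generation prompt in such a flow might resemble the following sketch; the {{json_doc}} variable name and the wording are hypothetical, not the exact prompt shipped with the solution.

Analyze the provided URLA 1003 data
<URLA>
{{json_doc}}
</URLA>

Write a short report for the processing agent summarizing the applicant, loan amount, and property details, and flag any missing or inconsistent fields for review.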

Expand the solution using Amazon Bedrock Prompt Flows

To adapt to new use cases without changing the underlying code, use Amazon Bedrock Prompt Flows as described in the following steps.

Create a new prompt

From the files you downloaded, look for a folder named FOR_REVIEW. This folder contains documents that were processed and did not fit into an existing class. Open report.txt and review the suggested document class and proposed JSON template.

  1. In the navigation pane in Amazon Bedrock, open Prompt management and select Create prompt, as shown in the following screenshot:

Create Prompt

  2. Name the new prompt IDP_PAYSTUB_JSON and then choose Create
  3. In the Prompt box, enter the following text. Replace [COPY YOUR JSON HERE] with the JSON template from your report.txt file
Analyze the provided paystub
<PAYSTUB>
{{doc_text}}
</PAYSTUB>

Provide a structured JSON object containing the following information:

[COPY YOUR JSON HERE]

The following screenshot demonstrates this step.

Prompt Builder

  4. Choose Select model and choose Anthropic Claude 3 Sonnet
  5. Save your changes by choosing Save draft
  6. To test your prompt, open the pages_[n].txt file in the FOR_REVIEW folder and copy its content into the doc_text input box. Choose Run, and the model should return a response

The following screenshot demonstrates this step.

Prompt test

  7. When you are satisfied with the results, choose Create Version. Note the version number because you will need it in the next section

Create a prompt flow

Now we will create a prompt flow using the prompt you created in the previous section.

  1. In the navigation menu, choose Prompt flows and then choose Create prompt flow, as shown in the following screenshot:

Create flow

  2. Name the new flow IDP_PAYSTUB
  3. Choose Create and use a new service role and then choose Save

Next, create the flow using the following steps. When you are done, the flow should resemble the following screenshot.

Paystub flow

  1. Configure the Flow input node:
    1. Choose the Flow input node and select the Configure tab
    2. Select Object as the Type. This means that flow invocation will expect to receive a JSON object.
  2. Add the S3 Retrieval node:
    1. In the Prompt flow builder navigation pane, select the Nodes tab
    2. Drag an S3 Retrieval node into your flow in the center pane
    3. In the Prompt flow builder pane, select the Configure tab
    4. Enter get_doc_text as the Node name
    5. Expand Inputs. Set the input expression for objectKey to $.data.doc_text_s3key
    6. Drag a connection from the output of the Flow input node to the objectKey input of this node
  3. Add the Prompt node:
    1. Drag a Prompt node into your flow in the center pane
    2. In the Prompt flow builder pane, select the Configure tab
    3. Enter map_to_json as the Node name
    4. Choose Use a prompt from your Prompt Management
    5. Select IDP_PAYSTUB_JSON from the dropdown
    6. Choose the version you noted previously
    7. Drag a connection from the output of the get_doc_text node to the doc_text input of this node
  4. Add the S3 Storage node:
    1. In the Prompt flow builder navigation pane, select the Nodes tab
    2. Drag an S3 Storage node into your flow in the center pane
    3. In the Prompt flow builder pane, select the Configure tab
    4. Enter save_json as the Node name
    5. Expand Inputs. Set the input expression for objectKey to $.data.JSON_s3key
    6. Drag a connection from the output of the Flow input node to the objectKey input of this node
    7. Drag a connection from the output of the map_to_json node to the content input of this node
  5. Configure the Flow output node:
    1. Drag a connection from the output of the save_json node to the input of this node
  6. Choose Save to save your flow. Your flow should now be prepared for testing
    1. To test your flow, in the Test prompt flow pane on the right, enter the following JSON object. Choose Run and the flow should return a model response
    2. When you are satisfied with the result, choose Save and exit
{
"doc_text_s3key": "[PATH TO YOUR TEXT FILE IN S3].txt",
"JSON_s3key": "[PATH TO YOUR TEXT FILE IN S3].json"
}

To get the path to your file, follow these steps:

  1. Navigate to FOR_REVIEW in S3 and choose the pages_[n].txt file
  2. Choose the Properties tab
  3. Copy the key path by selecting the copy icon to the left of the key value, as shown in the following screenshot. Be sure to replace .txt with .json in the second line of input as noted previously.

S3 object key

Publish a version and alias

  1. On the flow management screen, choose Publish version. A success banner appears at the top
  2. At the top of the screen, choose Create alias
  3. Enter latest for the Alias name
  4. Choose Use an existing version to associate this alias. From the dropdown menu, choose the version that you just published
  5. Select Create alias. A success banner appears at the top.
  6. Get the FlowId and AliasId to use in the step below
    1. Choose the Alias you just created
    2. From the ARN, copy the FlowId and AliasId

Prompt flow alias
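
The alias ARN has the form arn:aws:bedrock:[region]:[account]:flow/[FlowId]/alias/[AliasId], so both IDs can be read directly from it. A small sketch with placeholder IDs:

# Example alias ARN with placeholder account and IDs
alias_arn = "arn:aws:bedrock:us-east-1:111122223333:flow/FLOW1234ID/alias/ALIAS1234ID"

resource = alias_arn.split(":", 5)[5]          # "flow/FLOW1234ID/alias/ALIAS1234ID"
_, flow_id, _, alias_id = resource.split("/")
print(flow_id, alias_id)                       # FLOW1234ID ALIAS1234ID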

Add your new class to DynamoDB

  1. Open the AWS Management Console and navigate to the DynamoDB service.
  2. Select the table document-processing-bedrock-prompt-flows-IDP_CLASS_LIST
  3. Choose Actions then Create item
  4. Choose JSON view for entering the item data.
  5. Paste the following JSON into the editor:
{
    "class_name": {
        "S": "PAYSTUB"
    },
    "expected_inputs": {
        "S": "Should contain Gross Pay, Net Pay, Pay Date "
    },
    "flow_alias_id": {
        "S": "[Your flow Alias ID]"
    },
    "flow_id": {
        "S": "[Your flow ID]"
    },
    "flow_name": {
        "S": "[The name of your flow]"
    }
}

  6. Review the JSON to ensure all details are correct.
  7. Choose Create item to add the new class to your DynamoDB table.
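
If you prefer to add the item programmatically, a minimal equivalent with the AWS SDK for Python (boto3) might look like the following. Replace the placeholder IDs with your own values; the flow name IDP_PAYSTUB assumes the flow created earlier in this walkthrough.

import boto3

dynamodb = boto3.client("dynamodb")

# Register the new PAYSTUB class so the classify step can route to the new flow
dynamodb.put_item(
    TableName="document-processing-bedrock-prompt-flows-IDP_CLASS_LIST",
    Item={
        "class_name": {"S": "PAYSTUB"},
        "expected_inputs": {"S": "Should contain Gross Pay, Net Pay, Pay Date"},
        "flow_alias_id": {"S": "[Your flow Alias ID]"},
        "flow_id": {"S": "[Your flow ID]"},
        "flow_name": {"S": "IDP_PAYSTUB"},
    },
)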

Test by repeating the upload of the test file

Use the console to repeat the upload of the paystub.jpeg file from your customer123 folder into Amazon S3. Alternatively, you can enter the following command into the command line:

aws s3 cp ./sample_files/customer123/paystub.jpeg s3://[SourceS3Bucket_NAME]/customer123/

In a few minutes, check the report in the output location to see that you successfully added support for the new document type.

Clean up

Use these steps to delete the resources you created to avoid incurring charges on your AWS account:

  1. Empty the SourceS3Bucket and DestinationS3Bucket buckets, including all object versions (see the sketch after this list)
  2. Use the following shell script to delete the CloudFormation stack and test resources from your account:
chmod +x cleanup.sh
./cleanup.sh

  3. Return to the Expand the solution using Amazon Bedrock Prompt Flows section and follow these steps:
    1. In the Create a prompt flow section:
      1. Choose the IDP_PAYSTUB flow that you created and choose Delete
      2. Follow the instructions to permanently delete the flow
    2. In the Create a new prompt section:
      1. Choose the IDP_PAYSTUB_JSON prompt that you created and choose Delete
      2. Follow the instructions to permanently delete the prompt
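
For step 1 above, versioned buckets must have every object version and delete marker removed before the stack can delete them. A minimal sketch using boto3, with your actual bucket names substituted for the placeholders:

import boto3

s3 = boto3.resource("s3")

# Delete every object version and delete marker so the buckets can be removed
for bucket_name in ["[SourceS3Bucket_NAME]", "[DestinationS3Bucket_NAME]"]:
    s3.Bucket(bucket_name).object_versions.delete()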

Conclusion

This solution demonstrates how customers can use Amazon Bedrock Prompt Flows to deploy and expand a scalable, low-code IDP pipeline. By taking advantage of the flexibility of Amazon Bedrock Prompt Flows, organizations can rapidly implement and adapt their document processing workflows to help meet evolving business needs. The low-code nature of Amazon Bedrock Prompt Flows makes it possible for business users and developers alike to create, modify, and extend IDP pipelines without extensive programming knowledge. This significantly reduces the time and resources required to deploy new document processing capabilities or adjust existing ones.

By adopting this integrated IDP solution, businesses across industries can accelerate their digital transformation initiatives, improve operational efficiency, and enhance their ability to extract valuable insights from document-based processes, driving significant competitive advantages.

Review your current manual document processing processes and identify where Amazon Bedrock Prompt Flows can help you automate these workflows for your business.

For further exploration and learning, we recommend checking out the following resources:


About the Authors

Erik Cordsen is a Solutions Architect at AWS serving customers in Georgia. He is passionate about applying cloud technologies and ML to solve real life problems. When he is not designing cloud solutions, Erik enjoys travel, cooking, and cycling.

Vivek Mittal is a Solution Architect at Amazon Web Services. He is passionate about serverless and machine learning technologies. Vivek takes great joy in assisting customers with building innovative solutions on the AWS cloud.

Brijesh Pati is an Enterprise Solutions Architect at AWS. His primary focus is helping enterprise customers adopt cloud technologies for their workloads. He has a background in application development and enterprise architecture and has worked with customers from various industries such as sports, finance, energy, and professional services. His interests include serverless architectures and AI/ML.


