
Rhubarb

Rhubarb is a lightweight Python framework that makes it easy to build document and video understanding applications using multi-modal Large Language Models (LLMs) and embedding models. Rhubarb is built from the ground up to work with Amazon Bedrock and supports multiple foundation models, including the Anthropic Claude 3 multi-modal language models and Amazon Nova models for document and video processing, along with the Amazon Titan Multimodal Embeddings model for embeddings.

What can I do with Rhubarb?

Visit Rhubarb documentation.

Rhubarb can perform multiple document processing tasks, such as:

  • ✅ Document Q&A
  • ✅ Streaming chat with documents (Q&A)
  • ✅ Document summarization
    • 🚀 Page-level summaries
    • 🚀 Full summaries
    • 🚀 Summaries of specific pages
    • 🚀 Streaming summaries
  • ✅ Structured data extraction
  • ✅ Extraction schema creation assistance
  • ✅ Named entity recognition (NER) (see the sketch after this list)
    • 🚀 With 50 built-in common entities
  • ✅ PII recognition with built-in entities
  • ✅ Figure and image understanding from documents
    • 🚀 Explain charts, graphs, and figures
    • 🚀 Perform table reasoning (as figures)
  • ✅ Large document processing with a sliding window approach
  • ✅ Document classification with vector sampling using multi-modal embedding models
  • ✅ Token usage logging to help keep track of costs
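
For example, named entity recognition uses the same DocAnalysis interface as the other tasks. Below is a minimal sketch, assuming a run_entity helper and an Entities enum (these names are assumptions; check the documentation for your version); the file path is a placeholder.

from rhubarb import DocAnalysis, Entities
import boto3

session = boto3.Session()

da = DocAnalysis(file_path="./path/to/doc/doc.pdf",
                 boto3_session=session)
# Extract only the built-in entity types you care about (names assumed; see the docs)
resp = da.run_entity(message="Extract the specified entities from this document.",
                     entities=[Entities.PERSON, Entities.ADDRESS])
print(resp)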

Video Analysis (New!)

  • ✅ Video summarization
  • ✅ Entity extraction from videos
  • ✅ Action and movement analysis
  • ✅ Text extraction from video frames
  • ✅ Streaming video analysis responses

Rhubarb comes with built-in system prompts that make it easy to use for a number of different document understanding use cases. You can customize Rhubarb by passing in your own system prompts. It supports exact JSON schema based output generation, which makes it easy to integrate into downstream applications (a short sketch follows the feature list below).

  • Supports PDF, TIFF, PNG, JPG, and DOCX files (support for Excel, PowerPoint, CSV, WebP, and EML files coming soon)
  • Supports MP4, AVI, MOV, and other common video formats for video analysis (S3 storage required)
  • Performs document to image conversion internally to work with the multi-modal models
  • Works on local files or files stored in S3
  • Supports specifying page numbers for multi-page documents
  • Supports chat-history based chat for documents
  • Supports streaming and non-streaming mode
  • Supports Converse API
  • Supports Cross-Region Inference
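
For instance, structured extraction can be constrained by passing a JSON schema alongside the question. Below is a minimal sketch, assuming an output_schema argument on run (the argument name is an assumption; check the documentation); the schema itself is invented for illustration.

from rhubarb import DocAnalysis
import boto3

session = boto3.Session()

# Hypothetical schema for illustration; define whatever fields your use case needs
schema = {
    "type": "object",
    "properties": {
        "employee_name": {"type": "string"},
        "employer_name": {"type": "string"}
    },
    "required": ["employee_name"]
}

da = DocAnalysis(file_path="./path/to/doc/doc.pdf",
                 boto3_session=session)
# The response is constrained to the JSON schema above (argument name assumed)
resp = da.run(message="Extract the employee and employer names from this document.",
              output_schema=schema)
print(resp)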

MCP Server Integration

Rhubarb now includes a built-in FastMCP server that exposes all document and video understanding capabilities through the Model Context Protocol (MCP). This allows seamless integration with MCP-compatible AI assistants such as Cline and Claude Desktop, as well as other MCP clients.

MCP Features

  • 8 Tools: Complete access to all Rhubarb capabilities including document analysis, video processing, entity extraction, and document classification
  • 4 Resources: Built-in discovery for entities, models, schemas, and classification samples
  • Native Python: Direct integration without external dependencies
  • Conversation Memory: Maintains chat history across interactions
  • Flexible Authentication: Support for AWS profiles, access keys, and environment variables

Quick Start with MCP

  1. No installation required - The MCP server auto-installs when first used

  2. Configure in your MCP client (example for Cline):

    {
      "rhubarb": {
        "command": "uvx",
        "args": [
          "pyrhubarb-mcp@latest",
          "--aws-profile", "my-profile",
          "--default-model", "claude-sonnet"
        ]
      }
    }
  3. Alternative configuration with static AWS credentials:

    {
      "rhubarb": {
        "command": "uvx", 
        "args": [
          "pyrhubarb-mcp@latest",
          "--aws-access-key-id", "AKIA...",
          "--aws-secret-access-key", "your-secret",
          "--aws-region", "us-west-2"
        ]
      }
    }

For detailed MCP server documentation, see README_MCP.md.

Installation

Start by installing Rhubarb using pip.

pip install pyrhubarb

Usage

Create a boto3 session.

import boto3
session = boto3.Session()

Call Rhubarb

Local file

from rhubarb import DocAnalysis

da = DocAnalysis(file_path="./path/to/doc/doc.pdf", 
                 boto3_session=session)
resp = da.run(message="What is the employee's name?")
resp

With file in Amazon S3

from rhubarb import DocAnalysis

da = DocAnalysis(file_path="s3://path/to/doc/doc.pdf", 
                 boto3_session=session)
resp = da.run(message="What is the employee's name?")
resp
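
With specific pages

Multi-page documents can be restricted to selected pages. Below is a minimal sketch, assuming a pages parameter on DocAnalysis that takes 1-based page numbers (the parameter name is an assumption; check the documentation).

from rhubarb import DocAnalysis

# pages is assumed to take 1-based page numbers; verify against the docs
da = DocAnalysis(file_path="s3://path/to/doc/doc.pdf",
                 boto3_session=session,
                 pages=[1, 3])
resp = da.run(message="What is the employee's name?")
print(resp)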

Video Analysis

from rhubarb import VideoAnalysis
import boto3

session = boto3.Session()

# Initialize video analysis with a video in S3
va = VideoAnalysis(
    file_path="s3://my-bucket/my-video.mp4",
    boto3_session=session
)

# Ask questions about the video
response = va.run(message="What is happening in this video?")
print(response)
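
Streaming responses are also supported for video analysis. Continuing from the example above, here is a minimal sketch, assuming a run_stream method that yields partial responses (the method name is an assumption; check the documentation).

# Stream the answer as it is generated (method name assumed)
for chunk in va.run_stream(message="Summarize this video."):
    print(chunk, end="")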

Large Document Processing

Rhubarb supports processing documents with more than 20 pages using a sliding window approach. This is particularly useful when working with Claude models, which can process only 20 pages at a time.

To enable this feature, set sliding_window_overlap to a value between 1 and 10 when creating a DocAnalysis object:

doc_analysis = DocAnalysis(
    file_path="path/to/large-document.pdf",
    boto3_session=session,
    sliding_window_overlap=2     # Number of pages to overlap between windows (1-10)
)

When the sliding window approach is enabled, Rhubarb will:

  1. Break the document into chunks of 20 pages
  2. Process each chunk separately
  3. Combine the results from all chunks (illustrated by the sketch below)
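
To make the chunking concrete, here is a rough illustration of how 20-page windows with a 2-page overlap could line up. This is an illustrative sketch, not Rhubarb's internal implementation.

def page_windows(total_pages, window_size=20, overlap=2):
    # Illustrative only: yield 1-based (start, end) page ranges so that
    # consecutive windows share `overlap` pages.
    start = 1
    while start <= total_pages:
        end = min(start + window_size - 1, total_pages)
        yield (start, end)
        if end == total_pages:
            break
        start = end - overlap + 1

# A 45-page document with the defaults yields (1, 20), (19, 38), (37, 45)
print(list(page_windows(45)))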

Note: The sliding window technique is not yet supported for document classification. When using classification with large documents, only the first 20 pages will be considered.

For more details, see the Large Document Processing Cookbook.

For more usage examples see cookbooks.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
