Rhubarb is a lightweight Python framework that makes it easy to build document and video understanding applications using multi-modal Large Language Models (LLMs) and embedding models. Rhubarb is built from the ground up to work with Amazon Bedrock and supports multiple foundation models, including Anthropic Claude V3 multi-modal language models and Amazon Nova models for document and video processing, along with the Amazon Titan Multi-modal Embedding model for embeddings.
Visit the Rhubarb documentation.
Rhubarb can perform a variety of document and video processing tasks, such as:

- Document Q&A
- Streaming chat with documents (Q&A)
- Document summarization
  - Page-level summaries
  - Full document summaries
  - Summaries of specific pages
  - Streaming summaries
- Structured data extraction
- Extraction schema creation assistance
- Named entity recognition (NER) (see the sketch after this list)
  - With 50 built-in common entities
- PII recognition with built-in entities
- Figure and image understanding from documents
  - Explaining charts, graphs, and figures
  - Table reasoning (tables treated as figures)
- Large document processing with a sliding window approach
- Document classification with vector sampling using multi-modal embedding models
- Token usage logging to help keep track of costs
- Video summarization
- Entity extraction from videos
- Action and movement analysis
- Text extraction from video frames
- Streaming video analysis responses
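For example, named entity recognition can be run against the built-in entity list. The snippet below is a minimal sketch that assumes the `run_entity` method and the `Entities` helper described in the Rhubarb documentation; the specific entity types shown are illustrative.

```python
import boto3
from rhubarb import DocAnalysis, Entities

session = boto3.Session()
da = DocAnalysis(file_path="./path/to/doc/doc.pdf", boto3_session=session)

# Extract only the entity types of interest (assumed run_entity method
# and Entities enum from the Rhubarb documentation).
resp = da.run_entity(
    message="Extract the entities from this document.",
    entities=[Entities.PERSON, Entities.ADDRESS],
)
```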
Rhubarb comes with built-in system prompts that make it easy to use for a number of different document understanding use cases, and you can customize it by passing in your own system prompts. It supports exact JSON-schema-based output generation, which makes it easy to integrate into downstream applications.
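For instance, a JSON schema can be supplied along with the question so that the response conforms to it exactly. The following is a minimal sketch that assumes the `output_schema` parameter of `DocAnalysis.run` described in the Rhubarb documentation; the schema itself is purely illustrative.

```python
import boto3
from rhubarb import DocAnalysis

session = boto3.Session()

# Illustrative schema; define properties that match your own document.
employee_schema = {
    "type": "object",
    "properties": {
        "employee_name": {"type": "string", "description": "Full name of the employee"},
        "employee_hire_date": {"type": "string", "description": "Date the employee was hired"},
    },
    "required": ["employee_name"],
}

da = DocAnalysis(file_path="./path/to/doc/doc.pdf", boto3_session=session)

# The answer is returned in the shape of the schema (assumed output_schema
# parameter), which makes downstream parsing straightforward.
resp = da.run(message="Extract the employee details.", output_schema=employee_schema)
```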
- Supports PDF, TIFF, PNG, JPG, and DOCX files (support for Excel, PowerPoint, CSV, WebP, and EML files coming soon)
- Supports MP4, AVI, MOV, and other common video formats for video analysis (S3 storage required)
- Performs document to image conversion internally to work with the multi-modal models
- Works on local files or files stored in S3
- Supports specifying page numbers for multi-page documents (see the sketch after this list)
- Supports chat-history based chat for documents
- Supports streaming and non-streaming modes
- Supports Converse API
- Supports Cross-Region Inference
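To illustrate a couple of these options, the sketch below restricts analysis to specific pages and streams the response. It assumes the `pages` constructor argument and the `run_stream` method described in the Rhubarb documentation; verify the exact names against the version you have installed.

```python
import boto3
from rhubarb import DocAnalysis

session = boto3.Session()

# Analyze only pages 1 and 3 of a multi-page document (assumed `pages` argument).
da = DocAnalysis(
    file_path="s3://path/to/doc/doc.pdf",
    boto3_session=session,
    pages=[1, 3],
)

# Stream partial responses as they are generated (assumed `run_stream` method).
for chunk in da.run_stream(message="Summarize these pages."):
    print(chunk)
```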
Rhubarb now includes a built-in FastMCP server that exposes all document and video understanding capabilities through the Model Context Protocol (MCP). This allows seamless integration with MCP-compatible AI assistants like Cline, Claude Desktop, and other MCP clients.
- 8 Tools: Complete access to all Rhubarb capabilities including document analysis, video processing, entity extraction, and document classification
- 4 Resources: Built-in discovery for entities, models, schemas, and classification samples
- Native Python: Direct integration without external dependencies
- Conversation Memory: Maintains chat history across interactions
- Flexible Authentication: Support for AWS profiles, access keys, and environment variables
- No installation required - the MCP server auto-installs when first used
- Configure it in your MCP client (example for Cline):

```json
{
  "rhubarb": {
    "command": "uvx",
    "args": [
      "pyrhubarb-mcp@latest",
      "--aws-profile", "my-profile",
      "--default-model", "claude-sonnet"
    ]
  }
}
```

- Alternative configurations:

```json
{
  "rhubarb": {
    "command": "uvx",
    "args": [
      "pyrhubarb-mcp@latest",
      "--aws-access-key-id", "AKIA...",
      "--aws-secret-access-key", "your-secret",
      "--aws-region", "us-west-2"
    ]
  }
}
```
For detailed MCP server documentation, see README_MCP.md.
Start by installing Rhubarb using pip.
```bash
pip install pyrhubarb
```
Create a boto3 session.
```python
import boto3

session = boto3.Session()
```

With a local file:

```python
from rhubarb import DocAnalysis

da = DocAnalysis(
    file_path="./path/to/doc/doc.pdf",
    boto3_session=session,
)
resp = da.run(message="What is the employee's name?")
resp
```

With a file in Amazon S3:
```python
from rhubarb import DocAnalysis

da = DocAnalysis(
    file_path="s3://path/to/doc/doc.pdf",
    boto3_session=session,
)
resp = da.run(message="What is the employee's name?")
resp
```

To analyze a video stored in Amazon S3, use VideoAnalysis:

```python
from rhubarb import VideoAnalysis
import boto3

session = boto3.Session()

# Initialize video analysis with a video in S3
va = VideoAnalysis(
    file_path="s3://my-bucket/my-video.mp4",
    boto3_session=session,
)

# Ask questions about the video
response = va.run(message="What is happening in this video?")
print(response)
```

Rhubarb supports processing documents with more than 20 pages using a sliding window approach. This feature is particularly useful when working with Claude models, which can process only 20 pages at a time.
To enable this feature, set sliding_window_overlap to a value between 1 and 10 when creating a DocAnalysis object:
```python
doc_analysis = DocAnalysis(
    file_path="path/to/large-document.pdf",
    boto3_session=session,
    sliding_window_overlap=2,  # number of pages to overlap between windows (1-10)
)
```

When the sliding window approach is enabled, Rhubarb will:
- Break the document into chunks of 20 pages
- Process each chunk separately
- Combine the results from all chunks
Note: The sliding window technique is not yet supported for document classification. When using classification with large documents, only the first 20 pages will be considered.
For more details, see the Large Document Processing Cookbook.
For more usage examples see cookbooks.
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.
