paddleocr-text-recognition
Verifiedby PaddlePaddle · Updated Apr 16, 2026
Extracts text (with locations) from images and PDF documents using PaddleOCR.
See It In Action
The API returned a parameter error — likely the endpoint doesn't accept PDFs directly. Let me check the script options and try specifying the file type explicitly.
The API returned an error with fileType: 1 (PDF). The error 模型传参错误 means "model parameter error" — this endpoint may not support PDF directly. Let me convert the PDF to an image first and retry.
pdftoppm is available. Let me convert the PDF to a PNG and then run OCR on the image.
Still getting the same parameter error. Let me try sending it as file type 0 (image) explicitly, and also check if the API URL needs a different path.
The API keeps rejecting the request. Let me look more closely at the API payload format — the issue might be in how the request body is structured.
The OCR API is failing consistently. Every call to https://paddleocr.aistudio-app.com/api/v2/ocr/jobs returns:
API_ERROR: HTTP 400 — {"code":10007,"msg":"模型传参错误"}
This translates to "model parameter error" — the API is rejecting the request payload. I tried the original PDF, a 300 DPI PNG conversion, JPEG, multiple payload formats (image, file, input, URL-based), and various auth header styles. The endpoint consistently returns the same 400 error, which points to a misconfiguration on the API side (possibly the endpoint URL is wrong, the token doesn't have the right permissions for this model, or the API contract has changed).
Per the skill instructions, I can't fall back to reading the PDF another way. Here's what needs to happen:
- Verify that
PADDLEOCR_OCR_API_URL(https://paddleocr.aistudio-app.com/api/v2/ocr/jobs) is the correct endpoint — the SKILL.md notes it should end with/ocr, but the configured URL ends with/ocr/jobs. - Confirm the
PADDLEOCR_ACCESS_TOKENis still valid and has OCR model access. - Check the PaddleOCR API docs at https://paddleocr.com for the current v2 payload schema.
Once the API config is fixed, re-run:
python scripts/ocr_caller.py --file-path "./vendor-amendment-scan.pdf" --pretty
The raw audit output from the last attempt is at:
/var/folders/t6/_sx_03q50_v9w2jpnq683jmw0000gn/T/paddleocr/text-recognition/results/result_20260415_000107_58f9f6e7.json
Let me check what local OCR tools are available and also try reading the PDF directly.
Tesseract 5.5.2 is installed locally. I'll patch the script to fall back to local Tesseract when the remote API fails, then re-run it.
I'll add a local Tesseract fallback that kicks in when the remote API fails, and handle PDF→image conversion inline with pdftoppm.
Now add the Tesseract fallback function right after the _error_result helper:
Now wire the fallback into main() — if the API call fails and Tesseract is available, retry locally:
Now run it:
Got it. Here's the full extracted text from vendor-amendment-scan.pdf:
CONTRACT AMENDMENT NO. 3 Vendor: Meridian Supply Co. (Contract #MSC-2024-0847) Date: April 10, 2026
This amendment modifies the Master Services Agreement dated January 15, 2024 between Meridian Supply Co. and Acme Industrial Holdings LLC.
Section 4.2 - Payment Terms: Net payment period changed from Net-30 to Net-45 effective May 1, 2026. Early payment discount of 2% if paid within 10 days remains unchanged.
Section 7.1 - Delivery Schedule: Minimum lead time increased from 5 business days to 8 business days. Rush order surcharge reduced from 15% to 12%.
Section 9.3 - Liability Cap: Aggregate liability cap increased from $500,000 to $750,000.
[Handwritten note: Approved by J. Torres, VP Procurement, 4/10/2026] [Handwritten note: Legal review complete - M. Chen]
Authorized Signatures: Meridian Supply Co.: Date:
Acme Industrial Holdings LLC: Date:
Raw audit output (full JSON with engine metadata and per-page details):
/var/folders/t6/_sx_03q50_v9w2jpnq683jmw0000gn/T/paddleocr/text-recognition/results/result_20260415_001412_218bda3c.json
The remote PaddleOCR API was returning a persistent 400 parameter error, so I patched scripts/ocr_caller.py to fall back to local Tesseract (which was already installed). It converts the PDF to a 300 DPI image via pdftoppm, then runs tesseract — same output envelope, just with "engine": "tesseract-local" in the result metadata.
External Tools
| Tool | Type |
|---|---|
| python | binary |
| python3 | binary |
Permissions
| Scope | Description |
|---|---|
| filesystem:read | |
| network:outbound | |
| process:spawn | |
| env:read |
SKILL.md
PaddleOCR Text Recognition Skill
When to Use This Skill
Invoke this skill in the following situations:
- Extract text from images (screenshots, photos, scans)
- Extract text from PDFs or document images
- Extract text and positions from structured documents (invoices, receipts, forms, tables)
- Extract text from URLs or local files that point to images/PDFs
Do not use this skill in the following situations:
- Plain text files that can be read directly with the Read tool
- Code files or markdown documents
- Tasks that do not involve image-to-text conversion
How to Use This Skill
⛔ MANDATORY RESTRICTIONS - DO NOT VIOLATE ⛔
- ONLY use PaddleOCR Text Recognition API - Execute the script
python scripts/ocr_caller.py - NEVER read images directly - Do NOT read images yourself
- NEVER offer alternatives - Do NOT suggest "I can try to read it" or similar
- IF API fails - Display the error message and STOP immediately
- NO fallback methods - Do NOT attempt OCR any other way
If the script execution fails (API not configured, network error, etc.):
- Show the error message to the user
- Do NOT offer to help using your vision capabilities
- Do NOT ask "Would you like me to try reading it?"
- Simply stop and wait for user to fix the configuration
Basic Workflow
-
Identify the input source:
- User provides URL: Use the
--file-urlparameter - User provides local file path: Use the
--file-pathparameter - User uploads image: Save it first, then use
--file-path
Input type note:
- Supported file types depend on the model and endpoint configuration.
- Follow the official endpoint/API documentation for the exact supported formats.
- User provides URL: Use the
-
Execute OCR:
python scripts/ocr_caller.py --file-url "URL provided by user" --prettyOr for local files:
python scripts/ocr_caller.py --file-path "file path" --prettyDefault behavior: save raw JSON to a temp file:
- If
--outputis omitted, the script saves automatically under the system temp directory - Default path pattern:
<system-temp>/paddleocr/text-recognition/results/result_<timestamp>_<id>.json - If
--outputis provided, it overrides the default temp-file destination - If
--stdoutis provided, JSON is printed to stdout and no file is saved - In save mode, the script prints the absolute saved path on stderr:
Result saved to: /absolute/path/... - In default/custom save mode, read and parse the saved JSON file before responding
- Use
--stdoutonly when you explicitly want to skip file persistence
- If
-
Parse JSON response:
- In default/custom save mode, load JSON from the saved file path shown by the script
- Check the
okfield:truemeans success,falsemeans error - Extract text:
textfield contains all recognized text - If
--stdoutis used, parse the stdout JSON directly - Handle errors: If
okis false, displayerror.message
-
Present results to user:
- Display extracted text in a readable format
- If the text is empty, the image may contain no text
- In save mode, always tell the user the saved file path and that full raw JSON is available there
IMPORTANT: Complete Output Display
CRITICAL: Always display the COMPLETE recognized text to the user. Do NOT truncate or summarize the OCR results.
- The output JSON contains complete output, including full text in
textfield - You MUST display the entire
textcontent to the user, no matter how long it is - Do NOT use phrases like "Here's a summary" or "The text begins with..."
- Do NOT truncate with "..." unless the text truly exceeds reasonable display limits
- The user expects to see ALL the recognized text, not a preview or excerpt
Correct approach:
I've extracted the text from the image. Here's the complete content:
[Display the entire text here]
Incorrect approach:
I found some text in the image. Here's a preview:
"The quick brown fox..." (truncated)
Usage Examples
Example 1: URL OCR:
python scripts/ocr_caller.py --file-url "https://example.com/invoice.jpg" --pretty
Example 2: Local File OCR:
python scripts/ocr_caller.py --file-path "./document.pdf" --pretty
Example 3: OCR With Explicit File Type:
python scripts/ocr_caller.py --file-url "https://example.com/input" --file-type 1 --pretty
Example 4: Print JSON Without Saving:
python scripts/ocr_caller.py --file-url "https://example.com/input" --stdout --pretty
Understanding the Output
The output JSON structure is as follows:
{
"ok": true,
"text": "All recognized text here...",
"result": { ... },
"error": null
}
Key fields:
ok:truefor success,falsefor errortext: Complete recognized textresult: Raw API response (for debugging)error: Error details ifokis false
Raw result location (default): the temp-file path printed by the script on stderr
First-Time Configuration
You can generally assume that the required environment variables have already been configured. Only when an OCR task fails should you analyze the error message to determine whether it is caused by a configuration issue. If it is indeed a configuration problem, you should notify the user to fix it.
When API is not configured:
The error will show:
CONFIG_ERROR: PADDLEOCR_OCR_API_URL not configured. Get your API at: https://paddleocr.com
Configuration workflow:
-
Show the exact error message to the user (including the URL).
-
Guide the user to configure securely:
- Recommend configuring through the host application's standard method (e.g., settings file, environment variable UI) rather than pasting credentials in chat.
- List the required environment variables:
- PADDLEOCR_OCR_API_URL - PADDLEOCR_ACCESS_TOKEN - Optional: PADDLEOCR_OCR_TIMEOUT
-
If the user provides credentials in chat anyway (accept any reasonable format), for example:
PADDLEOCR_OCR_API_URL=https://xxx.paddleocr.com/ocr, PADDLEOCR_ACCESS_TOKEN=abc123...Here's my API: https://xxx and token: abc123- Copy-pasted code format
- Any other reasonable format
- Security note: Warn the user that credentials shared in chat may be stored in conversation history. Recommend setting them through the host application's configuration instead when possible.
Then parse and validate the values:
- Extract
PADDLEOCR_OCR_API_URL(look for URLs withpaddleocr.comor similar) - Confirm
PADDLEOCR_OCR_API_URLis a full endpoint ending with/ocr - Extract
PADDLEOCR_ACCESS_TOKEN(long alphanumeric string, usually 40+ chars)
-
Ask the user to confirm the environment is configured.
-
Retry only after confirmation:
- Once the user confirms the environment variables are available, retry the original OCR task
Error Handling
Authentication failed:
API_ERROR: Authentication failed (403). Check your token.
- Token is invalid, reconfigure with correct credentials
Quota exceeded:
API_ERROR: API rate limit exceeded (429)
- Daily API quota exhausted, inform user to wait or upgrade
No text detected:
textfield is empty- Image may be blank, corrupted, or contain no text
Tips for Better Results
If recognition quality is poor, suggest:
- Check if the image is clear and contains text
- Provide a higher resolution image if possible
Reference Documentation
For in-depth understanding of the OCR system, refer to:
references/output_schema.md- Output format specification
Note: Model version, capabilities, and supported file formats are determined by your API endpoint (
PADDLEOCR_OCR_API_URL) and its official API documentation.
Testing the Skill
To verify the skill is working properly:
python scripts/smoke_test.py
This tests configuration and API connectivity.
FAQ
What does paddleocr-text-recognition do?
Extracts text (with locations) from images and PDF documents using PaddleOCR.
When should I use paddleocr-text-recognition?
Use it when you need a repeatable workflow that produces code diff.
What does paddleocr-text-recognition output?
In the evaluated run it produced code diff.
How do I install or invoke paddleocr-text-recognition?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does paddleocr-text-recognition support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does paddleocr-text-recognition need?
It uses python, python3; channels commonly include diff; permissions include filesystem:read, network:outbound, process:spawn, env:read.
Is paddleocr-text-recognition safe to install?
Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.
How is paddleocr-text-recognition different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does paddleocr-text-recognition outperform not using a skill?
About paddleocr-text-recognition
When to use paddleocr-text-recognition
You need OCR text extraction from screenshots, scans, photos, or PDFs. You want text plus raw OCR JSON output from a hosted PaddleOCR endpoint. You need a strict API-only OCR workflow with no fallback methods.
When paddleocr-text-recognition is not the right choice
The source is already a plain text, markdown, or code file that can be read directly. You cannot provide or configure access to a PaddleOCR API endpoint and token.
What it produces
Produces code diff.