openai-whisper
Verified by openclaw · Updated Apr 16, 2026
Local speech-to-text with the Whisper CLI (no API key).
See It In Action
Let me start by exploring the directory and checking for the audio file and Whisper availability.
The audio file is here and Whisper is installed. Let me run the transcription now.
Transcription is done. Let me verify the output file was created correctly:
customer-call.txt is saved and ready. Here's a quick summary for triage:
Key points from the call (Sam @ Riverstone):
- Issue: Hitting rate limits on the billing API during peak hours (2–4 PM Pacific); webhook retries are timing out.
- Ask: Move to the enterprise tier this quarter with a dedicated rate limit of ≥ 200 req/s and SLA on webhook delivery.
- Action needed: Connect Sam with someone from the sales team.
The transcript is searchable in customer-call.txt in the current directory — your PM is all set for triage.
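The walkthrough above boils down to one CLI invocation plus a check that the output file exists. A minimal Python sketch of that flow, assuming the `customer-call.mp3` filename from the example and a `whisper` binary on PATH (`build_cmd` and `transcribe` are illustrative helpers, not part of the skill):

```python
import shlex
import shutil
import subprocess
from pathlib import Path

def build_cmd(audio, model="medium", fmt="txt", out_dir="."):
    # Assemble the argv the walkthrough ran; flags match the Whisper CLI.
    return ["whisper", audio, "--model", model,
            "--output_format", fmt, "--output_dir", out_dir]

def transcribe(audio, model="medium"):
    # Mirror the agent's steps: confirm whisper is installed, run it,
    # then return the expected .txt transcript name.
    if shutil.which("whisper") is None:
        raise RuntimeError("whisper CLI not found on PATH")
    subprocess.run(build_cmd(audio, model), check=True)
    return Path(audio).with_suffix(".txt").name

print(shlex.join(build_cmd("customer-call.mp3")))
# → whisper customer-call.mp3 --model medium --output_format txt --output_dir .
```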
External Tools
| Tool | Type |
|---|---|
| whisper | binary |
Permissions
| Scope | Description |
|---|---|
| filesystem:read | |
| filesystem:write | |
| network:outbound | |
| process:spawn | |
SKILL.md
Whisper (CLI)
Use whisper to transcribe audio locally.
Quick start
```shell
whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
whisper /path/audio.m4a --task translate --output_format srt
```
Notes
- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.
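To make the speed-versus-accuracy note concrete, here is a hypothetical helper that maps a preference onto Whisper's standard model names; the selection policy itself is an illustration, not part of the skill:

```python
# Whisper's standard model names, ordered fastest → most accurate.
MODELS = ["tiny", "base", "small", "medium", "large"]

def pick_model(prefer_accuracy: float) -> str:
    """prefer_accuracy in [0, 1]: 0 picks the fastest model, 1 the most accurate."""
    idx = round(prefer_accuracy * (len(MODELS) - 1))
    return MODELS[idx]

print(pick_model(0.0))  # tiny
print(pick_model(1.0))  # large
```

For quick triage of a long recording, a mid-range choice like `small` is often a reasonable starting point.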
FAQ
What does openai-whisper do?
Local speech-to-text with the Whisper CLI (no API key).
When should I use openai-whisper?
Use it when you need a repeatable workflow that produces a text transcript.
What does openai-whisper output?
In the evaluated run it produced a text transcript.
How do I install or invoke openai-whisper?
openclaw skills install openai-whisper
Which agents does openai-whisper support?
OpenClaw
What tools, channels, or permissions does openai-whisper need?
It uses whisper; channels commonly include text; permissions include filesystem:read, filesystem:write, network:outbound, process:spawn.
Is openai-whisper safe to install?
Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.
How is openai-whisper different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does openai-whisper outperform not using a skill?
About openai-whisper
When to use openai-whisper
You need to convert local audio recordings into text transcripts or subtitle files. You want offline or local-first speech recognition instead of a cloud API. You need quick CLI-based transcription or translation of supported audio files.
When openai-whisper is not the right choice
You need real-time streaming transcription or tight integration with an external speech platform. You cannot install or run the Whisper CLI in the execution environment.
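For the batch, local-first use case above, a minimal sketch of transcribing a directory of recordings might look like this (the directory layout, glob pattern, and helper names are assumptions for illustration):

```python
import shutil
import subprocess
from pathlib import Path

def plan_batch(audio_dir=".", pattern="*.mp3", model="turbo"):
    # Build one Whisper CLI command per matching audio file.
    cmds = []
    for audio in sorted(Path(audio_dir).glob(pattern)):
        cmds.append(["whisper", str(audio), "--model", model,
                     "--output_format", "txt", "--output_dir", str(audio_dir)])
    return cmds

def run_batch(audio_dir=".", pattern="*.mp3", model="turbo"):
    # Execute the planned commands; requires the whisper CLI on PATH.
    if shutil.which("whisper") is None:
        raise RuntimeError("whisper CLI not found on PATH")
    for cmd in plan_batch(audio_dir, pattern, model):
        subprocess.run(cmd, check=True)
```

Splitting planning from execution keeps the command construction easy to inspect before spawning any processes.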
What it produces
Produces a text transcript.
Install
openclaw skills install openai-whisper
Invoke: Use openai-whisper when you want the agent to follow this workflow.