
pandas-pro

Verified

by Jeffallan · Updated Apr 16, 2026

opus
3.4K in · 5.7K out
$0.40 · 1m 58s

Performs pandas DataFrame operations for data analysis, manipulation, and transformation. Use when working with pandas DataFrames, data cleaning, aggregation, merging, or time series analysis. Invoke for data manipulation tasks such as joining DataFrames on multiple keys, pivoting tables, resampling time series, handling NaN values with interpolation or forward-fill, groupby aggregations, type conversion, or performance optimization of large datasets.

pandas · python · data-analysis · data-cleaning · data-transformation · dataframes

See It In Action

1 turn · claude
The project codebase is in your current working directory. Explore the files to understand the code before making changes. The board deck has conflicting CACs — can you reconcile ad_spend.csv, leads.csv, and crm_accounts.csv into today's marketing snapshot before CFO review?

Let me explore the data files first.

I can see three data quality issues that would cause conflicting CACs:

  1. Inconsistent channel names — "Facebook Ads" vs "Facebook" vs "facebook", "Google Ads" vs "google", etc.
  2. Missing values — LinkedIn W16 spend is blank; lead L1016 has no channel
  3. No join key alignment — channel names don't match across files

Let me build the reconciliation script.

The CRM table doesn't have a week column — need to get it from the leads join first. Let me fix that.
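
A minimal sketch of the script's core steps (the column names channel, week, and spend are assumptions for illustration, and the canonical mapping is reconstructed from the variants listed above):

import pandas as pd

# Illustrative canonical mapping; unmapped variants become NaN and get flagged
CANONICAL = {
    'facebook': 'Facebook Ads', 'facebook ads': 'Facebook Ads',
    'google': 'Google Ads', 'google ads': 'Google Ads',
    'linkedin': 'LinkedIn', 'email': 'Email',
}

ad_spend = pd.read_csv('ad_spend.csv')
leads = pd.read_csv('leads.csv')

# Normalize channel names so the join keys align across files
for df in (ad_spend, leads):
    df['channel'] = df['channel'].str.strip().str.lower().map(CANONICAL)

# Fill the missing LinkedIn W16 spend as the midpoint of its neighbors
ad_spend['spend'] = (
    ad_spend.sort_values(['channel', 'week'])
    .groupby('channel')['spend']
    .transform(lambda s: s.interpolate(method='linear'))
)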

Reconciliation complete. Here's what was wrong and what I fixed:

Data Issues Resolved

# | Problem | Fix
1 | Channel names inconsistent across all 3 files (5 variants of "Facebook" alone) | Normalized to 4 canonical names via lowercase mapping
2 | LinkedIn W16 spend missing | Interpolated to $6,750 (midpoint of W15 $6,500 and W17 $7,000)
3 | Lead L1016 has no channel | Excluded from channel metrics (flagged in notes)
4 | Pending/unqualified leads counted in some views | Only "qualified" leads feed CAC/CPL

Key Numbers for the CFO

Channel | 5-Week Spend | Customers | Blended CAC | Avg Deal | LTV:CAC
Email | $6,250 | 4 | $1,563 | $6,650 | 4.26x
Google Ads | $68,600 | 6 | $11,433 | $14,750 | 1.29x
LinkedIn | $32,250 | 3 | $10,750 | $20,833 | 1.94x
Facebook Ads | $46,400 | 4 | $11,600 | $9,425 | 0.81x
TOTAL | $153,500 | 17 | $9,029 | $12,665 | 1.40x

The conflicting CACs were caused by the channel name mismatches — "google" leads weren't joining to "Google Ads" spend, so each file produced different denominators. Now reconciled.

Watch items: Facebook Ads is below 1.0x LTV:CAC (spending more to acquire a customer than the deal is worth). W17 has zero closed deals across all channels (likely pipeline lag, not cause for alarm).

Output files saved:

  • marketing_snapshot_weekly.csv — per-channel, per-week detail
  • marketing_snapshot_rollup.csv — channel-level summary with grand total
Claude Sonnet 4.6

External Tools

Tool | Type
python3 | binary

Permissions

Scope
filesystem:read
filesystem:write
process:spawn

SKILL.md

Pandas Pro

Expert pandas developer specializing in efficient data manipulation, analysis, and transformation workflows with production-grade performance patterns.

Core Workflow

  1. Assess data structure — Examine dtypes, memory usage, missing values, data quality:
    print(df.dtypes)
    print(df.memory_usage(deep=True).sum() / 1e6, "MB")
    print(df.isna().sum())
    print(df.describe(include="all"))
    
  2. Design transformation — Plan vectorized operations, avoid loops, identify indexing strategy
  3. Implement efficiently — Use vectorized methods, method chaining, proper indexing (see the chaining sketch after this list)
  4. Validate results — Check dtypes, shapes, null counts, and row counts:
    assert result.shape[0] == expected_rows, f"Row count mismatch: {result.shape[0]}"
    assert result.isna().sum().sum() == 0, "Unexpected nulls after transform"
    assert set(result.columns) == expected_cols
    
  5. Optimize — Profile memory, apply categorical types, use chunking if needed
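
Steps 2 and 3 in practice: a minimal method-chaining sketch (the orders.csv file and its status, price, quantity, and order_date columns are assumptions for illustration):

import pandas as pd

orders = pd.read_csv('orders.csv')

cleaned = (
    orders
    .query("status == 'complete'")    # filter early to shrink the frame
    .assign(
        revenue=lambda d: d['price'] * d['quantity'],    # vectorized, no row loop
        month=lambda d: pd.to_datetime(d['order_date']).dt.to_period('M'),
    )
    .sort_values('revenue', ascending=False)
    .reset_index(drop=True)
)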

Reference Guide

Load detailed guidance based on context:

Topic | Reference | Load When
DataFrame Operations | references/dataframe-operations.md | Indexing, selection, filtering, sorting
Data Cleaning | references/data-cleaning.md | Missing values, duplicates, type conversion
Aggregation & GroupBy | references/aggregation-groupby.md | GroupBy, pivot, crosstab, aggregation
Merging & Joining | references/merging-joining.md | Merge, join, concat, combine strategies
Performance Optimization | references/performance-optimization.md | Memory usage, vectorization, chunking

Code Patterns

Vectorized Operations (before/after)

# ❌ AVOID: row-by-row iteration
for i, row in df.iterrows():
    df.at[i, 'tax'] = row['price'] * 0.2

# ✅ USE: vectorized assignment
df['tax'] = df['price'] * 0.2

Safe Subsetting with .copy()

# ❌ AVOID: chained indexing triggers SettingWithCopyWarning
df[df['status'] == 'active']['score'] = 0    # assignment may silently not stick

# ✅ USE: .loc[] with explicit copy when mutating a subset
subset = df.loc[df['status'] == 'active', :].copy()
subset['score'] = subset['score'].fillna(0)

GroupBy Aggregation

summary = (
    df.groupby(['region', 'category'], observed=True)
    .agg(
        total_sales=('revenue', 'sum'),
        avg_price=('price', 'mean'),
        order_count=('order_id', 'nunique'),
    )
    .reset_index()
)
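
Passing observed=True matters when the group keys are categorical dtypes: the result is limited to combinations actually present in the data rather than the full category cross-product, and recent pandas versions warn when the argument is left unset.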

Merge with Validation

merged = pd.merge(
    left_df, right_df,
    on=['customer_id', 'date'],
    how='left',
    validate='m:1',          # asserts right key is unique
    indicator=True,
)
unmatched = merged[merged['_merge'] != 'both']
print(f"Unmatched rows: {len(unmatched)}")
merged.drop(columns=['_merge'], inplace=True)
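
validate also accepts '1:1', '1:m', and 'm:m'; if the declared relationship is violated, pandas raises MergeError at merge time instead of silently fanning out duplicate rows.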

Missing Value Handling

# Interpolate interior gaps linearly, then fill leading/trailing edges
df['price'] = df['price'].interpolate(method='linear').ffill().bfill()

# Fill categoricals with mode, numerics with median
for col in df.select_dtypes(include='object'):
    if df[col].notna().any():    # mode() is empty on an all-NaN column
        df[col] = df[col].fillna(df[col].mode()[0])
for col in df.select_dtypes(include='number'):
    df[col] = df[col].fillna(df[col].median())

Time Series Resampling

daily = (
    df.set_index('timestamp')
    .resample('D')
    .agg({'revenue': 'sum', 'sessions': 'count'})
    .fillna(0)
)

Pivot Table

pivot = df.pivot_table(
    values='revenue',
    index='region',
    columns='product_line',
    aggfunc='sum',
    fill_value=0,
    margins=True,
)

Memory Optimization

# Downcast numerics and convert low-cardinality strings to categorical
df['category'] = df['category'].astype('category')
df['count'] = pd.to_numeric(df['count'], downcast='integer')
df['score'] = pd.to_numeric(df['score'], downcast='float')
print(df.memory_usage(deep=True).sum() / 1e6, "MB after optimization")
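
Chunked Loading

A sketch of bounded-memory processing for files too large to load at once (large_file.csv and its region and revenue columns are assumptions); each chunk is reduced before being kept, and partial results are combined with pd.concat rather than the removed .append():

import pandas as pd

partials = []
for chunk in pd.read_csv('large_file.csv', chunksize=100_000):
    # Reduce each chunk to a small partial aggregate before keeping it
    partials.append(chunk.groupby('region', observed=True)['revenue'].sum())

# Combine partials, then re-aggregate across chunk boundaries
result = pd.concat(partials).groupby(level=0).sum()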

Constraints

MUST DO

  • Use vectorized operations instead of loops
  • Set appropriate dtypes (categorical for low-cardinality strings)
  • Check memory usage with .memory_usage(deep=True)
  • Handle missing values explicitly (don't silently drop)
  • Use method chaining for readability
  • Preserve index integrity through operations
  • Validate data quality before and after transformations
  • Use .copy() when modifying subsets to avoid SettingWithCopyWarning

MUST NOT DO

  • Iterate over DataFrame rows with .iterrows() unless absolutely necessary
  • Use chained indexing (df['A']['B']) — use .loc[] or .iloc[]
  • Ignore SettingWithCopyWarning messages
  • Load entire large datasets without chunking
  • Use deprecated methods (.ix, .append() — use pd.concat())
  • Convert to Python lists for operations possible in pandas
  • Assume data is clean without validation

Output Templates

When implementing pandas solutions, provide:

  1. Code with vectorized operations and proper indexing
  2. Comments explaining complex transformations
  3. Memory/performance considerations if dataset is large
  4. Data validation checks (dtypes, nulls, shapes)

FAQ

What does pandas-pro do?

Performs pandas DataFrame operations for data analysis, manipulation, and transformation. Use when working with pandas DataFrames, data cleaning, aggregation, merging, or time series analysis. Invoke for data manipulation tasks such as joining DataFrames on multiple keys, pivoting tables, resampling time series, handling NaN values with interpolation or forward-fill, groupby aggregations, type conversion, or performance optimization of large datasets.

When should I use pandas-pro?

Use it when you need a repeatable workflow that produces source code and structured tables.

What does pandas-pro output?

In the evaluated run it produced source code and a structured table.

How do I install or invoke pandas-pro?

npx skills add https://github.com/jeffallan/claude-skills --skill pandas-pro

Which agents does pandas-pro support?

Claude Code

What tools, channels, or permissions does pandas-pro need?

It uses python3; output channels commonly include code and tables; permissions include filesystem:read, filesystem:write, and process:spawn.

Is pandas-pro safe to install?

Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.

How is pandas-pro different from an MCP or plugin?

A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.

Does pandas-pro outperform not using a skill?

About pandas-pro

When to use pandas-pro

You need to write or revise pandas code for DataFrame manipulation and cleaning. You are optimizing DataFrame operations for performance and memory usage. You need reliable groupby, merge, pivot, or resampling patterns with validation checks.

When pandas-pro is not the right choice

You are not working in Python/pandas. You need external database or SaaS integrations rather than local DataFrame processing.

What it produces

Produces source code and structured tables.

Install

npx skills add https://github.com/jeffallan/claude-skills --skill pandas-pro

Invoke: Ask Claude Code to use pandas-pro for the task.