🐼 Pandas Cookbook

A practical guide to data manipulation with pandas

Author: Dr Jon Cardoso-Silva

Published: 11 March 2025


Pandas Cookbook: Common Operations for Data Science

This guide serves as a quick reference for the most common pandas operations you’ll use in DS105W. Each section includes practical examples that you can adapt for your own data analysis tasks.
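All examples assume pandas has been imported under its conventional alias. A minimal setup sketch, with a small made-up DataFrame you can paste into a notebook to try the snippets below:

import pandas as pd

# Toy data, invented purely for illustration
df = pd.DataFrame({
    'category': ['A', 'A', 'B'],
    'value': [10, 20, 30]
})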

1. Data Import & Export

Reading Data

# From CSV
df = pd.read_csv('file.csv')

# From Excel
df = pd.read_excel('file.xlsx', sheet_name='Sheet1')

# From JSON
df = pd.read_json('file.json')
# For nested JSON (json_data is a Python dict or list,
# e.g. the result of json.load() or an API response)
df = pd.json_normalize(json_data)
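As a quick illustration of the flattening, here is json_normalize on a made-up nested record; nested keys become dotted column names:

# Hypothetical nested data, e.g. parsed from an API response
json_data = [
    {'id': 1, 'location': {'city': 'London', 'country': 'UK'}}
]
df_flat = pd.json_normalize(json_data)
# Columns: 'id', 'location.city', 'location.country'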

Saving Data

# To CSV
df.to_csv('output.csv', index=False)

# To Excel
df.to_excel('output.xlsx', index=False)

# To JSON
df.to_json('output.json')

2. Data Inspection

# Basic information
df.info()  # Data types and non-null counts
df.describe()  # Statistical summary
df.head()  # First 5 rows
df.tail()  # Last 5 rows

# Column names and data types
df.columns  # List column names
df.dtypes  # Data types of each column

# Missing values
df.isna().sum()  # Count missing values per column
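For instance, on a small made-up DataFrame with one missing entry per column, isna().sum() returns a Series of per-column counts:

toy = pd.DataFrame({'a': [1, None, 3], 'b': ['x', 'y', None]})
print(toy.isna().sum())
# a    1
# b    1
# dtype: int64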

3. Data Selection

Selecting Columns

# Single column
series = df['column_name']
# Multiple columns
subset = df[['col1', 'col2', 'col3']]

Filtering Rows

# Use a boolean mask
mask = df['column'] > 5
filtered = df[mask]

# Do it all in one line
filtered = df[df['column'] > 5]

# Multiple conditions
filtered = df[
    (df['col1'] > 5) & 
    (df['col2'] == 'value')
]

# Using isin
filtered = df[df['category'].isin(['A', 'B', 'C'])]
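Putting the pattern together on a made-up DataFrame:

toy = pd.DataFrame({'column': [3, 7, 9], 'category': ['A', 'B', 'D']})
mask = toy['column'] > 5  # Boolean Series: [False, True, True]
print(toy[mask])  # Keeps only the rows where the mask is True
print(toy[toy['category'].isin(['A', 'B'])])  # Keeps the first two rows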

4. Data Transformation

Creating New Columns

# Based on existing columns
df['new_col'] = df['old_col'] * 2

# Apply an existing function to each element in a column
df['new_col'] = df['text_col'].apply(len)

# Apply a lambda function to each element in a column
df['new_col'] = df['col'].apply(lambda x: x.upper())
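Note that .apply() with a lambda runs ordinary Python on every element. For string operations, the vectorised .str accessor is usually faster and more idiomatic; this line is equivalent to the lambda above:

df['new_col'] = df['col'].str.upper()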

Renaming Columns

# Rename specific columns
df = df.rename(columns={
    'old_name1': 'new_name1',
    'old_name2': 'new_name2'
})

# Clean up all column names
df.columns = df.columns.str.lower()  # Make lowercase
df.columns = df.columns.str.replace(' ', '_')  # Replace spaces

5. Data Aggregation

Grouping and Summarizing (with agg())

# Use groupby() to split the data into groups
# and then apply an existing pandas operation to each group
# The output is a single number for each group
summary = df.groupby('category')['value'].mean()

# You can apply multiple operations to each group
# Use the `.agg()` method for that
summary = df.groupby('category').agg({
    'col1': 'mean',
    'col2': 'sum',
    'col3': 'count'
})

# Custom aggregation functions
def custom_metric(x):
    return x.max() - x.min()

# Apply a custom aggregation function to each group
summary = df.groupby('category')['value'].agg(custom_metric)
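If you want to control the names of the output columns, pandas also supports "named aggregation", where each keyword names a result column and maps it to a (column, function) pair. A sketch (the output names mean_value and value_range are arbitrary choices):

summary = df.groupby('category').agg(
    mean_value=('value', 'mean'),
    value_range=('value', custom_metric)
)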

Grouping and Summarizing (with groupby.apply)

# Define a function to apply to each group
def process_group(group):
    return group['col1'].mean()

# Apply the function to each group
summary = df.groupby('category').apply(process_group)

# Use a more complex function, returning a Series
def process_group(group):
    output = pd.Series([
        group['col1'].mean(),
        group['col2'].sum()
    ], index=['mean', 'sum'])
    return output

# Apply the function to each group
summary = df.groupby('category').apply(process_group)

6. Data Reshaping

Melting (Wide to Long)

# Convert wide format to long format
df_long = df.melt(
    id_vars=['id_column'],  # Columns to keep as is
    value_vars=['col1', 'col2'],  # Columns to unpivot
    var_name='variable',  # Name for the variable column
    value_name='value'  # Name for the value column
)
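A concrete before/after, on a made-up wide table with two measurement columns:

wide = pd.DataFrame({
    'id_column': [1, 2],
    'col1': [10, 20],
    'col2': [30, 40]
})
long = wide.melt(id_vars=['id_column'], value_vars=['col1', 'col2'],
                 var_name='variable', value_name='value')
# 4 rows: one per (id_column, variable) combination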

Exploding Lists

# Expand list elements into separate rows
df_exploded = df.explode('list_column')
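For example, with a made-up column of lists, each list element gets its own row (the original index values are repeated):

toy = pd.DataFrame({'id': [1, 2], 'list_column': [['a', 'b'], ['c']]})
print(toy.explode('list_column'))
# id 1 produces two rows ('a' and 'b'); id 2 produces one row ('c')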

7. Data Cleaning

Handling Missing Values

# Drop rows with any missing values
df_clean = df.dropna()

# Fill missing values (fillna() returns a copy, so assign the result back)
df['column'] = df['column'].fillna(0)  # With a specific value
df['column'] = df['column'].fillna(df['column'].mean())  # With the column mean
df['column'] = df['column'].ffill()  # Forward fill (fillna(method='ffill') is deprecated)

Removing Duplicates

# Remove duplicate rows
df_unique = df.drop_duplicates()

# Remove duplicates based on specific columns
df_unique = df.drop_duplicates(subset=['col1', 'col2'])

8. Working with Dates

# Convert to datetime
df['date'] = pd.to_datetime(df['date_string'])

# Extract components
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day

# Formatting
# Learn about formatting codes here:
# https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
df['date'] = df['date'].dt.strftime('%Y-%m-%d')
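If your date strings follow a known pattern, or might contain unparseable values, to_datetime takes a format string and an errors argument. A sketch, assuming day/month/year strings:

# Parse a specific format explicitly (faster and less ambiguous)
df['date'] = pd.to_datetime(df['date_string'], format='%d/%m/%Y')

# Turn unparseable strings into NaT instead of raising an error
df['date'] = pd.to_datetime(df['date_string'], errors='coerce')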

9. Combining DataFrames

Merging

# Inner join
merged = pd.merge(df1, df2, on='key_column')

# Left join
merged = pd.merge(
    df1, df2, 
    on='key_column', 
    how='left'
)

# Using different key columns
merged = pd.merge(
    df1, df2,
    left_on='key1',
    right_on='key2'
)
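To make the join behaviour concrete, here is a left join on two made-up frames:

df1 = pd.DataFrame({'key_column': [1, 2], 'name': ['a', 'b']})
df2 = pd.DataFrame({'key_column': [1, 3], 'score': [0.5, 0.9]})
print(pd.merge(df1, df2, on='key_column', how='left'))
# Key 2 keeps name 'b' but gets NaN for score (no match in df2);
# key 3 is dropped because it only appears on the right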

Concatenating

# Vertically (stack)
combined = pd.concat([df1, df2])

# Horizontally (side by side)
combined = pd.concat([df1, df2], axis=1)
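When stacking vertically, the original row indexes are kept by default, which can leave duplicate labels; pass ignore_index=True to get a fresh 0..n-1 index:

combined = pd.concat([df1, df2], ignore_index=True)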

📚 Additional Resources