Incredible Python Code Snippets
Python Functions I Use Every Single Day
I’ve been writing Python professionally for a little over four years now. Long enough to stop being impressed by flashy libraries — and start obsessing over small, boring-looking functions that quietly save hours every week.
These aren’t "hello world" tricks.
They’re not the usual requests, pandas, or rich flexes either.
These are low-level, sharp-edged utilities I actually reach for when building real systems, debugging production issues, or automating things that shouldn’t need a full framework.
Every function below exists because it solved a problem I hit more than once.
Let’s get into it.
safe_open() – Atomic File Writes Without Corruption
Ever had a process crash mid-write and leave a file half-written?
Yeah. Me too. Once was enough.
This uses a temporary file + atomic rename to make writes crash-safe.
import os
import tempfile
from contextlib import contextmanager
@contextmanager
def safe_open(path, mode='w', **kwargs):
    dir_name = os.path.dirname(path) or '.'
    tmp = tempfile.NamedTemporaryFile(mode=mode, dir=dir_name, delete=False, **kwargs)
    try:
        with tmp:
            yield tmp
        os.replace(tmp.name, path)  # atomic swap, only after the write completed
    except BaseException:
        os.unlink(tmp.name)  # don't leave a half-written temp file behind
        raise
Usage:
with safe_open("data.json") as f:
    f.write("important data")
Fact: os.replace is atomic on POSIX systems, as long as source and destination live on the same filesystem — which is exactly why the temp file is created in the target directory instead of /tmp.
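You can see the guarantee in a tiny standalone sketch (the paths here are throwaway temp files, purely for demonstration):

```python
import os
import tempfile

# Demonstration of the atomic swap behind safe_open():
# write the new contents to a sibling temp file, then os.replace() it in.
path = os.path.join(tempfile.mkdtemp(), "data.json")
with open(path, "w") as f:
    f.write("old contents")

tmp_path = path + ".tmp"
with open(tmp_path, "w") as f:
    f.write("new contents")
os.replace(tmp_path, path)  # readers see the old file or the new one, never a mix

with open(path) as f:
    print(f.read())  # new contents
```

At no point does `data.json` exist in a partially written state — that's the whole trick.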
chunks() – Memory-Safe Iteration Over Anything
I don't like loading things into memory unless I absolutely have to. This function chunks any iterable, not just lists.
from itertools import islice
def chunks(iterable, size):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            break
        yield chunk
Usage:
for batch in chunks(range(10_000_000), 1000):
    process(batch)
Why it matters: This is the difference between code that works in dev… and code that survives production data.
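To make the "any iterable" claim concrete, here's the same `chunks()` (repeated so the sketch runs standalone) fed a one-shot generator that has no `len()` and can't be sliced:

```python
from itertools import islice

def chunks(iterable, size):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            break
        yield chunk

# A generator has no len() and can't be indexed, but chunks() doesn't care:
# islice() just pulls the next `size` items lazily.
gen = (x * x for x in range(7))
print(list(chunks(gen, 3)))  # [[0, 1, 4], [9, 16, 25], [36]]
```

The same call works on file handles, database cursors, or anything else iterable.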
timed() – Measure Real Execution Time (Not Wall Clock Lies)
Fact: time.time() reports wall-clock time, which can jump backward or forward when NTP adjusts the system clock. If you're measuring durations, use a monotonic clock like time.perf_counter().
import time
from contextlib import contextmanager
@contextmanager
def timed(label="block"):
    start = time.perf_counter()
    yield
    end = time.perf_counter()
    print(f"⏱ {label}: {end - start:.6f}s")
Usage:
with timed("db query"):
    expensive_query()
I use this constantly during refactors. Micro-optimizations without measurements are just vibes.
The same pattern also works as a Python decorator, if you'd rather time a function everywhere it's called.
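A minimal decorator version might look like this (`timed_fn` is a hypothetical name, not from any library):

```python
import time
from functools import wraps

def timed_fn(fn):
    # Hypothetical decorator: prints the runtime of every call to the wrapped function
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # finally ensures timing prints even if the function raises
            print(f"⏱ {fn.__name__}: {time.perf_counter() - start:.6f}s")
    return wrapper

@timed_fn
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)
```

The `try/finally` means you still get a timing line when the function raises, which is often exactly when you want one.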
deep_get() – Safe Access for Ugly Nested Data
APIs love returning dictionaries nested like Russian dolls. This avoids KeyError, TypeError, and your sanity breaking.
def deep_get(data, path: list, default=None):
    for key in path:
        try:
            data = data[key]
        except (KeyError, TypeError, IndexError):
            return default
    return data
Usage:
email = deep_get(payload, ["user", "profile", "email"])
Why I prefer this over .get() chains: It works with dicts and lists — and it reads like intent, not defensive noise.
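Here's what the mixed dict/list handling buys you, with `deep_get` repeated so the sketch runs standalone (the payload is a made-up example):

```python
def deep_get(data, path, default=None):
    for key in path:
        try:
            data = data[key]
        except (KeyError, TypeError, IndexError):
            return default
    return data

# Mixed dict/list nesting, the kind real APIs return
payload = {"user": {"emails": [{"address": "a@example.com"}]}}

print(deep_get(payload, ["user", "emails", 0, "address"]))     # a@example.com
print(deep_get(payload, ["user", "phone", 0], default="n/a"))  # n/a
```

One call handles integer list indexes and string dict keys in the same path, which a `.get()` chain simply can't do.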
trace_memory() – Find Silent Memory Leaks
Memory leaks don't announce themselves. They just slowly kill your process.
This uses tracemalloc (stdlib, criminally underused).
import tracemalloc
from contextlib import contextmanager
@contextmanager
def trace_memory(label=""):
    tracemalloc.start()
    try:
        yield
    finally:
        # report and stop even if the block raised
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{label} | Current: {current/1e6:.2f}MB, Peak: {peak/1e6:.2f}MB")
Usage:
with trace_memory("image processing"):
    process_images()
This has saved me from "why does this service need 2GB RAM?" more than once.
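When the context manager tells you memory grew, a tracemalloc snapshot can point at the exact source line responsible. A minimal sketch (the allocation here is deliberately artificial):

```python
import tracemalloc

tracemalloc.start()
data = [bytes(1000) for _ in range(1000)]  # deliberately allocate ~1 MB
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Top allocation sites, grouped by source line
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

Each line of output names a file, a line number, and how much memory that line is holding — usually enough to end the hunt.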
fail_fast() – Crash Loud, With Context
Silent failures are worse than crashes. This ensures exceptions print full tracebacks immediately, even in threaded or weird environments.
import faulthandler
import sys
def fail_fast():
    faulthandler.enable(file=sys.stderr, all_threads=True)
Call it once at startup.
Why this matters: In production, partial logs are useless. This makes crashes honest.
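faulthandler has a related trick for hangs rather than crashes: dump_traceback_later() prints every thread's stack if the process stays silent past a deadline. A sketch (the 300-second timeout is an arbitrary example):

```python
import faulthandler
import sys

faulthandler.enable(file=sys.stderr, all_threads=True)

# Watchdog: if nothing cancels the timer within 300s, dump all
# thread stacks to stderr and hard-exit the process.
faulthandler.dump_traceback_later(timeout=300, exit=True)

# ... do the work that occasionally hangs ...

faulthandler.cancel_dump_traceback_later()  # made it; disarm the watchdog
```

This turns "the job hung overnight and we have no idea where" into a stack trace pointing at the exact blocked line.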
disk_cache() – Persistent Caching Without Redis
Sometimes you want caching… but not infrastructure. shelve gives you a persistent key-value store using nothing but files.
import shelve
from functools import wraps
def disk_cache(path):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            key = str(args)  # note: positional args only, keyed by their repr
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = fn(*args)
                return db[key]
        return wrapper
    return decorator
Usage:
@disk_cache("prices.db")
def get_price(symbol):
    return expensive_api_call(symbol)
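A quick way to convince yourself the cache works: count how many times the function body actually runs (`disk_cache` is repeated so the sketch runs standalone, against a throwaway temp file):

```python
import os
import shelve
import tempfile
from functools import wraps

def disk_cache(path):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            key = str(args)
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = fn(*args)
                return db[key]
        return wrapper
    return decorator

calls = 0
cache_path = os.path.join(tempfile.mkdtemp(), "squares.db")

@disk_cache(cache_path)
def square(n):
    global calls
    calls += 1
    return n * n

print(square(4), square(4))  # 16 16
print(calls)                 # 1 - the second call never touched the function body
```

And because the store is a file, the cache survives process restarts — which is the whole point of skipping Redis.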
Tricks That Make My Code Feel "Alive"
Great Python code doesn’t just execute — it communicates.
- It tells you what it’s doing.
- It reacts.
- It breathes.
Here are Python tricks I use to make my code feel alive — the kind of things you rarely see in tutorials but end up using everywhere once you discover them.
Let’s get into it.
Logging That Reads Like a Story (Not a Crime Report)
Most people either:
- print() everything
- or use logging so dry it feels like reading tax law
You want narrative logging — logs that explain intent, not just events.
import logging
logging.basicConfig(
level=logging.INFO,
format="🕒 %(asctime)s | %(levelname)s | %(message)s",
datefmt="%H:%M:%S"
)
logging.info("Booting data pipeline")
logging.warning("Cache miss - recomputing expensive results")
logging.error("API responded with pure disappointment")
Why this matters:
- Your future self is your main user
- Logs are documentation that never goes out of date
- Emoji + intent = instant comprehension
Observability research keeps finding that developers spend a large fraction of debugging time just reconstructing program state. Good logs slash that.
Progress Bars That Adapt Themselves
Most people know tqdm. Very few people use it properly.
Here’s the trick: update progress based on actual work, not loops.
from tqdm import tqdm
import time
tasks = [0.1, 0.3, 0.05, 0.4, 0.2]
with tqdm(total=sum(tasks), unit="sec") as bar:
    for t in tasks:
        time.sleep(t)
        bar.update(t)
Why this feels alive:
- The bar reflects real time, not iteration count
- It adapts automatically if tasks change
- Your terminal stops lying to you
Once you do this, fake progress bars become unbearable.
Live Status Lines (Without Spamming the Terminal)
Ever seen tools like pip update the same line? That’s not magic. That’s carriage returns.
import time
import sys
states = ["Loading data", "Cleaning data", "Training model", "Finalizing"]
for state in states:
    sys.stdout.write(f"\r⚙️ {state}...")
    sys.stdout.flush()
    time.sleep(1)
print("\n✅ Done.")
Why this works:
- Zero log spam
- Instant feedback
- Looks professional with almost no effort
Pro tip: Combine this with logging and your CLI suddenly feels like a real product.
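One gotcha worth knowing: a shorter message doesn't erase the tail of a longer one, so "Done..." can leave stale characters behind. Padding every status line to a fixed width fixes it — a small sketch:

```python
import sys
import time

states = ["Loading data", "Cleaning", "Training model", "Done"]

# Pad to a fixed width so a short message fully overwrites a longer one
width = max(len(s) for s in states) + 6
for state in states:
    sys.stdout.write(f"\r⚙️ {state}...".ljust(width))
    sys.stdout.flush()
    time.sleep(0.05)
print()
```

Without the `ljust`, "Done..." would render as "Done...ng model..." on top of the previous line.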
Animated Spinners for "Unknown Duration" Tasks
Progress bars fail when you don't know how long something will take. That’s spinner territory.
import itertools
import sys
import time
spinner = itertools.cycle("⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏")
for _ in range(30):
    sys.stdout.write(f"\r{next(spinner)} Working...")
    sys.stdout.flush()
    time.sleep(0.1)
print("\r✅ Finished.")
Why this feels alive:
- Constant motion = reassurance
- Users know the program didn’t freeze
- Your script suddenly feels handcrafted
Small touch. Massive perceived quality jump.
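The loop above spins instead of working. In practice you want the spinner on a background thread while the real task runs; a minimal sketch of that pattern:

```python
import itertools
import sys
import threading
import time

def spin(stop_event):
    # runs in a background thread until the main thread sets the event
    for frame in itertools.cycle("⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏"):
        if stop_event.is_set():
            break
        sys.stdout.write(f"\r{frame} Working...")
        sys.stdout.flush()
        time.sleep(0.1)

stop = threading.Event()
spinner_thread = threading.Thread(target=spin, args=(stop,), daemon=True)
spinner_thread.start()

time.sleep(0.5)  # stand-in for the real unknown-duration work
stop.set()
spinner_thread.join()
print("\r✅ Finished.   ")
```

The `daemon=True` means a crash in the main work never leaves a zombie spinner thread keeping the process alive.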
Self-Explaining Exceptions (Your Users Will Thank You)
Default exceptions are technically correct — and emotionally useless.
Compare this:
int("abc")
With this:
def safe_int(value):
    try:
        return int(value)
    except ValueError as e:
        raise ValueError(
            f"Expected a number, got '{value}'. "
            "Check your input source."
        ) from e
Why this matters:
- Error messages are UX
- Clear errors reduce GitHub issues
- You debug once instead of fifty times
Fact: Python’s exception chaining (from e) preserves the original traceback alongside your clearer message. Most devs never use it.
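You can see the chaining at work by catching the friendly error and inspecting its `__cause__` (with `safe_int` repeated so the sketch runs standalone):

```python
def safe_int(value):
    try:
        return int(value)
    except ValueError as e:
        raise ValueError(f"Expected a number, got '{value}'.") from e

try:
    safe_int("abc")
except ValueError as err:
    print(err)            # the friendly, high-level message
    print(err.__cause__)  # the original low-level error, preserved by `from e`
```

An uncaught version prints both tracebacks, joined by "The above exception was the direct cause of the following exception" — the reader gets the human explanation and the forensic detail.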
Audible Feedback (Yes, Seriously)
When running long jobs, sound is underrated.
import sys
def beep():
    sys.stdout.write('\a')
    sys.stdout.flush()
# Long task
beep()
print("🔔 Task completed.")
Why this works:
- You don't need to stare at the terminal
- Perfect for overnight jobs
- Feels oddly satisfying
Once you use this, silent scripts feel… lonely.
Real-Time Metrics Without a Dashboard
You don't need Prometheus for quick feedback.
import psutil
import time
while True:
    cpu = psutil.cpu_percent()
    mem = psutil.virtual_memory().percent
    print(f"\rCPU: {cpu}% | MEM: {mem}%", end="")
    time.sleep(1)
Why this is gold:
- Zero setup
- Instant insight
- Great for spotting runaway processes
This trick has saved me from melting laptops more than once.
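If you can't even install psutil, the stdlib `resource` module gives a rough self-check of your own process — with the caveats that it's Unix-only, and `ru_maxrss` is kilobytes on Linux but bytes on macOS:

```python
# Stdlib-only fallback when psutil isn't available (Unix only;
# the resource module does not exist on Windows).
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_maxrss units vary by platform: KB on Linux, bytes on macOS
print(f"peak RSS: {usage.ru_maxrss} | user CPU: {usage.ru_utime:.2f}s")
```

It only sees your own process, not the whole machine, but that's often the process you're worried about.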
Making Scripts Talk Back to You (Interactive Confirmation)
Automation without feedback is dangerous.
def confirm(action):
    reply = input(f"⚠️ About to {action}. Proceed? (y/n): ")
    return reply.strip().lower() == "y"

if confirm("delete 10,000 files"):
    print("🔥 Doing the dangerous thing.")
else:
    print("😌 Crisis averted.")
Why this matters:
- Prevents catastrophic mistakes
- Adds a human pause
- Makes scripts feel cooperative, not reckless
Good automation doesn’t remove humans — it respects them.
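One wrinkle: a bare `input()` hangs forever in cron jobs and CI. A sketch of a non-interactive-safe variant — the `assume_yes` flag is my own assumed extension, the moral equivalent of a `--yes` CLI option:

```python
import sys

def confirm(action, assume_yes=False):
    # `assume_yes` is a hypothetical escape hatch for CI / cron runs
    if assume_yes:
        return True
    if not sys.stdin.isatty():
        # No human attached: refuse the dangerous action instead of hanging
        return False
    reply = input(f"⚠️ About to {action}. Proceed? (y/n): ")
    return reply.strip().lower() in ("y", "yes")
```

Defaulting to "no" when there's no terminal keeps the safety property intact even when nobody is watching.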