I Tried Block Compressed Sensing on Real Photos. Here’s My Take.

You know what? I didn’t expect to like block compressed sensing this much. I used it for a small camera project and for some test images on my laptop. It’s not magic. But it’s handy. And a little weird in a good way.

Quick idea, plain words: it chops a photo into small blocks (like 16×16 or 32×32). For each block, it grabs a few smart numbers instead of the whole block. Later, a solver guesses the full block back. That’s the heart of it.
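Those "smart numbers" are just weighted sums of the block's pixels. Here's a minimal numpy sketch of the capture side; the 32×32 blocks, the ±1 Bernoulli matrix, and the 25% rate are one concrete choice I used, not the only way to do it:

```python
import numpy as np

rng = np.random.default_rng(0)

B = 32                      # block size (32x32 pixels)
rate = 0.25                 # keep 25% of the numbers
m = round(rate * B * B)     # measurements per block -> 256

# Random +/-1 Bernoulli sensing matrix, shared by every block
Phi = rng.choice([-1.0, 1.0], size=(m, B * B)) / np.sqrt(m)

def sense_image(img):
    """Chop a grayscale image into BxB blocks and measure each one."""
    h, w = img.shape
    measurements = []
    for r in range(0, h, B):
        for c in range(0, w, B):
            block = img[r:r + B, c:c + B].astype(float).ravel()
            measurements.append(Phi @ block)   # m numbers instead of B*B
    return np.array(measurements)

img = rng.integers(0, 256, size=(512, 512))   # stand-in for a real photo
y = sense_image(img)
print(y.shape)   # (256, 256): 256 blocks, 256 measurements each
```

Note the matrix is shared across blocks, which is what keeps the memory footprint small on the capture side.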
For a deeper dive into where block-level sensing slots into the wider universe of data reduction, swing by DataCompression.info for clear primers and tool rundowns.
If you’d like the full mathematical treatment, this concise arXiv note lays out the core theory in detail (read it here).

If you’d like to see my raw, day-by-day notebook from the very first time I pushed block compressed sensing on field shots—including all the code stumbles I cut from this summary—you can open the full lab diary here: my detailed BCS experiment log.

I’ll share what I ran, what worked, what didn’t, and a few numbers I wrote down.

My Setup (nothing fancy)

  • Laptop: MacBook Air M1, 16 GB RAM.
  • Tiny rig: Raspberry Pi Zero W with a small camera.
  • Tools I used: MATLAB with BCS-SPL and TV-based solvers, Python with PyTorch models (ISTA-style and a ReconNet port), OpenCV for blocking/unblocking. For denoising, I used BM3D and a light DnCNN model.

For anyone hunting a broader survey of block-CS algorithm choices and reconstruction tweaks, this open-access overview in Algorithms is an excellent starting point (link).

Block sizes I tried most: 16×16 and 32×32.
Rates (how much I keep): 5%, 10%, 15%, 25%, 30%.

Solvers:

  • OMP (fast, greedy, does okay on edges),
  • TV-based (smooths noise but can blur),
  • A plug-and-play loop with BM3D (cleaner look),
  • A learned model (ISTA-Net-ish) for speed.
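Of those, OMP is simple enough to sketch in a page of numpy. This is a toy version run on a synthetic block that really is sparse in the DCT — real blocks are only approximately sparse, so real results come out softer — and not the tuned solver I actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def omp(A, y, n_iters):
    """Greedy OMP: pick the best-matching column, refit, repeat."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

B, rate, k = 16, 0.25, 6
m = round(rate * B * B)                       # 64 measurements

C = dct_matrix(B)
Psi = np.kron(C, C).T                         # columns = 2D DCT atoms
Phi = rng.choice([-1.0, 1.0], size=(m, B * B)) / np.sqrt(m)
A = Phi @ Psi                                 # effective dictionary

# Make a block that really is 6-sparse in the DCT, then measure it
s_true = np.zeros(B * B)
s_true[rng.choice(B * B, size=k, replace=False)] = rng.normal(size=k)
block = Psi @ s_true
y = Phi @ block

s_hat = omp(A, y, n_iters=k)
block_hat = Psi @ s_hat
print(np.max(np.abs(block_hat - block)))      # should be ~0 on this noiseless toy
```

The TV and plug-and-play solvers replace the sparsity model with a smoothness prior or a denoiser, which is why they trade edge sharpness for cleaner flat regions.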

Color handling: I got better results sending the Y channel heavy and Cb/Cr light (YCbCr). When I sent RGB plain, I saw color speckle at low rates.
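A minimal sketch of what I mean, with a pure-numpy BT.601 conversion; the 25%/8% split below is just what happened to work for me, not a rule:

```python
import numpy as np

rng = np.random.default_rng(2)

def rgb_to_ycbcr(img):
    """BT.601 full-range RGB -> YCbCr (float, Cb/Cr centered on 128)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + 0.564 * (b - y)
    cr = 128.0 + 0.713 * (r - y)
    return y, cb, cr

# Spend the measurement budget unevenly: Y heavy, chroma light.
RATES = {"Y": 0.25, "Cb": 0.08, "Cr": 0.08}

def budget(shape, rates=RATES):
    n = shape[0] * shape[1]
    return {ch: round(r * n) for ch, r in rates.items()}

img = rng.integers(0, 256, size=(512, 512, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(img)
print(budget(img.shape))   # Y gets ~3x the measurements of each chroma channel
```

The eye forgives blur in chroma far more than in luma, which is the same bet JPEG's chroma subsampling makes.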

After hammering on TIFF scans all week I found a few tricks—predictive tiling, palette hacks, and delta-RLE combos—that beat my old go-tos; the blow-by-blow is in this TIFF-centric deep dive if you ever need truly lossless workflows.

Real Photos I Tested

1) Sunset trees by my street (512×512 crop)

  • Setup: 32×32 blocks, 25% rate, Bernoulli matrix, OMP rebuild, BM3D light pass
  • Notes: Roof lines looked sharp. Leaves? Kind of mushy. The sky got slight banding. The tree trunk was solid though.
  • My quick metrics: PSNR ~29.6 dB, SSIM ~0.88
  • Feel: Nice if you don’t pixel-peep. If you do, you’ll see the grid a bit.

2) Brick wall with my old bike (lots of texture)

  • Setup: 16×16 blocks, 10% rate, TV solver
  • Notes: Tough scene. Bricks made a checker vibe. Edges looked okay near the bike frame, but brick detail broke down. Noisy grout turned smooth and odd.
  • Metrics: PSNR ~24.1 dB, SSIM ~0.78; with BM3D after, SSIM nudged to ~0.82 but got softer
  • Feel: This shows the weak spot—fine texture at low rates.

3) Park from a small drone (downscaled to 512×512)

  • Setup: 32×32 blocks, 15% rate, partial Hadamard (fast), TV solver on laptop
  • Time: ~1.6 s per frame on CPU
  • Notes: Paths and benches looked clean. Grass looked patchy. Clouds smeared a bit, but not awful.
  • Metrics: PSNR ~27.8 dB, SSIM ~0.86
  • Feel: For scouting and saving bandwidth, I’d keep it.

4) My cat on the couch at night (ISO noise galore)

  • Setup: 32×32 blocks, 30% rate, PnP with DnCNN
  • Notes: Fur kept shape. Whiskers held up better than I thought. Low light noise dropped. The lamp glow kept its mood.
  • Metrics: PSNR ~31.3 dB, SSIM ~0.91
  • Feel: This one made me smile. Looked natural.

5) Raspberry Pi cam, slow Wi-Fi at my aunt’s place

  • Setup: 32×32 blocks, ~15% rate, fast Walsh-Hadamard on the Pi, send blocks as packets; rebuild on my laptop with OMP + BM3D
  • Bandwidth: cut to about one-sixth of a JPEG at the same visual level (for these scenes)
  • Quirk: A few packets dropped. Only those blocks got hit. The rest stayed fine. I kind of liked that—damage stays local.
  • Feel: For spotty Wi-Fi, this was the win.

What I Liked

  • Capture side is cheap: the Pi just did a few fast transforms and sums. No heavy math there.
  • Works well on smooth stuff: skies, walls, skin tones (unless you push the rate too low).
  • Block-local faults: lose a packet, lose a block. The whole photo doesn’t go down.
  • You can go fast with Hadamard: no huge random matrices to store.
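That last point is the key trick: the Walsh-Hadamard transform needs only additions and subtractions, O(n log n) of them, so the Pi never stores a sensing matrix at all. A bare-bones sketch — the normalization and the random row subset are my choices:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (length must be a power of two)."""
    x = x.astype(float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums...
            x[i + h:i + 2 * h] = a - b  # ...and differences
        h *= 2
    return x

rng = np.random.default_rng(3)

B = 32
n = B * B                                      # 1024-point transform per block
rate = 0.15
keep = rng.permutation(n)[: round(rate * n)]   # which Hadamard rows to keep

def sense_block(block):
    """Partial Hadamard sensing: transform, then keep a random subset."""
    return fwht(block.ravel())[keep] / np.sqrt(n)

block = rng.integers(0, 256, size=(B, B))
y = sense_block(block)
print(y.shape)   # (154,) -- 154 numbers instead of 1024
```

The receiver only needs the same `keep` indices (or the RNG seed) to know which rows were measured.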

What Bugged Me

  • Blocks can show: grids pop up at low rates, even with post-cleaning.
  • Fine texture suffers: leaves, bricks, fabric weave—these are hard.
  • Rebuild time adds up: on CPU, the nicer solvers are slow. Learned models help, but they need training.
  • Color can get weird: if you compress RGB straight, low rates get color noise. YCbCr helped.
  • Tuning matters: block size, rate, solver, and denoise all fight each other a bit.

When I pushed aggressive settings—aiming for sub-15 kB social-media shots—this separate project on shrinking photos to 15 kB taught me a few survival tricks that carried back into my BCS presets.

My Settings That Landed Well

  • 32×32 blocks for general use. 16×16 if memory is tight or motion is small.
  • 25–30% rate for “looks good” photos. 10–15% for scouting or low-stakes shots.
  • Partial Hadamard for sensing. It’s fast and light.
  • Rebuild with PnP + BM3D when I want clean lines, or a small learned model for speed.
  • For color, sense Y heavier than Cb/Cr, then recombine the channels at the end.
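If it helps, here's roughly how I keep those presets around — plain Python dicts, and the field names and exact numbers are just my own convention:

```python
# My go-to presets (names and values are my taste, not a standard)
PRESETS = {
    "quality": {          # photos I actually want to look at
        "block": 32,
        "rate": 0.28,     # the 25-30% band
        "sensing": "partial_hadamard",
        "solver": "pnp_bm3d",
    },
    "scout": {            # drone scouting, low-stakes shots
        "block": 32,
        "rate": 0.12,     # the 10-15% band
        "sensing": "partial_hadamard",
        "solver": "learned",   # small ISTA-style net for speed
    },
    "tiny_rig": {         # memory-tight Pi setups
        "block": 16,
        "rate": 0.15,
        "sensing": "partial_hadamard",
        "solver": "omp",
    },
}

def measurements_per_block(p):
    """How many numbers each block actually costs under a preset."""
    return round(p["rate"] * p["block"] ** 2)

print(measurements_per_block(PRESETS["quality"]))   # 287
```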

Small Digression: Why I Even Tried This

I needed to send yard photos over slow Wi-Fi, so my kid could check if the dog was back at the door. JPEG was okay, but the Pi choked when the signal dipped. With blocks, when a packet got lost, only that square turned soft. The dog’s face stayed clear. That sold me more than any chart.



For moments when I feel like embracing the opposite extreme—shuffling pixels into wild symbol art—here’s the recap of my Unicode image experiment; it’s pure fun but surprisingly instructive about perceptual redundancy.

Who It Fits

  • Edge cameras, drones, low-power rigs.
  • Folks who like tinkering with math and images.
  • Not the best pick for art prints or heavy texture work. You’ll see the grid.

Curious about codecs that flip the compute balance—lightweight encoders but beefier decoders? My field notes on **[asymmetric gained deep image compression with continuous rate adaptation](https://datacompression.info/i-tried-asymmetric-gained-deep-image-compression-with-continuous