He Made $64K Searching GitHub With A GENIUS Trick (using open source only)



This issue is brought to you by:

TestSprite is the Easiest AI Agent for Software Testing.
Ensure End-to-End Confidence in Your Software Quality.

This is the story of how one individual, "Mr. B," leveraged a deep understanding of Git's less-explored features to uncover secrets in public repositories, earning over $64,000 💰.

His "genius trick" wasn't about finding new tools, but about using existing Git functionalities in ways most developers and even many security professionals overlook.


The real goldmine often lies beneath the surface

We're used to interacting with Git through commands on our current working directory and recent history.

However, Git is designed to never lose data easily.

Mr. B's approach highlights that committed secrets don't just vanish, even when the branch is later deleted or the commit is rewritten away with commands like git commit --amend or git rebase.

Files that are harder to find can persist as "dangling" or "unreachable" objects inside Git's internal object database (.git/objects) until garbage collection eventually prunes them (by default, after about 14 days).

These are invisible to standard git log or file browsing but are recoverable if you know where and how to look.

Putting these learnings into action, especially in today's environment where AI can empower even junior attackers, requires a shift in mindset.

The big problem is the persistent risk of sensitive data: API keys, cloud credentials, private tokens being committed to Git repositories.

Once committed, even if "deleted" from the main branch or recent history, these secrets can become ticking time bombs.


What most people do

Most people, and many automated tools, try to solve this by scanning the current version of the code or perhaps the visible commit history.

They might assume that if a secret isn't in the latest commit or if a branch containing it was deleted, the risk is mitigated.

This approach often doesn't work because Git's architecture is (WAY) more complex.

Git stores all file versions and directory structures as objects.

When history is rewritten or branches are deleted, the old objects aren't immediately purged!

They become "loose" or "dangling" objects.

Git may "pack" these objects into compressed packfiles, still within the .git directory, before they are eventually garbage collected, but none of this is instantaneous.

Standard scanning methods miss these orphaned artifacts.
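You can watch this loose-to-packed lifecycle with Git's own plumbing; here is a small sketch in a throwaway repository (names made up):

```shell
# Sketch: loose objects vs. packfiles, observed via git count-objects.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .
echo "hello" > file.txt && git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "one"
# Fresh objects start life as individual "loose" files under .git/objects/:
git count-objects -v
# git gc compresses them into a packfile (normally this runs on its own):
git gc --quiet
git count-objects -v   # loose count drops to 0; "packs" is now non-zero
# Unreachable objects survive gc until the prune window expires
# (gc.pruneExpire, roughly two weeks by default).
```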


A new approach

Mr. B's successful strategy was to dig into Git's internals.

This involves:

Exploring the Object Database: Using Git plumbing commands like

# fsck: identify objects no longer referenced by any commit, branch, or tag
git fsck --full --unreachable --dangling

Unpacking Archives: decompressing packfiles can surface older, hidden objects. Note that git unpack-objects reads a packfile on stdin and is typically run from a separate clone, since it skips objects the current repository already has:

# feed it a packfile copied from the target repository
git unpack-objects < /path/to/copy/of/pack-file.pack

Full History Traversal: Systematically going through every commit in the repository's history using

git rev-list --all

and examining the diff for each commit against its parent.

This can uncover files that were added and then quickly removed, a common pattern for accidental secret exposure.
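That traversal can be sketched as a small shell loop (a toy repository and file name, made up for illustration):

```shell
# Sketch: find files that were added at some point but no longer exist at HEAD.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
echo "token=hypothetical-value" > creds.txt
git add creds.txt && g commit -q -m "add creds"
git rm -q creds.txt && g commit -q -m "remove creds"
# Walk every commit, list the files it added (--diff-filter=A),
# then report any that are gone from HEAD:
git rev-list --all | while read -r c; do
  git diff-tree --root --no-commit-id --name-only --diff-filter=A -r "$c"
done | sort -u | while read -r path; do
  git cat-file -e "HEAD:$path" 2>/dev/null || echo "added then removed: $path"
done
```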


Targeted Scanning:

Once these potentially sensitive historical artifacts and dangling objects are recovered (even if they are just raw blobs without filenames), run a robust open-source scanner such as TruffleHog over the raw content. Mr. B ran it against the file system with all detectors enabled:

trufflehog filesystem --only-verified --print-avg-detector-time --include-detectors="all" ./ > secrets.txt
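One detail worth spelling out: TruffleHog's filesystem mode scans files on disk, so the recovered blobs have to be written out somewhere first. A glue sketch (the repo and blob below are fabricated for illustration):

```shell
# Sketch: dump unreachable/dangling blobs to plain files for a scanner.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
# Fabricate an orphaned blob, as a deleted branch or rebase would leave behind:
sha=$(echo "db_password=made-up-value" | git hash-object -w --stdin)
mkdir -p recovered
git fsck --no-reflogs --full --unreachable --dangling 2>/dev/null \
  | awk '$2 == "blob" {print $3}' | sort -u \
  | while read -r s; do git cat-file -p "$s" > "recovered/$s.txt"; done
ls recovered/   # point `trufflehog filesystem` at this directory next
```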

This methodical deep dive into the repository's underlying metadata, rather than just its surface, is what allowed Mr. B to find secrets that thousands of others, using conventional methods, missed.

It’s a reminder that understanding the fundamental workings of our tools is paramount for robust security.

Our hero also mentioned that he:

  1. Only "attacked" organizations whose official bug bounty programs allow it
  2. Built a home lab and a fleet of cloud servers to run his scans
  3. "Vibe coded" a cool UI to browse the files he found
  4. Collected a list of bounties amounting to $64K USD!


Personally, I applied the same process: I scripted it and went looking for a target of my own.

So I cloned and scanned, and cloned and scanned, until one repo POPPED.

I found a few admin keys in an open-source Kubernetes project used by thousands of people, and I was excited!


My advice to you: learn these methods, get at least familiar with Git's lesser-known metadata (and object storage) behavior, and embed a similar scan into your release process to avoid future storms!
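On the prevention side, one way to embed such a scan into the release path is a pre-push hook. Here is a sketch assuming TruffleHog v3 is on the PATH; the flags are illustrative, so check your version's docs:

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-push: scan the whole local repository,
# history included, before anything leaves the machine.
if ! command -v trufflehog >/dev/null 2>&1; then
  echo "pre-push: trufflehog not installed, skipping secret scan" >&2
  exit 0
fi
# --fail makes trufflehog exit non-zero when verified secrets are found
if ! trufflehog git "file://$PWD" --only-verified --fail >/dev/null 2>&1; then
  echo "pre-push: possible verified secret detected; push aborted" >&2
  exit 1
fi
```

The same command slots into a CI step just as easily, which catches secrets even when contributors haven't installed the hook locally.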

The full blog post is here.

Thank you for reading.
Feel free to reply directly with any question or feedback.
Have a great weekend!


ESPRESSO FRIDAYS

Every once in a while I send hand picked things I've learned. Kind of like your filter to the tech internet. No spam, I promise!
