About This Series
How I Got In is a collection of real-world stories from red team and social engineering engagements we’ve done over the years. Every story you read here is based on an actual test, with names and details changed to protect confidentiality. These episodes show how attackers think, how real breaches unfold, and how trust can be exploited in ways that are often invisible until it’s too late.
The goal isn’t to sell services or pitch tools. The goal is to give you a front-row seat to how it really happens, and to tell the kinds of stories that remind us the biggest risks aren’t always technical.
Each episode lives in a larger volume. As we publish more, the collection will grow. You can jump in anywhere.
Episode 1: The Raffle Phishing Campaign
How I Got In – Red Team Files, Volume 1
The Mission Brief
This company wasn’t new to risk. They operated in the payment recovery industry, and their job required them to engage with people who didn’t always want to be found, contacted, or reminded they owed money. They weren’t strangers to threats, scams, or attempts at retaliation. What they wanted to know now was simple:
Could someone actually get in?
We were hired to find out.
The engagement wasn’t about testing firewalls or running vulnerability scans. The client wanted us to simulate how a real attacker might operate if their goal was to infiltrate systems, harvest access, and cause damage. Not with code, but with conversation: a phishing campaign. This was a focused social engineering engagement. No physical intrusion. No badge cloning or office visits. Only electronic vectors were allowed, but we had permission to go as deep as we could within that scope.
They wanted to know:
- Where were their weak points?
- Who was vulnerable to phishing or phone-based impersonation?
- What security policies didn’t match real-world behavior?
- How far could a single attacker go, starting with nothing?
We weren’t given credentials, names, email addresses, or internal documents. Just a company name and a green light.
We started with a blank slate.
Phase 1: Blank Slate Recon
Finding a way in always starts the same way: research. In this case, it wasn’t quick or easy.
The company operated in the payment recovery space. That’s an industry that sits at the intersection of finance, privacy, and public frustration. Their digital footprint was more muted than most, which made sense. Fewer blog posts, fewer employee spotlights, fewer press releases. This wasn’t a retail brand. They weren’t advertising themselves to the world. They were trying to avoid unnecessary attention.
But to breach a system, you don’t need attention. You need access. And access starts with understanding.
So, we dug.
We hit LinkedIn first, but it was thin. Only a few employees listed the company as their current employer, and many of the roles were generic. There wasn’t much to work with at the start.

After this, we expanded the search.
We scraped their website looking for team mentions, contact emails, hiring pages, or bios. We cross-referenced domain names against archived conference attendee lists. We searched job boards for recently posted positions to see how they referred to their IT stack and internal structure. We combed through local business directories and state-level corporate registration filings. We even checked industry-specific databases to see if they had listed contact points for data partners or affiliates.
Slowly, pieces started to form.
A few names. A couple of job titles. Then we found two third-party vendor relationships mentioned on obscure news sites. Those vendors had team members tagged in shared press releases, and those team members had LinkedIn connections to our target. That helped us fill in the edges of the org chart and start building internal structure from the outside.
Eventually, we found enough clues to identify the company’s email pattern: first initial, a dot, then last name. From that, we built a list of about 30 to 40 probable staff emails and started grouping people into categories: IT and systems staff, finance, executives, support.
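Once a pattern like that is confirmed, expanding a handful of names into a candidate mailing list is mechanical. A minimal sketch of that step, using made-up names and a placeholder domain (nothing here comes from the actual engagement):

```python
# Hypothetical sketch: expanding a discovered email pattern
# (first initial + "." + last name) into probable addresses.
# Names and domain are placeholders for illustration only.

def probable_email(full_name: str, domain: str) -> str:
    """Build a first-initial.lastname address from a full name."""
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]
    return f"{first[0]}.{last}@{domain}"

staff = ["Jane Doe", "John Q Smith"]
emails = [probable_email(name, "example.com") for name in staff]
# emails -> ["j.doe@example.com", "j.smith@example.com"]
```

In practice a list like this still needs validation, since middle names, hyphenated surnames, and name collisions all break the naive pattern.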
That was when we found the CEO’s LinkedIn profile.
It stood out immediately. Where the other profiles were vague or absent, his was polished and confident. He listed his board involvement with a national faith-based athletic group, along with personal hobbies, photos from events, and a few public endorsements. And then, right inside the additional info was his personal cell phone number. Visible to anyone with a regular LinkedIn account.

In all our years of doing this work, we’ve rarely seen that. A C-level executive posting a direct cell number to a public LinkedIn profile?
And in that moment, we knew where we were starting.
Phase 2: Building the Pretext
Now that we had the CEO’s personal number and a strong feel for who he was, it was time to build the story.
This is where most people misunderstand social engineering. It’s not about trickery in the Hollywood sense. It’s not about fast-talking or aggressive sales pitches. It’s about trust. And trust only works when it feels familiar.
His LinkedIn profile had already given us the theme. He wasn’t just a business executive. He was involved in a national faith-based sports organization.

That detail meant everything. It told us about his values, his volunteer work, and how he saw his role in the community. It also gave us a reason to talk to him without triggering suspicion.
So we created a backstory.
I became a youth coach from Denver. The team name was real, the league structure was real, and the tone was familiar. I made sure I could talk about local games, other board members, and even the seasonal events that this organization regularly held. If he challenged me on anything, I’d be ready.
Luckily, he didn’t challenge me.
The call was short, around three minutes. I thanked him for his ongoing support of the organization and let him know that as a board member, his name had been entered into a raffle for donors and volunteers. I made it sound simple and routine, like it was part of the group’s annual outreach program.
He was polite. Friendly. Thanked me for the call. Said it was nice that we were doing something like that. He asked for my name and what team I coached. I gave both without hesitation. The names were real, and I delivered them naturally.
He never questioned the call.
I told him we’d be doing the drawing the next day and that if his name came up, I’d give him a quick call back.
That was all.
The goal here wasn’t to get him to click something. Not yet. The goal was to plant the seed, make the next call expected, and frame our future interaction as a follow-up rather than an intrusion.
He hung up thinking he’d just been thanked for his service to a cause he believed in. I hung up knowing that the door was open.
Phase 3: Crafting the Bait
With the seed planted, it was time to turn the pretext into payload.
This part of the operation always requires balance. The bait has to be technically functional, but also visually convincing. You can have the best payload in the world, but if the target doesn’t click, it dies in their inbox.
So we kept it simple.
We designed a fake Excel document titled [Client_Name]_Raffle_[ORG].xls. It was meant to look like a prize confirmation form, something that would make perfect sense as a follow-up to our earlier call. The top-left corner of the spreadsheet featured the logo of the sports nonprofit. Right beneath it, we placed a friendly message on a yellow background:
Congratulations on the raffle. To view prize details, click “Enable Content” on top.

We kept it vague on purpose. No prize amount, no item name. Just enough to spark curiosity. The kind of curiosity that overrides hesitation, especially when the document comes from someone you’ve already spoken to and trust.
The trick wasn’t in the visual layout. It was in the expectation. When someone believes they’re just completing a process they started earlier, they are far less likely to question the next step. Especially when that step is as routine as clicking a yellow “Enable Content” bar in Microsoft Office.
Behind the scenes, we wrote a basic but effective VBA macro.
It used the Auto_Open() function to trigger on file launch. The macro used a hidden PowerShell call to decode a base64-encoded reverse shell payload and send the connection back to our listener over port 443. That port is used for encrypted web traffic, which made it unlikely to be blocked or noticed without deep packet inspection. Even most antivirus tools back then weren’t catching this unless the payload matched something on a signature list.
This specific attack chain, which used macros to execute PowerShell, used to be devastating. But over the years, Microsoft tightened restrictions. By now, macros in downloaded Office documents are disabled by default. Opening a file like this today requires multiple manual bypasses, which most users would hesitate to complete. At the time of this assessment, though, it only took a single click on “Enable Content.” It was a common corporate blind spot. Many companies didn’t realize how dangerous macros could be if they allowed them to run without control.
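It’s worth noting that the base64 layer in a chain like this is not encryption, just an encoding. PowerShell’s `-EncodedCommand` flag expects base64 over a UTF-16LE string, and the transformation is trivially reversible, which is why it evades only signature matching, not inspection. A harmless illustration of that encoding step (the command here is a benign placeholder, not the engagement payload):

```python
import base64

# PowerShell's -EncodedCommand expects base64 of a UTF-16LE string.
# A deliberately harmless command stands in for any real payload.
command = "Write-Output 'hello'"
encoded = base64.b64encode(command.encode("utf-16-le")).decode("ascii")

# Decoding reverses it exactly; this is all the "obfuscation" amounts to.
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)  # prints: Write-Output 'hello'
```

Defensive tooling that decodes and inspects encoded command lines, rather than pattern-matching the raw string, sees straight through this layer.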

We would never recommend this exact technique in modern engagements without adapting for current protections. But back then, it worked. It worked well.
And that was the moment where everything came together. We had a believable call. We had a target who was expecting a follow-up. We had a payload dressed up like a thank-you message.
Now we just needed him to open it.
Phase 4: The Callback
The next morning, I called him back.
I waited until mid-morning when things would be busy enough that the call wouldn’t seem intrusive but early enough that he’d likely still be at his desk. Same tone. Same voice. Calm. Friendly. Familiar. I reminded him who I was and said I had some good news.
His name had been drawn. He won.
No hesitation. No confusion. Just a laugh and a little disbelief. He said, “You’ve got to be kidding me. I love golf. That’s incredible.” Then he added, “It’s like it was meant to be.”
That right there is the point of no return. He was already emotionally bought in. I didn’t have to sell him anything. I was just finishing a story he had already started writing in his head.
We built the malicious macro to do more than call back to us. It also loaded an image of the “golf clubs he won,” along with additional details about them. That way, clicking the button to enable the macro would actually do something from his point of view.
I told him we had a short confirmation form for him to review, nothing formal. It was just a quick Excel document with a description of the club and a field for preferred shipping method. I made a passing comment about the formatting sometimes looking weird in Outlook. I said, “You might see that yellow bar at the top. If it doesn’t render right, just hit Enable Content and it’ll fix it.”

He said he had it open already. I could hear him clicking. A second later, he said, “Yep, got it.”
And then, just like that, the shell connected.
The reverse connection popped quietly on my listener. The beacon was stable. The connection was clean. No errors, no delay, no signs of any interruption or security event. I was sitting on the CEO’s internal workstation.
The payload executed exactly as designed. It reached out through port 443, which meant to any firewall or IDS that wasn’t using deep inspection, it just looked like encrypted web traffic. No antivirus alert. No endpoint detection alarms. Just a silent little handshake between his machine and mine.

As we wrapped up the call, he threw in one last detail.
“If you need to send them out,” he said, “I can give you my FedEx number. That’ll make it easier.”
He rattled it off over the phone. Then, not five minutes after we hung up, he followed up with an email. His home address, shipping details, and a note that read, “Please use the following FEDEX account number #[number].”

From his side, it was helpful. From our side, it was bonus information we hadn’t even asked for.
And now, with the call ended and the workstation compromised through our phishing campaign, the real work could begin.
Phase 5: Inside the Machine
With the CEO’s workstation compromised, the real work began.
Our point of contact had made one thing clear before the test started. “If you manage to get in,” he said, “see how far you can go. We want to know what kind of damage a real attacker could do if they landed a shell on one of our machines. Show us what’s at stake.”
So we did exactly that.
The reverse shell connection was solid. It remained stable throughout our interaction and gave us full access to the CEO’s computer. We confirmed something important early on. He had local administrator rights on his own workstation. That opened the door to deeper access.
We began with light reconnaissance. File system access, network share enumeration, scheduled tasks, browser history, screenshots. From there, we enabled keylogging to observe his activity and turned on the webcam silently to confirm the physical environment. Everything was functioning. There were no user notifications, no popups, nothing that would cause suspicion.
He had no idea we were there.
With elevated privileges on the endpoint, we were able to plant persistence and begin a full memory dump of LSASS, the process responsible for handling Windows logins.
That’s where it got interesting.
From that dump, we recovered a cleartext version of the CEO’s password. It was stored in memory as a result of how the system had been used. Nothing out of the ordinary, but revealing nonetheless. That password became a key we could test against other machines.
We started trying it on different endpoints inside the same network segment.
It worked on several.
Later, we learned the reason why. Over time, IT had granted the CEO local admin rights on dozens of systems. It had happened gradually. One request at a time. Maybe he needed access to install something. Maybe he didn’t want to wait for helpdesk tickets to get approved. Either way, the pattern became clear. When the CEO asked for access, he got it. No centralized policy. No cleanup afterward. Just individual approvals given out of convenience or pressure.
With that level of sprawl, we now had a foothold on more than just one system. We pivoted from machine to machine using a mix of his credentials and re-used local admin hashes. From there, we harvested more credentials, dumped more memory, and examined login histories to identify high-value targets.
Eventually, we landed on a system used by one of the internal IT administrators.
Their credentials were still in memory.
We captured the password, validated it, and passed it to a domain controller. The login succeeded. We were now operating with full domain admin privileges.

This entire chain, from initial call to full domain control, was built on access that no one thought to question. There were no technical vulnerabilities exploited. There were no zero-days. There was no malware written specifically for this test. There was just one document, one voice, and one decision to trust a stranger.
It was simple. It was quiet. And it worked.
Phase 6: Casting the Net
Once we had access to the CEO’s workstation and full visibility into the domain, we pivoted to the next objective: the rest of the organization.
Our goal wasn’t just to break in. The client wanted to understand how their people would respond if they were targeted by a motivated attacker with some real preparation behind them. Not just a random phishing blast. Something believable. Something that looked like it belonged.
This phase was all about reach.
We started by identifying our targets. Between open-source recon and internal enumeration, we had a list of about 30 confirmed employee identities and roles. We grouped them by department: finance, HR, support, IT, and a few key business units.
Then we built the delivery mechanism.
We registered a domain that looked just close enough to pass inspection: webmail-outlook.com. That domain was purchased a week before we used it. We didn’t just sit on it either. We set up a simple warmup site with placeholder content and generated light but real-looking traffic from multiple IPs. The goal was to give the domain a reputation. Fresh domains often get flagged. A warmed-up domain blends in.
Once that had baked for a few days, we built a subdomain that mimicked their actual structure. If the company’s real portal was something like mail.clientname.com, our phishing domain became [clientname].webmail-outlook.com. The URL looked long, corporate, and familiar.
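The reason a URL like that works is that people scan hostnames left to right, while trust actually flows right to left: the only part an attacker can’t fake is the registrable domain at the end. A simplified defender-side sketch of that distinction, with `clientname` standing in for the real company name:

```python
# Users read "clientname.webmail-outlook.com" left to right and see the
# brand first. The part that matters is the registrable domain on the right.
# "clientname" is a placeholder, not the actual client.

def registrable_domain(hostname: str) -> str:
    """Naive sketch: take the last two labels of the hostname.
    (A real check needs the Public Suffix List to handle TLDs
    like co.uk, where two labels are not enough.)"""
    return ".".join(hostname.lower().split(".")[-2:])

print(registrable_domain("mail.clientname.com"))
# -> clientname.com  (legitimate portal)
print(registrable_domain("clientname.webmail-outlook.com"))
# -> webmail-outlook.com  (attacker-controlled)
```

Training that teaches users to read the domain from the right, and mail filtering that compares registrable domains rather than full hostnames, both target exactly this trick.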
Then we cloned their real Outlook Web App login page.

We made sure everything matched. Fonts. Buttons. Load behavior. If someone compared our page side-by-side with the real one, they’d barely see the difference. On top of that, we cloned their actual error message too. This let us capture credentials and immediately redirect the user to the real login page. To them, it just felt like a mistyped password. They entered it again, this time on the real portal, and it worked. No red flags. No reported issues.

The phish email itself was boring by design.
It told users they needed to reauthenticate for a security update to the mail platform. There was no panic language. No dramatic countdown. Just an internal-sounding notice. The kind that gets auto-clicked in most organizations without a second thought.

We sent the phishing campaign.
The results came quickly. Within the first hour, people were clicking. By the end of the day, 14 employees had submitted their usernames and passwords. A few of them did it more than once. The redirect to the real portal convinced them the first try had just failed.
No one reported it. No alerts were triggered. We had credentials for over a third of the company by dinnertime.
The phishing portal was decommissioned within 48 hours, the data stored offline and handed over to the client during our final reporting. But the impact was clear.
One email. One lookalike site. Fourteen sets of credentials. And zero suspicion.

Phase 7: One More Call
By this point, we had already exceeded expectations.
We had compromised the CEO’s workstation, obtained domain admin access, and collected credentials from a large portion of the company through phishing. But there was one more angle we wanted to explore.
Specifically, the client wanted to know what would happen if someone called in pretending to be part of the company. Could a single voice on a phone line gain access without ever touching a keyboard?
We picked a likely entry point: the field representative login portal.
During early recon, we had found a legacy login page for field reps. It was buried deep in the public site, accessible from a subdomain, and completely separate from the main corporate portal. There was no multi-factor authentication. No modern protections. Just a basic login form asking for a CID, username, and password.

The form felt forgotten, which made it the perfect target.
Using data gathered during earlier phases, we identified a real field rep. This was someone listed in an old employee document tied to a client partner. We had a name, an office location, and a loose idea of their role.

So, we made the call.
We dialed the main support line and asked to speak to someone who could help with portal issues. A male employee answered and introduced himself as a receptionist. He was polite and patient. I told him I was Jim, a field rep, and that I was having trouble logging in. I said I had tried everything. CID, username, password. Nothing was working.
Then I asked casually, “Can you just confirm my CID and maybe my username? I’ve typed it so many times at this point I’m probably messing something up.”
No hesitation. He gave me both.
Then he added something we didn’t expect.
“You’re probably just entering the wrong password. Want me to reset it or do you want me to read it out?”
I stayed silent for a second, just long enough to let the offer hang.
He read it out loud.
Just like that, we had working credentials to a production login page. No pretext beyond a frustrated user. No ID verification. No callback. Just a helpful employee doing what he thought was right.

We verified the credentials offline and documented everything for the report.
That call, like everything else in this engagement, proved the same point. You don’t need technical exploits to breach a system. You need a voice that sounds real, a problem that feels common, and a moment where no one is watching too closely.
Wrap-Up: Why It Worked
This test started with a simple question.
Can someone get in?
The answer was yes. But the more important answer was why. Because it wasn’t a technical flaw that let us in. It wasn’t a misconfigured firewall or a forgotten patch. It was people, policies, and assumptions.
In just a few days, we compromised the CEO’s workstation through a phone call and a fake Excel file. We extracted his credentials, gained admin access on multiple systems, and escalated to domain administrator by following a trail of password reuse and unmanaged access. We harvested 14 user credentials through a phishing campaign that mirrored their real login portal. And we secured a working login to a production field rep portal from a single phone call by impersonating an employee.
No fancy exploits. No malware. No scanning. No brute force.
Every success in this test came from one of three things:
- Trust
- Repetition
- A lack of challenge
The CEO clicked a file because he expected to. Employees entered their credentials because they saw a familiar layout. A receptionist gave up a password because the caller sounded real and had a believable problem. These weren’t edge cases. These were normal moments inside a normal company.
What made them dangerous is that no one questioned them.
The client took the phishing campaign results seriously. They used the findings to retrain key staff, improve internal policies, and lock down systems that had been left too open for too long. For us, it was another reminder that when it comes to security, the hardest part isn’t getting in. It’s convincing people they’re vulnerable before someone else proves it the hard way.
FAQ: About This Engagement
What was the actual payload?
We used a VBA macro embedded in an Excel file to launch a PowerShell reverse shell. The connection came back over port 443 to a listener we controlled.

Would this attack work today?
Not in the same way. Microsoft now disables macros by default in downloaded files. Most modern environments would block this unless the user manually enabled it through multiple steps. Today, we would adapt the technique based on updated protections.

Did the CEO know he was being tested?
No. The engagement was performed quietly with internal coordination from the client. He was never told the details directly.

How long did the engagement take?
Less than a week from start to finish. Initial access came within 48 hours. Lateral movement and privilege escalation took another day or two.

What could have prevented this?
User training. Proper local admin controls. Endpoint monitoring. And most importantly, a security culture where people are encouraged to verify even the most believable requests.