Microsoft dropped research yesterday that confirms what a few of us in the identity security space have been nervously sketching on whiteboards for the past couple of years. Threat actors are using OAuth’s error-handling behavior, the redirect that fires when an authorization request fails, to route phishing victims through login.microsoftonline.com and accounts.google.com on their way to attacker-controlled infrastructure.
The campaigns hit government and public-sector orgs. Post-redirect, the payloads fork: one path runs EvilProxy to hijack the AitM session, the other drops a DLL side-loading chain that leads to hands-on keyboard activity. Microsoft killed the specific malicious OAuth apps they found, but the technique itself is not something you can patch.
Over the past two months, the Guardz ITDR and Research team has been tracking AitM trends and campaigns, identifying numerous techniques through our AitM detections. This matters because at Guardz, the MSPs and SMBs we protect are exactly the organizations that get wrecked by this class of attack.
This blog post explores how OAuth redirection abuse enables phishing and malware delivery, drawing on our detection experience and proactive threat hunting efforts.
How the Redirect Actually Works
Here is what the attacker builds. They register an application in an Entra ID tenant they control, point its redirect URI at a domain they own, then construct an authorization URL that is designed to fail:
?client_id=<attacker_app_id>
&response_type=code
&scope=<invalid_scope>
&prompt=none
&state=<encoded_victim_email>
A few parameters are doing the heavy lifting, and their interactions are what make this work.
/common/ is the multi-tenant authorization endpoint; it targets every Entra ID tenant, so the attacker does not need to know which tenant the victim belongs to.
prompt=none tells the authorization endpoint to attempt silent authentication. No login screen. No consent dialog. The IdP evaluates the request against the existing browser session, and if silent auth cannot succeed, it returns an error and redirects to the registered redirect URI. The victim never sees a Microsoft UI element. Their browser touches Microsoft infrastructure for a fraction of a second and moves on.
scope=<invalid_scope> is the deliberate poison pill. An invalid scope guarantees that the request will fail, forcing the error path. The attacker does not want a token. They want the redirect.
response_type=code matters more than it looks. It triggers the full authorization code flow, meaning the authorization endpoint evaluates the request through its full authentication logic before returning the error. If you used response_type=token instead, the implicit flow’s different validation path might reject the request earlier, before the redirect fires. I have not tested whether that is consistently true across all error conditions, but response_type=code is what every observed sample used, and I suspect that is not accidental.
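Putting the parameters together, here is a minimal sketch of how such a URL could be assembled. The app ID, victim address, and invalid scope value are illustrative, the base64 state encoding is just one of the observed variants, and the v2.0 endpoint path is an assumption (the campaign samples may use the v1 endpoint):

```python
import base64
from urllib.parse import urlencode

def build_poisoned_auth_url(client_id: str, victim_email: str) -> str:
    """Assemble an authorization URL that is guaranteed to fail and redirect."""
    params = {
        "client_id": client_id,              # attacker-registered multi-tenant app
        "response_type": "code",             # forces the full auth-code flow
        "scope": "not.a.real.scope",         # invalid scope: guarantees the error path
        "prompt": "none",                    # silent auth, no UI ever shown
        "state": base64.b64encode(victim_email.encode()).decode(),  # smuggled email
    }
    # /common/ targets every Entra ID tenant; the redirect_uri registered on the
    # attacker's app (not shown here) is where the error redirect lands.
    base = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
    return base + "?" + urlencode(params)
```

Nothing in this request is malformed at the protocol level, which is exactly the point: every parameter is individually legitimate.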

The Google Workspace variant works similarly but uses different parameters to force the same outcome. It sets prompt=none alongside auto_signin=True: the latter demands a sign-in attempt while the former forbids any interactive prompt, an unresolvable contradiction that triggers the error redirect:
?prompt=none
&auto_signin=True
&access_type=online
&state=<encoded_victim_email>
&redirect_uri=<attacker_url>
&response_type=code
&client_id=<app_id>.apps.googleusercontent.com
&scope=openid+https://www.googleapis.com/auth/userinfo.email
The invalid scope is not even the only way to trigger the redirect. Microsoft documented at least five conditions that produce the error-and-redirect behavior: the user is not logged in, the browser session cannot be retrieved for silent SSO, the application lacks a service principal in the user’s tenant, the scope is invalid, or Conditional Access policies block the sign-in. The attacker does not care which one fires. They all produce the same outcome.
Note: Many SMBs either do not use Conditional Access or fail to maintain it properly. As a result, these attacks tend to be more impactful and disruptive against them, as we have observed and detected.
And here is the part that should bother everyone. The result is a URL that starts at Microsoft or Google infrastructure, passes through their authorization endpoint, and silently drops the user on the attacker’s infrastructure.
Email security gateways see the URL login.microsoftonline.com. URL reputation engines evaluate a request to a known-good domain. Browser protections check the initial domain, not the redirect destination.
What the Error Response Tells the Attacker
The authorization failure isn’t just a redirect. It also gives attackers useful reconnaissance information. After the Entra ID redirect happens, the attacker’s server can end up receiving something like:
?error=interaction_required
&error_description=Session+information+is+not+for+single+sign-on
&state=<encoded_value>
The sign-in fails with error code 65001 (AADSTS65001), which indicates that the user or administrator has not consented to use the application. Combined with the interaction_required error, this tells the attacker:
- The account exists
- The tenant enforces interactive auth (probably MFA)
- The user has an active session that cannot be silently reused
- No token was obtained
The attacker now has confirmed account existence and authentication posture for free. Standards-compliant reconnaissance.
The 65001 error can also serve as a forensic breadcrumb that defenders should prioritize. In the Entra ID sign-in logs, this appears as a failed non-interactive sign-in from a multi-tenant application that the target tenant never registered. Most SOC teams filter on successful authentications. I cannot overstate how common this blind spot is.
Last year, we investigated a customer environment and found many 65001 failures from an app ID nobody recognized, quietly sitting in the sign-in logs. That application was doing exactly this kind of redirect probing. By the time we found it, the redirect domains had already been rotated twice.
The state Parameter Trick
Most of the coverage we have seen on this campaign glosses over a detail that matters a lot operationally: the weaponization of the state parameter.
OAuth’s state parameter is supposed to be a random nonce for CSRF protection. These campaigns stuff the victim’s email address into it, as plaintext or encoded with hex, base64, or a custom substitution scheme where 11=a, 12=b, and so on. After the redirect lands on the phishing page, JavaScript reads the state from the URL, decodes the email, and pre-fills the login form. The victim sees their email address already populated on what appears to be a legitimate sign-in page.
That is not a minor social engineering improvement. It is a conversion rate multiplier. A blank login form prompts suspicion. A form that already knows your email address feels like a continuation of the authentication flow you just started. The context created by clicking a login.microsoftonline.com URL and then seeing your own email on the next page is genuinely convincing.
For detection, the state parameter is the most reliable signal in the email layer. Take the value from any OAuth authorization URL in an inbound email, run it through base64 decoding, hex decoding, and plaintext inspection. If it resolves to a valid email address, you are looking at this technique. A legitimate OAuth implementation will produce a random value that decodes to gibberish.
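A sketch of that check, covering the plaintext, hex, and base64 variants (the custom two-digit substitution scheme would need a per-sample decoder, and the email regex here is deliberately loose):

```python
import base64
import binascii
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def decode_state_candidates(state: str):
    """Yield plausible decodings of an OAuth state value: plaintext, hex, base64."""
    yield state
    try:
        yield bytes.fromhex(state).decode("utf-8", "ignore")
    except ValueError:
        pass
    try:
        padded = state + "=" * (-len(state) % 4)  # tolerate stripped padding
        yield base64.b64decode(padded).decode("utf-8", "ignore")
    except (binascii.Error, ValueError):
        pass
    # the custom substitution scheme (11=a, 12=b, ...) is omitted here

def state_is_email(state: str) -> bool:
    """True if any decoding of the state resolves to an email address."""
    return any(EMAIL_RE.match(candidate) for candidate in decode_state_candidates(state))
```

A legitimate nonce decodes to gibberish under every scheme, so a positive hit from this function on an inbound OAuth URL is close to deterministic.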
After the Redirect: Two Kill Chains
Post-redirect activity splits into two paths, and the split matters for detection strategy because the signals are completely different.
The AitM Path
Some campaigns routed victims to EvilProxy or similar AitM frameworks after the OAuth redirect. EvilProxy reverse-proxies the real Microsoft login page in real time, capturing credentials and session cookies as the victim authenticates through the proxy. CAPTCHA challenges and Cloudflare Turnstile gates sat between the redirect and the proxy page, filtering out automated scanners.
The detection signals after the AitM succeeds are familiar: session cookie replay from a new IP or device, impossible travel between the legitimate sign-in and the first API call using the stolen session, and user-agent string changes mid-session. Guardz’s ITDR detections already catch this post-compromise pattern. But the initial delivery evaded the email gateway because the URL was Microsoft’s own infrastructure. That is the gap, and it exists industry-wide.
The Malware Path
Other campaigns redirected to /download/XXXX paths, where a ZIP file was automatically downloaded. The execution chain after extraction is worth walking through because every step is a detection opportunity:
- The ZIP contains a .LNK shortcut that fires PowerShell.
- PowerShell runs ipconfig /all and tasklist for host reconnaissance, then uses tar (yes, tar on Windows, from PowerShell, which should immediately look wrong to anyone reviewing process telemetry) to extract three files: steam_monitor.exe, crashhandler.dll, and crashlog.dat.
- PowerShell launches steam_monitor.exe, a signed binary from Valve Corporation.
- Windows’ DLL search order loads crashhandler.dll from the same directory before checking system paths.
- The malicious DLL decrypts crashlog.dat in memory and executes the final payload, which establishes C2.
Every piece of this is deliberate. steam_monitor.exe is a legitimate signed binary that passes application whitelisting. The DLL side-load avoids creating a suspicious new process. The in-memory execution of the decrypted payload means the final stage never touches disk, evading static scanning. The use of tar from PowerShell is unusual enough to detect but legitimate enough that it will not be blocked by default policies.
A detection for the PowerShell stage can target .zip + Get-ChildItem + .fullname + ::OpenRead + .Length; + .Read( + byte[] + Sleep + TaR in a single command line. That compound pattern is specific enough to be actionable. I would add tar invocation with powershell.exe as the parent process as a broader, lower-fidelity detection that might catch variants where the exact PowerShell syntax changes.
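A naive token matcher over process command lines illustrates the idea; the token list is lifted from the sample described above and would need tuning per environment:

```python
# Tokens from the observed PowerShell stage; matching is case-insensitive.
TOKENS = (".zip", "get-childitem", ".fullname", "::openread",
          ".read(", "byte[]", "sleep", "tar")

def matches_compound_pattern(cmdline: str) -> bool:
    """High-confidence match: every token must appear in one command line."""
    lowered = cmdline.lower()
    return all(token in lowered for token in TOKENS)
```

Requiring all tokens at once keeps the false positive rate near zero; dropping to a subset (say, tar plus byte[]) trades fidelity for coverage of rewritten variants.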
The RFC Already Knew
RFC 9700 (January 2025), Section 4.11.2, “Authorization Server as Open Redirector,” documents exactly this pattern. The RFC acknowledges that attackers can deliberately trigger authorization errors to force redirects through trusted endpoints to attacker destinations. It recommends that authorization servers authenticate users before any redirection and only redirect automatically to trusted URIs.
But here is the problem that the RFC cannot solve. Entra ID and Google serve millions of tenants, and they cannot vet every redirect URI on every application in every tenant. The trust model assumes tenant admins control their own app registrations. When the attacker IS the tenant admin, that assumption is worthless.
The multi-tenant /common/ endpoint exists to enable legitimate cross-tenant authentication, and the error-redirect behavior exists because the RFC defines it. The attacker is not exploiting a flaw. They are exploiting an architectural trade off between security and interoperability that the protocol designers made deliberately. There may not be a clean fix that does not break legitimate OAuth flows.
MITRE ATT&CK
| Technique ID | Technique | Campaign Application |
| --- | --- | --- |
| T1566.002 | Phishing: Spearphishing Link | Crafted OAuth URLs delivered in phishing emails. |
| T1078 | Valid Accounts | Legitimate OAuth application registered in the adversary’s own tenant. |
| T1204.001 | User Execution: Malicious Link | Victim clicks the OAuth authorization URL. |
| T1557 | Adversary-in-the-Middle | Credential interception via the EvilProxy reverse proxy. |
| T1539 | Steal Web Session Cookie | Session cookie capture through the AitM framework. |
| T1574.002 | Hijack Execution Flow: DLL Side-Loading | crashhandler.dll loaded by steam_monitor.exe. |
| T1059.001 | Command and Scripting Interpreter: PowerShell | Post-compromise command execution and host reconnaissance. |
| T1140 | Deobfuscate/Decode Files or Information | In-memory decryption of crashlog.dat. |
| T1082 | System Information Discovery | ipconfig /all and tasklist enumeration. |
| T1071 | Application Layer Protocol | Command and Control (C2) communications. |
The T1078/T1566.002 combination is the interesting one here. The “valid account” is not a compromised credential. It’s a legitimately registered OAuth application in an attacker-controlled tenant.
That is a meaningful distinction for organizations that scope T1078 detections to credential compromise scenarios. This technique uses valid accounts in the infrastructure sense, not the credential sense.
Where to Focus
I am going to concentrate on the identity signals here, because that is where the highest-fidelity detections live and where most organizations have the biggest gap.
The primary signal is the 65001 error in Entra ID sign-in logs from application IDs you have never seen before. If you are ingesting sign-in logs into BigQuery, Sentinel, Splunk, or any analytics platform, this is the detection rule you should build first.
Filter on resultType = 65001, exclude application IDs in your known-good inventory, and alert on anything that remains. The false positive rate will be low because legitimate applications do not typically trigger 65001 errors across users they have never interacted with.
Spike detection adds another layer – if the same unknown application ID triggers 65001 errors against multiple users within a short window, that is almost certainly a campaign. Legitimate applications do not mass-probe tenants they have no relationship with.
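A rough sketch of that spike logic, assuming sign-in events carry appId, userPrincipalName, resultType, and createdDateTime fields (the window, user threshold, and known-good inventory are placeholders to tune):

```python
from collections import defaultdict
from datetime import timedelta

KNOWN_APPS = {"00000000-0000-0000-0000-000000000001"}  # placeholder inventory
WINDOW = timedelta(minutes=30)   # tuning knob
MIN_USERS = 3                    # tuning knob

def find_probing_apps(events):
    """Return app IDs triggering 65001 against MIN_USERS+ distinct users inside WINDOW."""
    failures = defaultdict(list)  # appId -> [(timestamp, user)]
    for event in events:
        if event["resultType"] != 65001 or event["appId"] in KNOWN_APPS:
            continue
        failures[event["appId"]].append(
            (event["createdDateTime"], event["userPrincipalName"]))
    suspects = set()
    for app_id, seen in failures.items():
        seen.sort()  # chronological sliding window
        for i, (start, _) in enumerate(seen):
            users = {u for t, u in seen[i:] if t - start <= WINDOW}
            if len(users) >= MIN_USERS:
                suspects.add(app_id)
                break
    return suspects
```

The same logic translates directly into a scheduled KQL or SPL query; the point is the shape of the signal, not the implementation.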
Correlation is where it gets powerful: error code 65001 from an unknown app, followed by the same user visiting a newly registered domain within 5 minutes, is a high-confidence indicator. If you can parse the OAuth authorization URL from email telemetry and match the decoded state parameter to the target user’s email address, that is about as close to a deterministic detection as you will get in this space.
On the email layer, look for login.microsoftonline.com or accounts.google.com URLs in inbound messages that contain prompt=none. Not every prompt=none URL is malicious (many legitimate SSO flows use it), but prompt=none combined with a scope value that does not match known Microsoft Graph or Google API permissions is a strong signal.
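A simplified version of that email-layer check, assuming you can extract URLs from inbound messages (the scope allow-list here is a tiny illustrative subset, not the full Microsoft Graph or Google API permission catalogs):

```python
from urllib.parse import urlparse, parse_qs

AUTH_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}
# Illustrative subset only; a real deployment needs the full permission catalogs.
KNOWN_SCOPE_PREFIXES = ("openid", "profile", "email", "offline_access",
                        "https://graph.microsoft.com/",
                        "https://www.googleapis.com/auth/")

def suspicious_oauth_url(url: str) -> bool:
    """Flag prompt=none authorization URLs carrying an unrecognized scope."""
    parsed = urlparse(url)
    if parsed.hostname not in AUTH_HOSTS:
        return False
    query = parse_qs(parsed.query)
    if query.get("prompt", [""])[0] != "none":
        return False
    scopes = query.get("scope", [""])[0].split()
    # prompt=none plus a scope that matches nothing known is the signal
    return any(not s.startswith(KNOWN_SCOPE_PREFIXES) for s in scopes)
```

This deliberately returns False for ordinary prompt=none SSO flows, which keeps it quiet enough to run on every inbound URL.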
On the endpoint, the signals are more conventional but still worth codifying. PowerShell spawned by .LNK execution inside an extracted ZIP, tar invoked from PowerShell on Windows, steam_monitor.exe running from Downloads or Temp or AppData, crashhandler.dll loaded from the same non-system directory. The compound PowerShell pattern (Get-ChildItem + ::OpenRead + byte[] + TaR) is specific enough for a high-confidence rule.
What This Means for MSPs and SMBs
I am going to be direct about this because I think the practical implications are being underplayed.
The single most impactful thing you can do right now is restrict user consent in Entra ID to verified publishers only, or disable it entirely and require admin approval. We regularly track MSP-managed Microsoft 365 tenants, and the majority still have user consent set to the default, which allows any user to consent to any multi-tenant application.
That default is actively dangerous in this campaign. If users click on a phishing link, the OAuth redirect fires silently, and the redirect URI on the attacker’s app is all that matters. If you restrict consent to verified publishers, the attacker’s app cannot obtain a service principal in your tenant, which changes the error behavior. It does not eliminate the redirect risk entirely, but it reduces the attacker’s ability to tailor the attack to your tenant’s specific configuration.
The harder problem is what to do about login[.]microsoftonline[.]com URLs in email, because your email gateway trusts them. You probably cannot untrust them without breaking legitimate SSO flows. But a URL pointing to the OAuth authorization endpoint with prompt=none and an unrecognizable scope value is not the same thing as a SharePoint sharing link, and your email security should not treat them identically. This will require email security vendors to develop more granular URL analysis for OAuth authorization endpoints, and I do not think most of them are there yet.
For a comprehensive list of related indicators, including malicious OAuth client IDs, suspicious redirect URIs, endpoint patterns, and behavioral signals, see the OAuth Abuse IOC List for Threat Intel.
The latest Microsoft 365 security baseline can be a good starting point for organizations looking to strengthen their security posture: https://staging.guardz.com/blog/baseline-security-mode-for-smbs-why-it-matters/
Want to check if your Microsoft 365 environment contains known IOCs? Run the following tool:
https://github.com/guardzcom/security-research-labs/blob/main/AI-Cloud-Tools/M365-Tools/IOCs-Check/Check-OAuthIOCs.ps1