
The WebSocket Blind Spot: Why 7 of 8 SASE Vendors Can't Fully Inspect ChatGPT Traffic

ChatGPT, Copilot, and Claude use streaming protocols that most SASE proxies weren't built to inspect. Our data from 23 GenAI DLP checks across 8 vendors reveals a critical gap.

SASECompare Research

Your SASE vendor says it protects against ChatGPT data leaks. But when an engineer pastes proprietary source code into ChatGPT and the response streams back over a WebSocket connection, does your DLP actually see it?

For 7 out of 8 major SASE platforms, the answer is: not fully.

This is not a theoretical concern. It is a documented gap in our 23-question GenAI DLP comparison across Cato, Check Point, Zscaler, Netskope, Palo Alto, Cisco, Fortinet, and Cloudflare. And it stems from a fundamental architectural mismatch between how modern GenAI tools communicate and how most SASE proxies inspect traffic.

How ChatGPT Actually Talks to Your Browser

To understand the blind spot, you need to understand how GenAI tools transmit data -- because it is fundamentally different from the web traffic that SASE platforms were designed to inspect.

Traditional HTTP: Request and Response

Classic web applications use a simple pattern. Your browser sends an HTTP POST request containing the data (a form submission, a file upload, a search query). The server processes it and sends back a complete HTTP response. The connection closes.

This is the model that most SASE DLP engines were built around. The proxy sits between the client and server, intercepts the request, scans the body for sensitive data (credit card numbers, source code patterns, PII), applies a policy, and either allows or blocks it. The response comes back the same way: a complete payload that can be scanned before delivery.
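The buffer-and-scan model is easy to sketch. The snippet below is a minimal illustration of the idea, not any vendor's engine, and the detection patterns are deliberately simplified examples:

```python
import re

# Illustrative DLP patterns -- real engines use far richer classifiers
DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_buffered_body(body: str) -> list[str]:
    """Classic proxy model: the complete request body has been buffered,
    so a single pass over it finds every match before forwarding."""
    return [name for name, pat in DLP_PATTERNS.items() if pat.search(body)]

scan_buffered_body("POST data: key=AKIAABCDEFGHIJKLMNOP")  # ['aws_key']
```

The whole approach rests on one assumption: that a "complete" body exists to buffer. Streaming protocols break exactly that assumption.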

Streaming Protocols: A Different Animal

ChatGPT, GitHub Copilot, Claude, and Gemini do not work this way. They use streaming protocols to deliver responses token by token, in real time. The primary mechanisms are:

Server-Sent Events (SSE): The client opens a long-lived HTTP connection. The server sends data fragments continuously over that single connection. ChatGPT's web interface primarily uses SSE over HTTP/2 to stream tokens as they are generated.

WebSocket: A full-duplex communication channel over a single TCP connection. Unlike HTTP, data flows in both directions simultaneously without the overhead of new requests. Microsoft Copilot relies heavily on WebSocket connections, and many GenAI tools use WebSocket for interactive features.

HTTP/2 Multiplexing: Multiple streams share a single TCP connection. Data from different requests and responses interleave on the same connection, making it harder for a proxy to isolate and inspect individual data flows.

The critical difference: in traditional HTTP, there is a clear request body that a proxy can buffer, scan, and block before forwarding. In streaming protocols, data arrives as a continuous flow of small frames. A DLP engine that waits for the "complete" payload will wait forever -- the connection stays open for the duration of the conversation.
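The difference is easy to demonstrate. In this sketch (illustrative only, using a made-up sensitive string), a scanner that inspects each frame in isolation misses a match that spans a frame boundary, while a small rolling buffer across frames catches it:

```python
import re

CANARY = re.compile(r"SECRET-TOKEN-1234")  # hypothetical sensitive pattern

# The same payload, delivered as SSE-style token fragments: the sensitive
# string is split across frames, as streaming responses routinely do
frames = ["data: SECRET-", "data: TOKEN", "data: -1234"]

def naive_per_frame_scan(frames):
    """Scans each frame in isolation -- misses strings split across frames."""
    return any(CANARY.search(f) for f in frames)

def rolling_buffer_scan(frames, window=64):
    """Keeps a sliding buffer across frame boundaries so split matches are
    detected without waiting for the connection to close."""
    buf = ""
    for f in frames:
        buf = (buf + f.removeprefix("data: "))[-window:]
        if CANARY.search(buf):
            return True
    return False

naive_per_frame_scan(frames)   # False: each fragment looks innocent
rolling_buffer_scan(frames)    # True: the reassembled stream reveals the match
```

This is why "we scan HTTP bodies" and "we scan streaming GenAI traffic" are very different claims.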

Why This Breaks DLP

Consider what happens when an employee pastes 200 lines of proprietary source code into ChatGPT:

  1. The browser sends the prompt, possibly as a standard HTTPS POST (this part most proxies can inspect)
  2. ChatGPT's response streams back via SSE or WebSocket over HTTP/2
  3. The response may contain echoed fragments of the original source code, analysis of proprietary algorithms, or AI-generated code derived from the input
  4. Each token arrives as a tiny frame in a continuous stream

A proxy that handles step 1 but not step 2 has a one-directional blind spot. A proxy that cannot parse WebSocket frames or SSE event streams has a complete blind spot for tools that use those protocols.

Worse, some GenAI desktop applications establish WebSocket connections that bypass the proxy entirely if the proxy does not support WebSocket interception. The traffic appears as an opaque encrypted stream that the SASE platform passes through uninspected.

What the Data Shows: 8 Vendors, 1 Clear Leader

In our GenAI DLP comparison, we asked every vendor a specific question about WebSocket and streaming protocol inspection. The results are stark.

WebSocket / Streaming Protocol Inspection

| Vendor | Rating | Detail |
| --- | --- | --- |
| Zscaler | YES | Full WebSocket inspection, HTTP/2 streaming inspection through proxy gateways |
| Netskope | PARTIAL | Requires feature flag for WebSocket; RBI does not support HTTP/2 (ChatGPT known to break) |
| Palo Alto | PARTIAL | HTTP/2 inspection supported, but SSE streaming DLP documentation is limited |
| Cato | PARTIAL | Standard HTTPS inspection; no explicit confirmation of WebSocket frame-level DLP |
| Check Point | PARTIAL | Can inspect standard HTTPS, but streaming/WebSocket gaps documented |
| Fortinet | PARTIAL | Standard TLS inspection; no explicit WebSocket DLP confirmation |
| Cloudflare | PARTIAL | AI Gateway can buffer streaming responses, but inline proxy WebSocket DLP is limited |
| Cisco | UNKNOWN | No documentation confirming or denying WebSocket/HTTP/2 DLP support |

Only Zscaler scored a full YES. Seven out of eight vendors either have documented limitations or lack documentation entirely for this capability.

This is not a minor gap. These are the protocols that ChatGPT, the most widely used GenAI tool in enterprises, relies on for every single conversation.

The Bigger Picture: Overall GenAI DLP Scores

The WebSocket gap exists within a broader landscape of GenAI DLP capabilities. Here are the overall scores from our 23-question assessment:

| Rank | Vendor | YES (of 23) | Score | Key Strength |
| --- | --- | --- | --- | --- |
| 1 | Zscaler | 20 (87%) | Leader | Only vendor with full streaming inspection |
| 2 | Netskope | 19 (83%) | Strong | Excellent CASB integration and 29-language support |
| 3 | Palo Alto | 19 (83%) | Strong | AI Access Security with deep app controls |
| 4 | Cato | 18 (78%) | Solid | Best-in-class mobile GenAI DLP |
| 5 | Cisco | 18 (78%) | Solid | Strong AI Guardrails, bidirectional scanning |
| 6 | Cloudflare | 18 (78%) | Solid | Only vendor with full desktop app coverage |
| 7 | Fortinet | 17 (74%) | Adequate | Compliance reporting strength |
| 8 | Check Point | 17 (74%) | Adequate | GenAI Protect module with prompt-side focus |

The scores at the top are close. But the capability gaps that separate vendors are not evenly distributed -- they cluster around the hardest-to-solve problems: streaming inspection, desktop app coverage, and mobile DLP.

The WebSocket inspection gap does not exist in isolation. It compounds with other capability gaps to create scenarios where GenAI data leakage goes completely undetected.

Desktop GenAI App Coverage

ChatGPT, Claude, and Copilot all have native desktop applications. These apps may use certificate pinning, which prevents your SASE proxy from decrypting traffic for inspection.

| Vendor | Desktop App Coverage |
| --- | --- |
| Cloudflare | YES -- WARP agent performs deep packet inspection regardless of certificate pinning |
| All other 7 vendors | PARTIAL -- desktop apps with certificate pinning can bypass inline DLP |

If a desktop app uses WebSocket over a pinned TLS connection, you have two blind spots stacked on top of each other. The proxy cannot decrypt the traffic and would not know how to inspect the WebSocket frames even if it could.

Mobile GenAI DLP

Employees use ChatGPT on their phones. Most SASE vendors have limited mobile DLP enforcement for GenAI tools specifically.

| Vendor | Mobile GenAI DLP |
| --- | --- |
| Cato | YES |
| Cloudflare | YES |
| All other 6 vendors | PARTIAL |

Mobile adds a third layer to the problem. A streaming GenAI conversation on a mobile device, through a native app, over a cellular connection -- this is a scenario that most SASE platforms handle poorly or not at all.

Output / Response Scanning

Even if your DLP catches sensitive data going into ChatGPT, can it inspect what comes back? AI responses can contain leaked training data, echoed proprietary information, or malicious code suggestions.

| Vendor | Output Scanning |
| --- | --- |
| Zscaler, Netskope, Palo Alto, Cisco, Cato, Cloudflare | YES |
| Check Point, Fortinet | PARTIAL |

Output scanning is directly tied to streaming inspection. If your proxy cannot parse the streaming response, it cannot scan the output for sensitive data, malicious URLs, or code injection patterns. The six vendors that score YES here have invested in buffering or real-time analysis of streaming responses -- but for most of them, this only works when the streaming connection is one they can actually intercept (bringing us back to the WebSocket problem).

Why This Gap Exists: The Proxy Architecture Problem

Most SASE platforms are built around forward proxy architectures designed for HTTP/1.1 and basic HTTPS inspection. The proxy terminates the TLS connection, inspects the decrypted HTTP request and response, applies DLP policies, and re-encrypts.

This works well for:

  • Web browsing (standard HTTP GET/POST)
  • SaaS application usage (REST APIs)
  • File uploads and downloads

This works poorly for:

  • WebSocket connections (long-lived, bidirectional, frame-based)
  • Server-Sent Events (long-lived, server-push, event-based)
  • HTTP/2 multiplexed streams (multiple interleaved data flows)
  • gRPC (binary protocol over HTTP/2, used by some AI APIs)

Adapting a proxy to handle WebSocket requires fundamentally different inspection logic. Instead of buffering a complete request body, the proxy must parse individual WebSocket frames, reconstruct the application-layer message from potentially fragmented frames, run DLP classification on each reconstructed message, and do all of this with minimal latency so the streaming user experience is not degraded.
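As a rough sketch of what that inspection logic involves -- simplified here to unmasked text frames under 126 bytes, with no masking, extended lengths, or control frames, all of which real RFC 6455 traffic requires:

```python
import re

def build_frame(payload: bytes, opcode: int, fin: bool) -> bytes:
    """Build a minimal unmasked WebSocket frame (payload < 126 bytes),
    following the RFC 6455 layout: FIN bit + opcode, then length, then payload."""
    b0 = (0x80 if fin else 0x00) | opcode
    return bytes([b0, len(payload)]) + payload

def reassemble_messages(stream: bytes):
    """Walk a byte stream of small unmasked frames and stitch fragmented
    frames (FIN=0 plus continuations) back into complete messages."""
    msgs, fragments, i = [], b"", 0
    while i < len(stream):
        b0, plen = stream[i], stream[i + 1] & 0x7F
        fin = bool(b0 & 0x80)
        payload = stream[i + 2 : i + 2 + plen]
        i += 2 + plen
        fragments += payload
        if fin:
            msgs.append(fragments.decode())
            fragments = b""
    return msgs

# A text message fragmented across three frames, as streaming tokens arrive:
# opcode 0x1 starts a text message, 0x0 frames continue it
stream = (build_frame(b"here is the ", 0x1, fin=False)
          + build_frame(b"CANARY-DLP-TEST", 0x0, fin=False)
          + build_frame(b"-2026-WEBSOCKET", 0x0, fin=True))

messages = reassemble_messages(stream)
# DLP classification runs on the reassembled message, never on raw frames
leaked = any(re.search(r"CANARY-DLP-TEST-2026-WEBSOCKET", m) for m in messages)
```

Even this toy version shows the shape of the problem: the proxy must hold per-connection state, reassemble before classifying, and do it fast enough not to stall the stream.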

This is an engineering challenge that most SASE vendors have not yet fully solved. The market moved to GenAI faster than proxy architectures could adapt.

How to Test This Yourself

Do not take any vendor's word for it -- including ours. Here is how to verify whether your SASE platform actually inspects GenAI streaming traffic.

Step 1: Create a Canary DLP Policy

Configure a DLP policy in your SASE platform that triggers on a unique, unmistakable string. Use something like CANARY-DLP-TEST-2026-WEBSOCKET -- a string that will never appear in legitimate traffic.

Set the policy to log and alert (not block) so you can see whether the detection fires without disrupting the test.
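Conceptually, the canary policy is just a literal-string detector in alert-only mode. A toy model of what you are configuring (the policy shape here is hypothetical, not any vendor's syntax):

```python
import re

# Hypothetical canary policy: fires on a string that never occurs in
# legitimate traffic, in alert-only mode so tests are non-disruptive
CANARY = "CANARY-DLP-TEST-2026-WEBSOCKET"
policy = {
    "name": "genai-canary",
    "pattern": re.compile(re.escape(CANARY)),
    "action": "alert",  # log and alert, do not block
}

def evaluate(policy, payload: str):
    """Return the action to take, or None if the policy does not match."""
    return policy["action"] if policy["pattern"].search(payload) else None

evaluate(policy, f"please repeat: {CANARY}")  # "alert"
evaluate(policy, "normal traffic")            # None
```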

Step 2: Test Standard HTTP First (Baseline)

Open ChatGPT in your browser. In the message box, type: "Please repeat the following string exactly: CANARY-DLP-TEST-2026-WEBSOCKET"

Check your SASE DLP logs. If the policy fires on the outbound request, your proxy can at least inspect standard HTTPS POST bodies. This is your baseline.

Step 3: Test the Streaming Response

Now check whether your DLP fires on the response. ChatGPT will stream back the canary string token by token via SSE. If your logs show the DLP triggering on both the request and the response, your vendor handles streaming response inspection.

If it only fires on the request, your proxy cannot inspect the streamed response.
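Walking a simplified SSE body by hand shows why response-side detection requires stream reassembly. The event shape below is illustrative only -- real ChatGPT payloads are JSON with different field names:

```python
import json, re

# A simplified SSE response body in the shape OpenAI-style APIs stream
# (assumed format for illustration; field names vary by product)
raw = (
    'data: {"delta": "CANARY-DLP-"}\n\n'
    'data: {"delta": "TEST-2026-WEBSOCKET"}\n\n'
    'data: [DONE]\n\n'
)

def sse_events(body: str):
    """Yield the data payload of each SSE event (lines starting 'data:')."""
    for block in body.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                yield line[5:].strip()

# Reassemble the streamed answer, then scan it -- the step a response-side
# DLP engine must perform before it can ever see the canary
answer = "".join(json.loads(e)["delta"] for e in sse_events(raw) if e != "[DONE]")
re.search(r"CANARY-DLP-TEST-2026-WEBSOCKET", answer)  # matches
```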

Step 4: Test WebSocket Directly

Open Microsoft Copilot (copilot.microsoft.com), which uses WebSocket connections more heavily. Repeat the canary test. Open your browser developer tools (F12 > Network tab > filter by WS) to confirm the conversation is using WebSocket.

Check your DLP logs. If the policy does not fire at all -- neither on request nor response -- your proxy is not inspecting WebSocket traffic.
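If you are scripting this check against a traffic capture or proxy log rather than using devtools, the WebSocket handshake is easy to spot: it is an ordinary HTTP GET carrying specific upgrade headers (RFC 6455). A minimal detector:

```python
def is_websocket_upgrade(request_headers: dict) -> bool:
    """A WebSocket conversation begins with an HTTP request whose headers
    ask to upgrade the connection -- this is what to look for in a capture
    or proxy log when confirming a tool really uses WebSocket."""
    h = {k.lower(): v.lower() for k, v in request_headers.items()}
    return (h.get("upgrade") == "websocket"
            and "upgrade" in h.get("connection", ""))

is_websocket_upgrade({"Connection": "Upgrade", "Upgrade": "websocket"})  # True
is_websocket_upgrade({"Connection": "keep-alive"})                        # False
```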

Step 5: Test Desktop and Mobile Apps

Repeat the canary test using the ChatGPT desktop app and the ChatGPT mobile app. These are the highest-risk vectors because they may bypass your proxy entirely.

If your DLP fires in the browser but not in the desktop or mobile app, you have confirmed the blind spot.

Step 6: Document and Escalate

Record your findings in a simple table:

| Test | DLP Fired on Request? | DLP Fired on Response? |
| --- | --- | --- |
| Browser (ChatGPT) | | |
| Browser (Copilot WebSocket) | | |
| Desktop app (ChatGPT) | | |
| Mobile app (ChatGPT) | | |

Any "No" in this table represents a confirmed data leakage path. Bring this to your vendor with a specific ask: when will WebSocket and streaming protocol inspection be generally available for GenAI DLP?

What CISOs Should Do Now

1. Do not assume GenAI DLP works for streaming traffic. Run the test above. Most security teams have never validated that their DLP policies fire on WebSocket or SSE traffic.

2. Layer your defenses. If your SASE proxy cannot inspect streaming GenAI traffic, compensate with endpoint DLP (clipboard monitoring, keystroke detection for sensitive patterns), CASB API-based scanning (post-hoc inspection of GenAI SaaS logs), and network-level controls (block WebSocket connections to GenAI domains as a stopgap).
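The network-level stopgap can be sketched as a simple rule: deny the WebSocket upgrade handshake to a list of GenAI domains the proxy cannot inspect. The domain list below is an example only -- tune it to your environment:

```python
# Example GenAI domain list (illustrative, not exhaustive)
GENAI_DOMAINS = {"chatgpt.com", "copilot.microsoft.com", "claude.ai"}

def block_genai_websocket(host: str, headers: dict) -> bool:
    """Stopgap rule: deny WebSocket upgrades to GenAI domains that the
    proxy cannot inspect, while leaving ordinary HTTPS alone."""
    is_upgrade = headers.get("Upgrade", "").lower() == "websocket"
    return is_upgrade and host.lower() in GENAI_DOMAINS

block_genai_websocket("copilot.microsoft.com", {"Upgrade": "websocket"})  # True
block_genai_websocket("copilot.microsoft.com", {})                        # False
```

Note the trade-off: blocking the upgrade may break the tool's streaming UX entirely, which is why this is a stopgap rather than a fix.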

3. Prioritize streaming inspection in your next vendor evaluation. Add specific questions about WebSocket, SSE, and HTTP/2 inspection to your RFP. Ask for a live demonstration, not a slide deck.

4. Watch for vendor updates. This is an area of active development. Vendors scoring PARTIAL today may close the gap in the next 6 to 12 months. But you need protection now, not next year.

5. Consider the full GenAI attack surface. WebSocket inspection is one of 23 checks in our assessment. Desktop app coverage, mobile DLP, BYOD enforcement, and compliance reporting all matter. A vendor that solves streaming but fails on mobile has just shifted the blind spot.

The Bottom Line

GenAI data leak prevention is only as strong as the weakest protocol your SASE platform can inspect. Today, that weak link is streaming protocols -- the very protocols that ChatGPT, Copilot, Claude, and Gemini depend on for every conversation.

Only one vendor in our comparison (Zscaler) has documented, full support for WebSocket and HTTP/2 streaming inspection in their DLP engine. The other seven vendors are working on it, but working on it is not the same as protecting you.

Until your SASE platform can parse WebSocket frames and SSE event streams with the same depth it applies to standard HTTP, your ChatGPT data leak prevention has a hole in it. And your employees are streaming data through that hole right now.

View the full 23-question GenAI DLP comparison to see how every vendor scores across all checks, with detailed evidence and source links.


Methodology: All findings are based on the SASECompare GenAI DLP comparison, which evaluates 8 SASE vendors across 23 specific checks using official documentation, knowledge base articles, and community sources. Ratings reflect documented capabilities as of March 2026. YES indicates a fully documented, generally available capability. PARTIAL indicates the capability exists with significant limitations. UNKNOWN indicates insufficient documentation to determine.
