
I, um, broke stuff

Firewall Violations

While I had a lot of fun putting that comic panel together, I'm not planning on getting into the webcomic business. I keep getting told to "keep my day job" as if I'm not hilarious or something. The comic does provide a nice lead-in to a more serious topic: firewalls can be circumvented in unusual, hard-to-detect ways.

This story begins with "needing" the ability to connect two clients together. Since I don't generally operate in web browser land, I'm not constrained to the features available in a web browser, which means I could use whatever protocol I wanted (or came up with). Ports 80 and 443 are the most commonly accessible ports through even the craziest of firewall setups, and I had previous experience writing my own HTTP and WebSocket clients and servers, so I knew very well how those protocols worked. My main issue with just using WebSocket is that it transitions into a framing protocol, which works well for some things but not others. I really just wanted raw TCP/IP after Upgrading a connection.

I ended up creating WebRoute: a brand new Internet protocol that links two separate clients that present the same unique ID, creating a TCP/IP passthrough over port 80 or 443 (or both). Say, for example, you wanted to communicate directly between two web browsers over your own custom protocol. WebRoute makes that possible, assuming web browsers actually implemented WebRoute (they currently don't).

WebRoute utilizes the same "Connection: keep-alive, Upgrade" mechanism that WebSocket uses, but, instead of ending the upgraded connection with a framing protocol, the result is a plain ol' TCP/IP socket that simply passes data between two linked clients.
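To make the handshake concrete, here is a sketch of what a WebRoute upgrade request might look like, modeled on the WebSocket handshake. The "webroute" token and the `WebRoute-ID` header name are my assumptions for illustration; the actual protocol's header names may differ.

```python
# Hypothetical WebRoute upgrade request, modeled on the WebSocket
# handshake.  The "webroute" token and the WebRoute-ID header are
# illustrative assumptions, not the published protocol.

def build_webroute_upgrade(host, route_id):
    # Both clients present the same unique ID; the server links the
    # two connections and then passes raw TCP/IP data between them.
    lines = [
        "GET / HTTP/1.1",
        "Host: " + host,
        "Connection: keep-alive, Upgrade",
        "Upgrade: webroute",
        "WebRoute-ID: " + route_id,   # hypothetical header name
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_webroute_upgrade("example.com", "my-unique-id")
```

After a successful upgrade response, both sides would simply treat the socket as raw TCP/IP rather than switching to frames.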

So far so good. At this point you might ask, "But how does this circumvent firewalls?"

I ask: What is a server?

TCP/IP, fundamentally, is just a protocol to send and receive data packets. In traditional client/server architecture, we view a client as something that connects to a server where a server is something that just sits there having used bind() to bind to an open port and accept() to accept new connections. Firewalls simply block incoming connections from the big bad Internet or they route packets to the correct internal host on a controlled, limited basis and all is well.
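The traditional shape described above looks roughly like this minimal sketch: the server bind()s and accept()s, and the client connects in from outside. This is the pattern firewalls are built around.

```python
import socket
import threading

# A minimal traditional TCP server:  bind() to a port, accept() an
# incoming connection, echo one message back.  This is the shape of
# server that a firewall is designed to protect from the outside.

def run_echo_server(server_sock):
    conn, addr = server_sock.accept()   # blocks until a client connects IN
    with conn:
        conn.sendall(conn.recv(1024))   # echo the request back

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]

threading.Thread(target=run_echo_server, args=(server_sock,), daemon=True).start()

# The client side: an INBOUND connection to the server's open port --
# exactly what a firewall blocks or carefully routes.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
```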

What if a server connects out to another server outside the firewall and uses WebSocket instead of bind()'ing to a port and also uses the aforementioned WebRoute protocol instead of accept()'ing connections? The traditional server then becomes a client.
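The role reversal can be sketched as follows. A plain TCP socket stands in here for the upgraded WebSocket/WebRoute connection, and the relay host is simulated locally; the point is that the "server" never calls bind() or accept() itself.

```python
import socket
import threading

# Role reversal sketch:  the "server" makes an OUTBOUND connection
# (which firewalls generally allow) and then serves requests over that
# same socket.  A plain TCP socket stands in for the upgraded
# WebSocket/WebRoute connection to the public relay.

relay = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
relay.bind(("127.0.0.1", 0))            # stands in for the public relay host
relay.listen(1)
relay_port = relay.getsockname()[1]

def firewalled_server():
    # No bind()/accept() here:  the server connects out like a client...
    with socket.create_connection(("127.0.0.1", relay_port)) as sock:
        # ...and then handles a request as if it had accept()'ed it.
        request = sock.recv(1024)
        sock.sendall(b"response to " + request)

threading.Thread(target=firewalled_server, daemon=True).start()

# The relay accepts the server's outbound connection and forwards a
# client's request down it -- no inbound firewall hole required.
conn, _ = relay.accept()
conn.sendall(b"client request")
answer = conn.recv(1024)
conn.close()
```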

Introducing Remoted API Server, which implements the above. Also, introducing two integrations in real products that can connect to Remoted API Server: Cloud Storage Server and Cloud Backup.

Remoted API Server can certainly be viewed as a product that circumvents most common firewall setups. It can also be easily adapted into existing software products via its included SDK(s). Assuming correctly written software, only a few lines of application code have to be changed to support Remoted API Server. That means a slightly retrofitted standard TCP/IP server connects into Remoted API Server via a web server operating on port 80 or 443, which a firewall will generally allow. Slightly retrofitted clients then connect to the Remoted API Server over WebRoute, and packets are routed to and from the target server. In the process, packets in both directions pass completely through all firewalls unhindered.

Deploying Remoted API Server (or anything similar) on a public-facing web server and then having a firewalled server connect to Remoted API Server in order to handle requests from clients is fraught with troubling security issues previously considered mostly implausible. For example, with a few minor modifications, the server side can even connect out through an HTTP CONNECT proxy to a public-facing server running Remoted API Server to handle requests with all end-to-end traffic being encrypted (i.e. the proxy can't see the traffic unencrypted). Would the long-running connections through the local proxy server raise eyebrows? Sure. But the server managed to get on-site hosted content with access to internal resources out through a proxy server and through the firewall even in the face of a system specifically designed to prevent that from happening.
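The proxy step uses the standard HTTP CONNECT method. A sketch of building the tunnel request (the relay hostname is illustrative): once the proxy answers with a 2xx status, everything after that passes through the proxy opaquely, so end-to-end encryption stays intact.

```python
# Building a standard HTTP CONNECT tunnel request.  After the proxy
# responds with a 2xx status line, the socket becomes an opaque
# byte pipe to the target -- TLS to the relay rides inside it, so the
# proxy never sees the traffic unencrypted.

def build_connect_request(target_host, target_port, proxy_auth=None):
    lines = [
        "CONNECT %s:%d HTTP/1.1" % (target_host, target_port),
        "Host: %s:%d" % (target_host, target_port),
    ]
    if proxy_auth is not None:
        # Optional "Basic" credentials, already base64-encoded.
        lines.append("Proxy-Authorization: Basic " + proxy_auth)
    lines += ["", ""]
    return "\r\n".join(lines).encode("ascii")

# "relay.example.com" is a placeholder for the public-facing host.
request = build_connect_request("relay.example.com", 443)
```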

Whatever theoretical or practical defense scenario anyone might come up with can most likely be circumvented with some additional ingenuity. For example, with some extra effort, Remoted API Server could be modified to allow multiple Remoted API Servers to be chained together at different firewall layers. The only saving grace here is that Remoted API Server can't currently be chained, even though the included client SDK already supports such a feature. However, that doesn't mean someone won't make chained remoted servers a reality.

The only reason I didn't implement the various proxy and chaining features I just mentioned is that I don't need them. By the way, I was over halfway into the project before I realized that what I was building could be used for a number of nefarious purposes. I could have stopped right then, but someone else would have built it eventually.

Remoted API Server and the WebRoute protocol needed to be built though. Not to cause panic but rather to attempt to solve a few real, legitimate problems that I'm encountering for which there are no viable alternative solutions. It should be obvious that the aforementioned problems have to do with Cloud Backup and Cloud Storage Server, of which the former sends encrypted backup data and the latter stores it (still encrypted). However, the broader scale and scope of Remoted API Server should be of some concern and all deployments should be done carefully.

Finally, Remoted API Server and the WebRoute protocol are likely to be viewed by many system administrators as a zero-day exploit for TCP/IP to circumvent firewalls to allow unauthorized Internet traffic onto the internal network. If that is your view, then I've ruined your Monday. However, every tool is just a tool. How each tool is used is what matters.


  1. Someone is bound to ask, "Why not just use a single WebSocket connection and avoid going through the trouble of creating a whole new protocol?" The answer lies in the fact that retrofitting existing software to support WebSocket is generally much harder, if not impossible. That is, it depends on whether the support libraries for an application, or the application itself, have abstracted reading/writing content on a socket. Not all libraries do that. In addition, tracking channels over a single connection involves a lot of fiddly, error-prone tracking bits. Writing a simple injection shim that controls when raw TCP handles are delivered to the application allows for nearly transparent integration from an application server's perspective, even if it costs some extra time to set up such connections. I was able to retrofit Cloud Storage Server and Cloud Backup with support for Remoted API Server in about an hour. Any other approach would probably have taken a week or two. That shortened development cycle is the appeal of the approach I took.
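The injection-shim idea from the footnote can be sketched like this: the application's handler only ever sees a connected socket, so the shim can hand it sockets obtained the traditional way (accept()) or by dialing out to a relay. All names here are illustrative, not from the actual SDK.

```python
import socket

# Sketch of an injection shim:  the application handler receives a raw
# connected socket either way, so the application code itself doesn't
# change.  Function names are illustrative, not from the real SDK.

def handle_connection(sock):
    # Unmodified application code:  reads a request, writes a reply.
    data = sock.recv(1024)
    sock.sendall(b"ok:" + data)

def serve_accepted(listener, handler):
    # The traditional path:  deliver an accept()'ed socket.
    conn, _ = listener.accept()
    handler(conn)

def serve_remoted(relay_addr, handler):
    # The shim path:  dial out to the relay, (hypothetically) perform
    # the WebRoute upgrade, then deliver the same kind of raw socket.
    sock = socket.create_connection(relay_addr)
    handler(sock)
```

Because `handle_connection` is identical in both paths, the retrofit reduces to swapping which `serve_*` function feeds it sockets.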

