
Extending the block size of any symmetric block cipher

Extremely important update: The technique described in this post has been directly confirmed by Bruce Schneier himself - one of the leading crypto experts in the world - to extend the block size of any symmetric cipher to any desired length. Ladies and gentlemen, we have a winner! On the other hand, the other aspects of this post remain untested, so keep that skepticism handy. Still, this is good news in the event of a major cryptanalysis breakthrough that breaks multiple widely-used and trusted algorithms. Stay tuned...hopefully more good news to come!

Before I begin, I need to preface this with the fact that I don't consider myself to be a cryptanalyst. Coming up with a new cryptographic algorithm that is deemed strong is hard to do and really takes a team of people. I know enough to be dangerous. Therefore, what is presented here is to be viewed as merely a theory to extend the block size of any trusted symmetric block cipher without modifying the core algorithm of the cipher itself. What follows should not be assumed to be more secure than what is available today until this theory has undergone appropriate cryptanalysis.

About 20 years ago, before the Internet had entered my life, I was toying around with "encryption," and what I came up with involved some of the XOR logic we see today in block ciphers, but I also did multi-bit rotations (e.g. rcl/rcr). After the Internet entered my life, I encountered my first block cipher. I was a bit surprised to see little tiny chunks of data (8 to 32 bytes - aka the "block size") with dinky keys (16 to 56 bytes) being encrypted purely with XOR, addition, and byte swapping and subsequently declared secure.

While I've generally accepted the current state of cryptography, there is a nagging problem with symmetric block ciphers: they are primarily based on the whole XOR-with-byte-swapping thing and are further isolated into fixed-length chunks. People working with block ciphers discovered early on that what is known today as the Electronic Code Book (ECB) "mode" has problems with patterns. Initialization Vectors (IVs) and other modes (CBC, CTR, OFB, etc.) were introduced to keep patterns in the data from showing through, but IVs and block modes are not secrets and are not, by themselves, considered to add security. Nor do they ultimately increase the strength of the underlying algorithm, and they actually cause end-user confusion in this regard.

Using an IV and a non-ECB mode just makes it harder to determine the underlying data through simple observation. For example, in CBC mode, if someone correctly guesses the encryption key but gets the IV wrong, all of the data will decrypt correctly except for the first block. Because CBC mode is quite popular, this means an attacker can ignore the IV and focus on the key - after all, the total data loss is 8 to 32 bytes, which will likely just be some minor header or introduction information that can be accurately guessed later if it is needed. What I'm trying to say is that IVs and modes don't matter much - longer keys and larger block sizes do. Do keep in mind that cryptographically strong algorithms have been chosen very carefully, so someone can't just go in and directly alter an algorithm to accept bigger keys or block sizes, because doing so would likely introduce weaknesses.
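
To make that CBC point concrete, here is a minimal sketch in Python using the third-party "cryptography" package - AES-256, CBC mode, and the variable names are just my choices for illustration, not anything this idea depends on. It decrypts with the correct key but a wrong IV and shows that only the first block comes out garbled:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    iv = os.urandom(16)
    plaintext = b"A" * 64                        # four 16-byte AES blocks

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()

    wrong_iv = os.urandom(16)                    # correct key, wrong IV
    dec = Cipher(algorithms.AES(key), modes.CBC(wrong_iv)).decryptor()
    recovered = dec.update(ciphertext) + dec.finalize()

    print(recovered[:16] == plaintext[:16])      # False: the first block is garbled
    print(recovered[16:] == plaintext[16:])      # True: every later block decrypts fine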

This is the problem that I see: Symmetric block ciphers are intended to work on streams of data and do their operations as quickly as possible. Those goals are why the amount of data worked on at any given time is only a few bytes and also why the number of rounds - that is, the number of times the core algorithm is executed per block - is generally limited to 20 or fewer. However, long-term, these same benefits will introduce weaknesses into these algorithms, and finding new algorithms is a complex task that can take years of effort. The AES competition, for example, ran for roughly four years before the winning algorithm received NIST approval. We're already seeing how algorithmic speed is a significant problem in other realms (e.g. password storage). It really is just a matter of time before someone starts finding weaknesses in small block sizes and cipher speed in the realm of data encryption (assuming it hasn't been done already). Losing trust in the data encryption algorithms we rely upon results in major setbacks in data security and, as far as I know, there are no contingencies available for dealing with the potential losses - cryptanalysts tend to aim at just breaking things rather than asking in advance, "okay, if we break this, what are the options available for those using this algorithm?" as part of a comprehensive and responsible analysis. As such, I propose a simple solution that operates on today's proven algorithms, without any changes to the underlying algorithms, and that MAY vastly improve their long-term strength. The steps for extending the block size of any trusted symmetric block cipher are as follows:

  1. Encrypt data as we do now with a trusted block cipher.
  2. After collecting some encrypted output, move the last byte to the front, ahead of the first byte.
  3. Encrypt the result from step 2 using the same algorithm but with a different key and IV.

As an example, and to clarify the above steps, let's say I encrypt a data stream. Instead of letting the data go through after encrypting only a few bytes as is done now, I accumulate enough encrypted output to occupy 4KB. The size doesn't actually matter - it could be 1KB or 1MB - but it should be larger than, and a multiple of, the algorithm's default block size while still being small enough to fit into a processor cache to avoid cache misses. 4KB is a good size for today's hardware.

Next, I move the last byte to the start of the string. For example, the bytes 1234xyz5678 would become 81234xyz567. This is technically rotating all the bits of the data by 8 if you visualize all the bytes as a stream of bits. For lack of a definition, this is what I call an "8-bit rotation across the bytes". For this example, the 8-bit rotation only requires 4KB + 1 byte of RAM if the first encryption's output is written starting at the second position in the buffer. Moving a single byte from the end of a buffer to the beginning of the buffer is a very trivial operation. My original idea for this step called for a one-bit rotation across the bytes, but then I realized that every byte of data would have to be modified - it was still a trivial operation, but kind of pointless despite breathing some life into the under-utilized rcr and rcl (rotate with carry right/left) features of many processors.
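
For the curious, here is what both rotations look like in Python. The function names are just mine for illustration; the byte rotation is nothing more than a slice-and-concatenate, while the one-bit version has to touch every byte:

    def rotate_bytes_right(buf: bytes) -> bytes:
        # The "8-bit rotation across the bytes": the last byte moves to the front.
        return buf[-1:] + buf[:-1]

    def unrotate_bytes_right(buf: bytes) -> bytes:
        # Inverse: the first byte moves back to the end.
        return buf[1:] + buf[:1]

    def rotate_bits_right(buf: bytes) -> bytes:
        # The original one-bit idea: rotate the whole buffer right by a single bit,
        # which forces every byte in the buffer to change.
        n = int.from_bytes(buf, "big")
        bits = len(buf) * 8
        n = (n >> 1) | ((n & 1) << (bits - 1))
        return n.to_bytes(len(buf), "big")

    assert rotate_bytes_right(b"1234xyz5678") == b"81234xyz567"
    assert unrotate_bytes_right(rotate_bytes_right(b"1234xyz5678")) == b"1234xyz5678"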

Obviously, the second step is easily reversed. The last step is to encrypt the data again. The same algorithm may be used but should utilize a different key and IV.
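
Putting the three steps together, here is a rough sketch of one 4KB chunk going through the whole process and back. I'm using AES-256 in CBC mode via Python's "cryptography" package purely as a stand-in for "any trusted block cipher"; the chunk size, cipher choice, and function names are illustrative assumptions, not part of the technique:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    CHUNK = 4096  # larger than, and a multiple of, AES's 16-byte block size

    def aes_cbc(key: bytes, iv: bytes, data: bytes, decrypt: bool = False) -> bytes:
        cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
        op = cipher.decryptor() if decrypt else cipher.encryptor()
        return op.update(data) + op.finalize()

    def wide_encrypt(chunk, key1, iv1, key2, iv2):
        inner = aes_cbc(key1, iv1, chunk)      # step 1: encrypt normally
        rotated = inner[-1:] + inner[:-1]      # step 2: move the last byte to the front
        return aes_cbc(key2, iv2, rotated)     # step 3: encrypt again, different key and IV

    def wide_decrypt(chunk, key1, iv1, key2, iv2):
        rotated = aes_cbc(key2, iv2, chunk, decrypt=True)
        inner = rotated[1:] + rotated[:1]      # undo the rotation
        return aes_cbc(key1, iv1, inner, decrypt=True)

    key1, key2 = os.urandom(32), os.urandom(32)
    iv1, iv2 = os.urandom(16), os.urandom(16)
    plaintext = os.urandom(CHUNK)              # one full 4KB chunk (a multiple of 16 bytes)

    ciphertext = wide_encrypt(plaintext, key1, iv1, key2, iv2)
    assert wide_decrypt(ciphertext, key1, iv1, key2, iv2) == plaintext

A real implementation would also have to handle streams that don't divide evenly into 4KB chunks, but that is an ordinary padding and framing concern, not something specific to this idea.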

The theoretical resulting effect here is that, instead of just an 8 to 32 byte block size encryption algorithm, it has been stretched into a 4KB block size encryption algorithm with an effective doubling of the rounds and a possible doubling of key size. This is made possible by the bit rotation, which acts like digital glue across the algorithm's smaller blocks, while only taking twice as long. Rotating eight bits instead of one bit is more efficient, but if there are weaknesses in the algorithm along byte boundaries, then rotating one bit might mitigate (or it could exacerbate) such weaknesses, whereas eight bits will do nothing for or against such weaknesses by comparison. Of course, such weaknesses would be cause to not trust the algorithm in the first place and only trusted ciphers should be used, but this method of extending block size could be used as an emergency stop-gap measure if serious breakage occurs across a set of widely-used encryption algorithms, to buy sufficient time to come up with a real solution.

Doing all of this may, and I repeat MAY, have a side effect of effectively increasing the cryptographic strength of the underlying algorithm itself. By what factor, I'm not sure, if any. Questions have to be answered first: Is there an increase? A decrease? No effect? This is the part where someone who likes doing cryptanalysis comes in. I'll err on the very safe side and say that this does absolutely nothing to make any algorithm used with it cryptographically stronger until someone with more chops than me proves otherwise. I vaguely remember reading somewhere that encrypting twice does nothing to increase the strength and may do the exact opposite, but I also seem to remember that the article had to do with using the same algorithm twice back to back with the same key and IV (the best-known result along these lines is the meet-in-the-middle attack on double encryption, which keeps a second pass from simply doubling the effective key strength). At any rate, don't fool yourself into thinking that this is somehow stronger than the base algorithm alone until someone who isn't just dabbling in crypto evaluates this theory.
