chacha20: 64-bit counter support #334
I have some seemingly working code for this, but it builds on the Rng PR with |
Let's try to land #333 first |
I took another look at my 64-bit counter code, and it isn't working correctly on the NEON backend atm. It uses a const bool in the backends. The largest use-case for a 64-bit counter (that I can think of) is file system encryption for mobile devices. I've run some benches, and I will keep working on it a little, but I'd probably prefer a solution that doesn't need to modify the backends, if that's possible. Because the way it is now, every new backend would also need specific code for updating the counter |
On second thought, I feel like more control over a larger nonce would be favorable over having a larger counter |
@nstilt1 not sure what you mean by that? Are you suggesting making the construction generic over the nonce size rather than a larger counter? |
@tarcieri I'm suggesting that 64-bit counters are unnecessary, contrary to what I've said previously. I think having 96 bits for a nonce is more practical because it is easier to prevent nonce reuse with finer control over the value. What do you mean about making the construction generic over the nonce size? |
I wasn't sure what you meant by "more control over a larger nonce". I agree that, practically, larger counter support probably doesn't matter, which is why the current implementation avoided dealing with it. I'm not sure the added complexity is worth it, though it could be considered a compatibility bug with the legacy implementation, which is somewhat surprisingly pervasive. |
Here is my use case, which currently panics, but would not panic (I think) with 64-bit counters: encrypting a stream of files to back them up to an S3 object. It is reasonable that I might have a stream that is >256 GiB long. I did not write the backup program yet; it is in progress. Here is something I tried out, imagining continuing an in-progress upload at 500 GiB:

use chacha20::ChaCha20;
use chacha20::cipher::{KeyIvInit, StreamCipherSeek};

let key = [0x42; 32];
let nonce = [0x24; 12];
let mut cipher = ChaCha20::new(&key.into(), &nonce.into());
const SEEK_AMOUNT: usize = 536_870_912_000;
cipher.seek(SEEK_AMOUNT);

But currently this panics. I am new to file encryption, so let me know if my use case is invalid and there is something else I should be using other than ChaCha20 (I don't need protection against the data being modified, I just want the files to not be readable without the key). |
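To see why this seek fails with a 32-bit counter, the block position it targets can be worked out with plain integer arithmetic (ChaCha20 produces its keystream in 64-byte blocks). A std-only sketch of the numbers involved:

```rust
fn main() {
    // ChaCha20 generates the keystream in 64-byte blocks.
    const BLOCK_SIZE: u64 = 64;
    // The seek target from the example above: 500 GiB into the stream.
    const SEEK_AMOUNT: u64 = 536_870_912_000;

    // The block counter value that seek(SEEK_AMOUNT) must reach.
    let block_pos = SEEK_AMOUNT / BLOCK_SIZE;
    assert_eq!(block_pos, 8_388_608_000);

    // A 32-bit counter can address at most 2^32 blocks, i.e. 256 GiB of keystream.
    let max_32bit_stream = (u32::MAX as u64 + 1) * BLOCK_SIZE;
    assert_eq!(max_32bit_stream, 256 * 1024 * 1024 * 1024);

    // 500 GiB is past that limit, so the 32-bit counter overflows (and the
    // current implementation panics); a 64-bit counter has plenty of room.
    assert!(block_pos > u32::MAX as u64);
}
```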
You could change the nonce every so often, such as incrementing a portion of the nonce once every 256 GiB of encrypted data. But I could also revisit #359. I'll probably use a different PR since that one is kinda far behind. The only problem is that the code would be slightly different if #380 were to be merged. I could put the counter support in #380, or #380 could just be closed and I could put the counter support in a new PR. Also, regarding this:
The changes made in #359 worked. If I'm going to revisit this issue, the backends will look almost exactly like they do in that PR. If desired, I could make some adjustments, such as making an |
@ChocolateLoverRaj are you worried about encrypting a single file that is 256 GiB, or the sum total of multiple files being larger? If it's the latter, you should split up the encryption so each file is encrypted under a different key/nonce. |
My plan is to create a stream of all the file changes (including the contents of all new files). So every file will be <256 GiB, but all of the files together might result in a backup >256 GiB. I will be splitting the stream up into 5 GB chunks (because of the AWS limit), but I would prefer to have a single cipher for the entire big stream and split up the encrypted stream, rather than having to create multiple ciphers for different chunks of the stream. If I do split up the stream and use multiple ciphers, can I use the same key and increment the nonce (like using a nonce of |
Yes, though for large files it would be better to use a unique key per file |
Why? |
256 GB is the data volume limit of e.g. Poly1305 as an authenticator, which you should be using in tandem with ChaCha20 in the combined ChaCha20Poly1305 AEAD cipher to prevent chosen-ciphertext attacks |
256 GB is also generally quite a bit beyond what most computers can hold in RAM, and really you should be working with AEAD messages you can hold in RAM. |
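One way to realize the "increment a portion of the nonce per chunk" idea suggested above is to reserve part of the 12-byte IETF nonce as a random prefix and part as a chunk index. A minimal std-only sketch; the `chunk_nonce` helper and the 8+4 byte split are illustrative choices, not an API of the crate:

```rust
/// Build a 12-byte IETF nonce from an 8-byte random prefix plus a 32-bit
/// chunk index, so each <=256 GiB chunk of the stream gets a fresh nonce
/// under the same key. (Hypothetical helper for illustration only.)
fn chunk_nonce(prefix: [u8; 8], chunk_index: u32) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    nonce[..8].copy_from_slice(&prefix);
    nonce[8..].copy_from_slice(&chunk_index.to_le_bytes());
    nonce
}

fn main() {
    let prefix = [0x24; 8];
    // Distinct chunk indices yield distinct nonces under the same key,
    // which is what prevents nonce reuse across chunks.
    let n0 = chunk_nonce(prefix, 0);
    let n1 = chunk_nonce(prefix, 1);
    assert_ne!(n0, n1);
    assert_eq!(&n0[..8], &prefix);
    assert_eq!(n1[8..12], 1u32.to_le_bytes());
}
```

The prefix must be chosen once per backup stream (e.g. randomly) and never reused with the same key outside that stream.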
I wonder if it would be possible to make a newtype wrapper which implements the djb variant in terms of the IETF variant by encoding a portion of the counter in the nonce and handling incrementing it where necessary |
I don't think I know how to do that. If we aren't going to be revising the backends to achieve this, then I suppose I ought to go ahead and fix #381. I was thinking about changing the

Btw, I have a big school project due at the end of the month, so I may be a bit less active |
If everything were to use a 64-bit counter under the hood, we might also want to adjust some things in
Here is a high-level overview:

/// A wrapper for the `stream_id`.
///
/// The following types will overwrite the upper 32 bits of the 64-bit counter:
/// * `[u32; 3]`
/// * `[u8; 12]`, or a
/// * `u128`
///
/// The following types will preserve the upper 32 bits of the 64-bit counter:
/// * `[u32; 2]`
/// * `[u8; 8]`, or a
/// * `u64`
pub struct StreamId([u32; 3]);

/// A wrapper for the `block_pos`.
///
/// Using a 32-bit value will preserve the upper 32 bits of the stream ID.
///
/// Using a 64-bit value will overwrite the upper 32 bits of the stream ID.
///
/// Block pos accepts:
/// * `u32`
/// * `u64`
/// * `[u32; 2]`
pub struct BlockPos([u32; 2]);

/// Sets the stream ID. Providing a 64-bit value will preserve the upper 32 bits of the
/// stream ID (and the upper 32 bits of the counter).
///
/// Providing 96-bit values will overwrite the upper 32 bits of the counter.
pub fn set_stream<S: Into<StreamId>>(&mut self, stream: S) {
    let bytes = core::mem::size_of::<S>();
    let stream: StreamId = stream.into();
    if bytes >= 12 {
        // write 96 bits to the nonce
    } else {
        // write 64 bits to the nonce
    }
    // continue
}

// do something similar with pub fn set_block_pos()

I know it adds a little more boilerplate, and it could certainly introduce a bug if a user were to only use 32-bit values for setting the counter, and then the user exhausts the 32-bit counter stream and tries (and fails) to reset the counter using a u32... but realistically, the only way a user will exhaust the stream (surpass block pos = |
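The preserve-vs-overwrite dispatch in the set_stream sketch above can be illustrated with plain word arrays. This is a std-only sketch of the intended semantics; `set_stream_words` and `nonce_words` are hypothetical names standing in for the cipher's internal state, not the crate's real API:

```rust
// Illustrative sketch: a 64-bit stream ID leaves the shared upper word
// alone, while a 96-bit stream ID overwrites it. The third nonce word
// doubles as the upper half of the 64-bit counter in this scheme.
fn set_stream_words(nonce_words: &mut [u32; 3], stream: &[u32]) {
    match stream.len() {
        2 => nonce_words[..2].copy_from_slice(stream), // preserve word 2
        3 => nonce_words.copy_from_slice(stream),      // overwrite word 2
        _ => panic!("stream ID must be 64 or 96 bits"),
    }
}

fn main() {
    // Word 2 holds the upper 32 bits of the 64-bit counter.
    let mut nonce = [0, 0, 0xDEAD_BEEF];

    // 64-bit stream ID: upper counter bits survive.
    set_stream_words(&mut nonce, &[1, 2]);
    assert_eq!(nonce, [1, 2, 0xDEAD_BEEF]);

    // 96-bit stream ID: upper counter bits are clobbered.
    set_stream_words(&mut nonce, &[3, 4, 5]);
    assert_eq!(nonce, [3, 4, 5]);
}
```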
The idea is that all backends could remain 32-bit. The IETF variant repurposes 32 bits of the counter space to extend the nonce to 96 bits. So, when the 32-bit IETF ChaCha20 counter is exhausted, another way to implement a 64-bit counter would be to increment another counter stored in the lower 32-bit portion of that 96-bit nonce; the newtype can then re-initialize the inner cipher with the new nonce containing the incremented counter, and keep doing this in 256 GB chunks of keystream.
The main problem with this is it opens up potential bugs where the 32-bit counter of the IETF variant isn't properly respected, which could potentially lead to nonce reuse if the counter ever overflows 32-bits. |
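The counter-splitting scheme described above amounts to viewing a logical 64-bit block position as a (32-bit counter, 32-bit nonce word) pair. A std-only sketch of that mapping; `split_block_pos` is a hypothetical helper, not an API of the crate:

```rust
/// Map a logical 64-bit block position onto the IETF layout: the low 32 bits
/// drive the ordinary 32-bit block counter, and the high 32 bits are stored
/// in the low word of the 96-bit nonce. (Illustrative helper only.)
fn split_block_pos(pos: u64) -> (u32, u32) {
    let counter = (pos & 0xFFFF_FFFF) as u32; // inner 32-bit IETF counter
    let nonce_word = (pos >> 32) as u32;      // counter extension in the nonce
    (counter, nonce_word)
}

fn main() {
    // Crossing the 32-bit boundary (256 GiB of keystream) rolls the inner
    // counter over to 0 and bumps the counter-extension word in the nonce,
    // at which point the wrapper would re-initialize the inner cipher.
    let (c0, n0) = split_block_pos(u32::MAX as u64);
    assert_eq!((c0, n0), (u32::MAX, 0));

    let (c1, n1) = split_block_pos(u32::MAX as u64 + 1);
    assert_eq!((c1, n1), (0, 1));
}
```

The bug surface the comment above warns about lives exactly at that rollover: the wrapper must re-key the inner cipher before the 32-bit counter wraps, or the same (nonce, counter) pair gets reused.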
The cipher panics when it overflows though... I know it's kind of silly for the internal counter to be 64-bit while restricting the end user to a 32-bit counter, but there wouldn't be any extra

To me, it just seems a little bit tricky to regulate the counter from outside of the backends, aside from panicking the way it does now. It would need to be able to handle the edge case where the |
I suppose that approach might work if there was a new wrapper that used |
The ChaCha20Legacy construction, i.e. the djb variant, is supposed to use a 64-bit counter, but it currently uses a 32-bit counter because it shares its core implementation with the IETF construction, which uses a 32-bit counter. This results in a counter overflow after generating 256 GiB of keystream. Compatible implementations are able to generate larger keystreams.
I'm not sure how much of a practical concern this actually is, but it did come up in discussions here: rust-random/rand#934 (comment)
We can probably make the counter type generic between u32/u64 in the core implementation if need be.