
(I realize this is yet another Argon2 "how do I configure" question, but the existing questions I've found don't really help. If I've missed one, happy to have this closed).

We're deploying Argon2 (the Argon2id variant) in a server environment to hash passwords and process site logins, and the question we're grappling with is "how much memory to allocate". I'm aware the answer is "as much as practically possible", but I haven't found any guidance on a lower limit (e.g. something that says "below 16MB it's not worth it").

This question suggests 32MB is good enough for most applications, and this one suggests '0.75 * (memory/users)' as a guideline, but I'm not sure whether either is reasonable.

Obviously, since iterations can be tuned upwards, we can maintain hardness in that dimension - but below what memory limit would Argon2 be considered "bad", or at least "not good"? If the memory requirement is purely to defeat GPU caches, that would suggest allocations as low as 16MB may be okay, but am I missing something? If the answer is "16MB okay, 32MB recommended, above 48MB nothing is gained at the current state of the art", that's also fine. I've seen people talk about 1GB allocations, which seems like overkill.

R1w
Callie J
  • This question may be hard to answer without knowing how much memory you have available on the server(s) that are computing the hashes. If you are running a single server with 4GB of RAM from out of your home you'll probably need a different answer than if you have a pool of 10 machines each with 64GB of RAM dedicated to password hashing. Also, the expected number of users you have may play a part as well. – Ella Rose Sep 04 '18 at 15:06
  • The standard recommendation is: Pick the memory usage as high as you can tolerate. – SEJPM Sep 04 '18 at 17:58
  • Also, the smallest value the Argon2 paper (section 8) mentions is 256MB, with the standard recommendation being 1GB-6GB, i.e. as much as you can afford. – SEJPM Sep 04 '18 at 18:01
  • @SEJPM - I had seen this, but section 8 only talks about Argon2d and Argon2i - it doesn't reflect how the hybrid Argon2id affects this. Although yes, 256MB is the lowest value given in all the examples. Does this mean the suggestion of "32MB being reasonable" that I linked from another question should be treated with care? – Callie J Sep 04 '18 at 18:42
  • @Ella - I was hoping the advice would be fairly generic rather than dependent on server config. Other than "it's not as strong", are there any practical attacks that open up if 32MB is selected over 256MB (with iterations increased so that computation time is roughly equivalent)? – Callie J Sep 04 '18 at 18:49
  • The practical attacks are ones where an attacker can get away with cheaper hardware. – forest Sep 04 '18 at 20:11
  • Using the JNA wrapper, the default Argon2 memory cost is 65536 kibibytes ≈ 67 MB ... – user1767316 Jul 02 '20 at 09:46

2 Answers


An RFC for Argon2 has now been published, which provides several recommendations. Unfortunately, the situation is still not clear cut, and some of the recommendations seem unrealistic.

Their two generally recommended options are:

  1. If a uniformly safe option that is not tailored to your application or hardware is acceptable, select Argon2id with t=1 iteration, p=4 lanes, m=2^(21) (2 GiB of RAM), 128-bit salt, and 256-bit tag size.

  2. If much less memory is available, a uniformly safe option is Argon2id with t=3 iterations, p=4 lanes, m=2^(16) (64 MiB of RAM), 128-bit salt, and 256-bit tag size.
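As a sanity check on those figures: the m parameter counts 1 KiB blocks, so the RFC's numbers convert as follows (a throwaway illustration; `argon2_memory_mib` is a name of my own invention, not a library function):

```python
# Argon2's m parameter is a block count, where each block is 1 KiB.
def argon2_memory_mib(m_blocks: int) -> int:
    """Convert an Argon2 m value (1 KiB blocks) to MiB."""
    return m_blocks // 1024

print(argon2_memory_mib(2**21))  # option 1: 2097152 blocks -> 2048 MiB = 2 GiB
print(argon2_memory_mib(2**16))  # option 2: 65536 blocks -> 64 MiB
```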

However, directly above that they state:

Argon2id is optimized for more realistic settings, where the adversary can possibly access the same machine, use its CPU, or mount cold-boot attacks. We suggest the following settings:

Backend server authentication, which takes 0.5 seconds on a 2 GHz CPU using 4 cores -- Argon2id with 8 lanes and 4 GiB of RAM.

Key derivation for hard-drive encryption, which takes 3 seconds on a 2 GHz CPU using 2 cores -- Argon2id with 4 lanes and 6 GiB of RAM.

Frontend server authentication, which takes 0.5 seconds on a 2 GHz CPU using 2 cores -- Argon2id with 4 lanes and 1 GiB of RAM.

Most of these memory suggestions sound unusably high to me. In contrast, the libsodium documentation says there's no 'insecure' memory size, but it obviously encourages using more when possible.

For servers, you should probably opt for server relief to mitigate DoS attacks. This involves both client-side and server-side hashing: the expensive, memory-hard work runs on the client, potentially allowing slightly larger parameters. However, the client-side parameters may have to be limited to accommodate weaker devices like mobiles.
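A minimal sketch of the server-relief idea, using hashlib.scrypt as a stand-in for Argon2id since Python's standard library has no Argon2 binding (the function names and parameters here are illustrative, not a vetted protocol):

```python
import hashlib
import hmac
import os

# Client side: the expensive, memory-hard stretch runs on the user's device.
# scrypt with n=2**14, r=8 uses 16 MiB here; a real deployment would use Argon2id.
def client_stretch(password: bytes, salt: bytes) -> bytes:
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# Server side: only one cheap hash per login, so a request flood
# can't exhaust the server's RAM or CPU.
def server_store(stretched: bytes) -> bytes:
    return hashlib.sha256(stretched).digest()

def server_verify(stretched: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hashlib.sha256(stretched).digest(), stored)

salt = os.urandom(16)  # per-user salt, sent to the client at login time
record = server_store(client_stretch(b"correct horse", salt))
assert server_verify(client_stretch(b"correct horse", salt), record)
```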

I personally wouldn't go below 32 MiB and would consider 64 MiB a better minimum. However, it should be adjusted based on the hardware/clients for the best security. More memory is better than more iterations.
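For a concrete starting point, here's how those numbers map onto library parameters, assuming the third-party argon2-cffi package (the specific values are just this answer's suggested 64 MiB floor, not an official profile):

```python
from argon2 import PasswordHasher  # pip install argon2-cffi

# t=3, m=64 MiB, p=4: roughly the RFC's lower-memory profile.
# Note memory_cost is expressed in KiB, not bytes or MiB.
ph = PasswordHasher(
    time_cost=3,
    memory_cost=64 * 1024,  # 65536 KiB = 64 MiB
    parallelism=4,
    hash_len=32,
    salt_len=16,
)

stored = ph.hash("hunter2")   # encoded string embedding salt and parameters
ph.verify(stored, "hunter2")  # returns True; raises VerifyMismatchError on failure
```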

Many companies (e.g. online password managers) still seem to be avoiding Argon2 and sticking with PBKDF2, citing efficiency concerns and the difficulty of running memory-hard hashing in JavaScript.

Update - January 9th 2023

Steve Thomas, one of the Password Hashing Competition (PHC) judges, maintains a page with minimum settings for multiple password hashing algorithms, including Argon2. At the time of writing, this is more up-to-date than OWASP recommendations.

Also, in a server context, consider using bcrypt (ideally hmac-bcrypt) instead of Argon2. bcrypt is minimally cache-hard, so it offers better protection at shorter runtimes than Argon2, which is more suited to key derivation like scrypt.

Royce Williams
samuel-lucas6
  • bcrypt > argon2? – Paul Uszak Jan 09 '23 at 22:45
  • @PaulUszak: IMHO, argon2 ≳ scrypt > bcrypt ≫ PBKDF2. Using PBKDF2 for key stretching in a new application without a strong interoperability requirement is incompetence and/or bowing to the desire of authorities to be in a position to break the system if they see fit. – fgrieu Jan 10 '23 at 14:10
  • @fgrieu Exactly what I was driving at :-) The linked conclusion(s) seem at odds with the majority opinion on this site. And bscrypt ("best", sic) with 256 KiB doesn't seem particularly memory-hard. – Paul Uszak Jan 10 '23 at 16:00
  • @PaulUszak: while bscrypt is apparently a (rather new) thing, I don't know what it is, and it's not in my list. – fgrieu Jan 10 '23 at 16:17
  • @fgrieu That's what I used to think too, but the evidence suggests bcrypt is stronger than Argon2 at short runtimes (e.g. 100ms, like for signing into a website) because cache-hardness is more important. bscrypt is even better. With larger delays for key derivation (e.g. 1 second for file encryption), Argon2 is better. In terms of ease of use, Argon2 has no weird length limit and can be used as a KDF, but the parameters are more confusing than bcrypt's. Agreed on PBKDF2. Not sure I see much reason to use scrypt now compared to Argon2. Here is the bscrypt talk. – samuel-lucas6 Jan 10 '23 at 17:25
  • Continuing... The point I'm making is that bcrypt and bscrypt (if it gets adopted) are better suited to password hashing, while Argon2 and scrypt are better suited to key derivation. You don't want a memory-hard algorithm for password hashing; you want a cache-hard algorithm. Similarly, you don't necessarily need to care about cache-hardness for key derivation; you want a memory-hard algorithm. The data/advice I've seen doesn't seem to support universally recommending Argon2 if we want to slow down attackers as much as possible. However, Argon2 has fewer pitfalls than bcrypt. – samuel-lucas6 Jan 10 '23 at 17:33
  • @samuel-lucas6: I don't yet clearly see the pros and cons of cache-hardness vs memory-hardness. That would make an interesting question. – fgrieu Jan 10 '23 at 17:58

In the time since I asked this back in 2018, OWASP have published guidance on Argon2 at https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html

At time of writing (Jan 2023), this says:

Argon2id should use one of the following configuration settings as a base minimum which includes the minimum memory size (m), the minimum number of iterations (t) and the degree of parallelism (p).

  • m=37 MiB, t=1, p=1
  • m=15 MiB, t=2, p=1

Both of these configuration settings are equivalent in the defense they provide. The only difference is a trade off between CPU and RAM usage.
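If you feed these into a library, note that most Argon2 APIs express m in KiB rather than MiB, so the two OWASP baselines translate as follows (quick arithmetic; the helper name is mine):

```python
# OWASP quotes memory in MiB; Argon2's m parameter counts 1 KiB blocks.
def owasp_m_param(mib: int) -> int:
    """Convert a memory budget in MiB to Argon2's m parameter (KiB)."""
    return mib * 1024

print(owasp_m_param(37))  # m=37888 for the t=1, p=1 setting
print(owasp_m_param(15))  # m=15360 for the t=2, p=1 setting
```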

Callie J