112

I'm setting up a home HTTP server which can send and receive JSON data to/from different clients (Android and iPhone apps).

I'd like to allow access only to certain users, and I'm considering a simple username/password mechanism, as setting up client certificates seems like overkill for this small project.

Of course I can't send clear passwords from the client to the server over plain HTTP, otherwise anyone with Wireshark/tcpdump could read them. So I'm thinking about the following mechanism:

  1. The HTTP server can be set up as an HTTPS server
  2. The server also has a username/password database (passwords would be stored as bcrypt hashes)
  3. The client opens the HTTPS connection and authenticates the server (so a server certificate is needed); once the TLS handshake completes, the connection is encrypted
  4. The client sends the username and password to the server over this connection
  5. The server runs bcrypt on the received password (using the salt embedded in the stored hash) and compares the result with the hash stored in the database (see the sketch below)
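
A minimal sketch of steps 2 and 5, assuming a Python server and the pyca/bcrypt package (the function and variable names here are illustrative, not from the question):

```python
import bcrypt

# Step 2: store only the bcrypt hash, never the password itself.
def store_user(db: dict, username: str, password: str) -> None:
    db[username] = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

# Step 5: bcrypt re-hashes the submitted password with the salt embedded
# in the stored hash and compares the result in constant time.
def check_login(db: dict, username: str, password: str) -> bool:
    stored = db.get(username)
    if stored is None:
        return False
    return bcrypt.checkpw(password.encode("utf-8"), stored)
```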

Is there any problem with this kind of configuration? The password should be safe, since it's sent over an encrypted connection.

Spooky
Emiliano
  • Just to add to the answers you've received already, in this kind of set-up I'd recommend looking at Certificate Pinning, which helps mitigate MITM attacks... – Rory McCune Aug 04 '14 at 15:12
  • Maybe consider hashing the password instead of (or in addition to) encrypting it. – user541686 Aug 07 '14 at 02:53
  • Even though it might be safe if you use HTTPS, one problem I've read about in other questions is that the username & password will get written to the server logs if they're passed directly in the URL. That can be dangerous if the logs' security is compromised. – Krishnadas PC Aug 25 '17 at 03:00

9 Answers

85

Yes, this is the standard practice. Doing anything other than this offers minimal additional advantage, if any (and in some cases may actually harm security). As long as you verify a valid SSL connection to the correct server, the password is protected on the wire and can only be read by the server. You don't gain anything by disguising the password before sending it, as the server cannot trust the client anyway.

The only way the information could leak is if the SSL connection itself were compromised, and if that happened, the "disguised" token would still be all an attacker needs to access the account, so disguising the password further does no good. (It arguably provides slight protection if the user has reused the same password on multiple accounts, but a user who does that isn't particularly security-conscious to begin with.)

As MyFreeWeb pointed out, there are also challenge-response schemes that can prove the client holds the password without transmitting it, but these are fairly elaborate and not widely used at all. They also don't provide a whole lot of added advantage, as they only protect the password itself from being captured on an actively compromised server.
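
To illustrate the kind of challenge-response idea mentioned above, here is a bare-bones sketch only; real protocols such as SRP or SCRAM do considerably more, and all names here are made up:

```python
import hashlib
import hmac
import secrets

# The server issues a fresh random challenge for every login attempt.
def make_challenge() -> bytes:
    return secrets.token_bytes(32)

# The client proves it holds the shared secret without ever sending it.
def client_response(shared_secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# The server recomputes the expected response and compares in constant time.
def verify_response(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Note that the server still has to hold the shared secret (or a value the client can derive from the password), which is part of why such schemes add little over plain TLS here.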

AJ Henderson
  • @AJHenderson +1, having the client send the hash would definitely be bad, as the point of the authentication protocol is to ensure that the visitor knows the correct password, not the correct password hash. If a malicious party could obtain the hash (stolen password database, man in the middle, whatever) and the system relied on the visitor sending the hash instead of the password, any notion of real security would be out the window. – Craig Tullis Aug 04 '14 at 20:04
  • There is sometimes benefit to hashing at the client and then hashing some more at the server. This can help ensure the password is obfuscated in any clear-text logs at the client, the server, and on the network. – Andy Boura Aug 05 '14 at 07:41
  • @Craig Letting the client compute the expensive salted hash is perfectly fine, as long as you apply a cheap unsalted hash on the server before storing it (or some other kind of one-way function, like modular exponentiation); see the sketch after this comment thread. – CodesInChaos Aug 05 '14 at 12:29
  • @AndyBoura - if you are designing the system such that you can support client-side hashing, you have control over the network and server behavior for the parts that would have access to the clear text. True, you aren't harmed by hashing it on the client (other than lost time), just so long as it doesn't significantly impact the amount of hashing you do on the server. – AJ Henderson Aug 05 '14 at 13:16
  • @CodesInChaos - I'd only go for it being slightly less bad. Even using a complex hash to make a "longer" input, it is only key extension and an attacker is going to be able to produce a lot of cheap hashes very easily. Practically, it might not matter if the client side hash output is long and there are no vulnerabilities in either hash that limit the expansion from simple values. It's also important to note that in such a case, you would need to salt both the client and server side hashes. I'm still a bit dubious of the cost vs the benefit though. – AJ Henderson Aug 05 '14 at 13:18
  • @AJHenderson There are several advantages to client-side hashing: 1) Longer time limit since the client doesn't need to handle many logins at the same time. 2) Avoids DoS since the server doesn't need to compute an expensive hash. 3) The server doesn't learn the password. | The main downside is that clients often have a slower CPU than the server. | I don't see why the server-side hash should use a salt when the client-side hash already uses one. – CodesInChaos Aug 05 '14 at 13:32
  • @CodesInChaos - Because nothing client-side is trusted. An attacker can skip the entire slow process by bypassing the client entirely. You need to make sure you can't build a rainbow table of cheap, fast hash values. It isn't feasible if you salt each and provided the intermediate hash is long enough. Otherwise, you just make a rainbow table of hash values which can be generated at a very, VERY fast rate if they are cheap. – AJ Henderson Aug 05 '14 at 14:08
  • @AJHenderson Obviously the intermediate value needs to be large enough to avoid enumeration, at least 128 bits preferably 160-256 to account for multi target attacks. Since using a proper hash like SHA-256 is so easy, I see no reason to add unnecessary complications like a salt. – CodesInChaos Aug 05 '14 at 14:15
  • @CodesInChaos - yeah, if you are using a sufficiently large intermediate hash and are sure it behaves well (truly random distribution) then it should be OK, as long as the initial hash is salted and takes long enough to force use of a full enumeration of the intermediate hash space (or at least near full). – AJ Henderson Aug 05 '14 at 14:42
  • At the risk of stating the obvious, make sure you are transmitting via POST, and not via GET :-) GET parameters end up in URLs, server logs, and browser history. – Eric Patrick Aug 05 '14 at 16:15
  • Also, while we're talking issues like POST vs GET, be sure to implement forward secrecy so your secure sessions can't be decrypted later on by somebody who obtains your server's private key. – Craig Tullis Aug 05 '14 at 16:20
  • Password/hash conversation continued in chat. – AJ Henderson Aug 05 '14 at 18:02
  • Especially with a client-side application, I would probably still send some hashed version of the password over the line, since I think it is possible that for some reason an unencrypted connection could be created (by some programming error, a faulty app update, wrong settings in the underlying library so that a redirect to an HTTP server is accepted...). That covers the case of the same password being used for multiple accounts, and furthermore you are safe from someone with access to your server obtaining cleartext passwords... – Falco Aug 06 '14 at 14:04
  • @Falco - and a programming error could result in the hash being insecure too. You have to validate behavior of security related code, not just hope it works right. The more complexity you add to a system, the harder it is to validate and the more likely an error or vulnerability is to appear. You have to judge how much benefit you get for a given amount of additional effort. Eventually the added risk of error in implementation is not worth the minimal gain in added resistance. – AJ Henderson Aug 06 '14 at 15:06
  • @AJHenderson: You state "Yes, this is the standard practice." Do you have a reference to make that statement? I'm looking for best practices from a well-known authority source, like maybe NIST or even an O'Reilly book. – stackoverflowuser2010 Apr 22 '15 at 21:06
  • Does this "standard practice" also ensure complete security if we have 'goto fail' or FREAK as part of our communication? And if two SSL versions are broken, are we sure that the recent version is safe? – h22 Oct 27 '15 at 12:43
  • @h22 I suppose we can't be 100 percent sure, but if it isn't, passwords are the least of our concerns. Also, we're a heck of a lot more sure about TLS than we are about some home-grown client-side hashing or encryption setup. I'd say that falls under the "don't roll your own" practice. – AJ Henderson Oct 27 '15 at 13:12
  • "Home grown encryption" does not sound good indeed but there are standard cryptography libraries as well. – h22 Oct 27 '15 at 13:21
  • @h22 the home-grown encryption doesn't just apply to algorithms or libraries, but also protocols. There are a lot of ways a password exchange can break down. For example, if you did client-side encryption without a different challenge sent to the client every time, then the encryption would do nothing at all, as it would only be obscuring the means of generating a password that would itself only be protected by SSL (since the client-generated value would be the same every time, making it the "real" password). There are all kinds of potential pitfalls, both obvious and non-obvious. – AJ Henderson Oct 27 '15 at 15:05
  • The fact remains that there is far more testing and analysis of protocols like ssl than anything you make up as your own suite. If your own stuff is layered on top of other stuff it may produce side channel attacks. The fact it has taken so long for issues to be discovered in ssl should be a confidence builder. It means even with a ton of eyes, vulnerabilities are hard to find. Additionally it means issues are found and announced by researchers that look in to this kind of thing. – AJ Henderson Oct 27 '15 at 15:10
  • Is this still a good answer? I would expect not; some places install extra certificates and do HTTPS MitM attacks, for example. I've seen some discussion elsewhere of what should be a better scheme, but not enough for me to judge whether it was by people who actually understood security. – Daniel H Mar 06 '19 at 21:25
  • @DanielH if someone has access to install root certificates, they have access to key log. If you don't trust a computer, don't enter your password, period. Protecting against compromised clients requires an entirely different mechanism such as smart cards that can authenticate without revealing the secret, but that requires extra hardware for the client and isn't practical for most applications. – AJ Henderson Mar 06 '19 at 22:35
  • @AJHenderson There are ways to get a client to have a compromised certificate that don't let you install a keylogger, but I withdraw that part of my objection anyway because they'd also let you change the login screen, and few if any clients will check that every time for changes. This would only protect against partial-but-not-total server compromises (which may or may not be plausible depending on architecture and corporate security practices) or attacks that allowed passive eavesdropping but not active interference with HTTPS (probably can exist, but very rare). – Daniel H Mar 07 '19 at 00:34
  • @AJHenderson Although most of what convinced me is that several companies I trust to have better-than-average security all seem to send the entered password in a way unencrypted beyond HTTPS. I am surprised; I would have expected a method which doesn’t do that to have become standard for the perhaps-small improvement it does offer, especially given that fighting against password reuse is a losing battle. – Daniel H Mar 07 '19 at 00:37
  • @DanielH how would you install a trusted root certificate without having admin access to the box for at least a short time? If you have that level of access, then you could be nefarious in any number of other ways as well if you wanted to be. And as you rightly pointed out, just having MitM capability means any client side mechanism could be defeated unless it was built in to an app rather than a website. – AJ Henderson Mar 07 '19 at 02:55
  • @AJHenderson You could compromise a certificate, either without being noticed or carrying out the attack before it's revoked. You could target clients who still have the Superfish certificate, or an equivalent from one of the multiple other utilities that do something similar. You could sell a content-filtering middlebox or middleware which would have a valid reason for MitMing, and then either fail to guard the certificate or use it for nefarious purposes. You could trick a CA into issuing you a certificate for a site you only partially control. Etc. – Daniel H Mar 07 '19 at 03:02
  • @DanielH ok, that's a completely different context than your initial comment talking about places that install certs for MitM proxying. The cases you just described are far less common and most have either mitigations in place now or are very rare cases or only apply to systems that are otherwise vulnerable to attack against the client itself. – AJ Henderson Mar 07 '19 at 03:22
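
For reference, a minimal sketch of the client-side-expensive-hash plus server-side-cheap-hash scheme discussed by CodesInChaos and AJ Henderson above, assuming the pyca/bcrypt package; using the username as the client-side salt is a simplification of a per-user random salt, and all names are illustrative:

```python
import bcrypt
import hashlib
import hmac

# Client side: expensive, salted derivation, so the raw password never leaves the device.
def client_derive(username: str, password: str) -> bytes:
    return bcrypt.kdf(password=password.encode("utf-8"),
                      salt=username.encode("utf-8"),  # simplification; a real scheme
                      desired_key_bytes=32,           # would use a per-user random salt
                      rounds=100)

# Server side: one cheap hash before storage, so a stolen database entry
# is not itself a usable login token.
def server_store(derived: bytes) -> bytes:
    return hashlib.sha256(derived).digest()

def server_verify(received: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hashlib.sha256(received).digest(), stored)
```

The expensive derivation stays on the client, while the server-side SHA-256 step means a copy of the database alone cannot be replayed as the login token.
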
21

Not necessarily.

You also need to ensure the following:

  1. Your site is protected against cross-site request forgeries. Use the synchronizer token pattern.

  2. Your site is protected against session fixation attacks. Change the session ID on login.

  3. If using session cookies, your entire site is HTTPS, not just the login URL, and your session cookie is marked Secure and HttpOnly (no JavaScript access). Otherwise the browser will send the session cookie unencrypted if the user types http://yoursecuresite (in the same browser session).

  4. You are using a recent protocol. SSL 2 and 3 are broken (SSL 1 was never publicly released), and TLS 1.0/1.1 are deprecated. Use TLS 1.2 or 1.3 (see the configuration sketch after this list).

  5. You are using a strong cipher.

  6. You are not using HTTP compression (gzip) or TLS compression. If your site reflects user input (like a search box) and uses compression, an attacker can recover secrets such as your CSRF tokens and bank account number from the compressed response sizes (the CRIME and BREACH attacks).

  7. Your server does not allow insecure client-initiated renegotiation.

  8. You are using at least a 2048-bit RSA key (or the equivalent strength for an EC key), and no one else knows your private key.

  9. You are using HSTS, so the browser goes directly to HTTPS even if the user types http.

  10. You are using perfect forward secrecy, so your historical communications remain secure even if your private key is later leaked.
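
As a rough illustration of items 3, 4 and 9 (and of the TLS setup generally), here is a sketch using only the Python standard library; the certificate paths and the cookie value are placeholders, and a real deployment would normally configure this in the web server or framework rather than by hand:

```python
import http.server
import ssl

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Item 9: tell browsers to go straight to HTTPS next time.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        # Item 3: session cookie never sent over plain HTTP, hidden from JavaScript.
        self.send_header("Set-Cookie",
                         "session=placeholder; Secure; HttpOnly; SameSite=Lax")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"ok": true}')

# Item 4: refuse anything older than TLS 1.2.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain("server.crt", "server.key")  # placeholder paths

httpd = http.server.HTTPServer(("", 8443), Handler)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```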

kelalaka
Neil McGuigan
  • If the entire site is HTTPS, do I still need the cookie marked as Secure? They would still be encrypted "for free" due to the TLS layer, right? – Emiliano Aug 04 '14 at 22:11
  • Can you explain point 6? Why would compression make the connection less secure? – Emiliano Aug 04 '14 at 22:13
  • Yes, you still want the Secure and HttpOnly flags on the cookie, as it's still stealable otherwise; see XSS. Compression kills encryption; see the BEAST and CRIME attacks. – Neil McGuigan Aug 04 '14 at 23:17
  • @Emiliano: even if your site is HTTPS-only, an attacker can set up a man-in-the-middle attack with a fake server that uses plain HTTP, performing what is known as SSL stripping. The browser will send the user's cookie to the fake server. To mitigate this, you need the Secure cookie flag and an HSTS policy. – Lie Ryan Aug 05 '14 at 00:01
  • Employ PFS to mitigate the risk of somebody (for example, some evil government) stealing your private keys. – FooF Aug 05 '14 at 12:12
  • That's the whole point of HTTPS... 1. is not relevant for a mobile client; it is unlikely that there are cookies and sessions. 6. What's wrong with compression? Typically a server serving JSON services would require all requests to be authenticated. – njzk2 Aug 06 '14 at 20:54
  • @njzk 1. URLs will still be browsable in a browser, so it is relevant. 3. Depends on his implementation, worth mentioning anyways. 6. see BEAST and CRIME attacks. – Neil McGuigan Aug 06 '14 at 21:21
  • @NeilMcGuigan: Is it possible to guess the password length if it is sent in the clear over HTTPS? – user2284570 Aug 07 '14 at 13:30
  • @user2284570 "In the clear" and over "HTTPS" are opposites. – Neil McGuigan Aug 07 '14 at 17:51
  • @NeilMcGuigan: I meant not transmitting it using a hash or extra encryption. – user2284570 Aug 07 '14 at 19:12