
I recently found out that deserializing data in Java can be very dangerous. See https://github.com/frohoff/ysoserial

In my application I'm saving the current configuration using serialization and deserialization. As a test, I modified the config file by hand, and the application did indeed start a process upon reading it.
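To make the scenario concrete, here is a minimal, self-contained sketch of the pattern described above (the `Config` class is a hypothetical stand-in for the application's real one):

```java
import java.io.*;

public class UnsafeLoad {
    // Hypothetical stand-in for the application's real config class.
    static class Config implements Serializable {
        String theme = "dark";
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("config", ".ser");
        f.deleteOnExit();
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(new Config());
        }
        // The risky pattern: readObject() instantiates whatever classes the
        // file names, so a tampered file can trigger gadget chains (as in
        // ysoserial) before the cast is even checked.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            Config cfg = (Config) in.readObject();
            System.out.println("theme=" + cfg.theme);
        }
    }
}
```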

That config file is created by my application on the user's hard drive. Can I consider it safe? I mean, it's trusted data because it was created by me. Why would a user hack themselves?

So do I need to change my deserialization code, or can I leave it as it is?

2 Answers


The meaning of trust

The meaning of trusted is strange in computer security. If something is trusted, then by definition it cannot be harmful in any way. What you are probably asking is instead if data you create should be considered trusted or untrusted. Answering this would depend heavily on your particular situation. Specifically, what matters is not whether or not you would "hack yourself", but whether or not there is anything an attacker can gain by compromising the process that is insecurely processing the data.

One way to think about it: would it be safe if your configuration file had an exec=some_command option that automatically executed the command on application startup (which is not particularly uncommon)? If that would be safe, then there is no immediate security reason to handle the input more carefully.
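As a sketch of that thought experiment (the `exec=` option and its value are invented for illustration):

```java
import java.io.StringReader;
import java.util.Properties;

public class ExecOptionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical "exec=" option, as in the thought experiment above.
        Properties config = new Properties();
        config.load(new StringReader("exec=backup.sh --nightly"));
        String cmd = config.getProperty("exec");
        // An application that blindly ran this would hand code execution to
        // anyone able to edit the file -- exactly the trust question that
        // applies to deserializing the same file.
        System.out.println("would execute: " + cmd);
    }
}
```

If handing command execution to anyone who can edit the file is acceptable in your threat model, unsafe deserialization of the same file adds no new risk; if it is not acceptable, neither is the deserialization.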

Read up, write down

There is a security model called the Biba Integrity Model, characterized by the phrase read up, write down. This means that a lower integrity level (lower privilege) should not be able to modify data that has a higher integrity level. In other words, you can read more privileged data, but you can only write to data that is less privileged than you already are. This integrity model is specifically designed to prevent a privileged process from trusting data that a less privileged process may be able to modify.

Through this, you can see that the model permits a given user to modify the configuration files of applications run by that same user, but does not allow a less privileged user to modify files operated on by a more privileged one. If data is considered trusted, it is exempt from these restrictions. According to this integrity model, is your data trusted? Is there the potential for any undesired write up?

There are three properties governed by the Biba Integrity model. Taken from Wikipedia:

  1. The Simple Integrity Property states that a subject at a given level of integrity must not read data at a lower integrity level (no read down).

  2. The * (star) Integrity Property states that a subject at a given level of integrity must not write to data at a higher level of integrity (no write up).

  3. The Invocation Property states that a subject at a given level of integrity must not invoke (call upon) a subject at a higher level of integrity.

In other words, is the program parsing your data any more privileged than a program would need to be in order to modify the "trusted" data? If it is no more privileged, then an attacker cannot modify the data to elevate their privileges. The most they can do is exercise the privileges they already have. This is why /etc/passwd is not owned by your user, but your media player configs are!
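The two properties above can be sketched as a pair of checks; the integrity levels and method names here are invented purely for illustration:

```java
// Illustrative sketch of Biba's rules; levels and API are invented for clarity.
public class BibaDemo {
    enum Level { LOW, MEDIUM, HIGH }

    // Simple Integrity Property: no reading data of lower integrity.
    static boolean canRead(Level subject, Level object) {
        return object.ordinal() >= subject.ordinal();
    }

    // * (star) Integrity Property: no writing data of higher integrity.
    static boolean canWrite(Level subject, Level object) {
        return object.ordinal() <= subject.ordinal();
    }

    public static void main(String[] args) {
        // A MEDIUM-integrity subject may read up and write down...
        System.out.println(canRead(Level.MEDIUM, Level.HIGH));   // true
        System.out.println(canWrite(Level.MEDIUM, Level.LOW));   // true
        // ...but may not read down or write up.
        System.out.println(canRead(Level.MEDIUM, Level.LOW));    // false
        System.out.println(canWrite(Level.MEDIUM, Level.HIGH));  // false
    }
}
```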

Caveats

There are a few other things you should think about before declaring this a fine idea. You must ask yourself whether, philosophically and practically, knowingly writing insecure code is a good idea. If processing the data securely is not excessively difficult, ask yourself a few things:

  • Will you remember, in the future, that your codebase is insecure, or might you re-purpose it?
  • Will anyone else use your code? Will they have the exact same threat model as you?
  • Will you ever run the application privileged, with configuration files writable by lower privileges?
  • Could the insecurely handled data result in confusing bugs if it is accidentally corrupted?
  • Do you feel comfortable getting used to writing insecure code? Is it a good habit to get into?
forest

It all depends on what you call safe and what you call secure data. If one application serializes some data, and later this application or another one deserializes it, and it cannot have been tampered with in the meanwhile, there is no special security problem. A use case example of that is session serialization in clustered web applications. If the data has been tampered with here, you really have more serious problems than the mere deserialization 1.

Now imagine you store serialized data for quite a while on a personal computer and try to deserialize it some time later. Many things can have happened in the meanwhile: malware or a physical incident could have changed the file without the user noticing, or the application could have moved to a different serialization format. In that case, bad things are likely to happen at deserialization time.
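If you do keep Java serialization for the config file, one mitigation worth knowing about is a deserialization allow-list via `java.io.ObjectInputFilter` (Java 9+). A minimal sketch, assuming the config is just a `HashMap` of strings:

```java
import java.io.*;
import java.util.HashMap;

public class FilteredLoad {
    public static void main(String[] args) throws Exception {
        HashMap<String, String> cfg = new HashMap<>();
        cfg.put("theme", "dark");

        // Serialize the config to a byte buffer (stands in for the file).
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(cfg);
        }

        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            // Allow-list filter: only the expected classes may be
            // deserialized; everything else ("!*") is rejected, which blocks
            // gadget chains even if the file has been tampered with.
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "java.util.HashMap;java.lang.*;!*"));
            Object loaded = in.readObject();
            System.out.println("loaded: " + loaded);
        }
    }
}
```

A tampered stream that names any class outside the allow-list makes `readObject()` fail with an `InvalidClassException` instead of instantiating the attacker's gadget.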


1 In any case, when you read input data that you (a single application) have not just produced, you should always be prepared for errors in it and ensure that your code cannot break everything around it. Throwing an exception is generally fine, but writing junk into a database is not.
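One hedged sketch of that fail-safe attitude, using a simple properties file rather than serialization (file name and defaults are invented for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class FailSafeLoad {
    // On any read or parse failure, fall back to known-good defaults
    // instead of propagating junk into the rest of the application.
    static Properties loadConfig(Path file) {
        Properties defaults = new Properties();
        defaults.setProperty("theme", "light");
        try (InputStream in = Files.newInputStream(file)) {
            Properties p = new Properties(defaults);
            p.load(in);
            return p;
        } catch (IOException | IllegalArgumentException e) {
            return defaults; // corrupt or missing file: fail safe
        }
    }

    public static void main(String[] args) {
        // Hypothetical missing file: the defaults are used.
        System.out.println(loadConfig(Path.of("no-such-file.properties"))
                .getProperty("theme"));
    }
}
```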

Serge Ballesta