
Since I download example files from blogs, wikis, etc. from time to time, I would like to know how I can avoid the execution of malicious code.

Is it enough to disable the autoexec switch and see what is in the text editor?

stacker
  • I don't know about "enough", but AFAIK it's about the only thing you can do. – gandalf3 Jul 31 '13 at 04:52
  • 2
    The most likely thing a virus will do is change a bunch of files. You can check for this by scrolling through and making sure that don't open any files that don't make any sense (i.e. system files, config files, personal folders). Even so, they could do something else such as download malware, so it's not 100% (I prefer to just stick with the official add-ons/scripts. There are a bunch and you don't have to worry about virus's then). – CharlesL Jul 31 '13 at 11:52

1 Answer


Yes, it is enough to open a blend file with Trusted Source disabled.*

But the text editor is not the only place that can contain code: animation driver expressions can hold Python too (see the audit sketch after the list below).

Take care: even when the file is not trusted, you could inadvertently run a script by...

  • Starting the game engine.
  • Rendering with Freestyle.
  • Executing a command in the Python console.
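
As a quick audit, the sketch below (assuming Blender's 2.7x Python API; treat the exact property paths as assumptions) lists every embedded text block and scripted driver expression without evaluating any of them. It only reads strings, so running it yourself in the console is safe even for an untrusted file:

    # Minimal audit sketch (assumes Blender 2.7x bpy API).
    # Open the file with Trusted Source disabled, then run this in the
    # Python console; it prints embedded code without executing it.
    import bpy

    # Embedded text blocks (what you would see in the text editor).
    for text in bpy.data.texts:
        print("Text block: %r (%d lines)" % (text.name, len(text.lines)))

    # Scripted driver expressions on objects. Note: drivers can live on
    # other datablocks too (materials, scenes, shape keys...), so this
    # covers objects only; it is a spot check, not a complete scan.
    for obj in bpy.data.objects:
        anim = obj.animation_data
        if anim is None:
            continue
        for fcurve in anim.drivers:
            drv = fcurve.driver
            if drv.type == 'SCRIPTED':
                print("Driver on %r: %s" % (obj.name, drv.expression))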

If you are in a situation where you need to load blend files you don't trust (an online render farm, for example), I suggest sandboxing the environment Blender runs in (see containers).
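
For example, a render wrapper can force script auto-execution off when launching Blender on the file. A minimal sketch (the --disable-autoexec and -b flags are Blender's own command-line options; the file names are placeholders), which a container or VM can then wrap for real isolation:

    # Minimal sketch: render an untrusted file headless, with embedded
    # script auto-execution forced off. "untrusted.blend" and the output
    # path are placeholders. Note this only stops auto-run scripts; for
    # real isolation, run the whole command inside a container.
    import subprocess

    subprocess.run([
        "blender",
        "--disable-autoexec",     # same as -Y: never auto-run embedded scripts
        "-b", "untrusted.blend",  # -b: run in the background (headless)
        "-o", "//render_",        # output path, relative to the blend file
        "-a",                     # render the full animation
    ])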


* There is the possibility of crafting a file which makes use of a buffer-overrun exploit, though these are much more involved than writing malicious Python scripts.


For reference, this is a known pain point; see these threads:

ideasman42
  • I just wonder now if we could extract all embedded files to check if the blend file is safe. I wonder if Python scripts are limited (like in a sandbox) or if they can do things outside of the blend file, like on the hard drive or anything else... And if blend file access to websites could simply be prevented... I read more things about it here – Aquarius Power Jan 06 '16 at 03:32
  • The default is set to disabled as of 2.76b! To check: Info menu: File / User Preferences / File / [ ] Auto Run Python Scripts – Aquarius Power Jan 06 '16 at 03:45
  • 1
    @Aquarius Power, Its not so difficult to extract embedded text, (though that deserves its own Q&A). And sand-boxing CPython isn't supported, and very difficult unless you also cripple Python at the same time making it unusable for valid use-cases too. - see: http://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python – ideasman42 Jan 06 '16 at 06:07
  • I thought exactly about that: cripple Python! Its libraries for file and internet access should require permission to be accessed! But... from the link you provided, it seems the attacker could simply create their own tiny specific lib to provide these accesses... :( Maybe the best is really to avoid running scripts in Blender, unless we trust whoever created the blend file... – Aquarius Power Jan 06 '16 at 06:15
  • Right, that's why it's disabled by default (so you have to explicitly trust the file). The main issue is with rigs, which often use py-drivers and aren't useful unless you enable scripting. As for requiring permission: then we would have to add a Python permission system with some way for users to control it (also quite involved, and likely not hard to work around). – ideasman42 Jan 06 '16 at 08:31
  • Maybe restrict what users can do, like forbidding certain things in their Python code that could be considered an attempt to work around the user's allowed permissions. If they are used to coding in that uber-complex way, that kind of code would be flagged as a potential threat. I would like to only have Python scripts for things that do not go outside Blender; do you believe a Python script checker could work? – Aquarius Power Jan 07 '16 at 02:40
  • Also, concerning permissions, I don't know if it could be compared to Android, where apps require permissions to access certain functionalities (see how "automate" works: each permission requires another app lib download), but with the "code uber-complexity restriction" I thought of above, maybe the permissions could work. – Aquarius Power Jan 07 '16 at 02:43
  • @Aquarius Power, restricting CPython isn't practical at the moment, so I'm not sure it's useful to plan a permission system. – ideasman42 Jan 07 '16 at 05:33
  • What about, instead of restricting, creating security warnings? So users could review "the way the code is implemented" and not the code itself (as many do nowadays). I thought yesterday of a modified Python interpreter that, when running, would stop at specific code constructs that could be used in an attack and inform the user what is happening in newbie language. Basically, this means only less capable scripts would receive no warning flags, and people who are used to such code constructs would be required to use safer coding to let their scripts be considered clean. – Aquarius Power Jan 10 '16 at 16:39
  • 2
    @Aquarius Power, this isn't really the right place to discuss security implementations, a lot of discussion on this has already taken place over the years, added links in the answer. – ideasman42 Jan 11 '16 at 04:23
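
A sketch of the extraction idea from the comments above (assumptions: Blender's --python-expr command-line option, which runs a Python string after the file loads, and the Text.as_string() method). It dumps all embedded text blocks to stdout while keeping auto-execution disabled, so the file's own scripts never run:

    # Dump embedded text blocks from a suspect file without trusting it.
    # "suspect.blend" is a placeholder; --disable-autoexec keeps the
    # file's own scripts from auto-running while our expression inspects it.
    import subprocess

    DUMP = (
        "import bpy\n"
        "for t in bpy.data.texts:\n"
        "    print('==== %s ====' % t.name)\n"
        "    print(t.as_string())\n"
    )

    subprocess.run([
        "blender", "--disable-autoexec", "-b", "suspect.blend",
        "--python-expr", DUMP,
    ])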