A Reddit comment claimed that, on average, you need $e$ random numbers between $0$ and $1$ to add up to $1$. So I wrote a Python program to verify this. I ran the experiment in $200$ batches of $100{,}000$ trials each. The standard deviation of the $200$ batch averages was $0.00271225$ (measured from $e$, not from their mean).
I have two questions. First, while conducting the experiment, the final total was almost always strictly greater than $1$. Should I adjust the count of random numbers required in those cases? The $SD$ above does NOT take that excess over $1$ into account; it treats those runs as if the total were exactly $1$.
Second, how does one prove this?
Here's the program, if you can understand Python. (Sorry, I can't post the image directly.)
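Since the image won't upload, here is a minimal sketch of the experiment in Python (a reconstruction of the logic described above, not the original code; the helper names `draws_until_exceed` and `mean_count` are mine):

```python
import math
import random

def draws_until_exceed(threshold=1.0):
    """Count Uniform(0,1) draws until the running sum first exceeds the threshold."""
    total = 0.0
    count = 0
    while total <= threshold:
        total += random.random()
        count += 1
    return count

def mean_count(trials=100_000):
    """Average the required count over `trials` independent runs."""
    return sum(draws_until_exceed() for _ in range(trials)) / trials

# 200 batches of 100,000 trials each.
results = [mean_count() for _ in range(200)]

# Deviation of the batch averages from e itself (not from their own mean),
# which is the figure quoted above.
sd_from_e = math.sqrt(sum((r - math.e) ** 2 for r in results) / len(results))
print(sd_from_e)  # comes out around 0.0027 for me
```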