I am struggling with the following maximisation problem. Take a vector $\mu\equiv (\mu_1,\dots, \mu_M)\in \mathbb{R}^M$ such that $\mu_k\in [0,1]$ for all $k=1,\dots,M$ and $\sum_{k=1}^M \mu_k=1$ (assume $\mu_k>0$ for every $k$, or adopt the convention $0\log 0=0$, so that the right-hand side below is well defined).
I want to show that
$$\max_{r\equiv (r_1,..., r_M) \in \mathbb{R}^M} \sum_{k=1}^M \mu_k r_k - \log\Big(\sum_{k=1}^M \exp(r_k) \Big)= \sum_{k=1}^M \mu_k \log(\mu_k)$$
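A quick numerical sanity check agrees with this claim (a minimal Python sketch; the dimension $M=5$, the Dirichlet draw for $\mu$, and the use of `scipy.optimize.minimize` are just illustrative choices of mine):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

M = 5                                  # illustrative dimension
mu = rng.dirichlet(np.ones(M))         # illustrative probability vector on the simplex

# Negative of the objective  sum_k mu_k r_k - log(sum_k exp(r_k)),
# since scipy minimises rather than maximises.
def neg_objective(r):
    return -(mu @ r - np.log(np.sum(np.exp(r))))

res = minimize(neg_objective, x0=np.zeros(M), method="BFGS")

print("numerical maximum   :", -res.fun)
print("sum_k mu_k log(mu_k):", np.sum(mu * np.log(mu)))
```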
I write the first-order conditions $$ \begin{cases} \mu_1-\frac{\exp(r_1)}{\sum_{k=1}^M \exp(r_k)}=0\\ \vdots\\ \mu_M-\frac{\exp(r_M)}{\sum_{k=1}^M \exp(r_k)}=0 \end{cases} $$ $$ \Updownarrow $$ $$ \begin{cases} r_1=\log(\mu_1)+\log\Big(\sum_{k=1}^M \exp(r_k)\Big)\\ \vdots\\ r_M=\log(\mu_M)+\log\Big(\sum_{k=1}^M \exp(r_k)\Big) \end{cases} $$ Assuming that this critical point is indeed the maximum, we can substitute it into the objective function and obtain the result.
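Explicitly, writing $S\equiv \sum_{k=1}^M \exp(r_k)$ and using $\sum_{k=1}^M \mu_k=1$, the substitution gives
$$\sum_{k=1}^M \mu_k r_k - \log(S)= \sum_{k=1}^M \mu_k\big(\log(\mu_k)+\log(S)\big)-\log(S)= \sum_{k=1}^M \mu_k \log(\mu_k),$$
which is the claimed value.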
Now I need to ensure that this critical point really is a maximum, and at this point I get lost computing the Hessian because I end up with many zeros. Could you help me with that part?
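For concreteness, here is a small numerical sketch of the object I am trying to analyse (the formula $H=-(\operatorname{diag}(p)-pp^\top)$ with $p_k=\exp(r_k)/\sum_{j}\exp(r_j)$ is what I get by differentiating the first-order conditions once more; please correct me if that step is already wrong):

```python
import numpy as np

rng = np.random.default_rng(1)

M = 4                              # illustrative dimension
mu = rng.dirichlet(np.ones(M))     # illustrative probability vector

# Candidate critical point from the first-order conditions (adding any
# constant to all coordinates gives another critical point).
r = np.log(mu)

# Softmax weights p_k = exp(r_k) / sum_j exp(r_j); at this point p equals mu.
p = np.exp(r) / np.exp(r).sum()

# Hessian of the objective at r, if my differentiation is correct:
# H_ij = -(p_i * delta_ij - p_i * p_j),  i.e.  H = -(diag(p) - p p^T).
H = -(np.diag(p) - np.outer(p, p))

print("eigenvalues:", np.linalg.eigvalsh(H))   # one ~0, the rest negative?
print("H @ ones   :", H @ np.ones(M))          # ~0 (flat direction r -> r + c*1)?
```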