One way to look at this problem is from a bounding perspective, although this only gives insight into the optimal distance $\|x^*-z\|_2$, and not necessarily into the location of $x^*$ itself in general.
In particular, note that we can define a lifted variable $X=xx^\top$. Then the left side of the constraint can be rewritten as
\begin{equation*}
x^\top Dx = \text{tr}(x^\top Dx) = \text{tr}(Dxx^\top) = \text{tr}(DX).
\end{equation*}
Similarly, the objective can be written as
\begin{equation*}
\|x-z\|_2^2 = x^\top x - 2z^\top x + z^\top z = \text{tr}(X)-2z^\top x + z^\top z.
\end{equation*}
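(As a quick sanity check, both identities are easy to verify numerically; here is a tiny NumPy sketch with made-up data for $D$, $x$, and $z$.)

```python
import numpy as np

# Random made-up data; D is diagonal as in the question.
rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
z = rng.standard_normal(n)
D = np.diag(rng.uniform(0.5, 2.0, size=n))
X = np.outer(x, x)  # the lifted variable X = x x^T

# x^T D x = tr(D X)  and  ||x - z||^2 = tr(X) - 2 z^T x + z^T z
assert np.isclose(x @ D @ x, np.trace(D @ X))
assert np.isclose(np.sum((x - z) ** 2), np.trace(X) - 2 * z @ x + z @ z)
```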
Therefore, the projection problem is equivalent to the following:
\begin{equation*}
\begin{aligned}
&\underset{x\in\mathbb{R}^n,X\in\mathbb{S}^n}{\text{minimize}} && \text{tr}(X)-2z^\top x + z^\top z \\
&\text{subject to} && \text{tr}(DX)=1, \\
&&& X=xx^\top.
\end{aligned}
\end{equation*}
Under this reformulation, both the objective and the old equality constraint are affine. However, the nonconvexity has been absorbed into the new constraint $X=xx^\top$. If you relax this constraint to $X\succeq xx^\top$, the problem becomes convex, since $f\colon\mathbb{R}^n\times\mathbb{S}^n\to\mathbb{S}^n$ defined by $f(x,X)=xx^\top-X$ is cone-convex with respect to the positive semidefinite cone. Indeed, using a Schur complement (the $(1,1)$ block below equals $1>0$), we can further rewrite the condition $X-xx^\top\succeq 0$ as
\begin{equation*}
\begin{bmatrix}
1 & x^\top \\
x & X
\end{bmatrix} \succeq 0.
\end{equation*}
Since we've introduced a relaxation of your original problem, the optimal value of the following (convex) semidefinite program lower bounds the optimal value of your original problem:
\begin{equation*}
\begin{aligned}
&\underset{x\in\mathbb{R}^n,X\in\mathbb{S}^n}{\text{minimize}} && \text{tr}(X)-2z^\top x + z^\top z \\
&\text{subject to} && \text{tr}(DX)=1, \\
&&& \begin{bmatrix}
1 & x^\top \\
x & X
\end{bmatrix} \succeq 0.
\end{aligned}
\end{equation*}
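For concreteness, here is a minimal CVXPY sketch of this SDP (the data $D$ and $z$ below are made up, and I model the block matrix directly as a single PSD variable; any SDP-capable solver such as the default SCS will do):

```python
import cvxpy as cp
import numpy as np

# Placeholder problem data (D assumed diagonal with positive entries).
d = np.array([2.0, 1.0, 0.5])
D = np.diag(d)
z = np.array([1.0, -0.5, 0.25])
n = len(d)

# M plays the role of the block matrix [[1, x^T], [x, X]].
M = cp.Variable((n + 1, n + 1), PSD=True)
x = M[1:, 0]
X = M[1:, 1:]

prob = cp.Problem(
    cp.Minimize(cp.trace(X) - 2 * z @ x + z @ z),
    [M[0, 0] == 1, cp.trace(D @ X) == 1],
)
lower_bound = prob.solve()
print("SDP lower bound:", lower_bound)
```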
Note that if the semidefinite constraint is tight at the optimum, i.e., $X^*=x^*x^{*\top}$ (equivalently, $X^*$ has rank one), then you can conclude that $x^*$ solves the original nonconvex problem.
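One practical way to test this (continuing the hypothetical CVXPY sketch above) is to check whether $\text{tr}(X^*)=\|x^*\|_2^2$, i.e., whether the gap below is numerically zero:

```python
# Tightness check: if X* = x* x*^T, the gap is (numerically) zero and
# x* is optimal for the original nonconvex problem.
gap = np.trace(X.value) - x.value @ x.value
print("tightness gap:", gap)
if abs(gap) < 1e-6:
    print("relaxation is tight; candidate solution:", x.value)
```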
For the other side of things, you can upper bound the optimal value by looking at the eigenvalues of $D$. In particular, the eigenvalues of $D$ are precisely its diagonal elements $d_1, d_2, \dots, d_n$ (since $D$ is diagonal per your assumption), and the eigenvector associated with eigenvalue $d_i$ is $e_i$, the $i$th standard basis vector. Assuming the diagonal entries are positive (so that the square roots below are well defined), let $x=\frac{1}{\sqrt{d_i}}e_i$ for any index $i$. Then $x$ is feasible for your original optimization problem, since
\begin{equation*}
x^\top Dx = \frac{1}{d_i}e_i^\top De_i = \frac{1}{d_i}e_i^\top (d_i e_i) = e_i^\top e_i = 1.
\end{equation*}
The corresponding objective value is
\begin{equation*}
\|x-z\|_2^2 = \left\|\frac{1}{\sqrt{d_i}}e_i - z\right\|_2^2.
\end{equation*}
This value trivially upper bounds the optimal objective value of the minimization problem. Since this holds for all $i\in\{1,2,\dots,n\}$, we conclude that the following upper bounds the optimal value of the problem:
\begin{equation*}
\min_{i\in\{1,2,\dots,n\}}\left\|\frac{1}{\sqrt{d_i}}e_i - z\right\|_2^2.
\end{equation*}
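Numerically, this upper bound is cheap to evaluate; for instance (again with made-up $d$ and $z$, and all $d_i$ assumed positive):

```python
import numpy as np

# Placeholder data; all diagonal entries of D are assumed positive.
d = np.array([2.0, 1.0, 0.5])
z = np.array([1.0, -0.5, 0.25])
n = len(d)

# Evaluate the feasible points e_i / sqrt(d_i) and keep the best one.
e = np.eye(n)
values = [np.linalg.norm(e[i] / np.sqrt(d[i]) - z) ** 2 for i in range(n)]
upper_bound = min(values)
print("upper bound:", upper_bound, "attained at i =", int(np.argmin(values)))
```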
With a bit more work, it may be possible to tighten these bounds, or even reformulate your problem differently so as to find an exact solution. I hope this helps give you some ideas.