I'm posting my problem here; perhaps somebody can point me in the right direction :)
[The challenge]
I have an electronic system that can be modeled as a Wiener process with a drift $\mu$:
$ X_t = \mu t + \sigma W_t $
$E[X_t] = \mu t$ , $\mathrm{Var}[X_t] = \sigma^2 t$ (so the standard deviation grows as $\sigma \sqrt t$)
where the value of the drift $\mu$ is unknown and can be either positive or negative. The index $t$ can be seen as the time elapsed since the initial time $t_0$, where $X_0 = 0$.
In my physical system I can only observe the $X_t$ evolution over time $t$.
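For concreteness, here is a minimal simulation sketch of the kind of trace I observe; the drift, noise level, and sampling step below are made-up placeholder values, not my real system parameters:

```python
import numpy as np

# Placeholder parameters (illustrative only, not the real system values)
mu = 0.5e-3      # unknown drift, could be negative
sigma = 1.0      # noise scale
dt = 1e-3        # sampling interval
n_steps = 100_000

rng = np.random.default_rng(0)

# Increments of X_t = mu*t + sigma*W_t are independent N(mu*dt, sigma^2*dt)
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
x = np.cumsum(increments)            # observed trace X_t
t = dt * np.arange(1, n_steps + 1)   # time axis
```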
My goal is to minimize the time $t_{detection}$ required to determine the sign of the drift $\mu$ with a given confidence level $CI$.
[The current (naive) approach]
The system signal-to-noise ratio is $SNR(t) = \frac{\mu \sqrt t}{3 \sigma}$, for a confidence level of $3 \sigma$ = 99.7%.
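Spelling this out (taking the SNR as the mean drift divided by three standard deviations of the noise, consistent with the moments above):

$ SNR(t) = \frac{E[X_t]}{3 \sqrt{\mathrm{Var}[X_t]}} = \frac{\mu t}{3 \sigma \sqrt t} = \frac{\mu \sqrt t}{3 \sigma} $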
Intuitively, if I wait long enough I could reach any arbitrary SNR level, since the signal (the mean $\mu t$) grows proportionally to $t$, whereas the noise (the standard deviation $\sigma \sqrt t$) only grows as $\sqrt t$.
However, in my scenario $t_{detection}$ is constrained, and with the current values this approach leaves me roughly a factor of 100 outside the specification.
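For reference, here is a small sketch of how I estimate the required waiting time under this naive approach, assuming the detection criterion is $SNR(t) \geq 1$ (i.e. the drift term exceeds the $3\sigma$ noise band); this gives $t_{detection} \geq (3\sigma/\mu)^2$, which is where the quadratic blow-up in $\sigma/\mu$ comes from. The numbers below are placeholders, not my actual values:

```python
def naive_detection_time(mu, sigma, n_sigma=3.0):
    """Time at which the drift |mu|*t first exceeds n_sigma standard
    deviations of the noise, i.e. solve |mu|*t >= n_sigma*sigma*sqrt(t)."""
    return (n_sigma * sigma / abs(mu)) ** 2

# Placeholder values, only to illustrate the (sigma/mu)^2 scaling
mu = 0.5e-3
sigma = 1.0
print(naive_detection_time(mu, sigma))  # -> 36,000,000 time units
```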
[The question]
Is there a way to reduce $t_{detection}$ for the same confidence level $CI$?
I am not familiar with Kalman filters, but I suspect one may help here. Or perhaps I should look at the derivative of $X_t$?
[Remarks]
You are talking to an analog designer. Please bear with me... and thank you for any help!