I am trying to understand a subtle difference (sorry if this is obvious) in the computed output power after a DDC (digital downconversion) process. Here is the story:
Suppose I generate a 10 MHz signal from a microwave source at -10 dBm and feed it to a digitizer. For 50 Ohm lines and termination, this corresponds to a peak-to-peak voltage of 200 mV, as measured by an oscilloscope.
With the digitizer, after performing the DDC, I get around +4 dBm as a result, 14 dB above the expected -10 dBm of the source. Why is that?
I tried to go through the math step-by-step:
With the 200 mV peak-to-peak input signal (i.e., 100 mV amplitude), multiplying this signal by a unit-amplitude cos (for I) and -sin (for Q) and filtering out the high-frequency part, I will obtain (supposing the signal has zero phase) I_DC = 50 mV and Q_DC = 0 mV, just by trigonometry.
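For what it's worth, here is a minimal numerical sketch of this mixing step (the 1 GS/s sample rate and the ideal, noiseless, zero-phase tone are my own assumptions for illustration, not the actual digitizer settings):

```python
import numpy as np

# Illustrative parameters (assumed, not the real hardware settings)
fs = 1e9                       # sample rate: 1 GS/s
f0 = 10e6                      # tone frequency: 10 MHz
A = 0.1                        # 100 mV amplitude = 200 mV peak-to-peak
t = np.arange(100_000) / fs    # 100 us, an integer number of cycles

x = A * np.cos(2 * np.pi * f0 * t)   # zero-phase input tone

# DDC: mix with unit-amplitude cos / -sin, then "low-pass filter" by
# averaging over whole cycles (this removes the 2*f0 mixing product).
I = np.mean(x * np.cos(2 * np.pi * f0 * t))    # -> A/2 = 50 mV
Q = np.mean(x * -np.sin(2 * np.pi * f0 * t))   # -> 0 mV

print(f"I = {I*1e3:.2f} mV, Q = {Q*1e3:.2f} mV")
```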
The power is then computed as P = I^2 + Q^2 = 2.5 mW. Taking the log, P_log = 10*log10(2.5e-3) ≈ -26 dBW = +4 dBm. This is around 14 dB higher than what I originally generated (-10 dBm).
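In code, the naive conversion I did amounts to this (note it treats I^2 + Q^2, which has units of V^2, directly as watts):

```python
import numpy as np

I, Q = 0.050, 0.0                      # volts, from the mixing step above
P_naive = I**2 + Q**2                  # = 2.5e-3, implicitly read as watts
print(10 * np.log10(P_naive / 1e-3))   # ~ +3.98 dBm: the puzzling +4 dBm
```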
If I am not missing something in the calculation, it seems I need a level correction factor of 14 dB. But where did this gain come from? I guess there is a 6 dB gain from duplicating the signal to generate I and Q, but then where do the remaining 8 dB come from?
I'd appreciate it if you could shed some light on this.
Edit 1: (Answer?) Following the first reply, I might need to account for the 50 Ohm termination, which brings the power down by 10*log10(1/50) dB; this gives P_dBm = 10*log10(2.5e-3 * 1000 / 50) = -13.0103 dBm. I am now 3.0103 dB off, and this seems to come from the trigonometric factor of one half: mixing with a unit-amplitude carrier halves the amplitude, so I^2 + Q^2 = A^2/4, while the power delivered into R is A^2/(2R). Putting that factor of 2 back, I recover the -10 dBm as desired.
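For completeness, here is the corrected computation in the same sketch; the two corrections are exactly the 50 Ohm and the factor of 2 discussed above:

```python
import numpy as np

I, Q = 0.050, 0.0        # volts, from the DDC
R = 50.0                 # ohm line/termination impedance

# The factor 2 undoes the amplitude halving from mixing with unit-amplitude
# carriers (I = A/2), and dividing by R converts V^2 into watts.
P = 2 * (I**2 + Q**2) / R          # = 1e-4 W = 0.1 mW
print(10 * np.log10(P / 1e-3))     # -10.00 dBm, matching the source setting
```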

I do think I missed the 50 Ohm.
– Sandbo Aug 16 '18 at 15:21