
I am about to run an online user experiment to compare different strategies for a recommender system. I will compare 18 strategies, and each strategy produces five recommendations, so subjects have to evaluate 90 recommendations in total. However, the strategies produce duplicate recommendations (i.e., some recommendations appear under several strategies).

(i) In the current design, each page shows the five recommendations made by one strategy. After a user has evaluated all of them, another page appears with the five recommendations of another strategy. As a result, a user sometimes sees recommendations they have already evaluated. I believe this design is widely used in previous work, although how to handle duplicate recommendations has not been discussed extensively.
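For concreteness, here is a minimal sketch of how the presentation order could be randomized per subject under this design, so that order effects average out across the sample. The strategy identifiers and the per-subject seed are placeholders, not part of my actual setup:

    import random

    # Hypothetical identifiers for the 18 strategies.
    strategies = [f"strategy_{k:02d}" for k in range(1, 19)]

    def page_order_for_subject(seed):
        # A reproducible per-subject RNG so the assignment can be logged.
        rng = random.Random(seed)
        order = strategies[:]
        rng.shuffle(order)  # fresh random page order for this subject
        return order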

(ii) Alternatively, would it be reasonable to first extract all unique recommendations, split them across pages, and show them in random order? This design would make the experiment shorter, but to the best of my knowledge it has not been used before.
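A minimal sketch of what I have in mind, assuming recommendations are plain item identifiers (the strategy and item names below are made up for illustration). Each subject rates every unique item once, and the single rating is mapped back to every strategy that recommended the item:

    import random

    # Hypothetical input: five recommended items per strategy.
    recs_by_strategy = {
        "strategy_01": ["item_a", "item_b", "item_c", "item_d", "item_e"],
        "strategy_02": ["item_b", "item_f", "item_g", "item_a", "item_h"],
        # ... 16 more strategies
    }

    # Collect the unique recommendations across all strategies.
    unique_items = sorted({item for recs in recs_by_strategy.values()
                           for item in recs})

    # Shuffle once per subject so page composition and order differ.
    random.shuffle(unique_items)

    # Split the shuffled list into pages of five recommendations each.
    pages = [unique_items[i:i + 5] for i in range(0, len(unique_items), 5)]

    # Map each rating back to every strategy that recommended the item.
    def ratings_per_strategy(ratings):  # ratings: {item: score}
        return {s: [ratings[i] for i in recs if i in ratings]
                for s, recs in recs_by_strategy.items()}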

If you have a suggestion or know of a related paper, please let me know.

Benben
I believe it may be very interesting to see how people respond to the same recommendation from different systems. You could, for instance, analyse whether items are rated similarly across the systems. I would not remove the duplicates, since the context (the four other recommendations) may differ between systems, which may affect an item's rating. Just randomize as much as you can if you don't want the subjects to know that they are rating some items several times. – Robin Kramer-ten Have Jul 01 '16 at 15:00
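Building on that comment, a rough sketch of how one could check whether duplicated items are rated consistently across strategies under design (i). The nested ratings structure is an assumption made for illustration:

    import statistics

    # Hypothetical: ratings[strategy][item] = score a subject gave the
    # item when it appeared on that strategy's page.
    def rating_spread_for_duplicates(ratings):
        # Group each item's ratings across all strategies that showed it.
        by_item = {}
        for strategy, item_scores in ratings.items():
            for item, score in item_scores.items():
                by_item.setdefault(item, []).append(score)
        # Standard deviation per duplicated item: near zero means the
        # surrounding context barely changed the rating.
        return {item: statistics.pstdev(scores)
                for item, scores in by_item.items() if len(scores) > 1}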

0 Answers