Human-Computer Interaction Course Notes – Evaluation Vol. 2

In this post I continue transcribing some notes I took during the Human-Computer Interaction course from the Interaction Design Foundation, taught by Alan Dix.

The course, which I strongly recommend, is available at

This post covers the topic of Evaluation, specifically usability myths related to statistics and other concepts.

Note: I wrote the notes in English because the course is in English!

I hope you find them useful 🙂

Myths of Usability

Are five users enough?

This myth comes from a study by Nielsen and Landauer (1993). The study made many assumptions; it is not as general as it appears to be.

Something is true, though: each extra user gives less new information; the value you get from testing decreases with each new user you test with. There are diminishing returns with each new user.

Cost-benefit relationship: in 1993, testing was more expensive than it is now. If your fixing cycle is cheap, then one person per design cycle might be enough! For robust statistics you would need many more than five users!
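The diminishing-returns point can be sketched with the problem-discovery model usually attributed to Nielsen and Landauer: the expected share of problems found after testing n users is 1 − (1 − L)^n, where L is the chance that a single user reveals a given problem. The L = 0.31 below is the average figure commonly cited from their study; a real product's L may be quite different, so treat this as illustrative only.

```python
# Problem-discovery model: found(n) = N * (1 - (1 - L)^n)
# L = 0.31 is the commonly cited average discovery rate; it is an
# assumption here, not a universal constant.

def problems_found(n_users, total_problems=100, discovery_rate=0.31):
    """Expected number of distinct problems found after testing n_users."""
    return total_problems * (1 - (1 - discovery_rate) ** n_users)

for n in [1, 2, 3, 5, 10, 15]:
    print(f"{n:2d} users -> ~{problems_found(n):4.1f}% of problems found")
```

With these assumed numbers, five users find roughly 84% of problems, but each additional user contributes less than the previous one; and if L is lower for your product, five users find far less.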


Points of Comparison

You need to compare within a product or between products.

If you get an average satisfaction of 3.2 out of 5, is that good or bad? You must compare it with another product in order to know.

The dips in the UX are the interesting points in tests.

What constitutes a ‘control’? It is not obvious for many experiments; you must be careful and choose the right comparison.



If you just want to find a problem to fix, then you don’t need statistics!

If you want to know

  • how frequently the problem arises
  • if many users experience it or not
  • if you have found most of the problems or only some of them

then you do need statistics.


Statistical Significance

Statistically significant does not mean that the effect is large or that the impact of the result is big.

Non-significant means “not proven”; it does not mean that the opposite is true! It is common to draw incorrect conclusions when a test yields statistically non-significant results.
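Both points can be seen in one small sketch. Below, a tiny 0.05-point difference on a 5-point satisfaction scale is tested with a two-sample z-test (a textbook normal approximation; the sample sizes and standard deviation are made up for illustration). With a huge sample the tiny effect is highly significant, and with a small sample a real effect fails to reach significance: significance tracks sample size, not effect size, and "non-significant" is not evidence of no effect.

```python
import math

def z_test_p_value(mean_diff, sd, n_per_group):
    """Two-sided p-value for a two-sample z-test with known sd
    (a simple normal approximation; real studies would use a t-test)."""
    se = sd * math.sqrt(2 / n_per_group)   # standard error of the difference
    z = mean_diff / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same tiny 0.05-point difference (sd = 1), two sample sizes:
print(z_test_p_value(0.05, 1.0, 20))      # n=20 per group: not significant
print(z_test_p_value(0.05, 1.0, 10000))   # n=10000 per group: significant
```

The effect is identical in both calls; only the sample size changes the verdict.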
