The way I read the API, FE_READ_BER should give you the
bit error _rate_, in some known units.
In the case of the stv0299, I notice that the chip is instead
being configured to _count_ Viterbi bit errors. Indeed, 0x93 is
written to register 0x34, the error control register:
bit(s)  description
7       1: error count
        0: error rate
[5:4]   00: QPSK bit errors
        01: Viterbi bit errors
        10: Viterbi byte errors
        11: packet errors
[1:0]   NOE bits [sic] in count period in bytes (NB)
        00: 2^12 bytes
        01: 2^14 bytes
        10: 2^16 bytes
        11: 2^18 bytes
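
To make the decomposition of those two values explicit (the macro
names below are mine, purely for illustration; they are not taken
from the driver):

/* illustrative names only, not from the stv0299 driver */
#define ERRCTRL_MODE_COUNT   0x80   /* bit 7 set: error count; clear: error rate */
#define ERRCTRL_SRC_VIT_BIT  0x10   /* bits [5:4] = 01: Viterbi bit errors */
#define ERRCTRL_NOE_2_18     0x03   /* bits [1:0] = 11: 2^18-byte count period */

/* current:  0x93 == ERRCTRL_MODE_COUNT | ERRCTRL_SRC_VIT_BIT | ERRCTRL_NOE_2_18 */
/* proposed: 0x13 ==                      ERRCTRL_SRC_VIT_BIT | ERRCTRL_NOE_2_18 */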
To get a correct measurement of bit error rate, I think we
should configure the stv0299 to calculate Viterbi bit error
rates over a count period of 2^18 __bytes__. That is, we should
write 0x13 to register 0x34. If we get the number X from
FE_READ_BER, then the BER in dimensionless units is
X / (8 * 2^18).
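
As a rough sketch of the change (stv0299_writereg() below is a
placeholder for whatever register-write helper the driver actually
provides; I haven't checked the name against the source):

#include <linux/types.h>

/* hypothetical register-write helper; substitute the driver's real routine */
extern int stv0299_writereg(void *state, u8 reg, u8 data);

#define STV0299_ERRCTRL 0x34

static int stv0299_set_ber_rate_mode(void *state)
{
        /* 0x13: rate mode, Viterbi bit errors, 2^18-byte count period */
        return stv0299_writereg(state, STV0299_ERRCTRL, 0x13);
}

/* A 2^18-byte period covers 8 * 2^18 = 2097152 bits, so a FE_READ_BER
 * reading X corresponds to a dimensionless BER of X / 2097152. */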
Opinions anyone? Does anyone know what the other frontends
are doing?
Not all frontends support real bit error rate measurements, so this
ioctl is defined to return the number of absolute bit errors between two
FE_READ_BER ioctls. It's up to the application to measure the time
interval between the ioctls, or to sleep a defined time between the
monitoring ioctl calls.
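
For reference, a minimal userspace sketch of that monitoring pattern,
assuming the usual /dev/dvb/adapter0/frontend0 device node and an
arbitrary polling interval:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dvb/frontend.h>

#define INTERVAL_SEC 5          /* arbitrary polling interval */

int main(void)
{
        int fd = open("/dev/dvb/adapter0/frontend0", O_RDONLY);
        if (fd < 0) {
                perror("open frontend");
                return 1;
        }

        for (;;) {
                uint32_t errors = 0;
                if (ioctl(fd, FE_READ_BER, &errors) < 0) {
                        perror("FE_READ_BER");
                        break;
                }
                /* with these semantics, the reading is the absolute number
                 * of bit errors accumulated since the previous ioctl */
                printf("%u bit errors in the last %d s\n", errors, INTERVAL_SEC);
                sleep(INTERVAL_SEC);
        }

        close(fd);
        return 0;
}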