Andrew de Quincey wrote:
right, that's why the TIMEOUT event had been removed in the days of the NEWSTRUCT branch, before it was reintroduced again. All other status bits report real hardware states; the timeout bit is pure esotericism.

Well, the DVB API as specified by Nokia says that FE_SET_FRONTEND should start tuning, and the result (failure or success) is relayed to the application by means of an event read with FE_GET_EVENT. I.e. one FE_SET_FRONTEND -> one FE_GET_EVENT. But it's also possible for the frontend to generate additional events, e.g. when the cable is unplugged and replugged. The frontend code should do what it can to get a lock, and if that fails, report the failure. IMHO this is the job of the driver, because the timeouts depend on the type of frontend (or even on the frontend make and model). But Holger thought differently and said the application should do its own timeouts, because the driver cannot know how long the application wants to wait before assuming failure. The code also generated superfluous events during tuning. After some debate he put in the current FE_TIMEDOUT cruft (with an arbitrarily chosen timeout), which essentially means "your tuning attempt already timed out, so the following tuning-failed event is superfluous and can be discarded". Or something like this (I never quite understood it). It's also possible that a frontend reports failure after tuning and some time later reports a success event (because the zig-zag scan never gives up).
Yeah, that is pretty much my thinking. I don't see how the Nokia API could actually work in the real world. Unless we are prepared to break with the Nokia API and have apps do their own timeouts, I don't know if there's any point in me attempting to fix it, because whatever I do will be a hack and won't work in all situations. The driver just doesn't have enough information about what the app wants it to do.
If there _isn't_ a problem with breaking the Nokia API, I would be glad to have a look.

;) Just remove it, it's bogus anyway.
If there is, all I have to do to fix my stuff is ignore the FE_TIMEDOUT bit.
All the crap in dvb_frontend.[ch] is only there to work around the flaws of ancient hardware; modern demodulators don't need any code from these files at all - don't spend too much energy there.

My thinking on what the code should do:
- if someone does SEC/DiSEqC stuff, put the frontend thread to sleep until the next FE_SET_FRONTEND (the order of FE ioctls is undocumented, but this is the only order that makes sense)
- initial tuning: set the frontend parameters, wait for things to settle down before querying the FE status (the time depends on the FE hardware driver);
(If I remember correctly, in the old DVB code the drift value was persistent and never reset; the reset at tuning was introduced for some reason in the dvb-kernel release, I don't know off-hand why - check the logs. The reset should probably go into the SEC/DiSEqC command handler.)

Continuing the tuning sequence: if initial tuning fails, try a zigzag scan; if that fails, retry with a slower zigzag; if that fails too, report failure and keep trying with low priority and a limited scan range around the given frequency (i.e. wait for the user to plug the cable back in). Things could be more complicated if one has to do autoprobing for inversion etc. in software.

- lost signal: there are two possibilities:
  1. cable unplugged, or a bird sits on your dish -> do not zigzag, just wait for the signal
  2. the sun heated up the LNB until the frequency drift got too large -> zigzag to correct the drift
  Obviously there's no way to decide between 1. and 2., so one always has to try the zigzag. I don't know how likely 2. is in practice.

Also, if one has found an LNB offset during a zigzag scan, it would be the same for all frequencies. But since one doesn't know whether one is correcting an LNB offset or whether the frequency given to FE_SET_FRONTEND was wrong, there is no way to tell -> the LNB offset must be determined from scratch at every zigzag scan. (One could use a heuristic to retry the last LNB offset for the new frequency, but only if there was no DiSEqC sequence which changed the LNB in between; I doubt it's worth trying...)