Hello All,
I am curious how vdr's tuning algorithm, in general, works. With all the refactoring that has gone on with how vdr tunes, locks, retunes, and otherwise tries to anticipate various forms of interference, storms, or other activities that could hinder vdr from performing its job adequately, please describe, in pseudocode, how the tuning algorithm actually works.
Some of my questions are:
1) What do all the tuning timers mean? There are some constants in the vdr code; what are the meanings of each tuning state (i.e. the switch statement in dvbdevice.c)?
2) Given that we understand what all the timers do, which ones depend on each other? What are safe limits for each timer? For example, one timer is DVBS_TUNE_TIMEOUT, which is traditionally set to 9000 (milliseconds), but at times it takes my DVB card upwards of 18 seconds to actually tune the channel (so I might wish to set this to 19000 instead of 9000). What other timers, if any, need to be adjusted to account for this?
3) Does vdr care or know anything about a rotor setup where the channel isn't always present the moment the DiSEqC commands are sent?
4) There has been some talk in the past about refactoring this process some; do we think the current approach is the best approach? Does vdr-1.5.x plan to offer new and improved tuning algorithms?
Best Regards.
Hi,
Stone wrote:
I am curious how vdr's tuning algorithm, in general, works. With all the refactoring that has gone on with how vdr tunes, locks, retunes, and otherwise tries to anticipate various forms of interference, storms, or other activities that could hinder vdr from performing its job adequately, please describe, in pseudocode, how the tuning algorithm actually works.
Some of my questions are:
- What do all the tuning timers mean? There are some constants in vdr
code; what are the meanings of each tuning state (i.e. the switch statement in dvbdevice.c)?
Let me start with the switch statement which implements some sort of state automaton. The state tsIdle is the initial state.
When a tuning request arrives at the tuner, the state switches to tsSet. The automaton (= tuner thread) then sets up the frontend (e. g. sends the DiSEqC message and sets the frequency to tune to) and enters state tsTuned (or -- in the unlikely case that the driver doesn't like the set parameters -- goes back to tsIdle).
The next loop iteration enters at case tsTuned (and falls immediately through to the code at case tsLocked) where the frontend's status is checked (which was read as one of the loop's first instructions). I'll now concentrate on the common case FE_HAS_LOCK: when the frontend signals this status, it has successfully tuned to the transponder. So the state switches on to tsLocked.
From that time on, the frontend status is continuously monitored to
detect when the frontend loses FE_HAS_LOCK. In such a case the state falls back to tsTuned and hopefully (e. g. when the loss was just a short distortion of the signal) the frontend regains FE_HAS_LOCK without further intervention, so the state switches once again to tsLocked.
In the case where the frontend doesn't regain FE_HAS_LOCK on its own (e. g. consider a power failure at the multiswitch), the state falls back from tsTuned to tsSet so that, for example, the DiSEqC message is sent again to the multiswitch and hopefully we'll shortly see the transition to tsTuned and tsLocked afterwards.
Now to the timings: in the next-to-last paragraph I wrote about a short distortion. "short" means in this case "up to DVBT_LOCK_TIMEOUT". It is used to filter away spurious lost-lock situations where there is no need to intervene. In other words: if the lock is lost for at least that time, we need to retune.
The other constant is DVBS_TUNE_TIMEOUT. It is used in the opposite direction, i. e. how long do we wait from setting up the frontend until retuning. Consider the case, where at tsSet a DiSEqC message is sent to the multiswitch, but for any reason the message gets damaged, so the multiswitch doesn't switch. As the state tsLocked isn't reached in time, we need to retune and hopefully the message doesn't get damaged this time.
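Since the original question asked for pseudocode, the automaton described above could be sketched roughly like this. This is a simplified, self-contained model, not the actual code from dvbdevice.c: the 2000 ms lock-timeout value is an assumption, and the driver-error fallback to tsIdle is only hinted at in a comment.

```cpp
#include <cassert>

// Simplified model of VDR's tuner state automaton (names modeled on
// dvbdevice.c). 9000 ms is the DVB-S tune timeout mentioned above;
// the 2000 ms lock timeout is an assumed value.
enum eTunerStatus { tsIdle, tsSet, tsTuned, tsLocked };

const int TUNE_TIMEOUT = 9000; // ms from tsSet until a retune is forced
const int LOCK_TIMEOUT = 2000; // ms a lost lock is tolerated before falling back

// One iteration of the tuner loop: 'hasLock' is the FE_HAS_LOCK bit read
// from the frontend, 'elapsedMs' the time since the last state change.
eTunerStatus Step(eTunerStatus state, bool hasLock, int elapsedMs)
{
  switch (state) {
    case tsIdle:   // wait for a tuning request (handled elsewhere)
         return tsIdle;
    case tsSet:    // send the DiSEqC message, set the frequency; on a
                   // driver error the real code falls back to tsIdle
         return tsTuned;
    case tsTuned:  // falls through to the lock check, as in dvbdevice.c
    case tsLocked:
         if (hasLock)
            return tsLocked;         // frontend (re)gained the lock
         if (state == tsLocked && elapsedMs <= LOCK_TIMEOUT)
            return tsLocked;         // spurious lost lock: ignore it
         if (state == tsTuned && elapsedMs > TUNE_TIMEOUT)
            return tsSet;            // no lock in time: retune (resend DiSEqC)
         return tsTuned;             // keep waiting for FE_HAS_LOCK
  }
  return tsIdle;
}
```

Here LOCK_TIMEOUT plays the role of DVBT_LOCK_TIMEOUT (filtering short distortions) and TUNE_TIMEOUT that of DVBS_TUNE_TIMEOUT (forcing a retune when the lock never arrives).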
- Given that we understand what all the timers do, which ones depend on
each other? What are safe limits for each timer? For example, one timer is DVBS_TUNE_TIMEOUT, which is traditionally set to 9000 (milliseconds), but at times it takes my DVB card upwards of 18 seconds to actually tune the channel (so I might wish to set this to 19000 instead of 9000). What other timers, if any, need to be adjusted to account for this?
Both constants are independent, as one is used for the transition from tsSet to tsLocked and the other from tsLocked to tsSet (the state tsTuned just serves as a central point at which the timeout is monitored).
When I contributed the code during 1.3.x development, DVBS_TUNE_TIMEOUT was at about 1500 ms, as my TechniSat receiver repeated the DiSEqC message at that interval. But it was simply too fast for some DVB-T devices/drivers, and that's the reason why there are different constants for DVB-S/-C/-T.
Then there were some complaints from users with rotor setups, so DVBS_TUNE_TIMEOUT was increased to the same value as for DVB-C/-T. That's why all three DVB variants use the same timeouts at the moment.
From your writing I assume that you use a rotor setup too. I further
assume that your rotor stops for a short time when it receives the repeated DiSEqC message. I don't think that you want to suppress the tuning timeout log messages by setting the timeout to 19000 ms.
Anyway, a larger timeout (e. g. 19000 ms) should be possible, but keep in mind that the timeout must also expire for a retuning to happen when the dish doesn't need to be moved. For example, when zapping on the same satellite, if the initial DiSEqC message for a multiswitch gets damaged you'll see a black screen for 19000 ms. The only way out of this is to tune to a different transponder and back with your remote control.
Concerning other timers: consider the worst case where the initial DiSEqC message gets lost and after 19000 ms (at the first repetition of the DiSEqC message) the dish starts moving for 18000 ms. In this scenario, it will take 37000+ ms until VDR receives the stream. In the case of a recording, this is simply too long, as recorder.c defines a MAXBROKENTIMEOUT of 30 seconds. VDR's recorder considers a stream to be broken when it doesn't receive a PES packet for that time. So I'd suggest doubling this timeout.
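The interaction can be illustrated with a hedged sketch of the recorder's broken-stream watchdog (the actual logic in recorder.c is more involved, and the emergency-exit handling a broken stream triggers is omitted here):

```cpp
#include <cassert>

// Sketch of the recorder's broken-stream watchdog, modeled on
// MAXBROKENTIMEOUT in recorder.c.
const long MAXBROKENTIMEOUT = 30000; // ms without a PES packet => stream broken

// 'nowMs' is the current time, 'lastPacketMs' when the last PES packet arrived.
bool StreamBroken(long nowMs, long lastPacketMs)
{
  return nowMs - lastPacketMs > MAXBROKENTIMEOUT;
}
```

With the rotor worst case above, StreamBroken(37000, 0) is already true even though the dish is merely still on its way, which is why doubling the timeout is suggested.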
I further assume that you've disabled EPG scans, as your dish would otherwise start moving like mad. The reason is that VDR switches to a different transponder every 10 seconds, which is too fast when your dish needs up to 19 seconds for positioning. I didn't have a closer look into this code, so the transponder list used for EPG scanning might not even be sorted by satellite position, which lets the dish move more often -- but this is just a guess.
- Does vdr care or know anything about a rotor setup where the channel
isn't always present the moment the DiSEqC commands are sent?
From what I wrote above, I don't think that VDR cares about a rotor setup.
When I contributed the code, I bought a USALS-compatible DiSEqC 1.2 rotor just for testing and returned it afterwards. I didn't put any rotor-specific DiSEqC commands into VDR's diseqc.conf but used the rotor-plugin instead. Retuning while the rotor was moving didn't seem to disturb the rotor, but maybe this was an effect of not putting rotor-specific commands into diseqc.conf.
- There has been some talk in the past about refactoring this process
some; do we think the current approach is the best approach? Does vdr-1.5.x plan to offer new and improved tuning algorithms?
I don't know whether Klaus (or maybe somebody else) has already a concept of how things should be changed.
Bye.
hi,
Reinhard Nissl writes:
it will take 37000+ ms until VDR receives the stream. In the case of a recording, this is simply too long, as recorder.c defines a MAXBROKENTIMEOUT of 30 seconds. VDR's recorder considers a stream to be broken when it doesn't receive a PES packet for that time. So I'd suggest doubling this timeout.
Do I then understand correctly that defining MAXBROKENTIMEOUT as, for example, 1 hour would fix the problem of broken recordings due to expired keys (so that VDR would actually be patient enough to wait until the stream contains the right authorization information for the Conax module, the stream would start, and the vicious restarting cycle would disappear)?
yours, Jouni
Hi,
Jouni Karvo wrote:
it will take 37000+ ms until VDR receives the stream. In the case of a recording, this is simply too long, as recorder.c defines a MAXBROKENTIMEOUT of 30 seconds. VDR's recorder considers a stream to be broken when it doesn't receive a PES packet for that time. So I'd suggest doubling this timeout.
Do I then understand correctly that defining MAXBROKENTIMEOUT as, for example, 1 hour would fix the problem of broken recordings due to expired keys (so that VDR would actually be patient enough to wait until the stream contains the right authorization information for the Conax module, the stream would start, and the vicious restarting cycle would disappear)?
The use of MAXBROKENTIMEOUT is a last resort for getting a recording recorded when, for example, the driver or hardware is in a state where only reloading the driver can help. Prior to 1.3.x, even a lost DiSEqC message while tuning could lead to such a situation, where the recorder didn't see any input.
Increasing MAXBROKENTIMEOUT might help in your case, but I would be careful about simply setting it to 1 hour. Consider the case where your hardware/driver runs into trouble: then you would lose 1 hour of a recording. Luca Olivetti posted a patch to this thread which initially uses a 10 times higher MAXBROKENTIMEOUT. Maybe this could be an acceptable solution for you, too.
Recently there was a discussion in another thread on this ML concerning the restarting cycle in bad weather conditions, especially when you have more than one recording running on different devices or when you are going to replay a recording. Currently I have no idea whether there should be a rather complex detection logic for real driver issues (given that such a detection logic is feasible) or whether the current detection should simply be dropped. But that's off topic in this thread.
Bye.
hi,
I agree this is off-topic... (but I hope it's on topic for the mailing list anyway)
Reinhard Nissl writes:
The use of MAXBROKENTIMEOUT is a last resort for getting a recording recorded when, for example, the driver or hardware is in a state where only reloading the driver can help. Prior to 1.3.x, even a lost DiSEqC message while tuning could lead to such a situation, where the recorder didn't see any input.
I understand that, and I also understand that some people have unstable hardware/drivers. I just wonder if this is the right way to deal with it. As I said in an earlier post, I have removed the driver reloading stuff from my runvdr script and have had no problems whatsoever.
Increasing MAXBROKENTIMEOUT might help in your case, but I would be careful about simply setting it to 1 hour. Consider the case where your hardware/driver runs into trouble: then you would lose 1 hour of a recording. Luca Olivetti posted a patch to this thread which initially uses a 10 times higher MAXBROKENTIMEOUT. Maybe this could be an acceptable solution for you, too.
The problem is that it can take quite a long time before there is a new authorization packet in the programme stream. As far as I understand, they are customer (or smart card) specific and tell the card which streams it is allowed to decrypt, and naturally it is not very useful to spend a very high percentage of the stream on these authorization messages.
Another solution that has come to my mind is a patch that would tune free cards to the muxes that contain the CA stuff. This way they could receive the authorization information and could have fresh key info whenever it changes, instead of just trying their luck when they should start a recording. But I do not know whether this would work, and it would not be a very beautiful solution in my mind.
Btw., I concluded from this mailing list a couple of weeks ago that I am not the only one with this CA problem. Of course I might have drawn the wrong conclusion, though.
And, my current solution is not to record from encrypted channels. It works quite well.
Recently there was a discussion in another thread on this ML concerning the restarting cycle in bad weather conditions, especially when you have more than one recording running on different devices or when you are going to replay a recording. Currently I have no idea whether there should be a rather complex detection logic for real driver issues (given that such a detection logic is feasible) or whether the current detection should simply be dropped. But that's off topic in this thread.
I would guess that if your driver hangs it might be difficult to distinguish from the case of just waiting for the worst shower to pass or the encryption authorization to arrive.
yours, Jouni
Stone wrote:
- Does vdr care or know anything about a rotor setup where the channel
isn't always present the moment the DiSEqC commands are sent?
Well, vdr isn't aware that the dish is moving, so it doesn't wait for it. This isn't a big problem for live viewing, but it is a potential problem for recordings (it could cause an emergency shutdown). My stopgap measure has been to increase the timeout, but only at the beginning of a recording:
http://ventoso.org/luca/vdr/patches/steerable-vdr-1.3.32.diff
The ideal solution would be for vdr to wait for the dish, maybe querying the plugin(s) responsible for moving it.
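The idea behind the stopgap could be sketched like this (illustrative names, not the actual code from the diff above, and the 30000 ms startup value is an assumption):

```cpp
#include <cassert>

// Sketch of the stopgap: use a generous tune timeout only while a
// recording is starting up (when the dish may still be moving), and
// the normal short timeout for ordinary zapping.
const int TUNE_TIMEOUT = 9000;          // ms, normal zapping
const int STARTUP_TUNE_TIMEOUT = 30000; // ms, assumed value for a moving dish

int EffectiveTuneTimeout(bool recordingStarting)
{
  return recordingStarting ? STARTUP_TUNE_TIMEOUT : TUNE_TIMEOUT;
}
```

This keeps zapping responsive while giving a recording enough slack for the dish to arrive.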
Bye