adsl line attenuation vs sync speed

Discussion in 'Networking, Telephony & Internet' started by connection, May 10, 2009.

  1. connection

    connection Member

    Joined:
    Mar 10, 2005
    Messages:
    373
    Location:
    in a house.
    Hi guys

I've posted this on Whirlpool, but well... Whirlpool is Whirlpool... can someone have a look at this please:

So the stats below say my attenuation is 21 (reported by the modem, via telnet)... so I went to that adsl2exchanges.com.au site to check, and it says an estimated attenuation of 18... so that's fine, the results are similar, great.

However, according to both that website and Internode itself, even if I take my attenuation as 23, their graph says I should be good for 18 Mbit/s... yet according to my modem I'm currently syncing at 10.8 Mbit/s... WTF

    download tests from internode servers
    download: 1.0Mbyte/s

    upload test from maxing out 5 torrents
    upload: 67kbyte/s

hardware: Netgear DG632 with V3.6.0C_22 firmware. I'm aware it's not the exact latest, but I've had problems with this modem before and I'm wary of "fiddling with it".

edit: I updated the firmware to the latest version, and the sync speeds are as follows:

    previously: 10.8Mbit/s download
    now: 12.2Mbit/s download

    upload speed has remained constant.
    attenuation figures have remained constant.


Can anyone explain what's going on? Might it be my shitbox DG632 modem, or something else (line related? but wouldn't the attenuation figures show this?)

    cheers guys

    -------------------------------
    [DSL Modem Stats]
    US Connection Rate: 620 DS Connection Rate: 10801
    DS Line Attenuation: 21 DS Margin: 9
    US Line Attenuation: 13 US Margin: 13


    -------------------------------

and the adsl2exchanges.com.au result

    -------------------------------
    Your Results
    Line of Sight: 793 m
    Estimated Cable: 1269 m
    Estimated Attenuation: 18
    Estimated Maximum Speed: 19000

note: seems pretty accurate as far as distances etc go, it picked the right exchange and everything...

    -----------------------

    hardware:
    netgear DG632 - no phones or anything connected to the telephone line except this adsl modem.

    internet plan:
Internode Naked ADSL2+ 25GB
     
  2. martino

    martino Member

    Joined:
    Mar 8, 2005
    Messages:
    1,225
Could certainly be your modem - test with a borrowed modem to see if the sync rate improves. Some modem chipsets pair better with certain DSLAMs.

Is your modem set to the right modulation type (ADSL2+)?

Has your ISP applied a stability profile to the service? This artificially increases the noise margin and lowers the sync rate.

I'm pretty sure Internode users can log on to their account and toggle "extreme" speeds themselves, no?
     
    Last edited: May 10, 2009
  3. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    8,469
    Location:
    Melbourne
    OK, let's understand a few things about DSL. firstly, I want you to repeat this sentence over and over until it sinks in:

    DSL bitrate is not a linear function of distance, and cannot be determined by a simple two dimensional graph.

    don't read on until this fact has been assimilated.

there is a statistical relationship between the two, but like all statistics it tells you little about any individual case, and the statistical average alone does a poor job of showing how much the sample set varies around that average.

now, repeat this next sentence until it's embedded; kind of like a BIOS flash...

    Attenuation has very little to do with DSL performance.

yep, I mean it. it does at extreme attenuation values, like >70dB, but since other factors will have long since killed the DSL stone dead by that point, attenuation is pretty much irrelevant. so why does everyone focus on it? a few reasons:

    (1) it's something their modems can measure
(2) to a rough degree, attenuation and bitrate tend to have a loose relationship of occurrence - which is very different from saying that the latter is dependent on the former
    (3) they don't understand otherwise.

what controls DSL bitrate, in the absence of extreme attenuation, is signal to noise ratio - SNR. SNR is everything.

DSL works by segmenting the frequency spectrum of the line into (roughly) 4.3KHz 'blocks' or subtones, then using an implementation of QAM to transmit a PSK trellis coded analogue signal matrix at a range of frequencies centred around the middle of the subtone. simplified, DSL works like a dialup modem multiplied several times over, with the individual signals segregated by using different ranges of frequency, or tonal pitch - think of a choir: the end result is a "wall of sound" style signal, but it's made up of individual signals (voices) delivered at different frequencies.

inside each subtone, there is a SNR to be dealt with. DSL uses analogue signalling, so it needs to maintain a certain SNR to prevent each signal waveform from being degraded to the point where it results in rubbish data once it's quantised back to a digital value at the other end of the line. to cope with this, modems start at the best SNRs in each subtone, which means the highest transmit power - relative to the noise floor present at that frequency. they then use progressively lower power levels to obtain additional QAM payload points, until the SNR becomes too marginal to continue. since the maximum transmit level is automatically determined, the variable is the noise floor - this effectively determines the maximum payload of the subtone.

added together, the sum payload of all the subtones is the bitrate of the line, so better starting SNRs (which generally reflect lower line noise levels) mean faster bitrates. more noise = lower SNR = lower bitrate.
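to put rough numbers on it, here's a back-of-envelope sketch in Python. everything in it is illustrative - the ~9.75dB SNR gap, 3dB coding gain, 6dB target margin, 4000 symbols/sec and the made-up per-tone SNRs are textbook-style ballpark figures, not anything read off a real line or chipset:

```python
import math

SYMBOL_RATE = 4000        # DMT symbols per second
MAX_BITS_PER_TONE = 15    # per-subtone cap
SNR_GAP_DB = 9.75         # rough uncoded SNR gap for a low error rate (illustrative)

def tone_bits(snr_db, target_margin_db=6.0, coding_gain_db=3.0):
    """How many bits one ~4.3KHz subtone can carry at a given measured SNR."""
    usable_db = snr_db - (SNR_GAP_DB + target_margin_db - coding_gain_db)
    if usable_db <= 0:
        return 0                                  # subtone not commissioned
    return min(int(math.log2(1 + 10 ** (usable_db / 10))), MAX_BITS_PER_TONE)

def sync_rate_bps(per_tone_snr_db):
    """Sum of per-subtone payloads times the symbol rate = line sync rate."""
    return SYMBOL_RATE * sum(tone_bits(s) for s in per_tone_snr_db)

# made-up line: 400 usable downstream tones, SNR tailing off towards the high end
snrs = [45 - i * 0.08 for i in range(400)]
print(sync_rate_bps(snrs) / 1e6, "Mbit/s")
```

push the noise floor up (lower every per-tone SNR) and the total falls; quieten the line and it rises. that's the whole game.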

    beyond this, DSL modems tend to "load" individual subtones to a target SNR to obtain maximum bitrate, so SNR tends to settle to that target - which is set by the ISP who programs the DSL port. so even SNR isn't much good to us in determining line performance. what you really need is what is called a passive PSD/SELT/QLN test (depending on who makes the instrumentation), which can actually measure the noise floor across the tonal spectrum of the line without a DSL signal present, and calculate the expected bitrate from that.

    the other problem with SNR is that DSL modems calculate SNR as an average of the SNRs present in the subtones that are actually carrying a payload. if the noise floor in a given subtone is so high (bad) that that subtone ends up with no payload at all (not commissioned) then it's not included in the calculations, which gives a somewhat false impression of the state of the line and its performance.
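a quick toy illustration of that reporting quirk - the numbers are invented, and averaging dB figures like this is deliberately naive, it's just to show the dead subtones vanishing from the stat:

```python
# hypothetical per-tone SNRs in dB; the last 50 tones are drowned in noise
tone_snr_db = [30.0] * 200 + [-5.0] * 50

loaded = [s for s in tone_snr_db if s > 0]       # tones that actually got a payload
print(sum(loaded) / len(loaded))                 # 30.0 - what the modem reports
print(sum(tone_snr_db) / len(tone_snr_db))       # 23.0 - with the dead tones included
```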

in your case, the ISP appears to have jacked your target SNRs up fairly high, presumably in the name of stability at the expense of some performance:

as martino mentioned, your ISP may call this a "stable profile" or similar - a less stable profile will mean a lower SNR target, which allows the modems to load more payload per subtone, increasing performance. the downside is that if your line suffers from variable noise levels, it may lead to some unstable performance, but that's another topic. :)
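as a rough feel for what the trade-off costs - purely back-of-envelope, using the usual ~3dB-per-bit-per-tone rule of thumb and an assumed 400 loaded downstream tones (both illustrative, every line is different):

```python
loaded_tones = 400        # assumed for illustration - varies per line
symbol_rate = 4000        # DMT symbols per second
extra_margin_db = 3       # each ~3dB of extra target margin costs ~1 bit per loaded tone

lost_bps = (extra_margin_db / 3) * loaded_tones * symbol_rate
print(lost_bps / 1e6, "Mbit/s given up per 3dB of extra margin")   # ~1.6
```

so a "stable" profile a few dB above the default can easily account for a couple of Mbit/s of sync.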

    OK, to close the loop - so why does bitrate tend to drop as attenuation increases? simple - because noise levels tend to increase as line length increases. nothing more. it's quite possible to have a long line that performs way above what our aforementioned statistical-likelihood graph "says" it "should", as long as it's a very quiet line. a short line can also perform like crap if it's noisy. what causes noise? everything.

    • joints in the line
    • some line constructions that aren't terribly well suited to the higher frequencies DSL uses, or different line gauges
• interference from external sources such as vehicle traffic, overhead lighting, power cabling, radio emissions and other industrial sources
    • interference from other lines in the same cable bundle carrying signals at the same time, called crosstalk. this includes other DSL lines and lines carrying digital signals, such as ISDN lines. yes, your neighbour's DSL service actually causes yours to degrade to a degree.
• interference from other lines in the same cable bundle carrying garbage signal from bad fax machines, digital set top boxes, flaky power supplies, light dimmer switches, bad fluoro light ballasts, plasma cutters, christmas lights, vinyl welders etc. these are all real-world examples I have seen.

    so, the upshot is - your DSL is simply performing as well as it can for the circumstances, which you really can't see. given there is nothing else except the modem on the line, about all you can address is:

(1) make sure the line cord is short and of good quality (ie doesn't introduce line noise, which we now know lowers subtonal payload and thus bitrate)
(2) ensure there are no star wired sockets inside your house. unterminated sockets create impedance mismatches that reflect the signal, which causes noise.... which drops performance. if there are unterminated sockets, either have an electrician disconnect them, or perhaps surprisingly, have a central filter fitted! yes, I know that conventional "wisdom" is that you only need filters for phones, right? nope. in this case, if you filter the star wired sockets off before the wiring even starts, no DSL signal ever gets injected into that wiring, so no reflection and noise occurs. hopefully you don't have star wiring though, it's generally only present in old or sloppily wired houses.

beyond this, you simply have a line that performs lower than what that simple attenuation-to-speed graph suggests is the average, and my suggestion is to forget it - it is what it is, and there is nothing much you can do about it.
     
    Last edited: May 11, 2009
  4. timace

    timace Member

    Joined:
    Aug 6, 2003
    Messages:
    1,769
    Location:
    Sydney
    Brilliant post. Maybe we should make this thread a sticky.
     
  5. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    8,469
    Location:
    Melbourne
    if it happens, I am happy to flesh out the explanation a bit more, maybe include a few diagrams etc.
     
  6. itsmydamnation

    itsmydamnation Member

    Joined:
    Apr 30, 2003
    Messages:
    9,871
    Location:
    Canberra
I think it is a very good post, and I also think that if you do flesh it out a bit more it should be a sticky! :thumbup:

edit: I'm over 5 km away from my exchange and I get around 6.5-7.5 Mbit, but I am using Cisco gear with the service internal command set, which allows you some control over what SNR is set (mine is around 2 dB down, 8 dB up)
     
    Last edited: May 11, 2009
  7. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    8,469
    Location:
    Melbourne
    I was just thinking about something.... particularly relating to graphs like this: http://www.internode.on.net/media/images/internode-adsl2-dist07.jpg

there's nothing wrong with trying to use these graphs to determine an expected speed, and if your result is near where it should be, then great. my problems with these graphs include the following:

    • most people are unable to determine the length of their line with any real accuracy
    • these graphs assume that all phone lines will have the same attenuation rate for a given distance. this is not the case as it ignores issues such as different cable gauges
    • they make no allowance for different cable constructions, which affect SNRs, but not attenuation - so performance varies widely although attenuation does not
    • they make no allowance for SNRs at all, and assume that noise is a perfectly regular function of attenuation (which they then express as distance)

    I realise that a lot of people are curious, and hey - you have to have some sort of rule of thumb to go by, at least most people use about the same graph so the estimation is repeatable! my point is that DSL transmission is a rather complex subject, and simplifying it that far introduces a lot of assumptions which in turn cause errors in the expected results. so by all means use the graphs, but understand the assumptions on which they are based, and put the results in context - they are not an absolute by any means.
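even the step from attenuation back to a "distance" bakes in a cable gauge assumption. as an illustration only - the loss-per-km figures below are the rough ballpark numbers that get quoted around the traps, not gospel:

```python
attenuation_db = 21   # the OP's reported downstream attenuation

# rough, commonly quoted attenuation rates - illustrative assumptions only
db_per_km = {"0.40 mm copper": 13.8, "0.50 mm copper": 10.5}

for gauge, loss in db_per_km.items():
    print(f"{gauge}: implied line length ~{attenuation_db / loss:.2f} km")
# the same 21dB of attenuation gives two quite different 'distances',
# and neither number says anything about how noisy the line actually is
```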
     
  8. Mad Mike

    Mad Mike Member

    Joined:
    Jun 30, 2001
    Messages:
    777
    Location:
    Melbourne
    Great post Caspian, you should post the info in the wiki
     
  9. Whisper

    Whisper Member

    Joined:
    Jun 27, 2001
    Messages:
    8,297
    Location:
    Sydney
    Definitely :leet:
     
  10. TheWedgie

    TheWedgie Salty

    Joined:
    Jun 16, 2002
    Messages:
    2,756
    Location:
    South Australia
    Please do - or provide stuff for someone else to do so.

    I'm hoping to get some time to move most of the stickies in this forum to the Wiki, and have a single sticky with links... it's starting to get a bit clogged up the top end.

    -Nick
     
  11. Mattrix

    Mattrix Member

    Joined:
    Aug 11, 2010
    Messages:
    7
    Location:
    Melbourne
    Hi caspian,
I realise this is an old thread, but it doesn't seem to have been replaced by a sticky.

Can you elaborate a bit more on the training and profiles in Oz?

I had always assumed that each new train was a "fair negotiation" (neither end is biased by prior trains) between the modem and the DSLAM. Presumably both have a target Noise Margin, but how do they decide the eventual figure - can one (or both) ends override the other? This quote seems to suggest the ISP has more power in the negotiation.

As I understand it, for BT in the UK, if the DSLAM sees lots of retrains it decides the line is unstable and will only train to a higher margin. It will then only train to a lower margin after a number of days of holding sync, ie the DSLAM remembers prior trains. Do our DSLAMs have memory?

Some modems seem to allow tweaking the Noise Margin. Do they have to "lie" to the DSLAM to achieve this (ie the true SNR is 7, but tell the DSLAM it's 14)? How does this "lie" affect later trains?

    Lastly, does the DG632 (of the OP) provide some way to control the Noise margin it trains to (tell it to "lie")?
     
    Last edited: Aug 11, 2010
  12. biatch

    biatch Member

    Joined:
    Jun 18, 2002
    Messages:
    1,679
    Location:
    North Brisbane
    But the website said he's supposed to get a better speed? :confused:
     
  13. martino

    martino Member

    Joined:
    Mar 8, 2005
    Messages:
    1,225
    The ISP will determine the DSLAM profile to be used for each customer depending on the service being provided.

In Australia we have a mix of "fixed speed" ADSL (Telstra 1.5 Mbps) where the SNR is allowed to fluctuate, and also "high speed" best-effort ADSL where the profile will target a specific SNR (generally 6 dB) and the sync speed fluctuates.

    This page covers it in a little more detail:

    http://whirlpool.net.au/wiki/?tag=ADSL_Theory_SNR
     
  14. Pugs

    Pugs Member

    Joined:
    Jan 20, 2008
    Messages:
    9,000
    Location:
    Redwood Park, SA
  15. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    8,469
    Location:
    Melbourne
    during a train, the modems sample each (roughly) 4.3KHz subtone "bin" for local noise level. they then determine how much bitload can be applied to that subtone based on:

    - local noise (which is just a power level induced in the cable by external sources)
    - the target SNR the port has been instructed to aim for.

SNR is just the noise voltage induced in the line expressed as a differential against the signal voltage the modems can produce. the resulting voltage differential is available for signalling via phase shift keyed quadrature amplitude modulation - a fancy way of describing the encoding of binary information via voltage variations in a signal level, with a clocking signal acting as a multiplier to allow greater bandwidth.

    this happens every 4.3KHz across the frequency spectrum the modems can generate. some subtones will not be usable at all if SNR is <0dB (ie noise is greater than signal). above this a payload capability will be derived.

    when this frequency sweep analysis is complete the modems proceed to showtime, where data transmission is possible.

you have the gist of it. it is a "fair negotiation" save for the fact that the modems retain a worst-case knowledge of line noise. they "know" that if they have previously experienced a line noise level of xxdBm then it may well occur again, so they reset their SNR target (signal to noise ratio) to be *above* that worst case line noise level. to not do so would forever consign a connection to instability. yes, this comes at the expense of a deteriorating cumulative payload (bitrate), but remember that while DSL is designed to seek the optimal bitrate, it's also designed to self-seek stability. power-cycling the client modem clears this memory, but ensures that a connection in a dynamic line noise environment will be unstable until the modem retrains to a realistic worst-case scenario. (unless the initial train happens in a burst of noise, anyway.)
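a toy model of that behaviour - nothing to do with any real chipset's firmware, just the logic: the port's target margin is a floor, the modem remembers the worst noise it has seen, and only a power cycle wipes that memory:

```python
class ToyModem:
    """Crude sketch of the worst-case-noise memory described above."""

    def __init__(self, isp_target_margin_db=6.0):
        self.isp_target_margin_db = isp_target_margin_db
        self.worst_noise_dbm = None          # cleared by a power cycle

    def train(self, noise_dbm, signal_dbm=-20.0):
        # remember the worst noise floor ever observed on this line
        if self.worst_noise_dbm is None or noise_dbm > self.worst_noise_dbm:
            self.worst_noise_dbm = noise_dbm
        # margin is judged against the worst case, never below the ISP's target
        usable_snr_db = (signal_dbm - self.worst_noise_dbm) - self.isp_target_margin_db
        return max(usable_snr_db, 0.0)       # crude stand-in for "how much payload fits"

    def power_cycle(self):
        self.worst_noise_dbm = None          # the memory gets erased

m = ToyModem()
print(m.train(noise_dbm=-60))   # quiet line: 34.0 dB of usable headroom
print(m.train(noise_dbm=-45))   # noise burst: trains lower (19.0)
print(m.train(noise_dbm=-60))   # quiet again, but the worst case is remembered (19.0)
m.power_cycle()
print(m.train(noise_dbm=-60))   # back to the optimistic figure (34.0)
```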

    indeed, the ISP has "the power". they set the target SNR the host modem will accept. users can instruct their modem to seek a higher target SNR, but not lower.

    line event retrains are triggered by two things:

    - usable bitrate deteriorating below a threshold level
    - consecutive errored seconds exceeding a threshold level

there's no magic timer value other than this. I don't know how BT do their DSL, perhaps they have the ability to build in a time-based threshold before the worst-case line noise observation is reset. we certainly don't, and the idea is of debatable value. on one hand it could be argued that a single transient event can't trigger a significant reduction in bitrate, on the other it means the modems take longer to achieve stability in an environment with varying noise levels. my point of view, for the record, is that the former (the Australian implementation, assuming there is indeed a variation) is superior, as the user still retains the ability to "reset" the process with a simple powercycle. the "UK model" means the user must endure an extended period of instability before achieving stability - something to be experienced every time the modem loses power.
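the two line-event triggers, as a bare sketch (the thresholds are invented - the real ones live in the DSLAM/modem firmware):

```python
def should_retrain(current_rate_bps, min_rate_bps, errored_second_history, max_consecutive_es=10):
    """Retrain if the usable bitrate has sunk below a threshold, or if the line
    has clocked up too many consecutive errored seconds."""
    if current_rate_bps < min_rate_bps:
        return True
    consecutive = 0
    for errored in errored_second_history:      # one flag per second, most recent last
        consecutive = consecutive + 1 if errored else 0
    return consecutive >= max_consecutive_es

# example: the rate is fine, but the last 10 seconds were all errored -> retrain
history = [False] * 50 + [True] * 10
print(should_retrain(11_000_000, 8_000_000, history))   # True
```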

with all said and done, the host modem (the port) is only aware of the downstream SNR as reported by the client modem. if the client spoofs the reported SNR then the port loses control of the feedback driven stability process. in those conditions I suppose the port will retrain at the same target SNR as before, as it's not been told otherwise.

    it's up to the user. do they want stability, or do they want speed? in a varying noise environment you can't have both. DSL isn't engineered that way.

    I have no idea, sorry. I believe Cisco modems do, and perhaps some Billions. I don't see it as a terribly good idea myself, but then again my point of view of a data connection does tend more towards stability than chasing performance at the expense of it.

    I'd respond to that one with a :rolleyes: combined with an eldritch scream in the background, but I know better of you, biatch. ;)
     
  16. Mattrix

    Mattrix Member

    Joined:
    Aug 11, 2010
    Messages:
    7
    Location:
    Melbourne
    Thanks Martino.

Let's talk "best effort ADSL", although I imagine this also applies to fixed-speed ADSL if you can't maintain the fixed speed (eg 8 Mb/s).

    I don't think I understood the whirlpool site.
It seems to suggest that power is adjusted by the modem to maintain a constant SNR margin (during a session) with a fluctuating speed, which is not what I see.
And then later that as the modem increases power, to increase SNR, you lose speed; Q: increased gain but limited max power gives less dynamic range?
This is not the way I understood it. :confused:

    In terms of profiles/noise margin targets it seems to say these are statically set by the ISP, the modem has nought to do with it, the modem just reports what it sees. So what does tweaking the modem do?

Edit: I wrote this before seeing caspian's reply - which I'm still digesting.
     
    Last edited: Aug 11, 2010
  17. Mattrix

    Mattrix Member

    Joined:
    Aug 11, 2010
    Messages:
    7
    Location:
    Melbourne
    Thanks for the detailed reply caspian.
A couple of questions:

Once a "bit load" has been agreed, is that maintained until the next retrain, with errors being handled by retransmissions? Or is something done dynamically during the session to improve reliability?

    If I understand you, during a session both ends should assess SNR and then once a retrain is triggered the new target margin should be the higher of the 2 ends.

You assume I am chasing performance. I think it's a great idea - I'd like an extra 3 or 4 dB of margin.

I'm not sure about the BT situation, but if it is as I thought, it actually favours stability, not instability, doesn't it?
     
  18. martino

    martino Member

    Joined:
    Mar 8, 2005
    Messages:
    1,225
Bitswapping between tones can be performed in real time to overcome induced noise/crosstalk temporarily affecting certain frequencies:

    http://www.kitz.co.uk/adsl/adsl_technology.htm#bit_swapping

    (That whole page is a good read.. Maybe you've already come across it)
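Very loosely, the idea is something like this (a toy sketch, not any chipset's actual algorithm - one bit moved off a tone buys it roughly 3 dB of margin back):

```python
def bit_swap(bits, margins_db, bit_cost_db=3.0, min_margin_db=3.0):
    """Move one bit from the tone with the worst margin to the tone with the
    most spare margin, keeping the overall sync rate unchanged."""
    victim = min(range(len(margins_db)), key=lambda i: margins_db[i])
    donor = max(range(len(margins_db)), key=lambda i: margins_db[i])
    if bits[victim] == 0 or margins_db[donor] - bit_cost_db < min_margin_db:
        return bits, margins_db                 # nowhere sensible to move the bit
    bits[victim] -= 1
    margins_db[victim] += bit_cost_db           # shedding a bit relieves the struggling tone
    bits[donor] += 1
    margins_db[donor] -= bit_cost_db            # the healthy tone absorbs it
    return bits, margins_db

# tone 2 has been hit by crosstalk; tone 0 has margin to spare
bits, margins = bit_swap([8, 6, 7], [12.0, 6.0, 1.0])
print(bits, margins)   # [9, 6, 6] [9.0, 6.0, 4.0] - same total bits, no retrain needed
```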

    Your modem's stats would give an idea.. What do they report?
     
  19. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    8,469
    Location:
    Melbourne
it's dynamic. error handling is done by correction at the transmission layer - ATM doesn't do resends, so resends for unrepairable errors are left to TCP. error handling is designed to cope with burst or impulsive error triggers, not deterioration of the signalling medium (relative to that observed at showtime) over time.

    bear in mind that SNR is not being assessed - what is being assessed is the power spectral density of the line noise in each subtone. the modems then set their minimal transmit power spectral density to the target SNR dictated by the device with the highest target SNR, be it modem or port (normally the port).

the result is that SNR should, other factors aside, always remain the same. that's a software setting, not the result of analysis. what's done during a train is to analyse what power spectral density ratio (signal versus noise) is available, which becomes payload bitrate once the target SNR is subtracted from the usable payload (ie the >0dB PSD ratio).

I didn't specifically say you were, but to be honest - some people fixate on it. that's why ISPs have a range of profiles, generally ranging from "stable" to "gamer". might as well label them "old fogey" versus "|337", eh?

    target SNR is there for a reason, which is that DSL is an analogue technology transmitted across an infrastructure never designed for it, and which will experience varying noise levels. the same goes for other stability measures such as interleaving, but it would just kill some people if their WoW ping went up 12ms.

    these people are normally the first ones complaining to their ISP that their connection drops out inconveniently, and the ISPs tell them to turn the damn thing back to normal design settings.

    I have doubts that BT actually do this; DSL is an international standard, it would be rather unusual to deviate from it to such a radical degree. if anyone actually has any documentation to support the position then I will read it with interest.

however, no - such a solution is actually fundamentally *less* stable. it means the connection would have to experience multiple dropouts before it "learned" to increase its worst-case noise floor and set the target SNR on top of that. the result would be that it takes much longer for a connection to achieve stability in the presence of varying line noise.

    the only "advantage" is that the connection would not train down so easily, so a relatively intermittent noise transient wouldn't result in a deterioration of bitrate - multiple events would be needed. that's unstability in the name of performance.

    my objection to this is that the user therefore can't expect to enjoy an automatically stabilising connection without excess delay, and don't forget the whole process starts again every time the client modem loses power. whereas under the normal model, the modem will attain stability relatively quickly, and for each noise event will forever avoid that same situation again - only a worse event will trigger a retrain. the downside to this is the modem will train down to a lower bitrate more quickly, but hey - the user can control this with a powercycle if they really want to.

as I said though, for that very reason - the fundamental instability it introduces - I would be somewhat surprised to learn that any major telco has deviated from international standards.
     
  20. Mattrix

    Mattrix Member

    Joined:
    Aug 11, 2010
    Messages:
    7
    Location:
    Melbourne
    but not at the physical DSL level by varying sub-channel power, or altering the bit-load etc (within a session)?

But isn't this the same in the Australian system? If the ISP's (static) profile is incorrect for your line, after each power cycle you have multiple retrains before settling (actually it's much worse than just dropouts).
     
    Last edited: Aug 12, 2010
