
Broadband Performance Devices Generate Bad Data

Discussion in 'Networking, Telephony & Internet' started by SiliconAngel, Jun 12, 2019.

  1. SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    639
    Location:
    Perth, Western Australia
    The Broadband Monitoring Programme seeks to provide real-world data sampled from consumer Internet services so that Retail Service Provider (RSP) products can be compared by an independent body (the ACCC) and the results are published publicly. The ACCC has commissioned SamKnows to conduct this data collection using ‘whitebox’ devices that are pre-configured and are simply connected to the end user’s router.

    As one of the participants of this programme, I quickly noticed the data being collected and reported by my whitebox showed my connection performance was implausibly low. I performed network analysis on the traffic to and from the whitebox on my network using wireshark, then inspected the route to each of the servers the device was communicating with. The results suggest the whitebox is using a method to measure network performance that cannot possibly be accurate or meaningful within the defined context – it’s not measuring the Internet performance between my premises and the RSP, it’s measuring my connection’s usable bandwidth to servers on the other side of the planet.
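To give a rough idea of the kind of analysis I did on the capture, here's a minimal sketch (not my actual tooling - server names, RTT figures and the classification thresholds are all hypothetical) that sorts a whitebox's test servers into likely-domestic vs likely-international buckets by median round-trip time:

```python
# Rough sketch: classify a whitebox's test servers as likely-domestic or
# likely-international from round-trip-time samples (all values hypothetical).
# From Perth, a median RTT under ~60 ms is plausibly domestic; 150 ms or more
# almost certainly means the path crosses international backhaul.

def classify_servers(rtt_samples_ms, domestic_max_ms=60.0, international_min_ms=150.0):
    """Map each server to 'domestic', 'international' or 'unclear' by median RTT."""
    result = {}
    for server, samples in rtt_samples_ms.items():
        ordered = sorted(samples)
        median = ordered[len(ordered) // 2]
        if median <= domestic_max_ms:
            result[server] = "domestic"
        elif median >= international_min_ms:
            result[server] = "international"
        else:
            result[server] = "unclear"
    return result

# Hypothetical per-server RTTs pulled out of a wireshark capture:
samples = {
    "syd-test-01": [48.2, 51.0, 49.7],
    "lon-test-07": [332.5, 340.1, 329.8],
}
print(classify_servers(samples))
```

In my capture, the overwhelming majority of test traffic went to servers that would land in the 'international' bucket under any sensible threshold.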

If SamKnows have configured all of their whitebox devices like this, the entire Broadband Monitoring Programme's data is useless. But even if this is only true for some of the participating devices, it still poisons the data set, making the pool as a whole unreliable.

    I've detailed my methodology, what I discovered from inspecting the device's traffic, and outlined my conclusions on my blog. I've passed this on to my RSP, who are investigating further - I'm not the only one who has noticed these unusual results from the SamKnows whiteboxes.

I'd be interested to hear from anyone else who's part of the Monitoring Programme, or anyone who has any insight or feedback to add. I'm not an enterprise or carrier network engineer - I work with business networks and servers - so I'm more than happy to receive feedback :)

    In principle I believe such an independent monitoring programme is a great idea, which is why I offered to be part of it - if the whiteboxes can be reconfigured by a C&C server, then this could be fixed relatively easily. But it does need to be fixed, and there needs to be far more transparency and accountability from SamKnows, at the very least to the ACCC and similar bodies in other countries (even if they don't publicly disclose what's going on). If their 'whiteboxes' are just blackboxes where no one outside SamKnows knows what they do or how they're doing it, then their data simply can't be trusted. Which makes them a poor fit as a partner for such an endeavour.
     
  2. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    12,901
    Location:
    Melbourne
    of course it measures external network throughput... it's just a layer 3 device, it has no more magical access to your ISP than any other device you have in your house, like a smart TV. SamKnows mention how they test on their website. https://samknows.com/technology/test-servers

    nor do you want it reporting lower layer connectivity speeds, because the program is designed to report on user experience, not some theoretical measure. an analogy is measuring traffic congestion by actually measuring the time it takes to travel somewhere, and not by figuring out how fast you could theoretically get there based on your car's top speed. that means very little in terms of gauging your actual experience driving to work.

    there are separate parts of the program that collate lower level connectivity stats into the reporting, the SamKnows boxes are just one part of the overall solution.
     
  3. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,396
    Location:
    Canberra
    A personal throwback: https://forums.overclockers.com.au/threads/network-performance-visualisation-with-wireshark.1163292/

Whilst your tests are interesting, you haven't delivered a true A:B test demonstrating that there is a real delta. I would suggest ascertaining the test frequency of the whitebox and running a separate test methodology between those test intervals.

Once you can demonstrate the gap, you've got adequate empirical evidence. I would then suggest publishing your findings in a less biased manner - the tone in your blog post conveys far too much emotion.
     
  4. waltermitty

    waltermitty Member

    Joined:
    Feb 19, 2016
    Messages:
    1,565
    Location:
    BRISBANE
    what's the deal with this vs a ripe atlas probe?
     
  5. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    639
    Location:
    Perth, Western Australia
Thanks for your feedback caspian :) In terms of 'knowledge' of the network, the SamKnows device is designed specifically to measure end-point bandwidth. Your comparison with a Smart TV isn't really apt: a Smart TV couldn't care less about network topology or bandwidth - it will attempt whatever tasks you ask of it with no consideration for whether the network can support them. The SamKnows whitebox, by contrast, exists specifically to provide data on this, and nothing else.

In order for these devices to do this job and provide meaningful data, the assumptions made in their design all matter: not just the device itself, but the type of data it collects, how that data will be reported, and what the purpose of the reporting is. If we take it as a given that the goal was to measure retail customer endpoint bandwidth with a reasonable degree of accuracy, in a way that makes results readily comparable between domestic competitors, then it is entirely reasonable to expect the devices to determine the address of their locally relevant server and test against that. Or, if you don't want the devices to operate fully autonomously, you can have them communicate with a C&C server which configures them to test in the most relevant and meaningful way for their location.

    Look, I completely agree that the devices should test and perform measurements that are indicative of real-world user experience as much as possible. But it is not reasonable to present data that says a specific endpoint is limited to bandwidth 'x', when you're including data from measurements that are by their nature limited by factors outside the control of any Australian RSP, NBN Co or even the submarine cable operators. To use your analogy, you're trying to measure traffic congestion in a suburb in Perth by driving to Adelaide, Melbourne and Sydney, and then putting your car on a container ship to Europe, driving from Spain to Italy, then coming back the same way, and claiming the entire drive was directly relevant to traffic congestion in Cottesloe. Sure, you did the whole trip on actual roads in the same car on the same journey, but how relevant is the data you've collected to what we're actually trying to measure?

Let's be clear - if we're trying to measure the performance of ABB, TPG, Telstra, Optus & Vocus so that we can compare their networks, what happens on international backhaul is not useful. It also opens up all sorts of unpleasant possibilities for those carriers with control of or influence over traffic prioritisation on those links, so deliberately avoiding them prevents unfair interference. Does that mean the measurements might exclude potential real-world implications? Yes - it doesn't matter how fast your domestic connection is if another network operator is deliberately de-prioritising your service provider's traffic over international backhaul - your experience is going to be impacted. But your service provider has no control over that, and it is literally beyond the scope of the data the broadband monitoring survey is specifically interested in. If you want to try to understand international bandwidth performance per RSP, that's a different comparison for a different purpose.

    So, the ACCC's Broadband Monitoring Programme is specifically interested in endpoint bandwidth, for the purpose of comparing and contrasting the real-world performance of competing RSP products and networks. As I've just explained, to do this all traffic over international backhaul absolutely must be excluded. But there are other topology considerations. For example, what will a performance comparison that measures latency show for Perth premises testing against servers based in Melbourne or Sydney? Instead of 10ms of latency you'd see if you were in Sydney, all Perth endpoints would have around 50ms of latency. Not because they have 50ms of latency, but because that's the latency you get accessing servers nearly 4,000km away. How do you account for that? If some RSPs have a larger proportion of Perth customers their latency figures will look worse, not because their network is any worse but just because of the limitation of the testing methodology.
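As a back-of-envelope check on that ~50ms figure: physics alone accounts for most of it. Light in fibre travels at roughly two-thirds of c (~200,000 km/s), and Perth to Sydney is roughly 3,900km in a straight line (real fibre paths are longer, so this is a floor, not a prediction):

```python
# Theoretical minimum round-trip time over fibre between Perth and Sydney.
# Light in fibre propagates at roughly 2/3 c (~200,000 km/s); the distance
# is an approximate great-circle figure, and real cable routes run longer.

SPEED_IN_FIBRE_KM_S = 200_000
PERTH_SYDNEY_KM = 3_900  # approximate straight-line distance

def min_rtt_ms(distance_km, speed_km_s=SPEED_IN_FIBRE_KM_S):
    """Theoretical minimum round-trip time for a fibre path of this length."""
    return 2 * distance_km / speed_km_s * 1000

print(f"{min_rtt_ms(PERTH_SYDNEY_KM):.0f} ms")  # propagation alone adds ~39 ms
```

So a Perth endpoint testing against a Sydney server starts ~40ms behind a Sydney endpoint before a single router has added any queueing delay.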

But to be honest, if all we were talking about was a difference of 40ms in latency figures, we wouldn't be having this conversation, because I wouldn't be wasting my time. According to the SamKnows whitebox, my connection's latency figure is in the high 300s, and my connection's overall bandwidth is around 40% lower than the sustainable, real-world bandwidth I can reliably get between different domestic network operators. Sorry, that's just utter nonsense. The latency figure suggests that the majority of the data the device is using to determine this result is generated from testing overseas links. Looking into the network traffic, I have found that this absolutely appears to be the case. And I've made a clear argument why this absolutely should not be the case - it is bad, misleading data that should be excluded from the data set, because its inclusion results in a comparison that is such a long way from what we were trying to evaluate that it tells us nothing about the very thing we were trying to measure. It's like installing a seismometer next to a diesel generator operating 24/7 - it's a terrible idea that will never produce meaningful data.

    Can you elaborate on that? Because AFAIK the SamKnows devices are the only things feeding into the ACCC's dataset - in fact, SamKnows prepares the reports for the ACCC, who then use those to write their reports - the ACCC doesn't do any actual data collection or raw data analysis themselves. I could be wrong and I'm more than happy to be corrected, but that's all that I've read or anyone's told me.

    Again, even if this is the case, it also doesn't change the fact that the data that SamKnows is presenting to the ACCC is based on measurements that are not specifically relevant to the actual objective of the programme. Whether other factors are mixed in later doesn't change the reality that all the data from SamKnows is potentially tainted with the same misleading data caused by poor assumptions in their testing methodology, which makes it unreliable and untrustworthy. I am not a professional data scientist, but I've done work with data analysis and data cleansing and worked in scientific fields including research. If your assumptions are wrong, if you include data that hasn't been accounted for and isn't specifically relevant, you can't draw any useful or meaningful conclusions - you're just wasting your time. That's just fundamental, any first year science student could tell you that. I've presented data that demonstrates the data from the SamKnows box I have is not specifically measuring the bandwidth of my retail Internet service domestically. If it isn't, then its data is not relevant to the ACCC's programme, and it shouldn't be included. If all the SamKnows whiteboxes are operating in the same way, all their data should be excluded. Unless the assumptions (being made by SamKnows) can be revised to actually fit the requirements of the programme, and the system can be reconfigured so meaningful, directly relevant measurements can be made which generate valid data.
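As a toy illustration of the tainted-pool point (all the throughput numbers here are invented, not from my whitebox), mixing international results into the pool drags the reported figure well below what the service actually delivers domestically:

```python
# Toy illustration (invented numbers) of how pooling international test
# results with domestic ones skews the reported figure away from the thing
# actually being measured: the domestic throughput of the service.

domestic_mbps = [96.0, 97.5, 95.8, 96.4]       # tests against local servers
international_mbps = [58.0, 61.2, 55.9, 60.3]  # tests against overseas servers

def mean(xs):
    return sum(xs) / len(xs)

pooled = mean(domestic_mbps + international_mbps)
clean = mean(domestic_mbps)
print(f"pooled: {pooled:.1f} Mbps, domestic-only: {clean:.1f} Mbps")
```

A report built on the pooled figure would tell you the service runs at ~78Mbps when it demonstrably sustains ~96Mbps domestically - and the gap says nothing about the RSP being measured.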
     
  6. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    639
    Location:
    Perth, Western Australia
    Thanks for your feedback Doc, and for providing that link - I've only skimmed over it, but it looks like a great read :)

    I agree that the data I've provided isn't comprehensive or definitive - I got to a certain point and felt it made the argument with what I had. I sent it to a number of colleagues to get their feedback before going ahead. It's fairly clear from the SamKnows reports that the data it is generating is not analogous to the real world performance I am able to achieve over my service, and the latency result makes it clear that it is at least weighted against local networks (if not completely ignoring them). I still have the wireshark capture, and I can capture and analyse further traffic data if required. I definitely take your suggestion onboard though, and if I get enough time I may look into this deeper.

    Regarding test frequency, there is traffic flowing to and from it pretty much constantly. But no, I haven't analysed the data to determine if there are periodic peaks in traffic volume or not.

    Sorry you thought my tone was too emotional and conveyed bias. I'm not really sure how it was biased, as I tried to convey the data as it was - do you mean biased against SamKnows? That doesn't make much sense - I have no feelings about SamKnows one way or the other. All I care about is the reliability and applicability of the data.

When you say 'emotion', I assume you mean my choice of words, like saying the data was useless and that if it can't be rectified the programme should be scrapped - but those are just facts, and there's no point mincing words. I do realise I wrote it in a conversational style, but that was a deliberate choice to make it easier to read.
     
  7. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    12,901
    Location:
    Melbourne
    yes, and that's precisely what is required to measure the potential end user experience.

    it's absolutely reasonable to say that, because it simulates real world usage, and SamKnows have dedicated high speed servers all over the world to ensure the test platform itself does not cause a limitation of the test result.

    have a think about how the ACCC are enforcing the current crackdown on "overselling" broadband plans that can't be delivered due to issues like copper line length. how might they be privy to details like the maximum speed a line will sync at, versus the peak information rate of the service sold to run across it?

    yes, it very much is, because the outcome of the programme is to ensure the end user is getting what they are paying for, and testing real world throughput from an internet source is the only way to achieve that. artificial testing from something like the ISP's own network is great for sectionalisation of a possible fault, but Joe Average wants data from the internet, so you absolutely have to test things like the ISP's peering link capacity as well.
     
  8. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    639
    Location:
    Perth, Western Australia
    Caspian, I understand what you're saying - that, at the end of the day, real-world usability is all that people actually care about, and that's true. Obviously what we'd like to achieve is everyone, everywhere, having great access that isn't restricted by any particular bottleneck.

    But that's sort of the point of investigating something like this. You simply cannot diagnose the cause of a problem without ruling out variables and drilling down to the root cause. You cannot take a bandwidth measurement that spans across twenty networks around the planet, where the internal operation and management of each of them is opaque to you, and from there claim that you understand how just one of those networks operates in isolation. That's ludicrous. If you want to understand how something works, you need to exclude as many external variables as possible. If we do this with consumer broadband connections then we can understand with a great deal of accuracy how those services stack up. If you want to investigate how specific service provider networks perform across international links then you would do separate measurements to various international endpoints.

    Let me give you an example. If a device tests my bandwidth from here to a server in the UK, that traffic traverses over numerous intercontinental backhaul links, and some of those links are prioritised for other traffic which reduces my overall available bandwidth. Is that test result indicative of my connection's performance? Could it be considered broadly applicable to the performance I could expect to experience when using my Internet service? Let's say for the sake of simplicity, I have a 100mbps synchronous fibre connection from my RSP. But testing the available bandwidth to this UK server only says I have 60mbps available. According to the data, my connection is only really usable at 60mbps. But in reality, for the vast majority of the use I put my connection to, it is capable of delivering 98.5mbps. Whether it's to locally peered services serving up YouTube, pulling files off a server in a Melbourne DC, uploading files to my cloud storage accounts or RDPing into management systems in the NT, my connection is bang on 100% of the time. I don't care that the test says it can only achieve 60mbps to a server in the UK that I am literally never going to ever connect to in the real world.

Here's another example. RSP One is a Retail Service Provider of NBN services, but they also have extensive fibre assets of their own throughout the country and they are part owners of international transit backhaul. RSP Two is also reselling NBN services, but they're much smaller - they don't have their own extensive national fibre network, nor do they have any ownership stake in an international link, so they have to pay RSP One for a slice of bandwidth on their backhaul. RSP Two is really committed to providing a great service to their customers, so they're spending as much as they need to ensure they are not restricted on CVC or AVC, and they have more than sufficient bandwidth purchased for the international backhaul they need. However, RSP One realises they can use traffic management rules to de-prioritise data packets travelling from RSP Two's network to the automated monitoring programme's test servers - and only those servers. If RSP Two performs their own bandwidth tests, whether synthetic or real-world data transfers, everything will appear fine, because the only traffic being de-prioritised is what flows to the monitoring services' test servers. So RSP Two's programme results look pretty poor in comparison, making RSP One's results look comparably better. RSP One can claim in their marketing that they have a much higher performing network and that they're being more honest and accurate about it (the percentage by which measured performance meets the RSP's claims). Meanwhile RSP Two are going through their network with a fine-toothed comb trying to figure out how and why their performance figures could be so low, when all the measurements they do show everything should be working perfectly.
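The mechanics of that scenario are almost trivially simple to model - which is part of why it would be so hard to detect. A toy sketch (the IPs and rates are made up; documentation-range addresses used for illustration):

```python
# Toy model of the RSP One / RSP Two scenario: a transit carrier throttles
# only traffic bound for known monitoring-test servers. Everything here is
# invented; the IPs are from documentation ranges (RFC 5737).

MONITORING_SERVERS = {"198.51.100.10", "198.51.100.11"}  # hypothetical test-server IPs
LINE_RATE_MBPS = 95.0   # what ordinary flows get across the transit link
THROTTLED_MBPS = 40.0   # what flows to the test servers get

def effective_rate(dest_ip):
    """Bandwidth a flow sees across the transit link, by destination."""
    return THROTTLED_MBPS if dest_ip in MONITORING_SERVERS else LINE_RATE_MBPS

print(effective_rate("198.51.100.10"))  # the whitebox's test: looks poor
print(effective_rate("203.0.113.99"))   # ordinary customer traffic: looks fine
```

Every diagnostic RSP Two runs against any server not on that list comes back clean, while the published comparison quietly tanks their numbers.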

I'm not saying there is a telco doing that. I'm saying it's entirely possible, it's not illegal, it would be difficult to detect, and there's quite a lot of incentive to do it. Attempting to measure bandwidth beyond the RSP's network means the RSP has absolutely no control over what amounts to the vast majority of the networks such tests actually traverse - yet they're the ones being held accountable for the results.

    As I said in the article, by all means if you don't want to put servers inside the RSP's network, at least put them inside NBN's network so they are located somewhere independent, yet not prejudiced by international backhaul.

    The servers themselves? No, I don't expect the servers would be slow enough to impede the measurements. Having them distributed all over the world is a good thing, if that distribution is utilised in such a way that it assists in understanding what performance limitations there are. But if the vast majority of measurements are performed against servers in the UK which have no relationship to the endpoint service being measured, then having servers distributed globally is meaningless.

    Ok, I see where you are coming from now, but I'm afraid that's not related.

    The Broadband Monitoring Programme is a reporting system designed to provide insight into the actual real-world bandwidth of retail NBN services. The ACCC prepare these reports based on reports they themselves receive from SamKnows. 100% of the data from this programme is provided to SamKnows from their whiteboxes.

    The ACCC and the TIO also respond to complaints made by retail customers about their performance and they require RSPs and NBN Co to provide data on the connections of specific premises. As a result of large numbers of complaints, the ACCC requested records from RSPs. The ACCC was able to see that RSPs had oversold services to hundreds of thousands of consumers whose premises were connected to services that were incapable of actually achieving this bandwidth (such as a FttN connection over a 1.2km run of copper trying to hit anything higher than 25mbps... And I've heard that NBN Co will connect a premises up to 2km from the node.)

    RSPs are aware of the infrastructure used to connect a premises and the copper line length if it's FttN, so they have a good idea what ballpark a premises will be able to achieve even before signing the customer up. If they sign up for a 100/40 plan and the actual connection isn't able to achieve greater than say 35mbps, the RSP knows that from day one - they can see the connection line rate from the moment it goes live. Instead of contacting those customers and advising them that their connection would never be capable of achieving the bandwidth they had purchased and were paying for, they just waited for customers to figure it out themselves and ask to change plans. The ACCC said this was unreasonable, and rightly so. All it took was requesting records from the RSPs and a simple data analysis of the two columns in the table quickly flagged all those connections where customers were paying for plans they could never get any value out of.
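That "two column" analysis really is that simple. A sketch of what it might look like (the records and the 90% threshold are invented for illustration - the real data would come from the RSP-provided reports):

```python
# Sketch of the "two column" comparison described above: plan speed sold
# versus the line's actual sync rate, flagging services that can never come
# close to delivering. Records and the 90% threshold are invented.

records = [
    {"service_id": "A1", "plan_down_mbps": 100, "sync_down_mbps": 34},
    {"service_id": "B2", "plan_down_mbps": 50, "sync_down_mbps": 48},
    {"service_id": "C3", "plan_down_mbps": 25, "sync_down_mbps": 24},
]

def oversold(records, threshold=0.9):
    """Return IDs of services whose line syncs well below the plan sold."""
    return [r["service_id"] for r in records
            if r["sync_down_mbps"] < threshold * r["plan_down_mbps"]]

print(oversold(records))  # only A1 is hopelessly oversold
```

The RSP has both columns from day one, which is exactly why the ACCC took the view that waiting for customers to complain was unreasonable.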
     
  9. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    12,901
    Location:
    Melbourne
    TL;DR, sorry. if you think you know better than professional engineers with a combined experience of hundreds of years, convince the ACCC of this and get a job there. I have some issues with their measurement methodology but the use of the monitoring boxes is not one of them.
     
  10. @sia@home

    @sia@home Member

    Joined:
    Aug 6, 2002
    Messages:
    2,524
I don't see the issue. In fact I would say that is a tough test, and a good one. There are plenty of media and file sources located in the USA, Europe etc that you'd hope you could download from at 100 megabits and not be bottlenecked by the ISP not having enough intercontinental bandwidth. I've read (not personally tested) that that is one of the worst things about the low-tier ISPs like Dodo and TPG.
     
  11. Court Jester

    Court Jester Member

    Joined:
    Jun 30, 2001
    Messages:
    3,634
    Location:
    Gold Coast

I agree that the latency the box reports is out there, and I figured it must have been using some overseas sites to do its testing.

I don't have an issue with it using servers offshore, as long as they are not the ONLY servers it uses. After all, most of the content I consume on the internet is based overseas (i.e. Netflix, YouTube, Facebook etc).


The issue I have with the SamKnows box is that it does not detect local network activity properly - i.e. when I'm downloading "ISOs" via torrents it still runs its tests and gives me bad numbers because of it.
     
  12. Court Jester

    Court Jester Member

    Joined:
    Jun 30, 2001
    Messages:
    3,634
    Location:
    Gold Coast

YES!!!!!!!!!!!!!!!!!!!!!!!!!!!!

it absolutely is indicative of your connection's performance. if your ISP is using poor peering links / backhaul they should be held to account - it is entirely possible for them to change this and peer with another network / link.
     
  13. ShadowBurger

    ShadowBurger Member

    Joined:
    Feb 19, 2008
    Messages:
    3,076
    Location:
    Melbourne
It's important not to assume the report won't be comparing the data against realistic targets - if the data is to be compared against a target of 330ms ping and 40Mbps of bandwidth, the results from your device could be considered 'good'. All we know at this point is that the data will be used to offer a comparison of RSPs - but we don't know in which context or how the report will be presented. The latency between AU and the UK may be subtracted from the result when a domestic context is required.

If the best RSPs report 280ms ping and 55Mbps but others report 500ms / 2Mbps, the data will have done its job by highlighting the ones that have infrastructure issues regardless of where the issue exists in their network, and in doing so it covers a broader scope of potential problems.
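That subtraction could be as simple as the sketch below (the 280ms baseline is an assumed physical floor for an AU-to-UK round trip, not a published figure):

```python
# Sketch of normalising international results against a path baseline, so
# RSPs are compared only on the latency their own networks add. The 280 ms
# AU->UK floor is an assumed figure for illustration.

PATH_BASELINE_MS = 280.0  # assumed minimum AU->UK round trip for any RSP

def excess_latency_ms(measured_ms, baseline_ms=PATH_BASELINE_MS):
    """Latency attributable to the RSP's network rather than the path itself."""
    return max(0.0, measured_ms - baseline_ms)

print(excess_latency_ms(330.0))  # 50 ms above the floor
print(excess_latency_ms(500.0))  # 220 ms above the floor
```

With that normalisation, a 330ms result and a 500ms result over the same path tell very different stories about the RSPs involved.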

    The part of my mind that handles reporting and analytics reminds me it is important to always look at the bigger picture and ensure those receiving the report get the right overall picture - even if it isn't necessarily the one we as technical people want to see. ie, we can't safely assume that ISPs are doing the right thing with their higher level connectivity and it would be dangerous for the ACCC to make that assumption, too. The picture must be presented complete, warts and all, and the connection to international endpoints forms a valid part of the picture that shouldn't just be omitted. If I were you SiliconAngel I would be asking different questions; why are the results from your whitebox to the UK so poor? What's wrong with your ISP's international links? Where in the network is the bottlenecking occurring and what kind of bandwidth are you getting to other internationally-hosted services?

    Imagine how useless a domestic-only bandwidth comparison will be to someone who, for example, works from home for an international IT company and consumes lots of HD media from internationally-hosted streaming providers.

    Lastly, I would typically think a poor result on a good connection is a good thing since they're more likely to add more bandwidth!
     
  14. flain

    flain Member

    Joined:
    Oct 5, 2005
    Messages:
    3,200
    Location:
    Sydney
If what the OP is saying is true, then https://live.cedexis.com already captures this kind of data and would have a lot more of it. The Radar community has been doing this for years (Microsoft/Google/others); while the data is not free, it's probably a lot cheaper to access than what this trial is doing. It works by participants in the Radar community embedding JavaScript tags in their sites that cause the browser to feed back performance metrics, including throughput/latency etc.
     
  15. ViPeR-7

    ViPeR-7 Member

    Joined:
    Jun 28, 2001
    Messages:
    585
    Location:
    Newcastle, NSW, Australia
+1 to today's replies. Without testing real world scenarios, data collected by these devices is meaningless.
ISPs cheaping out on their international backhaul is becoming more and more common, since everyone automatically just blames NBNCo at this point. And it's very hard for consumers to know about this before signing up with a provider.

I have a 100/40mbit VDSL2 connection, and on my old provider (amaysim) I achieved 96/34mbit to local servers, and 85/32mbit to international ones (at any time of day). On my new provider (mate) I get 97/34mbit to the same local servers, but to international servers the performance is very dependent on time of day (i.e. backhaul contention), ranging from 74/18mbit down to 7/12mbit. For example, I watch a lot of twitch.tv streams, where I was used to always watching in 1080p60; now most evenings my connection struggles to stream in 720p. Last night I had to drop all the way down to 360p to get it stable.

At least for NBN connections, getting a well performing connection between your home and the ISP is not really your ISP's job, it's NBNCo's (provided your ISP has sufficient bandwidth to NBNCo). As I understand it that's not what these devices are meant to test - they're testing your ISP, not (only) NBNCo, and so the only test of relevance is available bandwidth to real world servers, outside of the NBNCo network.
     
    Last edited: Jun 14, 2019
    cerberos likes this.
  16. grimwood

    grimwood Member

    Joined:
    Feb 26, 2002
    Messages:
    1,554
    Location:
    Adelaide
    Asking the obvious hopefully - are these tests to overseas servers testing UDP or TCP for bandwidth?
     
  17. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    12,901
    Location:
    Melbourne
    this is why I have a love/hate relationship with real world testing. if the result comes back good, the consumer knows they are good. if the result comes back bad, it doesn't tell you why without further testing. it is difficult for the consumer to do so themselves, unless they're lucky enough to have a next door neighbour with a different ISP to do head to head comparisons with.

some testing can be done with download tests from the ISP's network versus external data sources (and yes, you have to take it on faith that they are not congested themselves); at least if the former comes back OK you know your own connection is OK. even if the external connection test is good, though, that does not mean your actual use experience will be good... some ISPs differentially route data and underprovision some routes due to being too cheap to do it properly. my current ISP does this with Youtube data - during peak hour I can't watch 720p video without buffering, but if I kick in a VPN so they can't tell what is going on, I can run multiple 1080p streams simultaneously. and that's with a VPN endpoint in the same city, so it's the same Youtube CDN server. I've heard feedback from ex-ISP staff that they pull similar tricks with well known speed test servers, to make their results look artificially better.

    people can blame NBN if they like, but they are deluding themselves. I have access to the terrestrial network utilisation reporting and there's no congestion inside the FTTN network, end of story. it's closely monitored and augmentation done well before any potential impact could be discerned by a user.

when you test at layer 3 and above like they do, you test everything, which is what is relevant to the consumer. if the test result is good, you're good. if it fails, you need to start investigating why, but blaming NBN is a waste of time.

    that's where I don't generally mind the ACCC test methodology. they're not trying to differentiate between congested CVCs, or inadequate peering bandwidth, or whatever the cause of a poor test is. their focus is on whether the consumer is getting what they have paid for, and that's all.
     
    Last edited: Jun 14, 2019
  18. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    639
    Location:
    Perth, Western Australia
    Thanks to everyone who's taken the time to write feedback, I really appreciate you sharing your insights and criticisms.

    Both UDP and TCP test traffic was captured by wireshark - I noted that in the list of server IPs.

    Caspian or any other network engineers, can you please confirm, are you saying that a bandwidth test to an international server should be more than capable of delivering the full bandwidth of your connection? And the most likely cause of lower performance is the RSP not purchasing enough capacity on intercontinental backhaul? I've known for years that testing bandwidth against international servers results in abysmal performance, so it's something I don't bother with - I was under the assumption that even if the subsea cable links aren't congested, once the traffic traverses international networks there's no guarantee the path won't be congested, and an Australian RSP has absolutely no control over that. Are these assumptions wrong? Can and should the RSP be held to account for poor performance across international networks?
     
  19. grimwood

    grimwood Member

    Joined:
    Feb 26, 2002
    Messages:
    1,554
    Location:
    Adelaide
    Ahh just found those on your blog. Most of the UK traffic looks like it's UDP, which should be fine for the bandwidth tests they are doing, although I agree it would make more sense to test it locally to rule out congestion on the international links.
     
  20. clrobbo

    clrobbo Member

    Joined:
    Jun 3, 2002
    Messages:
    227
It's very easy for local traffic to be manipulated by ISP routing, and as such the international data, when compared across multiple ISPs, becomes quite accurate. For example, when you go to Speedtest.net, do you really think your ISP hasn't routed this traffic for optimal performance? The fact is they have. You won't get the same performance hitting your retailer's sites.
     
