Apples & pears – or the perils of achieving meaningful test results

Powerline testing needs more than one metric to give a full picture

As more and more G.hn devices make it onto the market, many people are keen to test how good they are and how they compare with rival technologies.

However, there are many testing pitfalls to avoid, both in comparing like for like and in choosing the right test parameters for real-world applications.

There has been much media coverage recently regarding testing of Comtrend G.hn adapters. This testing appears to have focused on a single metric – throughput – and so probably followed a very limited test plan. From our point of view, this focuses on one of the less important metrics on a medium like powerline; there are many other factors that need to be considered. Throughput matters up to the point where high-definition video streams can be delivered; beyond that, latency and stability become the more important factors.

An inferior test plan might instruct the user to ignore the single biggest impairment of the medium: noisy devices. A real test – and one that we at HomeGrid undertake in our testing – insists on the presence of exactly these impairments. Less capable powerline technologies might ask the tester to “condition” the line by removing noisy devices. We believe, in fact, that devices should be tested on the circuit with all its impairments present (plugging in a phone charger to charge your iPhone, running a paper shredder, turning on the refrigerator, running on a treadmill, and so on), since the user will encounter them every day in actual use.

While a purely throughput-based test provides a starting point, it doesn’t complete the picture. Bandwidth in today’s world is irrelevant if it is not allocated properly. Effective testing needs to measure behavior in a mixed managed-services environment; in other words, video mixed with voice and data. Tests need to show how well bandwidth is allocated and how latency and error rate affect each service – be it gaming, video on your TV, VoIP, or something else – and all of this under real-world conditions.
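To make that concrete, a multi-metric test plan can be sketched as a per-service pass/fail check: each service is judged against its own throughput, latency, and loss requirements rather than against a single aggregate throughput figure. The threshold values and service names below are illustrative assumptions for the sketch, not figures from the G.hn specification or from HomeGrid's test plan.

```python
# Sketch of a per-service, multi-metric pass/fail check for a mixed-services
# powerline test. All thresholds are illustrative assumptions only.

REQUIREMENTS = {
    # service: (min throughput Mbps, max latency ms, max packet loss %)
    "hd_video": (20.0, 100.0, 0.1),
    "voip":     (0.1,  30.0, 1.0),
    "gaming":   (1.0,  50.0, 0.5),
}

def evaluate(measurements):
    """Map each service to True/False given measured
    (throughput_mbps, latency_ms, loss_pct) tuples."""
    results = {}
    for service, (tput, lat, loss) in measurements.items():
        min_t, max_l, max_p = REQUIREMENTS[service]
        # A service passes only if ALL three metrics meet requirements --
        # high throughput alone cannot rescue a high-latency link.
        results[service] = tput >= min_t and lat <= max_l and loss <= max_p
    return results

# Example: plenty of raw throughput everywhere, but VoIP still fails
# because its latency budget is blown under mixed load.
measured = {
    "hd_video": (25.0, 80.0, 0.05),
    "voip":     (0.2,  45.0, 0.20),
    "gaming":   (5.0,  20.0, 0.10),
}
print(evaluate(measured))
```

The point of the sketch is that the VoIP stream fails despite comfortably exceeding its throughput requirement, which is exactly the failure mode a throughput-only test would miss.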

A good example of this is Ethernet. If you test a home Ethernet network with throughput testing alone, it will appear that you have a solid 100Mbps or even 1Gbps of capacity. But I have seen a 20Mbps video stream fall apart on that same 100Mbps wired Ethernet as soon as mixed multi-service, multi-node traffic was added to the network. Real-world use cases are critical to producing test results that represent a real user experience.

Any powerline testing should take the real world and real usage into consideration to achieve meaningful results. It is better to help users condition themselves against the noise of meaningless tests than to condition test environments against reality.