October 11th, 2002, 4:20 pm
Which bit would you like clarified? This is a large % of my job, so I don't want to bore people senseless with the trivia that is my life.

We have been asked by clients to adjust latency to compensate for things like the fact that they are further away. We can do this. It is actually rather hard, though, and we don't do it in any production system. The sort of industrial-strength routers we use allow preferential priority for different packets based upon source, destination, and to an extent content. However, the effects are essentially impossible to make deterministic. I would feel very uncomfortable about doing this, since as middlemen we have a reputation to protect. However, there are some grey areas, which, if you actually care, I can enumerate at a later time.

This is of course not how we would do it. I would like to emphasise, in case any of our customers are reading this, that we don't, but if we wanted to, there are better ways.

The first way, which is pathetically easy to code, is a small delay loop in the client software (sketched below). This could hold packets back for a configurable amount of time, both incoming and outgoing. It would be extraordinarily difficult to detect this, since the latencies we're talking about here are at the threshold of human perception, and of course, since we're working at the feed level, the data would remain consistent.

Many ECNs have distribution boxes, taking one feed and "broadcasting" it around the Bank network. These have enough spare MIPS to do all sorts of tricks like this. It may easily be a legitimate "throttling" of the feed. We have one user, who shall remain nameless, who hoses data at us as fast as his line will go. Fortunately, I had assumed that there would be several such users, so the server complex can cope. If I had erred, we'd probably have to throttle him back.

A related way is actually a problem we have: there is (at least) one input stream and one output stream of packets. Which should be the relative priorities of these streams? What about the thread that draws the screen? You can dramatically alter responsiveness and performance that way (also sketched below).

Many ECN protocols/servers have bottlenecks where the data is "aimed" at each node in turn. This may be multi-levelled through distribution servers, but the principle holds. The cycle time can be large, certainly a good fraction of a second, and potentially longer. A well-written protocol fires the data randomly at next-level hosts. However, it is much easier to just write code that iterates through customers in the same pattern every time (see the last sketch below). Wannabe first?

DominiConnor

I think the original thesis by MP was that the ECN adjusts its network infrastructure to compensate for the lag "remote" clients have, to give them a fair chance. I read it to mean that it "slows down" the clients that have a better connection. This would be quite an odd thing to do (and to advertise), in my humble unqualified opinion.

From your post I understood that the ECN observes different packet latency, which is quite reasonable. You don't really imply that the ECN, seeing this different latency, adjusts the parameters of its own network for different clients, do you?
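A minimal sketch of the "delay loop" idea from the first post, assuming a Python-like client: each packet is stamped on arrival and only released after a configurable hold time, which could be applied to both the incoming and outgoing sides. The class and parameter names (FeedDelayBuffer, delay_ms) are invented for illustration, not any vendor's API, and the 40 ms figure is just an example of a sub-perceptual delay.

    import collections
    import time

    class FeedDelayBuffer:
        """Holds packets for a fixed interval before releasing them, in arrival order."""

        def __init__(self, delay_ms):
            self.delay = delay_ms / 1000.0
            self.queue = collections.deque()   # (release_time, packet) pairs

        def push(self, packet):
            # Stamp each packet with the earliest time it may be released.
            self.queue.append((time.monotonic() + self.delay, packet))

        def pop_ready(self):
            # Release every packet whose hold time has elapsed.
            now = time.monotonic()
            ready = []
            while self.queue and self.queue[0][0] <= now:
                ready.append(self.queue.popleft()[1])
            return ready

    # Usage: wrap the client's feed handler (and, symmetrically, its output path).
    inbound = FeedDelayBuffer(delay_ms=40)
    inbound.push({"symbol": "XYZ", "bid": 101.2, "ask": 101.3})
    time.sleep(0.05)
    print(inbound.pop_ready())

Because the buffer preserves order and content, the data stays consistent at the feed level; only the timing shifts, which is what makes this hard to detect.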
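On the question of relative priorities between the input stream, the output stream, and the screen-drawing thread, here is a rough illustration of the point rather than anyone's actual settings: a dispatch loop biased towards one queue changes how responsive each part of the client feels. The queues and weights below are invented for the example.

    import collections

    queues = {
        "inbound":  collections.deque(["quote 1", "quote 2", "quote 3"]),
        "outbound": collections.deque(["order 1"]),
        "screen":   collections.deque(["repaint"]),
    }

    # Higher weight = serviced more often per scheduling round.
    weights = {"inbound": 3, "outbound": 2, "screen": 1}

    def run_one_round():
        # Service the queues in descending weight order, draining up to
        # `weight` items from each, so the favoured stream always gets
        # attention first and most often.
        for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
            for _ in range(weight):
                if queues[name]:
                    print(f"servicing {name}: {queues[name].popleft()}")

    run_one_round()

Skewing the weights towards the screen thread makes the GUI feel snappy while quietly starving the feed, and vice versa, which is the "dramatically alter responsiveness" effect described above.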
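Finally, a sketch of the fan-out bottleneck mentioned at the end of the first post: a distribution server that iterates through its customer list in the same order every cycle always serves the same node first, whereas shuffling the order each cycle removes that systematic head start. The host names and the send() stub are placeholders, not any real ECN protocol.

    import random

    hosts = ["client_A", "client_B", "client_C", "client_D"]

    def send(host, update):
        # Stand-in for whatever the distribution server actually does per node.
        print(f"sent {update!r} to {host}")

    def broadcast_fixed(update):
        # The easy version: same iteration order every cycle,
        # so client_A always receives the update first.
        for host in hosts:
            send(host, update)

    def broadcast_randomised(update):
        # The fairer version: shuffle the delivery order each cycle
        # so no host is consistently at the front of the fan-out.
        for host in random.sample(hosts, len(hosts)):
            send(host, update)

    broadcast_fixed("tick 1")
    broadcast_randomised("tick 2")

With a cycle time of a good fraction of a second, the gap between being first and last in the fixed-order loop is far larger than the network latencies the thread started out discussing.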