The current state of NA servers, from an IT perspective, and what you can do to help

Hello, I work for a network security company. I have a degree in network engineering, and I work with other people's networks all day, every day. I wanted to define and explain in more detail some of the common terms thrown around on here like ping, traceroute, and packet loss. I think some people fundamentally misunderstand how this stuff works, so here goes.

WARNING: I know there are neckbeards like myself who will read this and say HEY!! ACTUALLY... etc. I know. I agree. I'm talking to non-neckbeards, and we must use terms that are not strictly correct or over-generalize certain points. You may even be much smarter than me; I defer to you, and I hope you see the value in my post.


Ping is the term for a command. This command instructs your computer to send an ICMP Echo request to the destination IP address. This command is useful for determining basic connectivity to a host and also somewhat for determining latency. Here are some common misconceptions about Ping:

-ICMP echo replies are generated by the software layer (control plane) of a router. This means some additional decisions about the incoming ICMP request are required, and if that device is under even a marginal CPU load, the reply can be delayed or dropped entirely. How each device handles this is configurable, so one hop may report higher latency than another.
-Furthermore many devices implement a configuration called "Control Plane Policing" which will rate-limit ICMP generation. If it's over a certain amount, it gets dropped, period. This is to prevent ICMP flooding.
-The path your request took to reach a hop is not necessarily the same as the route the reply took back. Latency is calculated as the time between when the request left and when the reply was returned.
-Do not confuse ping the command with ping the street term. Ping as it is understood by millions of gamers is latency, but the two do not necessarily involve each other at all. Most programs use custom methods of calculating latency, usually by examining traffic they are already sending/receiving rather than generating more, or by encapsulating a custom latency-calculating piece of information in the data. I've just made several points as to why ICMP generation is not a perfect indicator of latency anyway.
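To make this concrete, here's a rough sketch of what the ping command actually puts on the wire. An ICMP Echo Request is just a tiny header (type, code, checksum, identifier, sequence number) plus a payload, and the checksum is the standard Internet checksum from RFC 1071. The payload and identifier below are made-up example values:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry bits back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP Echo Request: type=8, code=0, checksum computed over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed first
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1)
# a correctly checksummed ICMP message sums to 0 when you include its own checksum field
assert internet_checksum(pkt) == 0
```

Actually sending this requires a raw socket (root/admin privileges), which is exactly why ping is usually a system utility rather than something every program does itself.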


Trace route is also a command. In Windows it's "tracert," in Unix it's "traceroute." The premise is the same: tell me what route my packets are taking to get to a destination. It works by cheesing the TTL field of a packet's IP header. TTL = Time To Live, a value between 0-255 (it's 1 byte!). Time to live is decremented by 1 at every hop a packet passes through. It is the kill switch on a packet: once the value hits 0, an ICMP Time Exceeded message is supposed to be generated to inform the original sender.

The way this works is that your computer generates several packets. It first sends out a packet with a TTL of 1. The first hop along the way receives it, decrements the TTL field by 1, and now it's 0. When this happens, a Time Exceeded message is generated and sent back to the original sender. Then another packet is sent out with a TTL of 2, and this repeats on and on until the destination itself replies. This is how the tracert utility discovers the hops along the way: your computer keeps incrementing the TTL, and each successive hop in turn reveals itself. Here are some misconceptions about Tracert:
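The TTL walk above can be sketched in plain code. This is purely a toy model (the hop names are invented, and there's no real network I/O), but it shows why probing with TTL 1, 2, 3, ... exposes one hop per probe:

```python
def traceroute_sim(path):
    """Simulate the TTL walk: send probes with TTL 1, 2, 3, ...
    Each router decrements the TTL; whichever hop sees it reach 0 answers
    (with Time Exceeded, or a normal reply if it's the destination)."""
    discovered = []
    destination = path[-1]
    for ttl in range(1, len(path) + 1):
        remaining = ttl
        for hop in path:
            remaining -= 1            # every router decrements TTL by 1
            if remaining == 0:        # TTL ran out right here
                discovered.append(hop)
                break
        if discovered[-1] == destination:
            break                     # destination reached, stop probing
    return discovered

hops = ["home-router", "isp-edge", "backbone", "riot-edge", "game-server"]
assert traceroute_sim(hops) == hops   # each TTL value exposes one more hop
```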

-Since this utility relies on ICMP, we run into the same limitations of ICMP generation as before. This time it isn't an ICMP Echo Reply, it's an ICMP Time Exceeded message. There are several types of ICMP messages.
-Have you ever seen the *** *** *** on a tracert output? Does it mean that guy is unreachable? Does it mean the packet was dropped? Not at all, necessarily. This is complicated. There are several reasons why an ICMP message may not be generated, or why the reply may be delayed; I've discussed many above.
-The list of hops displayed in the tracert output may not be the same if you run the command a second time, or an hour later. Without writing 10 pages of crap: routing on the internet is dynamic. Your data may take one route this instant and another route the next. The route BACK is probably different too. Your results will vary; they always vary. Routing is like that. So do not trust the output from tracert. I don't particularly care what it tells me about routing over the internet when I use it as a troubleshooting tool; I only use it on local networks where routing is static and I know that's the path data takes EVERY time.
-Don't trust ICMP generation from public sources. It will lead you down a rabbit hole. You'll be like "hmmm that guy didn't reply there must be a problem" when in fact, nope.
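The Control Plane Policing I mentioned earlier is basically a rate limiter sitting in front of ICMP generation. A token-bucket sketch (the rate and burst numbers are invented for illustration) shows why a burst of probes can come back as * * * even when the hop is perfectly healthy:

```python
class TokenBucket:
    """Toy control-plane policer: allow at most `rate` ICMP replies per
    second, with a small burst allowance. Over budget? Silently drop."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # refill tokens for the time elapsed since the last request
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # policer drops it: no ICMP reply, so tracert prints *

policer = TokenBucket(rate=2.0, burst=3.0)   # made-up numbers
# 10 probes arriving in the same instant: only the burst allowance answers
replies = [policer.allow(now=0.0) for _ in range(10)]
assert replies.count(True) == 3              # the other 7 probes show up as *
```

The router is forwarding every data packet just fine; it's only the *replies about itself* that it refuses to generate past a budget.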

Packet Loss

Packet loss occurs literally all the time. Networking is essentially a "best effort" enterprise: you try to get stuff where it needs to go quickly and over the most efficient route. This is complicated by the fact that routers can only process so much data at a time. Routers process one packet at a time, literally one-packet-at-a-time, and you never receive the same amount of data at the same moment you can send it. This is where buffering comes in, and it's also a major source of packet loss. Buffering means putting data you've received into memory, a software "queue." The buffer has a limited size, and if data is being received faster than it is being processed and sent, packets get dropped. The router really has no other option. Buffer full? cya nerd.
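That buffer-full behavior is easy to model. A toy router (all numbers invented) with a fixed-size queue that drains slower than packets arrive has no choice but to tail-drop the overflow:

```python
from collections import deque

def run_router(arrivals_per_tick, buffer_size, drain_per_tick):
    """Toy router: each tick, some packets arrive and a fixed number are
    processed and sent. Anything that doesn't fit in the buffer is dropped."""
    queue = deque()
    forwarded = dropped = 0
    for arriving in arrivals_per_tick:
        for _ in range(arriving):
            if len(queue) < buffer_size:
                queue.append("pkt")
            else:
                dropped += 1          # buffer full? cya nerd.
        for _ in range(min(drain_per_tick, len(queue))):
            queue.popleft()
            forwarded += 1
    return forwarded, dropped

# 5 packets/tick arriving, a buffer of 4, but we can only send 3/tick:
fwd, drop = run_router([5, 5, 5, 5], buffer_size=4, drain_per_tick=3)
assert (fwd, drop) == (12, 7)   # sustained overload guarantees loss
```

Notice the drops are not anyone's "fault": every component is working exactly as designed, the load just exceeds the drain rate.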


Latency is a complicated topic, not because it's complicated that you experience latency, but because there is a whole rainbow of reasons why latency happens. Data starts its life in the application layer (there are 7 layers! unless you're a Cisco guy, then it's 4..ish), gets pushed down through all the software on your device, chopped up into little bits, and converted into voltage that goes over a wire to some device that interprets that voltage and makes decisions about it, on and on. There are a TON of points of failure between you and whatever you are trying to talk to.

Fundamentally, latency is calculated as a round-trip time. For League of Legends, they've developed a custom method of determining latency. League of Legends uses a transport protocol called UDP, as opposed to the "other" transport protocol, TCP. TCP and UDP both chop data up into little bits, but TCP keeps track of what was sent, asks for retransmission of lost packets, and re-orders them as they arrive. TCP has flow control to govern how much data is sent "at a time" and essentially tries to maximize how much data is sent per second. It also establishes a connection with a 3-way "handshake": a packet with the SYN flag set is sent, which is answered by a packet with the SYN and ACK flags set, and then the original sender acknowledges receipt by sending a packet with the ACK flag set. Just so you know. :)

UDP is a connection-less protocol. There is no handshake, and normally no flow control. This is the equivalent of spray-and-pray for data: you hope it gets there, but you don't know if it does, nor do you care. UDP is used for time-sensitive data. Real-time stuff like voice calls and League of Legends games use UDP because there is no point in asking a server to resend a dropped packet from 3 seconds ago. It doesn't matter anymore. We need to move on with our lives, and UDP is cool with that.

How League latency is calculated is not with generic ICMP messages, but probably with custom-designed stuff encapsulated in that traffic that does basically the same thing. At least, that's my best guess. I've never inspected League traffic with Wireshark to see what is happening, and it's likely I would need additional software to decipher what their custom protocols are saying to each other.
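I can't speak for Riot's actual protocol, but the general technique — stamp a tiny UDP packet with the send time, have the other end echo it back, and subtract — looks roughly like this. Everything here is an invented example running over localhost, not anything League-specific:

```python
import socket
import struct
import threading
import time

def echo_once(sock):
    """Echo one datagram straight back to whoever sent it."""
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)

# "Game server" on an ephemeral localhost port
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client: send the current time, then read it back out of the echoed packet
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(struct.pack("!d", time.monotonic()), server.getsockname())
echoed, _ = client.recvfrom(64)
rtt = time.monotonic() - struct.unpack("!d", echoed)[0]
print(f"round trip: {rtt * 1000:.2f} ms")   # this, not ICMP, is your in-game "ping"
client.close()
server.close()
```

Because the timestamp rides inside the application's own UDP traffic, it measures the path the game data actually takes, dodging every ICMP caveat above.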

TL;DR Networking is complicated. I hope I made you think about what's under the hood a bit. I really just scratched the surface here. It wasn't my goal to confirm/deny anything Riot says, but to help people understand that the latency they experience is really a much larger beast than they think--as well as clear up a few misconceptions that make me cringe when I read these forums.


/r/leagueoflegends Thread