How to Read a Traceroute

This guide will teach you how to read a traceroute to identify potential connection issues or packet loss.

Gathering traceroutes

To get a traceroute, open Command Prompt on Windows or Terminal on macOS and run the appropriate command below. The first argument takes either a domain or an IP address.

tracert <domain or IP>        (Windows)
traceroute <domain or IP>     (macOS)
Let the traceroute run for 1-2 minutes before assessing.
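If you are collecting traces programmatically, the OS split above can be handled in a few lines. This is a minimal sketch, not an official tool; the target shown is a placeholder.

```python
import platform

def traceroute_cmd(target):
    # "tracert" is the Windows name; macOS and Linux ship "traceroute".
    exe = "tracert" if platform.system() == "Windows" else "traceroute"
    return [exe, target]

print(traceroute_cmd("example.com"))
```

Pass the result to `subprocess.run(...)` and let the trace complete before assessing.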

Reading the traceroute

Now that you have your traceroute, you will see something that looks like this.

traceroute to (, 64 hops max, 52 byte packets
 1 (  302.809 ms  10.786 ms  182.727 ms
 2  * * *
 3 (  198.191 ms  33.049 ms  173.817 ms
 4 (  1.832 ms  1.833 ms  1.752 ms
 5 (  2.043 ms  1.957 ms  2.878 ms
 6 (  1.965 ms  1.988 ms  1.919 ms
 7 (x.x.x.x)  3.016 ms  2.810 ms  3.780 ms
 8 (  6.295 ms  3.205 ms  3.110 ms
 9  * * *
10 (  4.217 ms  3.546 ms  3.533 ms
11 (  33.704 ms  32.591 ms  37.759 ms
12 (  32.540 ms  36.926 ms  34.108 ms
13  * * *
14 (  416.777 ms  509.795 ms  432.394 ms
15  * * *
16  * * *

You will notice that by hop 14, latency has climbed above 400 ms. Could this be TCPShield related? We will analyze this traceroute closely to see what might be causing it.

Typical traceroutes are formatted like so:

<hop #>  <hostname> (IP resolution) probe1, probe2, probe3

Let's outline what each of these means.



hop #

Packets traverse the internet to their destination on a hop-by-hop basis. The hop number is simply a label identifying which step in that path your packets are currently traversing.

hostname

The human-readable representation of the current hop in the trace. Sometimes you may not get a hostname for every hop, and this is completely normal. It's up to network operators to configure this if they choose to do so.

IP resolution

The numerical IPv4 address pertaining to the current hop.

probe1, probe2, probe3

Each traceroute issues 3 ping (ICMP) requests to each of the hops being traversed. The number you see on each probe represents the round-trip latency, in milliseconds, between your source machine and the target hop.

Traceroutes are read in order, starting from the first hop (at this point packets usually haven't exited the source network to the public internet yet) and ending at the destination hop. Most hostnames you will see in a traceroute contain information that outlines where that router or switch is located: for example, a hostname containing a city code like chi indicates a router in Chicago (in this trace, one belonging to GTT), while one containing six points to an endpoint at SIX (Seattle Internet Exchange, here belonging to Akamai), and so on.

Sometimes you may never reach the destination hop, like in the example above. This is completely normal for TCPShield, as we disable all protocols except TCP 25565/443/80 across our edge nodes.

Let's look at the above trace a bit more closely.

 1 (  302.809 ms  10.786 ms  182.727 ms
 2  * * *
 3 (  198.191 ms  33.049 ms  173.817 ms

The first 3 hops contain critical information that will help us determine the cause of the high latency. Notice that the 3 probes on each hop show varying degrees of latency. Starting at the first hop, one of the probes has 302.809 ms. This definitely isn't expected: the first hop is usually your home router or switch on a residential network, something that should ideally be below 5 ms for all probes on that hop. One probe does show 10 ms, but the other two have latencies in the triple digits, which signifies an unstable connection.

For context, I was running this traceroute over WiFi in an area with many connected clients. We can also see the same effects on the 3rd hop. At this point, our packets still have not left the internal network for the outside world, so our unstable connection to the WiFi router has introduced propagation delay into the remainder of the traceroute. As mentioned previously, packets are routed on the internet on a hop-by-hop basis. If one of the hops in that path experiences high latency or congestion, this will impact the quality of service for the remainder of the trace. In networking, a chain (or path in this case) is only as strong as its weakest link.
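One quick way to spot this kind of instability is to compare the spread between the fastest and slowest probe on each hop. A minimal sketch, using the probe values from hops 1 and 4 of the trace above (the ~292 ms spread on hop 1 is the red flag; the 50 ms threshold is an illustrative assumption, not a standard):

```python
def jitter_ms(probes):
    # Spread between slowest and fastest probe on one hop.
    return max(probes) - min(probes)

hops = {1: [302.809, 10.786, 182.727], 4: [1.832, 1.833, 1.752]}
for hop, probes in sorted(hops.items()):
    flag = "UNSTABLE" if jitter_ms(probes) > 50 else "ok"
    print(hop, round(jitter_ms(probes), 3), flag)
```

A stable wired hop usually shows all three probes within a millisecond or two of each other, as hop 4 does.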

Any hops where you see addresses in, 172.16.0.0/12, or 192.168.0.0/16 are RFC1918 addresses, which are reserved for internal networks. If you see IP addresses in these CIDRs, it usually signifies packets have not exited to the public internet yet. Any high latency within these hops typically propagates to the remainder of the trace.

This now gives us more clues as to what is happening at hop 14 - the last hop we could observe before hitting TCPShield's edge nodes and thus not being able to proceed further (as we block ICMP across our edge).

14 (  416.777 ms  509.795 ms  432.394 ms

BHS in this case indicates Beauharnois - our North American proxy location - and VAC indicates the mitigation infrastructure utilized at this facility. Under normal circumstances (i.e. when connected over Ethernet), the computer used to perform this traceroute typically sees 60 ms to this VAC hop. Remember the 302.809 ms probe on the first hop of our trace? Our unstable connection to our internal network has unfortunately propagated its way through the rest of our traceroute, resulting in the high latency on hop 14 - adding ~300 ms to what should normally be within the 60-70 ms range. Networks are only as stable as their weakest link - and in our case, our weakest link is indeed our internal (home) network.

BGP (Border Gateway Protocol) is the system that glues the internet together, and is responsible for making routing decisions between the routers and switches that compose it. In normal circumstances, BGP will route around these problems automatically when high latency or packet loss becomes an issue between major PoPs (points of presence) on the internet. However, in our case there really isn't anything to route around, because we haven't routed anywhere yet - the high latency is present within our LAN itself, and our only option at this point is to diagnose why this could be happening.

Diagnosing propagation delay

Since our high latency occurred within our internal network itself, it would be a good idea to determine why this is happening. After all, WiFi isn't totally stable from a jitter standpoint, especially considering this trace was performed with many other clients connected - and presumably other appliances or electronic equipment interfering as well. For residential connections, you should first check whether the high latency is still present when performing a traceroute over Ethernet, which is not prone to the inherent pitfalls of WiFi.

If the high latency still occurs over Ethernet, the most probable scenario is a congestion issue within the player's ISP itself. During high-traffic periods of the day, an ISP's downstream link to its distribution routers may be overwhelmed, and the resulting forwarding and propagation delays cannot be solved without contacting the ISP's network engineers to diagnose the issue further.
