Got slow download but fast upload speeds over wireless? Here's a fix.

If you find that your wireless download speeds are abysmal while your upload speeds are pretty solid, especially with Apple devices, I've got a possible solution for you. I struggled with this issue for a while and decided to write down my findings in a blog post in case anyone else runs into it in the future.

tl;dr: disable WMM QoS in your router settings.

Symptoms

At home, I have the following setup: a Linksys E1200 router, with my desktop wired to it via an ethernet cable and my laptop and iPhone connected over Wi-Fi.

Whenever I used my laptop or phone, the Wi-Fi connection felt incredibly slow. YouTube videos took forever to load, Google Maps tiles filled in slowly, and even Gmail felt unresponsive. On the other hand, my desktop, which was connected to the router via an ethernet cable, worked just fine.

Numbers

To confirm my observations, I decided to take some bandwidth measurements using bandwidthplace.com, speakeasy.net, and speedtest.net for the laptop and the Speed Test app for the iPhone. The results were pretty consistent across all app and device pairs and looked something like this:

Desktop
  • Download: 24 Mbps
  • Upload: 4.5 Mbps
Laptop
  • Download: 0.65 Mbps
  • Upload: 4.5 Mbps
iPhone
  • Download: 0.58 Mbps
  • Upload: 4.4 Mbps

Yikes! My laptop and iPhone download speeds were more than 30 times slower than my desktop's download speed! On the other hand, the upload speed was roughly the same on all devices. What the hell was going on?

Failed attempts

After googling for solutions, I tried a number of tweaks commonly suggested around the web:
  • Change DNS hosts
  • Change wireless channel
  • Change the wireless channel width
  • Use a different security mode (WPA2 personal)
  • Shut off firewalls
  • Enable or disable IPv6 settings
  • Reboot the router
None of these worked. 

The solution

Out of desperation, I started tweaking random settings on my router and stumbled across one that finally worked. The directions for other routers may be a little different, but here's what I did:
  1. Go to http://192.168.1.1 and log in to your router. If you've never done this, look for instructions that came with your router or search online for the default username and password.
  2. Find a page that has QoS settings. For the E1200, you need to click on "Applications & Gaming" and select the "QoS" sub-menu.
  3. Disable WMM support.
  4. Click save.
That's it. The second I disabled WMM support, the download speeds for my laptop and iPhone both jumped to 24 Mbps, perfectly matching my desktop. 

What the hell is WMM?

WMM (Wi-Fi Multimedia) is apparently an 802.11e feature that provides higher priority for "time-dependent" traffic, such as video or voice. In theory, this should make things like VoIP calls and video chat (e.g. Skype) perform better. In practice, having it enabled destroyed my Wi-Fi download speeds. Since I disabled it, my Wi-Fi has been blazing fast and I've seen no negative side effects.

If anyone has more information as to why this would be the case, please share it here.

Update (09/13): some nitty-gritty details

In the last year, this post has had over 100k views and helped many people fix their download speeds, which makes me happy. Other folks have been eager to share advice too: I got an email from Russ Washington in Atlanta, who did some impressive investigative work to uncover a potential underlying cause. In case it helps others, here is his email:
Yevgeniy: I ran into your blog post "Got slow download but fast upload speeds over wireless? Here's a fix." I have some info you may find useful. 
This happened to me too when I moved to Comcast - but I had DSL running in parallel. The Comcast traffic had this problem but the DSL did not. Also, it affected my Linksys router when it had stock firmware *and* after switching to DD-WRT. Clearly the traffic itself was at issue, so I broke out the packet sniffer. 
*All* inbound Comcast traffic (Internet --> client) was tagged with a DSCP value of 8 (Class Selector 1). The DSL traffic had a DSCP value of 0. So Comcast is tagging all traffic to be treated a certain way by QoS: "Priority," which sounds good but is actually the second-*lowest* possible. 
WMM, itself a QoS technique, apparently de-prioritizes (drops?) based on the Comcast-supplied value. Turning off WMM worked around it - but since WMM is part of the 802.11n spec, I wanted root cause. Judiciously replacing that set-by-Comcast DSCP value does the trick. 
So between my Linksys router and both ISPs, I had a Netscreen firewall. It lets me set DSCP values by policy - so I told it to match the DSL (DSCP 0). This yielded great improvement. However, I was still not getting full speed so even a zero value was not the best for > DSL rates. I set the DSCP value to 46 (Expedited Forwarding) and bingo, up to 20Mbps, almost full provisioned speed (25Mbps). 
Why only download issues? Because the only Comcast-tagged packets are the inbound ones: Internet --> you, including those big data packets. When uploading, yes, you get sent ACK packets and such - but they are tiny connection-control packets. I imagine WMM weirds out on them too, but you (usually) wouldn't notice when doing multi-Mbps speed tests. 
I am still trying to understand WMM, but this was a big find, and I was lucky to have a firewall that let me packet-tweak. Hope you find the info useful. 
Russ Washington
Atlanta, GA
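For anyone who wants to relate Russ's DSCP numbers to what a packet sniffer shows: DSCP occupies the top six bits of the IP header's TOS byte (the low two bits are ECN). A small Python illustration of that mapping (my addition, not part of Russ's email):

```python
# DSCP is the top 6 bits of the IP TOS / Traffic Class byte;
# sniffers often display the raw byte, so shift right by 2 to
# recover the DSCP code point.
def dscp_from_tos(tos: int) -> int:
    return tos >> 2

print(dscp_from_tos(0x20))  # 8  -> CS1, the value Comcast was setting
print(dscp_from_tos(0xB8))  # 46 -> EF (Expedited Forwarding)
print(dscp_from_tos(0x00))  # 0  -> best effort, like the DSL traffic
```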

Seven Languages in Seven Weeks: Prolog, Day 3

After a rocky day 2 of Prolog, I'm back for a 3rd day in my Seven Languages in Seven Weeks series of blog posts.

Prolog, Day 3: Thoughts

Today, I got to see Prolog flex its muscles. After just 2 days of using the language, we were already able to use it to solve two relatively complicated puzzles: Sudoku and eight queens. Even more impressively, the Prolog solutions were remarkably elegant and concise.

I had complained on the previous day that, for simple problems, the Prolog approach did not communicate its intent particularly well. Day 3 turns that completely around. For example, let's check out the 4x4 Sudoku solver in the book. Here's how you run it:


And here is the code:


Even if you don't know Prolog, the code is so eminently declarative and visual that you can still get an idea of what's going on. We break the 4x4 puzzle down into individual elements, rows, columns, and squares. After that, we just apply some constraints to them: all elements must have a value between 1 and 4 (fd_domain) and the values in each row, column, and square must be different (fd_all_different).
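The solver leans on GNU Prolog's finite-domain predicates. Condensed from memory, it looks roughly like this (details may differ from the book's listing):

```prolog
% 4x4 solver sketch (GNU Prolog; fd_domain and fd_all_different
% are GNU-Prolog-specific finite-domain predicates).
sudoku(Puzzle, Solution) :-
    Solution = Puzzle,
    Puzzle = [S11, S12, S13, S14,
              S21, S22, S23, S24,
              S31, S32, S33, S34,
              S41, S42, S43, S44],

    % every cell holds a value from 1 to 4
    fd_domain(Solution, 1, 4),

    Row1 = [S11, S12, S13, S14],
    Row2 = [S21, S22, S23, S24],
    Row3 = [S31, S32, S33, S34],
    Row4 = [S41, S42, S43, S44],
    Col1 = [S11, S21, S31, S41],
    Col2 = [S12, S22, S32, S42],
    Col3 = [S13, S23, S33, S43],
    Col4 = [S14, S24, S34, S44],
    Square1 = [S11, S12, S21, S22],
    Square2 = [S13, S14, S23, S24],
    Square3 = [S31, S32, S41, S42],
    Square4 = [S33, S34, S43, S44],

    % the values in every row, column, and square must differ
    valid([Row1, Row2, Row3, Row4,
           Col1, Col2, Col3, Col4,
           Square1, Square2, Square3, Square4]).

valid([]).
valid([Head | Tail]) :- fd_all_different(Head), valid(Tail).
```

You query it with a partially filled list, using underscores for the blanks, and Prolog fills in the rest.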

And that's it.

A few lines of code and the Prolog compiler figures out values that satisfy these criteria to get you a solution. Although not an entirely even comparison, take a look at Sudoku solvers I found online in Ruby, Java, and C++. I'm sure each of these imperative solutions could be made prettier; perhaps they are faster; but none of them comes close to the declarative solution in terms of communicating the code's intent.

6x6 Sudoku

Modify the Sudoku solver to work on six-by-six puzzles, where squares are 3x2. Also, make the solver print prettier solutions.

The code:


The output:


I took the easy way out on this problem, just extending the 4x4 solver to handle 6x6 puzzles with some copy and paste.

9x9 Sudoku

Modify the Sudoku solver to work on nine-by-nine puzzles.

The code:


The output:


With an even bigger puzzle, I finally decided to avoid copy and paste and build something more generic. The code above should be able to solve any NxN puzzle, where N is a perfect square (4x4, 9x9, 16x16, etc.).

The approach is the same as before: ensure the values are all in the range 1..N, carve them into rows, columns, and squares, and check that no value in each row, column, or square repeats.

The bulk of the work is done by a rule called slice:


The goal of slice is to chop the Puzzle into a list of N sublists. Each sublist represents a row, column, or square (depending on the variable Type) in Puzzle. The slice rule takes one element at a time from Puzzle and uses one of the slice_position methods to put this element into the proper spot in its sublist. For example, here is slice_position for rows:
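The row case can be sketched like this (my reconstruction; the exact argument order in the original may differ):

```prolog
% Element I of the flattened puzzle belongs to row I // Size,
% at offset I mod Size within that row.
slice_position(rows, Size, I, X, Y) :-
    X is I // Size,
    Y is I mod Size.
```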


For each element I of Puzzle, we first figure out which row (sublist) it lives in: X = I // Size. The // in Prolog is a shortcut for integer division. We then figure out where in that sublist the element belongs: Y = I mod Size. Pretty simple. Squares, however, are a lot more complicated:


To get the math right on this one, I got some help from Hristo. He even posted his reasoning on the Math StackExchange to see if anyone could come up with a formal proof for his formula. Once that piece was in place, the Sudoku solver was pumping out solutions to 9x9 puzzles in no time.

Wrapping up Prolog

Prolog is a fascinating language. If you've done imperative programming your whole life, you really owe it to yourself to try it out. It's a refreshingly different approach to problem solving that will definitely impact the way you think.

I found it particularly bizarre to be manipulating the solution or output of some programming puzzle, even though the solution wasn't yet known! Of course, in Prolog, you're not actually manipulating the solution; you're merely describing and defining it. Sometimes it was easy to invert my thinking this way; at other times, it was brutally difficult, like trying to mentally reverse the direction of the spinning girl illusion.

Unfortunately, much like Io before it, Prolog suffers from the lack of an active online community. You can find some information via Google and StackOverflow, but it's often sparse and incomplete. The documentation is a bit scattered, seems to be written in a very academic language, and is often more confusing than helpful. Worst of all, there are several flavors/dialects of Prolog and the code from one often doesn't work in another.

Having said all that, I have a suspicion that declarative programming is going to grow in popularity in the future (10+ years). Being able to just tell the computer what you want instead of how to get it could provide enormous leverage for programmer productivity and creativity. Of course, I think we'll need a language more intuitive and expressive than Prolog, as well as a smart enough compiler to understand it, but the declarative approach to coding seems like a much bigger leap forward than, say, the whole object oriented vs. functional programming debate.

Onto the next chapter!

Changing gears one more time, head over to Scala, Day 1 to learn about a language that mixes OO and functional programming.

Seven Languages in Seven Weeks: Prolog, Day 2

Today is the second day of Prolog in the Seven Languages in Seven Weeks series of blog posts. You can find the first day of Prolog here.

Prolog, Day 2: Thoughts

Today, Prolog broke my brain. The chapter started with recursion, lists, tuples, and pattern matching, all of which are tolerable if you've had prior exposure to functional programming. However, after that, we moved on to using unification as the primary construct for problem solving, and the gears in my head began to grind.

It took me a while to wrap my head around using unification, but once it clicked, I was both elated and disappointed. I was elated because (a) I often get that way when I learn something new and (b) the book was clearly getting me to think about programming in a totally new manner. However, I was disappointed because even accomplishing something as trivial as adding all the values in a list required a recursive solution that seemed unnecessarily complicated:
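Here's a sketch of the kind of code I mean (the predicate name is mine):

```prolog
% The sum of the empty list is 0; the sum of any other list is
% its head plus the sum of its tail.
sum([], 0).
sum([Head | Tail], Total) :-
    sum(Tail, TailTotal),
    Total is Head + TailTotal.
```

Querying sum([1, 2, 3], X) unifies X with 6.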


The algorithm starts with a base case: the sum of an empty list is 0. We then add a recursive case: the sum of a non-empty list is the head of the list plus the sum of the tail. This isn't too bad once you get used to it, but I find that this sort of code does not communicate its intent well at all. That is, I'm a believer that "programs must be written for people to read, and only incidentally for machines to execute" (Structure and Interpretation of Computer Programs) and this Prolog code seems to be the exact opposite. Even for something as trivial as adding the values in a list, I find myself distracted by the need to do pattern matching on the list, recursive calls, and base cases.

In fact, while I have no doubt that the declarative approach is very powerful for certain types of problems, it's not exactly what I expected when I first heard of declarative programming. Conceptually, I thought declarative programming would be all about describing what the solution looks like. From the Prolog I've seen so far, which is admittedly very little, I feel like what we're actually doing is setting up elaborate "traps" to force the unification engine to fill in the proper values for our variables.

As a counter-example, here's how an "ideal" declarative programming language might let me define the sum of a list:


To me, the "code" above screams its intent far more clearly than the recursive Prolog solution. An even clearer example comes later in this blog post, where I sort a list using Prolog. While writing this sorting code, I felt like I was playing a game of "how do I set up my rules and atoms to arm-twist unification into sorting?" If I had designed Prolog using a coding backwards approach, I would've strived to let the user define a sorted list in a much more natural manner:


(Update: turns out it is possible to do something close to this. See the end of the post.)

However, I admit freely that I'm no expert on compilers or language design, so perhaps I'm being naive. Maybe there's no way to define a syntax or compiler that can handle such simple looking definitions in the general case. Perhaps the unification approach in Prolog is as close as we can get and I just need time until my brain gets used to it more.

Prolog, Day 2: Problems

Reverse the elements of a list


Find the smallest element of a list


A pretty simple problem, except that there doesn't seem to be a way to do an if/else statement in Prolog. Instead, to handle the possible outcomes of TailMin =< Head and TailMin > Head, I had to use pattern matching and a bit of copy & paste. If anyone knows a more DRY way to do this, please let me know.
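Something along these lines (predicate names are mine); note the near-duplicate recursive clauses:

```prolog
% The smallest element of a one-item list is that item; otherwise
% compare the head against the minimum of the tail. The two
% recursive clauses differ only in which value "wins".
smallest([X], X).
smallest([Head | Tail], Head) :-
    smallest(Tail, TailMin),
    Head =< TailMin.
smallest([Head | Tail], TailMin) :-
    smallest(Tail, TailMin),
    TailMin < Head.
```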

Sort the elements of a list


I implemented merge sort. In retrospect, this seems to somewhat defeat the purpose of declarative programming. Instead of describing the solution (what a sorted list looks like), I'm effectively describing the individual steps it takes to sort a list. In other words, this is awfully close to imperative programming.

Unfortunately, I couldn't think of a more "declarative" way of solving this. I initially wrote something similar to a selection sort: it recursively called sort_list on the tail until we got down to one item. At that point, it would use the merge method to arrange the head and tail in the proper order. As we went back up the recursion stack, the merge method would insert the head in the proper spot in the partial sublist. This was obviously less efficient, but at least the sort_list method looked declarative. If only there was a way to define it without a merge step, I'd be in business.

Fibonacci series


I ran into two gotchas writing a Fibonacci function. First, I had to remember that the recursive calls to "fib" are not really function calls, so you can't just directly pass "N - 1" or "N - 2" as parameters. Second, when defining N1 and N2, you need to use the "is" keyword instead of the equals ("=") sign.
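A version that steers around both gotchas might look like this (a sketch, not necessarily my original code):

```prolog
% N1 and N2 must be bound with "is"; writing fib(N - 1, F1) would
% pass the unevaluated term N - 1 rather than the number.
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N - 2,
    fib(N1, F1),
    fib(N2, F2),
    F is F1 + F2.
```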

Factorial



UPDATE: eating humble pie

OK, I was wrong. It turns out you can write Prolog code that looks much more declarative and much less imperative. After reading through Day 3 of Prolog, I was a bit wiser and was able to write the following code for sorting:
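It was something along these lines (a permutation-based sketch; the original may have differed in details):

```prolog
% Sorted is a permutation of List whose elements never decrease.
sort_list(List, Sorted) :-
    permutation(List, Sorted),
    is_sorted(Sorted).

is_sorted([]).
is_sorted([_]).
is_sorted([A, B | Tail]) :-
    A =< B,
    is_sorted([B | Tail]).
```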


I'm sure it's not as fast as the merge sort I wrote on my first attempt, but the intent is much clearer: I'm quite obviously describing what a sorted list should look like instead of walking through the steps of how to build one. The recursion and various language quirks of Prolog still take some getting used to, but from a readability perspective, I'm much happier with what I've been seeing since reading Day 3.

Moving right along

Check out Prolog: Day 3 for some more declarative goodness, including an elegant Sudoku solver.

Seven Languages in Seven Weeks: Prolog, Day 1

After finishing up Io, it's time to shift gears yet again in my Seven Languages in Seven Weeks series of blog posts. This time, we turn to something radically different: Prolog.

Prolog, Day 1: Thoughts

The main goal of Seven Languages in Seven Weeks is not actually to teach you seven new languages, but to teach you seven new ways of thinking. In fact, the languages in the book are deliberately chosen to represent a wide spectrum of approaches to programming problems.

While the first two languages, Ruby and Io, felt pretty familiar, the third one is a totally different kind of beast. Prolog is my first exposure to declarative programming and definitely a new way of thinking. All the previous languages followed an imperative model: you write code to tell the compiler what to do one step at a time to arrive at some result. In declarative programming, you actually start by describing the result you want and the compiler figures out the steps to get you there.

As an example, consider sorting a collection of integers. In an imperative language, you would describe all the steps of a sorting algorithm:
  1. Divide the collection into sublists of size 1.
  2. Merge pairs of sublists together into a new sublist, keeping the values in sorted order.
  3. Continue merging the larger sublists together until there is only 1 list remaining.
With declarative programming, you would instead describe what the output list should look like:
  1. It has every element in the original list.
  2. Each value at position i in the output list is less than or equal to the value at position i + 1.
And that's it. The Prolog compiler would take this description and figure out how to assemble a list that matches it.

Well, that's the theory, anyway. After the first chapter, I've only gotten a small taste of this model of programming, so I'm still finding it hard to judge (a) how hard it would be to describe something more complicated than sorting and (b) whether the compiler could come up with efficient solutions.

Nevertheless, many of us have been using a (limited, non-Turing-complete) form of declarative programming for years: HTML. Instead of writing procedural code that instructs the browser how to render the page pixel by pixel, HTML lets you describe what the result should look like and the browser figures out how to render it for you.

Prolog, Day 1: Problems

Books and authors

Make a simple knowledge base representing some of your favorite books and authors. Find all books in your knowledge base written by one author.

Knowledge base:


Queries:


Music and instruments

Make a knowledge base representing musicians and instruments. Also represent musicians and their genre of music. Find all musicians who play the guitar.

Knowledge base:


Queries:


Normalization?

For the books knowledge base, I defined the rules in a "normalized" style as I might use for a SQL database. Looking back at it now, I'm not sure this is the best way to do it. It doesn't seem like I can do anything meaningful with the "normalized" rules other than, perhaps, checking if a given atom is valid.

For the music knowledge base, I only defined the relationships and not any individual atoms. This seems closer to the style in the book and can field the same queries, but with less code. I would hazard a guess that this is the proper way to do it, but I'd love to hear back from anyone who has had more than 1 day of exposure to Prolog.

Day 2

For more Prolog goodness, continue on to Prolog, Day 2.

Seven Languages in Seven Weeks: Io, Day 3

Today is the final chapter of Io in the Seven Languages in Seven Weeks series of posts. You can find the previous day of Io here.

Io, Day 3: Thoughts

Although I'm only on the second language out of seven in the book, a pattern is emerging: day 1 is very basic syntax, day 2 is more advanced syntax, and day 3 shows you some of the advanced applications that set the current language apart from all the others. 

It's a great strategy: each jump is small enough that you can follow along, but big enough that you're able to get a thorough look at the language in just a few days. In fact, my biggest complaint so far is that the examples in the final day of Io are very intriguing, but also very short, so I'm dying to see more.

In a single chapter, we tore through metaprogramming and concurrency in the span of just a few pages. It was tough to appreciate it all in such a short time. I was able to get a little more practice with Io metaprogramming by implementing a super simple "doInTransaction" method similar to the one I created in Ruby:



The idea was to be able to run some code, such as starting and ending a transaction, before and after a "block" of statements. For added fun, I wanted to be able to support curly braces for defining blocks. Accomplishing both was trivial by taking advantage of the fact that Io treats "{" as the message "curlyBrackets". Handle that message properly in your object, add some basic introspection, and you're done.

Unfortunately, I wasn't able to think of a suitable "toy" example to learn more about coroutines. I'm still fuzzy on a lot of the nuances, such as how memory is shared between the "threads", how many threads there are, and how yield and resume really interact. I'd love to see some more examples, especially those that show a practical use case for Io actors.

Io, Day 3: Problems

Enhance Builder XML

Enhance the XML program (see the original source, the original test file, and the original output) to add spaces to show the indentation structure. Also, enhance the XML program to handle attributes: if the first argument is a map (use the curly brackets syntax), add attributes to the XML program. For example, book({"author": "Tate"}..) would print <book author="Tate">.

This is an awesome example of Io's flexibility and power when it comes to creating DSLs. In some 30 lines of code, Io can process this Builder format:


To produce the equivalent HTML output:


Most of the (surprisingly concise and elegant) builder source code came from the book. Here's my updated version that handles indentation and attributes:


The biggest stumbling point was trying to use addAssignOperator in the same file as the test script. This doesn't work: the OperatorTable has already been loaded and can't be changed. By splitting the code into two files, one for source and one for testing, I was able to properly handle the colon and avoid the very frustrating "Sequence does not respond to ':'" error.

Create a list syntax that uses brackets


A much easier problem, but another great example of the flexibility of Io: Ruby-like syntax for lists in just a couple lines of code.

Wrapping up Io

This was the last day of Io and I must admit, I'm a bit sad to see it go. It's a beautiful example of just how simple and flexible a language can be. Of course, being able - and tempted - to change just about anything is a bit of a double-edged sword: more than once I saw unexpected consequences from overriding the "forward" method. However, it's undeniably powerful. If nothing else, Io has made me more excited to learn about the Lisp family, with Clojure being the 6th language in the book.

I wish I got to see some more examples of concurrency in Io, but the book was pretty sparse in that area. Even worse, I can't find much online. Unfortunately, Io's community is tiny. It's hard to justify spending too much time on a language that, in all honesty, I'll probably never use in any capacity besides learning.

Time for something new

Continue on to Prolog, Day 1, to learn about a radically different style of programming.

Seven Languages in Seven Weeks: Io, Day 2

Today is Day 2 of Io in my Seven Languages in Seven Weeks series of blog posts. You can check out Day 1 of Io here.

Io, Day 2: Thoughts

Day 2 made some huge leaps and bounds over the basic syntax introduced in Day 1. The key learning from this day is that in Io, just about everything is a message sent to an object. There aren't separate semantics for calling functions, using control structures, or defining objects: those are all just objects reacting to some sort of message.

One of the most startling examples of this is the realization that even the basic operators in Io, such as +, -, and *, are actually messages. That is, the code "2 + 5" is actually understood as the message "+" being sent to the object 2 with 5 as a parameter. In other words, it could be re-written as "2 +(5)". The "+", then, is just a method defined on the number object that takes another number as a parameter.

This makes supporting operators on custom objects simple: all I have to do is define a "slot" with the operator's name. For example, here's an object for complex numbers that can be used with the "+" operator:
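A sketch of what that looks like (object and slot names are mine):

```io
// Complex numbers as a cloned prototype with a "+" slot.
ComplexNumber := Object clone
ComplexNumber real := 0
ComplexNumber imag := 0
ComplexNumber setSlot("+", method(other,
    result := ComplexNumber clone
    result real := self real + other real
    result imag := self imag + other imag
    result
))

a := ComplexNumber clone
a real := 3
a imag := 4
b := ComplexNumber clone
b real := 1
b imag := 2
c := a + b
c real println
c imag println
```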



I found this fairly eye-opening. As I think of the syntax of other languages I'm used to, such as Java, there are "special cases" all over the place. For example, the "+" operator has special code to handle addition for numbers and String concatenation, and nothing else; for loops, while loops, if statements, class definitions, and so on are all special syntax features. In Io, they are all just objects responding to messages.

Io, Day 2: Problems

Fibonacci

Write a program to find the nth Fibonacci number. Both the recursive and iterative solutions are included:



Safe division

How would you change the "/" operator to return 0 if the denominator is zero?
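One common approach is to stash the original slot and delegate to it (a sketch; the stashed slot name is mine):

```io
// Keep a reference to the built-in "/", then redefine it to
// return 0 when the denominator is zero.
Number origDiv := Number getSlot("/")
Number setSlot("/", method(denom,
    if(denom == 0, 0, self origDiv(denom))
))

(6 / 2) println
(6 / 0) println  // now 0 instead of an infinite result
```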


2d add

Write a program to add up all the values in a 2-dimensional array.


myAverage

Add a slot called "myAverage" to a list that computes the average of all the numbers in a list. Bonus: raise an exception if any item in the list is not a number.
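A sketch of one way to do it (the exception message is mine):

```io
// Average of a list, raising an exception on non-numeric items.
List myAverage := method(
    self foreach(v,
        if(v isKindOf(Number) not,
            Exception raise("list contains a non-number")))
    total := 0
    self foreach(v, total = total + v)
    total / self size
)

list(1, 2, 3, 4) myAverage println  // 2.5
```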


Two Dimensional List

Write a prototype for a two-dimensional list. The dim(x, y) method should allocate a list of y lists that are x elements long. set(x, y, value) should set a value and get(x, y) should return that value. Write a transpose method so that new_matrix get(y, x) == original_matrix get(x, y). Write the matrix to a file and read the matrix from a file.


Guess Number

Write a program that gives you ten tries to guess a random number from 1-100. Give a hint of "hotter" or "colder" for each guess after the first one.


On to day 3!

Continue on to day 3 of Io here.

Seven Languages in Seven Weeks: Io, Day 1

Welcome to the first day of Io in my Seven Languages in Seven Weeks series of blog posts. After spending a few days playing around with Ruby, Io is definitely a change of pace.

Io, Day 1: Thoughts

From what I've seen so far, Io is a prototype-based language (similar to JavaScript) with extremely minimal syntax (none of Ruby's syntactic sugar). Objects are just collections of "slots" that contain either data or methods, and you interact with objects by passing them messages. To give you a taste, here are some snippets:

We'll start with the classic Hello World:
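The whole program is a single message send:

```io
"Hello, World!" println
```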


The way to think about this in Io terms is that you are passing the "println" message to the "Hello, World!" String object. I must note that having a space between object and message makes the code noticeably harder for my mind to parse. If the code had used a dot instead - "Hello, World!".println - I would've found it much easier! As it is, perhaps because I'm not used to it, my comprehension is slowed and my aesthetic sense is tingling.

Here's a simple example of defining variables and methods:


Method calls look similar to most languages I'm used to: "method param1, param2, ..." However, I wonder if the Io way of looking at it is that the speak method is an object and the phrase parameter is the message?

Finally, here's an example that shows objects and prototypal inheritance:


In prototype-based languages, the distinction is blurred between a "class" - that is, some sort of template defining an object and its behavior - and an "instance" of that class. In Io, they are pretty much one and the same: you just clone an existing object to create a new one, whether you intend them as instances or templates.

The one place where "instances" do differ from "classes", however, is by convention: the class-like objects are usually named with an upper case first letter (Dog, Cat) while the instance-like objects are named with a lower case first letter (myDog, myCat). I suppose this sort of design greatly simplifies the language, as there's no need for special syntax, constructs, or rules for "classes".

Io, Day 1: Problems

The day 1 problems in this book are always very basic. I skipped a few of the really simple ones as they are not too interesting.

Io typing

Evaluate 1 + 1 and then 1 + "1". Is Io weakly or strongly typed?
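Roughly what the REPL session shows (exact error wording varies by version):

```io
1 + 1     // 2
1 + "1"   // raises an exception rather than coercing the string
```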


As you can see above, Io is a strongly typed language.

Dynamic code slot

Execute the code in a slot given its name.


Explanation: the "System" object contains various system properties and methods. I send it the "args" message to get the command line arguments. I then use the "at" method to access the argument at a given index: in Io, index 0 holds the name of the app (DynamicCodeSlot.io) and index 1 is the first argument (foo or bar).

By calling the "getSlot" method, I get back the object stored at the slot named as a command line argument. Finally, the "call" method does what you'd expect: it calls that slot.
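Putting that together, the script is only a few lines (the slot bodies are my own):

```io
// DynamicCodeSlot.io -- run as: io DynamicCodeSlot.io foo
foo := method("foo was called" println)
bar := method("bar was called" println)

name := System args at(1)
getSlot(name) call
```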

Io, Continued

Continue on to Io, Day 2.