Seven Languages in Seven Weeks: Erlang, Day 2

After learning some basic Erlang syntax on Day 1, I take on the second Erlang chapter, which introduces some more interesting concepts.

Erlang, Day 2: Thoughts

I'm finding it very easy to dive into Erlang. After going through the Prolog and Scala chapters of this book, as well as making heavy use of Scala at work, the functional constructs used in Erlang feel natural. I've grown very fond of pattern matching in the last few months and have found it to be a very powerful tool for expressing complex concepts in a very concise and readable manner. Erlang's heavy reliance on pattern matching makes me happy.

However, the syntax does feel slightly clunky: I constantly forget to end lines with dots, and separating the clauses of control structures with semicolons gets annoying. I suspect this is something you get used to. Still, the end result, at least in the dead-simple code snippets I've looked at so far, is pleasantly readable.

Erlang, Day 2: Problems

List lookup

Consider a list of keyword-value tuples, such as [{erlang, "a functional language"}, {ruby, "an OO language"}]. Write a function that accepts the list and a keyword and returns the associated value for the keyword.
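As a sketch of the recursion involved (in Python rather than Erlang, and returning None for a missing keyword is my own choice; Erlang would more likely just fail to match):

```python
# Recursive lookup over a list of (keyword, value) pairs, mirroring
# Erlang's head/tail pattern matching.
def lookup(pairs, keyword):
    if not pairs:                      # base case: keyword not found
        return None
    (key, value), rest = pairs[0], pairs[1:]
    if key == keyword:                 # head matches: return its value
        return value
    return lookup(rest, keyword)       # otherwise, recurse on the tail

languages = [("erlang", "a functional language"), ("ruby", "an OO language")]
print(lookup(languages, "ruby"))       # an OO language
```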

My implementation:


Shopping list price

Consider a shopping list that looks like [{item, quantity, price}, ...]. Write a list comprehension that builds a list of items of the form [{item, total_price}, ...] where total_price is quantity times the price.
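The comprehension is nearly identical in Python (a sketch with made-up items and prices):

```python
cart = [("pencil", 4, 0.25), ("pen", 1, 1.20), ("paper", 2, 0.10)]

# One comprehension, like Erlang's [{Item, Quantity * Price} || ...]
totals = [(item, quantity * price) for item, quantity, price in cart]
print(totals)
```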

My implementation:

Sample usage:


Tic-tac-toe

Write a program that reads a tic-tac-toe board presented as a list or a tuple of size nine. Return the winner (x or o) if a winner has been determined, cat if there are no more possible moves, or no_winner if no player has won yet.

My implementation:

Sample usage:

My first tic-tac-toe solution was a bit more complex, using recursion to scan all rows, columns and diagonals. However, I found that for a 3x3 board, the simple pattern matching approach, while somewhat verbose, was much easier to read.
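For reference, the scanning variant looks something like this in Python (a sketch, assuming "e" marks an empty cell; the Erlang pattern matching version instead matches whole board tuples in the function head):

```python
# Every winning line on a 3x3 board, as indexes into a 9-element list.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "e" and board[a] == board[b] == board[c]:
            return board[a]                        # "x" or "o"
    return "no_winner" if "e" in board else "cat"  # "e" marks an empty cell

print(winner(["x", "o", "x",
              "o", "x", "o",
              "x", "o", "x"]))  # x
```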

Seven Languages in Seven Weeks: Erlang, Day 1

After a long hiatus, I'm finally back to working my way through Seven Languages in Seven Weeks. After finishing up Scala, I'm now on the 5th language, Erlang, though it has taken me quite a bit longer than 5 weeks to get here.

Erlang, Day 1: Thoughts

The first Erlang chapter is just a gentle introduction to the language, so I haven't formed much of an impression of it yet. So far, it looks like a dynamically typed functional programming language with Prolog syntax and pattern matching. Of course, I mostly know of Erlang for its concurrency story, so I'm excited to experiment with that in later chapters.

Erlang, Day 1: Problems

Write a function that uses recursion to return the number of words in a string

Write a function that uses recursion to count to ten

Write a function that uses matching to selectively print "success" or "error: message" given input of the form {error, Message} or success
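A sketch of the third problem's logic, in Python for illustration (Erlang would match {error, Message} or the success atom directly in the function heads; here a tuple stands in for the Erlang tuple):

```python
def report(result):
    if result == "success":
        return "success"
    tag, message = result          # unpack the (tag, message) pair
    if tag == "error":
        return f"error: {message}"
    raise ValueError("unmatched input")

print(report("success"))                    # success
print(report(("error", "file not found")))  # error: file not found
```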

On to day 2

Check out Erlang, Day 2, for more functional programming goodness.


Seven Languages in Seven Weeks: Scala, Day 3

After some functional programming on day two, it's time for the third and final day of Scala in Seven Languages in Seven Weeks.

Scala, Day 3: Thoughts

After two lengthy chapters on the object oriented and functional programming syntax/options in Scala, the third day rushes through some of the most intriguing features, including pattern matching and concurrency via actors. I would have preferred to spend a bit more time on these complicated topics.

In fact, I had the same complaint on Day 3 of Io, where we also blasted through a discussion of concurrency in just a few pages. I respect the difficulty of plowing through seven different languages in a single book and don't expect deep discussions of any one of them, but I think the book would've been stronger with a greater bias towards the more advanced "day 3" topics of each language instead of the basic syntax discussions of days 1 and 2.

Scala, Day 3: Problems

Extend the "sizer" application to count and size links

Take the sizer application (code, output) and add a message to count the number of links on a page. Follow these links and calculate their size as well, so you get the total size for each page.

The code:

The output:

This problem was a great way to experiment with actors in Scala. The sequential solution is self-explanatory, so here's an outline of the concurrent one:

  1. The caller creates B Base Actors, one for each of the B base URLs.
  2. The caller then calls receive to await a message from each Base Actor.
  3. Each Base Actor concurrently loads its base URL, finds the links on the page, and creates L Link Actors, one for each of the L links on the page.
  4. The Base Actor then calls receive to await a message from each of its Link Actors.
  5. Each Link Actor concurrently loads the page for its given link and sends a message to its parent Base Actor with the size of that page.
  6. When the Base Actor has received a message from each of its Link Actors, it sends a message to the caller with the total size.
  7. When the caller has received B messages from the Base Actors, we are done.
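For a rough feel of the shape of this pipeline, here's the same two-level fan-out sketched in Python with a thread pool (fetch_size and find_links are fake stand-ins for the real HTTP and parsing code, so the sketch stays runnable):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_size(url):
    # Stand-in for an HTTP fetch; pretend the page size is proportional
    # to the URL length so the sketch is deterministic.
    return len(url) * 1000

def find_links(url):
    # Stand-in for parsing the page for links.
    return [url + "/a", url + "/bb"]

def base_actor(url, pool):
    # Load the base page, then fan out one task per link (the "Link Actors").
    size = fetch_size(url)
    link_sizes = pool.map(fetch_size, find_links(url))
    return url, size + sum(link_sizes)  # total size reported to the caller

# The caller fans out one "Base Actor" per base URL and awaits the totals.
# max_workers must exceed the number of in-flight tasks to avoid starving
# the nested pool.map calls.
with ThreadPoolExecutor(max_workers=8) as pool:
    bases = ["http://example.com", "http://example.org"]
    for url, total in pool.map(lambda u: base_actor(u, pool), bases):
        print(url, total)
```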

The sequential code takes nearly 20 seconds to run while the concurrent code takes less than 3 seconds, a 7x improvement. The concurrent code is definitely more complicated, but not unreasonably so. Even though it was my first time using Scala actors, the code took less than 30 minutes to get working, much of it spent learning about the self keyword. In fact, I find it very easy to reason about Scala's actors, which is not something I can say for Java's synchronized keyword, Executors, Runnable, and various other concurrency constructs. 

Wrapping up Scala

I'm a bit torn when it comes to Scala. Most of the time, I saw it as a vastly improved version of Java. The support for closures, functional programming, pattern matching, and actors all seem like genuinely useful tools that would dramatically improve productivity, code readability, correctness, and expressiveness. I've already used Scala in a few of my projects to build some features that would've been nearly impossible or incredibly ugly in Java. 

However, even in my limited exposure to Scala, I've already come across a number of hiccups. As I mentioned on day 1, the API docs are not helpful and look like they are written for academics. The IDE support is vastly inferior compared to Java. I've now tried both Eclipse and IntelliJ, and each one has significant problems: e.g. missing compile errors on some code; finding errors on other code that's actually valid; broken/missing auto complete; issues with the refactor/rename functionality; poor support for running scripts. The compiler is slow. The type hierarchies are complicated. Type inference doesn't always work as well as you'd hope.  

However, there is one issue that worries me above all else: feature overload. It seems like Scala is trying to be all things to all people. It's object oriented; it's functional; it has type inference; it has lots of syntactic sugar; it has actors; it's compatible with Java; it has first class support for XML; they are even trying to add macros. While all of these features could lead to an incredibly powerful language, they could also lead to one that's incredibly complicated and difficult to use.


User experience counts. Not only for products, but for programming languages too.

How many features can you pile into one language before it becomes too cumbersome? How much syntactic sugar do you need to understand to be able to read or write code? How many paradigms and mental models do you need to juggle to follow along? Do so many options make a language more flexible or less? Will there be such a thing as "idiomatic Scala" or will it be a free-for-all? Is it better to have a dozen ways to do something or one "proper" and well-known way?

I don't know the answers to these questions, but I suspect they'll have a large impact on Scala's adoption. In the meantime, I'll keep hacking away at it to see what I can learn.

On to Erlang

Read on to learn about the next language in the book, Erlang.

Seven Languages in Seven Weeks: Scala, Day 2

After a bumpy start with Scala on Day 1, I've moved onto the second day of Scala in Seven Languages in Seven Weeks.

Scala, Day 2: Thoughts

The second Scala chapter shifts gears to functional programming. Unfortunately, I was impatient on Day 1 and had already looked up all of these concepts (and some more) to build a Tic Tac Toe game. As a result, I breezed through the chapter.

On a side note, I was using Scala on a personal project and rewrote some Java code using Scala. As much as I complained yesterday about Scala's complexity, the slow compiler, and poor IDE support, I must admit one thing: the resulting code was noticeably cleaner, shorter, and easier to read.

The language is certainly not perfect, but I need to make sure I'm not missing the forest for the trees: it's still likely a vastly superior alternative to Java.

Scala, Day 2: Problems

The functional programming problems in this chapter were extremely simple. I burned through them in a few minutes and present the code without further comment:

String foldLeft

Use foldLeft to compute the total size of a List of Strings.
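Python's reduce plays the role of foldLeft here (a sketch with a made-up word list):

```python
from functools import reduce

words = ["seven", "languages", "in", "seven", "weeks"]

# Fold from the left: the accumulator starts at 0 and each step adds
# the length of the next string, just like foldLeft(0)(_ + _.length).
total = reduce(lambda acc, s: acc + len(s), words, 0)
print(total)  # 26
```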


Censorship

Write a Censor trait with a method that will replace "curse" words with "clean" alternatives. Read the curse words and alternatives from a file and store them in a Map.
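A sketch of the same idea in Python (the one-pair-per-line, comma-separated file format is my own assumption, as are the shoot/pucky and darn/beans pairs):

```python
# Hypothetical file format: one "curse,alternative" pair per line.
def load_alternatives(path):
    with open(path) as f:
        return dict(line.strip().split(",") for line in f if line.strip())

def censor(text, alternatives):
    # Replace each "curse" word with its "clean" alternative.
    return " ".join(alternatives.get(word, word) for word in text.split())

alternatives = {"shoot": "pucky", "darn": "beans"}
print(censor("darn it all to shoot", alternatives))  # beans it all to pucky
```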



On to day 3

Learn about pattern matching and actors in Scala, Day 3.

Seven Languages in Seven Weeks: Scala, Day 1

It's time for a new chapter in the Seven Languages in Seven Weeks series: today, I take a crack at Scala.

Scala, Day 1: Thoughts

After using Java for years, I was curious to try out Scala, which has often been described as the next step in the evolution of Java. Scala's feature list is impressive: object oriented, functional, type inferencing, traits/mixins, currying, pattern matching, concise syntax, interoperability with Java code, an active community, and so on. My previous experiences with Scala had been very shallow/short, so I was excited to take a slightly deeper dive.

The first chapter introduced the imperative and object-oriented features of Scala and walked through the basic syntax. On the surface, the language looks like Java and uses many of the same keywords, so I was able to jump right in. However, I was quickly slowed down by some unexpected differences, including types specified after the variable name instead of before, "static" methods and fields separated into companion "objects" (a confusing choice of name), and method definitions sometimes including or omitting an equals sign and/or parentheses depending on whether they return values or take parameters.

I slowed down even more once I started looking at the functional programming concepts and, worst of all,  trying to make sense of the API docs. Although they look thorough, the docs are astonishingly bad when you actually try to use them. For example, here is all the documentation provided for the "reduce" method of Scala's List:


If you're new to Scala's syntax, new to functional programming, or just a hacker trying to get something done, this sort of documentation is almost useless. Plain, human English or an example would be an order of magnitude more useful. The reduce concept isn't actually that hard to understand, but parsing the dense syntax of the method signature and phrases like "associative binary operator" makes it seem like a PhD is necessary to use this language. Compare this to the reduce documentation for Ruby and underscore.js to see a world of difference.
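To illustrate, a single example conveys reduce far better than the signature does:

```python
from functools import reduce

# reduce combines the elements pairwise from the left:
# reduce(f, [a, b, c]) is f(f(a, b), c)
print(reduce(lambda x, y: x + y, [1, 2, 3, 4]))  # 10
```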

The type hierarchy also proved tough to navigate. For example, how do I find the closest common ancestor between List and MutableList? I thought it might be LinearSeq, but there seem to be separate mutable and immutable versions of it. Other classes/traits further up the hierarchy overlap, but are missing common methods I need, such as "collect" or "foldLeft". Overall, this basic search was much harder than, for example, finding the common ancestor between Java libraries like ArrayList and Vector: a glance at the top of the API doc and you're done.

I also ran into difficulties with type inference. It definitely saved me some typing and looked beautiful for simple cases and closure parameters. However, type inference couldn't handle many cases that seemed obvious. This was compounded by the sub-par IDE support, at least from IntelliJ 11, which took a while to get working in the first place. I routinely found that code IntelliJ accepted wouldn't actually compile. Oh, and the compiler is slow. Ridiculously slow, given the tiny snippets of code I was testing.

Having said that, I'm still a newbie to the language and shouldn't complain too much. I'm sure I'll get used to the code, API, and Scala idioms. Still, there is value in being "hacker friendly": one of the reasons Ruby, PHP, and JavaScript have such huge user bases is that you can get started with them in minutes. And there's also something to be said about complexity: Scala has a lot of features, syntax, and complicated concepts. I hope these make the language more powerful and expressive rather than bloated and incomprehensible.

Scala, Day 1: Problems

Build a two player Tic Tac Toe Game

The code:


Sample output:


I tried to keep the code fairly generic, so it should work for any NxN tic-tac-toe board. I also used this as an opportunity to play with some functional programming, so I intentionally stuffed everything into a List (albeit a mutable one) and avoided for loops, excessive objects, and so on. To be honest, I'm not thrilled with the result.

I was able to use lots of one-liners, but many are hard to read. I got familiar with the fold, map, and filter methods, but in some cases a for-loop would've been much cleaner (and faster). Overall, I just get the feeling that the code doesn't communicate its intent very well. I'd love some feedback on how a more seasoned Scala user would've tackled this problem. Would pattern matching be useful? Recursive calls on the head/tail of the List? Or is the imperative style with loops and a 2D array the best way to go?

On to day 2

Continue on to Scala, Day 2.

Got slow download but fast upload speeds over wireless? Here's a fix.

If you find that your wireless download speeds are abysmal while your upload speeds are pretty solid, especially with Apple devices, I've got a possible solution for you. I struggled with this issue for a while and decided to write down my findings in a blog post in case I, or anyone else, runs into it in the future.

tldr: disable WMM QoS in your router settings.

Symptoms

At home, I have the following setup: an E1200 router, a desktop connected to it with an Ethernet cable, and a laptop and iPhone on Wi-Fi.

Whenever I used my laptop or phone, the Wi-Fi connection felt incredibly slow. YouTube videos took forever to load, Google Maps tiles filled in slowly, and even Gmail felt unresponsive. On the other hand, my desktop, which was connected to the router via an Ethernet cable, worked just fine.

Numbers

To confirm my observations, I decided to take some bandwidth measurements using bandwidthplace.com, speakeasy.net, and speedtest.net for the laptop and the Speed Test app for the iPhone. The results were pretty consistent across all app and device pairs and looked something like this:

Desktop
  • Download: 24 Mbps
  • Upload: 4.5 Mbps
Laptop
  • Download: 0.65 Mbps
  • Upload: 4.5 Mbps
iPhone
  • Download: 0.58 Mbps
  • Upload: 4.4 Mbps

Yikes! My laptop and iPhone download speed were more than 30 times slower than my desktop's download speed! On the other hand, the upload speed was roughly the same on all devices. What the hell was going on?

Failed attempts

After googling for solutions, I tried a number of tweaks commonly suggested around the web:
  • Change DNS hosts
  • Change wireless channel
  • Change the wireless channel width
  • Use a different security mode (WPA2 personal)
  • Shut off firewalls
  • Enable or disable IPv6 settings
  • Reboot the router
None of these worked. 

The solution

Out of desperation, I started tweaking random settings on my router and stumbled across one that finally worked. The directions for other routers may be a little different, but here's what I did:
  1. Go to http://192.168.1.1 and log in to your router. If you've never done this, look for the instructions that came with your router or do a Google search to find the default username and password.
  2. Find a page that has QoS settings. For the E1200, you need to click on "Applications & Gaming" and select the "QoS" sub-menu.
  3. Disable WMM Support.
  4. Click Save.
That's it. The second I disabled WMM support, the download speeds for my laptop and iPhone both jumped to 24 Mbps, perfectly matching my desktop. 

What the hell is WMM?

WMM (Wi-Fi Multimedia) is apparently an 802.11e feature that gives higher priority to "time-dependent" traffic, such as video or voice. In theory, this should make things like VoIP calls and video chat (e.g. Skype) perform better. In practice, having it enabled destroyed my Wi-Fi download speeds. Since I disabled it, my Wi-Fi has been blazing fast and I've seen no negative side effects.

If anyone has more information as to why this would be the case, please share it here.

Update (09/13): some nitty-gritty details

In the last year, this post has had over 100k views and helped many people fix their download speeds, which makes me happy. Other folks have been eager to share advice too: I got an email from one Russ Washington in Atlanta, who did some impressive investigative work to uncover a potential underlying cause. In case it helps others, here is his email:
Yevgeniy: I ran into your blog post "Got slow download but fast upload speeds over wireless? Here's a fix." I have some info you may find useful. 
This happened to me too when I moved to Comcast - but I had DSL running in parallel. The Comcast traffic had this problem but the DSL did not. Also, it affected my Linksys router when it had stock firmware *and* after switching to DD-WRT. Clearly the traffic itself was at issue, so I broke out the packet sniffer. 
*All* inbound Comcast traffic (Internet --> client) was tagged with a DSCP value of 8 (Class Selector 1). The DSL traffic had a DSCP value of 0. So Comcast is tagging all traffic to be treated a certain way by QoS: "Priority," which sounds good but is actually the second-*lowest* possible. 
WMM, itself a QoS technique, apparently de-prioritizes (drops?) based on the Comcast-supplied value. Turning off WMM worked around it - but since WMM is part of the 802.11n spec, I wanted root cause. Judiciously replacing that set-by-Comcast DSCP value does the trick. 
So between my Linksys router and both ISPs, I had a Netscreen firewall. It lets me set DSCP values by policy - so I told it to match the DSL (DSCP 0). This yielded great improvement. However, I was still not getting full speed so even a zero value was not the best for > DSL rates. I set the DSCP value to 46 (Expedited Forwarding) and bingo, up to 20Mbps, almost full provisioned speed (25Mbps). 
Why only download issues? Because the only Comcast-tagged packets are the inbound ones: Internet --> you, including those big data packets. When uploading, yes, you get sent ACK packets and such - but they are tiny connection-control packets. I imagine WMM weirds out on them too, but you (usually) wouldn't notice when doing multi-Mbps speed tests. 
I am still trying to understand WMM, but this was a big find, and I was lucky to have a firewall that let me packet-tweak. Hope you find the info useful. 
Russ Washington
Atlanta, GA

Seven Languages in Seven Weeks: Prolog, Day 3

After a rocky day 2 of Prolog, I'm back for a 3rd day in my Seven Languages in Seven Weeks series of blog posts.

Prolog, Day 3: Thoughts

Today, I got to see Prolog flex its muscles. After just 2 days of using the language, we were already able to use it to solve two relatively complicated puzzles: Sudoku and eight queens. Even more impressively, the Prolog solutions were remarkably elegant and concise.

I had complained on the previous day that, for simple problems, the Prolog approach did not communicate its intent particularly well. Day 3 turns that completely around. For example, let's check out the 4x4 Sudoku solver in the book. Here's how you run it:


And here is the code:


Even if you don't know Prolog, the code is so eminently declarative and visual that you can still get an idea of what's going on. We break the 4x4 puzzle down into individual elements, rows, columns, and squares. After that, we just apply some constraints to them: all elements must have a value between 1 and 4 (fd_domain) and the values in each row, column, and square must be different (fd_all_different).

And that's it.

A few lines of code and the Prolog compiler figures out values that satisfy these criteria to get you a solution. Although not an entirely even comparison, take a look at Sudoku solvers I found online in Ruby, Java, and C++. I'm sure each of these imperative solutions could be made prettier; perhaps they are faster; but none of them comes close to the declarative solution in terms of communicating the code's intent.
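To make the constraints concrete, here's a Python checker for the 4x4 case (a checker, not a solver; the whole point of Prolog is that the engine searches for satisfying values, whereas this only verifies them):

```python
def valid_4x4(board):
    # board is a flat list of 16 values; carve out the rows, columns, and
    # 2x2 squares, then apply the same two constraints as the Prolog code.
    rows    = [board[i * 4:(i + 1) * 4] for i in range(4)]
    columns = [board[i::4] for i in range(4)]
    squares = [[board[r * 4 + c] for r in (br, br + 1) for c in (bc, bc + 1)]
               for br in (0, 2) for bc in (0, 2)]
    in_domain = all(v in (1, 2, 3, 4) for v in board)      # fd_domain
    all_diff = all(len(set(group)) == 4                    # fd_all_different
                   for group in rows + columns + squares)
    return in_domain and all_diff

print(valid_4x4([4, 1, 2, 3,
                 2, 3, 4, 1,
                 1, 2, 3, 4,
                 3, 4, 1, 2]))  # True
```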

6x6 Sudoku

Modify the Sudoku solver to work on six-by-six puzzles, where squares are 3x2. Also, make the solver print prettier solutions.

The code:


The output:


I took the easy way out on this problem, just extending the 4x4 solver to handle 6x6 puzzles with some copy and paste.

9x9 Sudoku

Modify the Sudoku solver to work on nine-by-nine puzzles.

The code:


The output:


With an even bigger puzzle, I finally decided to avoid copy and paste and build something more generic. The code above should be able to solve any NxN puzzle, where N is a perfect square (4x4, 9x9, 16x16, etc).

The approach is the same as before: ensure the values are all in the range 1..N, carve them into rows, columns, and squares, and check that no value in each row, column, or square repeats.

The bulk of the work is done by a rule called slice:


The goal of slice is to chop the Puzzle into a list of N sublists. Each sublist represents a row, column, or square (depending on the variable Type) in Puzzle. The slice rule takes one element at a time from Puzzle and uses one of the slice_position methods to put this element into the proper spot in its sublist. For example, here is slice_position for rows:


For each element I of Puzzle, we first figure out which row (sublist) it lives in: X = I // Size. The // in Prolog is a shortcut for integer division. We then figure out where in that sublist the element belongs: Y = I mod Size. Pretty simple. Squares, however, are a lot more complicated:
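A quick Python sketch of that index arithmetic, using a small Size = 3 board for brevity:

```python
size = 3  # for the real 9x9 puzzle, Size would be 9

# X = I // Size picks the row (sublist); Y = I mod Size picks the
# position within it, exactly as in the slice_position rule for rows.
for i in range(size * size):
    x, y = i // size, i % size
    print(f"element {i} -> row {x}, offset {y}")
```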


To get the math right on this one, I got some help from Hristo. He even posted his reasoning on the Math StackExchange to see if anyone could come up with a formal proof for his formula. Once that piece was in place, the Sudoku solver was pumping out solutions to 9x9 puzzles in no time.
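For a board whose side N is a perfect square, one standard way to map a flat index to its square (not necessarily the exact formula Hristo derived) looks like this:

```python
import math

def square_of(i, n):
    # Which s x s square does flat index i of an n x n board fall in?
    # Assumes n is a perfect square with side s = sqrt(n).
    s = math.isqrt(n)
    row, col = i // n, i % n
    return (row // s) * s + (col // s)

# For a 9x9 board, the first row spans squares 0, 0, 0, 1, 1, 1, 2, 2, 2.
print([square_of(i, 9) for i in range(9)])
```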

Wrapping up Prolog

Prolog is a fascinating language. If you've done imperative programming your whole life, you really owe it to yourself to try it out. It's a refreshingly different approach to problem solving that will definitely impact the way you think.

I found it particularly bizarre to be manipulating the solution or output to some programming puzzle, even though the solution wasn't yet known! Of course, in Prolog, you're not actually manipulating the solution, you're merely describing and defining it. Sometimes it was easy to invert my thinking this way; at other times, it was brutally difficult, like trying to mentally reverse the direction of the spinning girl illusion.

Unfortunately, much like Io before it, Prolog suffers from the lack of an active online community. You can find some information via Google and StackOverflow, but it's often sparse and incomplete. The documentation is a bit scattered, seems to be written in a very academic language, and is often more confusing than helpful. Worst of all, there are several flavors/dialects of Prolog and the code from one often doesn't work in another.

Having said all that, I have a suspicion that declarative programming is going to grow in popularity in the future (10+ years). Being able to just tell the computer what you want instead of how to get it could provide enormous leverage for programmer productivity and creativity. Of course, I think we'll need a language more intuitive and expressive than Prolog, as well as a smart enough compiler to understand it, but the declarative approach to coding seems like a much bigger leap forward than, say, the whole object oriented vs. functional programming debate.

On to the next chapter!

Changing gears one more time, head over to Scala, Day 1 to learn about a language that mixes OO and functional programming.

Seven Languages in Seven Weeks: Prolog, Day 2

Today is the second day of Prolog in the Seven Languages in Seven Weeks series of blog posts. You can find the first day of Prolog here.

Prolog, Day 2: Thoughts

Today, Prolog broke my brain. The chapter started with recursion, lists, tuples, and pattern matching, all of which were tolerable if you've had prior exposure to functional programming. However, after that, we moved onto using unification as the primary construct for problem solving, and the gears in my head began to grind.

It took me a while to wrap my head around using unification, but once it clicked, I was both elated and disappointed. I was elated because (a) I often get that way when I learn something new and (b) the book was clearly getting me to think about programming in a totally new manner. However, I was disappointed because even accomplishing something as trivial as adding all the values in a list required a recursive solution that seemed unnecessarily complicated:


The algorithm starts with a base case: the sum of an empty list is 0. We then add a recursive case: the sum of a non-empty list is the head of the list plus the sum of the tail. This isn't too bad once you get used to it, but I find that this sort of code does not communicate its intent well at all. That is, I'm a believer that "programs must be written for people to read, and only incidentally for machines to execute" (Structure and Interpretation of Computer Programs), and this Prolog code seems to be the exact opposite. Even for something as trivial as adding the values in a list, I find myself distracted by the need to do pattern matching on the list, recursive calls, and base cases.
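Transliterated into Python, the two cases look like this:

```python
def sum_list(values):
    if not values:                 # base case: sum([]) is 0
        return 0
    head, *tail = values
    return head + sum_list(tail)   # recursive case: sum([H|T]) is H + sum(T)

print(sum_list([1, 2, 3, 4]))  # 10
```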

In fact, while I have no doubt that the declarative approach is very powerful for certain types of problems, it's not exactly what I expected when I first heard of declarative programming. Conceptually, I thought declarative programming would be all about describing what the solution looks like. From the Prolog I've seen so far, which is admittedly very little, I feel like what we're actually doing is setting up elaborate "traps" to force the unification engine to fill in the proper values for our variables.

As a counter-example, here's how an "ideal" declarative language might let me define the sum of a list:


To me, the "code" above screams its intent far more clearly than the recursive Prolog solution. An even clearer example comes later in this blog post, where I sort a list using Prolog. While writing that sorting code, I felt like I was playing a game of "how do I set up my rules and atoms to arm-twist unification into sorting?" If I had designed Prolog using a coding-backwards approach, I would've strived to let the user define a sorted list in a much more natural manner:


(Update: turns out it is possible to do something close to this. See the end of the post.)

However, I admit freely that I'm no expert on compilers or language design, so perhaps I'm being naive. Maybe there's no way to define a syntax or compiler that can handle such simple looking definitions in the general case. Perhaps the unification approach in Prolog is as close as we can get and I just need time until my brain gets used to it more.

Prolog, Day 2: Problems

Reverse the elements of a list


Find the smallest element of a list


A pretty simple problem, except that there doesn't seem to be a way to write an if/else statement in Prolog. Instead, to handle the two possible outcomes, TailMin =< Head and TailMin > Head, I had to use pattern matching and a bit of copy & paste. If anyone knows a more DRY way to do this, please let me know.
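For comparison, here's the same recursion sketched in Python, where the two Prolog clauses collapse into a single if/else:

```python
def min_list(values):
    if len(values) == 1:           # base case: a single element is the minimum
        return values[0]
    head, *tail = values
    tail_min = min_list(tail)
    # This one conditional is the branch Prolog forces into two separate
    # clauses matching TailMin =< Head and TailMin > Head.
    return tail_min if tail_min <= head else head

print(min_list([3, 1, 4, 1, 5]))  # 1
```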

Sort the elements of a list


I implemented merge sort. In retrospect, this seems to somewhat defeat the purpose of declarative programming: instead of describing the solution (what a sorted list looks like), I'm effectively describing the individual steps it takes to sort a list. In other words, this is awfully close to imperative programming.

Unfortunately, I couldn't think of a more "declarative" way of solving this. I initially wrote something similar to a selection sort: it recursively called sort_list on the tail until we got down to one item. At that point, it would use the merge method to arrange the head and tail in the proper order. As we went back up the recursion stack, the merge method would insert the head into the proper spot in the partial sublist. This was obviously less efficient, but at least the sort_list method looked declarative. If only there were a way to define it without a merge step, I'd be in business.

Fibonacci series


I ran into two gotchas writing a Fibonacci function: first, I had to remember that the recursive calls to "fib" are not really function calls, so you can't just directly pass "N - 1" or "N - 2" as parameters. Second, when defining N1 and N2, you need to use the "is" keyword instead of the equals ("=") sign.

Factorial



UPDATE: eating humble pie

Ok, I was wrong. It turns out you can write Prolog code that looks much more declarative and much less imperative. After reading through Day 3 of Prolog, I was a bit wiser, and was able to write the following code for sorting:


I'm sure it's not as fast as the merge sort I wrote on my first attempt, but the intent is much clearer: I'm quite obviously describing what a sorted list should look like instead of walking through the steps of how to build one. The recursion and various language quirks of Prolog still take some getting used to, but from a readability perspective, I'm much happier with what I've been seeing since reading Day 3.

Moving right along

Check out Prolog: Day 3 for some more declarative goodness, including an elegant Sudoku solver.

Seven Languages in Seven Weeks: Prolog, Day 1

After finishing up Io, it's time to shift gears yet again in my Seven Languages in Seven Weeks series of blog posts. This time, it's something radically different: Prolog.

Prolog, Day 1: Thoughts

The main goal of Seven Languages in Seven Weeks is not actually to teach you seven new languages, but to teach you seven new ways of thinking. In fact, the languages in the book are deliberately chosen to represent a wide spectrum of approaches to programming problems.

While the first two languages, Ruby and Io, felt pretty familiar, the third one is a totally different kind of beast. Prolog is my first exposure to declarative programming and definitely a new way of thinking. All the previous languages followed an imperative model: you write code to tell the compiler what to do one step at a time to arrive at some result. In declarative programming, you actually start by describing the result you want and the compiler figures out the steps to get you there.

As an example, consider sorting a collection of integers. In an imperative language, you would describe all the steps of a sorting algorithm:
  1. Divide the collection into sublists of size 1.
  2. Merge pairs of sublists together into a new sublist, keeping the values in sorted order.
  3. Continue merging the larger sublists together until there is only 1 list remaining.
With declarative programming, you would instead describe what the output list should look like:
  1. It has every element in the original list.
  2. Each value at position i in the output list is less than or equal to the value at position i + 1.
And that's it. The Prolog compiler would take this description and figure out how to assemble a list that matches it.
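To make the contrast concrete, here's a sketch of that two-rule description in executable form. It's Ruby rather than Prolog, and wildly inefficient, but it captures the spirit: generate candidate orderings of the list and keep the first one that matches the description.

```ruby
# Declarative-style sort: describe what "sorted" means, then search for
# a permutation of the input that satisfies the description.
def declarative_sort(list)
  list.permutation.find do |candidate|
    # Rule 2: each value is <= the value at the next position
    candidate.each_cons(2).all? { |a, b| a <= b }
  end
end

declarative_sort([3, 1, 2])  # => [1, 2, 3]
```

Every permutation contains exactly the original elements, which covers rule 1; the each_cons check covers rule 2.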

Well, that's the theory, anyway. After the first chapter, I've only gotten a small taste of this model of programming, so I'm still finding it hard to judge (a) how hard it would be to describe something more complicated than sorting and (b) whether the compiler could come up with efficient solutions.

Nevertheless, many of us have been using a (limited, non-Turing-complete) form of declarative programming for years: HTML. Instead of writing procedural code that instructs the browser how to render the page pixel by pixel, HTML lets you describe what the result should look like and the browser figures out how to render it for you.

Prolog, Day 1: Problems

Books and authors

Make a simple knowledge base representing some of your favorite books and authors. Find all books in your knowledge base written by one author.

Knowledge base:


Queries:


Music and instruments

Make a knowledge base representing musicians and instruments. Also represent musicians and their genre of music. Find all musicians who play the guitar.

Knowledge base:


Queries:


Normalization?

For the books knowledge base, I defined the rules in a "normalized" style as I might use for a SQL database. Looking back at it now, I'm not sure this is the best way to do it. It doesn't seem like I can do anything meaningful with the "normalized" rules other than, perhaps, checking if a given atom is valid.

For the music knowledge base, I only defined the relationships and not any individual atoms. This seems closer to the style in the book and can field the same queries, but with less code. I would hazard a guess that this is the proper way to do it, but I'd love to hear back from anyone who has had more than 1 day of exposure to Prolog.

Day 2

For more Prolog goodness, continue on to Prolog, Day 2.

Seven Languages in Seven Weeks: Io, Day 3

Today is the final chapter of Io in the Seven Languages in Seven Weeks series of posts. You can find the previous day of Io here.

Io, Day 3: Thoughts

Although I'm only on the second language out of seven in the book, a pattern is emerging: day 1 is very basic syntax, day 2 is more advanced syntax, and day 3 shows you some of the advanced applications that set the current language apart from all the others. 

It's a great strategy: each jump is small enough that you can follow along, but big enough that you're able to get a thorough look at the language in just a few days. In fact, my biggest complaint so far is that the examples in the final day of Io are very intriguing, but also very short, so I'm dying to see more.

In a single chapter, we tore through metaprogramming and concurrency in the span of just a few pages. It was tough to appreciate it all in such a short time. I was able to get a little more practice with Io metaprogramming by implementing a super simple "doInTransaction" method similar to the one I created in Ruby:



The idea was to be able to run some code, such as starting and ending a transaction, before and after a "block" of statements. For added fun, I wanted to be able to support curly braces for defining blocks. Accomplishing both was trivial by taking advantage of the fact that Io treats "{" as the message "curlyBrackets". Handle that message properly in your object, add some basic introspection, and you're done.

Unfortunately, I wasn't able to think of a suitable "toy" example to learn more about coroutines. I'm still fuzzy on a lot of the nuances, such as how memory is shared between the "threads", how many threads there are, and how yield and resume really interact. I'd love to see some more examples, especially those that show a practical use case for Io actors.

Io, Day 3: Problems

Enhance Builder XML

Enhance the XML program (see the original source, the original test file, and the original output) to add spaces to show the indentation structure. Also, enhance the XML program to handle attributes: if the first argument is a map (use the curly brackets syntax), add attributes to the XML program. For example, book({"author": "Tate"}..) would print <book author="Tate">.

This is an awesome example of Io's flexibility and power when it comes to creating DSLs. In some 30 lines of code, Io can process this Builder format:


To produce the equivalent HTML output:


Most of the (surprisingly concise and elegant) builder source code came from the book. Here's my updated version that handles indentation and attributes:


The biggest stumbling point was trying to use addAssignOperator in the same file as the test script. This doesn't work: the OperatorTable has already been loaded and can't be changed. By splitting the code into two files, one for source and one for testing, I was able to properly handle the colon and avoid the very frustrating "Sequence does not respond to ':'" error.

Create a list syntax that uses brackets


A much easier problem, but another great example of the flexibility of Io: Ruby-like syntax for lists in just a couple lines of code.

Wrapping up Io

This was the last day of Io and I must admit, I'm a bit sad to see it go. It's a beautiful example of just how simple and flexible a language can be. Of course, being able - and tempted - to change just about anything is a bit of a double-edged sword: more than once I saw unexpected consequences from overriding the "forward" method. However, it's undeniably powerful. If nothing else, Io has made me more excited to learn about the Lisp family, with Clojure being the 6th language in the book.

I wish I got to see some more examples of concurrency in Io, but the book was pretty sparse in that area. Even worse, I can't find much online. Unfortunately, Io's community is tiny. It's hard to justify spending too much time on a language that, in all honesty, I'll probably never use in any capacity besides learning.

Time for something new

Continue on to Prolog, Day 1, to learn about a radically different style of programming.

Seven Languages in Seven Weeks: Io, Day 2

Today is Day 2 of Io in my Seven Languages in Seven Weeks series of blog posts. You can check out Day 1 of Io here.

Io, Day 2: Thoughts

Day 2 made some huge leaps and bounds over the basic syntax introduced in Day 1. The key learning from this day is that in Io, just about everything is a message sent to an object. There aren't separate semantics for calling functions, using control structures, or defining objects: those are all just objects reacting to some sort of message.

One of the most startling examples of this is the realization that even the basic operators in Io, such as +, -, and *, are actually messages. That is, the code "2 + 5" is actually understood as the message "+" being sent to the object 2 with 5 as a parameter. In other words, it could be re-written as "2 +(5)". The "+", then, is just a method defined on the number object that takes another number as a parameter.

This makes supporting operators on custom objects simple: all I have to do is define a "slot" with the operator's name. For example, here's an object for complex numbers that can be used with the "+" operator:



I found this fairly eye opening. As I think of the syntaxes of other languages I'm used to, such as Java, there are "special cases" all over the place. For example, the "+" operator has special code to handle addition for numbers and String concatenation and nothing else; for loops, while loops, if statements, defining classes, and so on are all special syntax features. In Io, they are all just objects responding to messages.
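Ruby, incidentally, sits much closer to Io than Java on this point: operators are ordinary methods you can define on your own classes. A quick sketch (the Money class here is a made-up example, not from the book):

```ruby
# "+" is just a method named +, defined like any other method
class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  def +(other)
    Money.new(cents + other.cents)
  end
end

(Money.new(150) + Money.new(75)).cents  # => 225
```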

Io, Day 2: Problems

Fibonacci

Write a program to find the nth Fibonacci number. Both the recursive and iterative solutions are included:



Safe division

How would you change the "/" operator to return 0 if the denominator is zero?


2d add

Write a program to add up all the values in a 2-dimensional array.


myAverage

Add a slot called "myAverage" to a list that computes the average of all the numbers in a list. Bonus: raise an exception if any item in the list is not a number.


Two Dimensional List

Write a prototype for a two-dimensional list. The dim(x, y) method should allocate a list of y lists that are x elements long. set(x, y, value) should set a value and get(x, y) should return that value. Write a transpose method so that new_matrix get(y, x) == original_matrix get(x, y). Write the matrix to a file and read the matrix from a file.


Guess Number

Write a program that gives you ten tries to guess a random number from 1-100. Give a hint of "hotter" or "colder" for each guess after the first one.


On to day 3!

Continue on to day 3 of Io here.

Seven Languages in Seven Weeks: Io, Day 1

Welcome to the first day of Io in my Seven Languages in Seven Weeks series of blog posts. After spending a few days playing around with Ruby, Io is definitely a change of pace.

Io, Day 1: Thoughts

From what I've seen so far, Io is a prototype-based language (similar to JavaScript) with extremely minimal syntax (none of Ruby's syntactic sugar). Objects are just collections of "slots" that contain either data or methods, and you interact with objects by passing them messages. To give you a taste, here are some snippets:

We'll start with the classic Hello World:


The way to think about this in Io terms is that you are passing the "println" message to the "Hello, World!" String object. I must note that having a space between object and message makes the code noticeably harder for my mind to parse. If the code had used a dot instead - "Hello, World!".println - I would've found it much easier! As it is, perhaps because I'm not used to it, my comprehension is slowed and my aesthetic sense is tingling.

Here's a simple example of defining variables and methods:


Method calls look similar to most languages I'm used to: "method param1, param2, ..." However, I wonder if the Io way of looking at it is that the speak method is an object and the phrase parameter is the message?

Finally, here's an example that shows objects and prototypal inheritance:


In prototype-based languages, the distinction is blurred between a "class" - that is, some sort of template defining an object and its behavior - and an "instance" of that class. In Io, they are pretty much one and the same: you just clone an existing object to create a new one, whether you intend them as instances or templates.

The one place where "instances" do differ from "classes", however, is by convention: the class-like objects are usually named with an upper case first letter (Dog, Cat) while the instance-like objects are named with a lower case first letter (myDog, myCat). I suppose this sort of design greatly simplifies the language, as there's no need for special syntax, constructs, or rules for "classes".

Io, Day 1: Problems

The day 1 problems in this book are always very basic. I skipped a few of the really simple ones as they are not too interesting.

Io typing

Evaluate 1 + 1 and then 1 + "1". Is Io weakly or strongly typed?


As you can see above, Io is a strongly typed language.

Dynamic code slot

Execute the code in a slot given its name.


Explanation: the "System" object contains various system properties and methods. I pass the "args" parameter to it to get the command line parameters. I then use the "at" method to access the parameter at a given index: in Io, index 0 has the name of the app (DynamicCodeSlot.io) and index 1 is the first argument (foo or bar).

By calling the "getSlot" method, I get back the object stored at the slot named as a command line argument. Finally, the "call" method does what you'd expect: it calls that slot.

Io, Continued

Continue on to Io, Day 2.

Seven Languages in Seven Weeks: Ruby, Day 3

This is my 3rd day of Ruby in the Seven Languages in Seven Weeks series of posts. You can find the previous day here.

Ruby, Day 3: Thoughts

The third day combines metaprogramming techniques (define_method, method_missing, and mixins) with what we learned in the previous chapters (flexible syntax, blocks, yield) to work some magic. Whereas days 1 and 2 showed how Ruby could be more concise and expressive than other languages, this chapter shows some of the capabilities available in Ruby, such as beautiful DSLs and composable designs, that are nearly impossible in stricter languages.

I saw small examples of this when I was working on the Resume Builder: the profile data I was fetching from the LinkedIn APIs came back as JSON. I wanted a nice Ruby class to wrap the JSON data and was able to build one cleanly and concisely using some very simple metaprogramming:
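Here's a sketch of the technique; the field names below are illustrative stand-ins, not the real SIMPLE_PROFILE_FIELDS list from the Resume Builder:

```ruby
class Profile
  # Illustrative field names, not the actual LinkedIn profile fields
  SIMPLE_PROFILE_FIELDS = [:first_name, :last_name, :headline]

  def initialize(json)
    @json = json
  end

  # One loop replaces a dozen hand-written getters
  SIMPLE_PROFILE_FIELDS.each do |field|
    define_method(field) { @json[field.to_s] }
  end
end

profile = Profile.new("first_name" => "Ada", "headline" => "Programmer")
profile.first_name  # => "Ada"
```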



Instead of defining dozens of getters and setters as in the LinkedIn API Java Library, I just declared the fields in an array (SIMPLE_PROFILE_FIELDS), looped over them, and used define_method to create the appropriate methods. To be fair, this is kid's stuff; if you really want to see metaprogramming shine, take a gander over at ActiveRecord.

Of course, with great power comes great big bullet wounds in the foot. Metaprogramming must be used with a bit more caution than other programming techniques, as chasing down errors in dynamic methods and trying to discern "magic" can be painful.

Ruby, Day 3: Problems


CSV application

There was only one problem to solve on this day: modify the CSV application (see the original code here and the original output here) to return a CsvRow object. Use method_missing on that CsvRow to return the value for the column given a heading.
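A sketch of one way to build the CsvRow (not necessarily the book's exact code):

```ruby
class CsvRow
  def initialize(headers, values)
    @row = Hash[headers.zip(values)]
  end

  # Any unknown message is treated as a column lookup by heading;
  # unknown headings fall through to the normal NoMethodError.
  def method_missing(name, *args)
    @row.fetch(name.to_s) { super }
  end

  def respond_to_missing?(name, include_private = false)
    @row.key?(name.to_s) || super
  end
end

row = CsvRow.new(%w[one two], %w[lions tigers])
row.one  # => "lions"
```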



Using this sample file:


The code above will produce the following output:


Moving on

This was the final day in the Ruby chapter. Join me next time as I work my way through a totally new language: Io.

Sherlock, The Reichenbach Fall: What Really Happened?

The PBS/BBC Sherlock series is one of the most entertaining shows I've seen in years. It's a modern take on Conan Doyle's classic with strong writing, a superb cast, and plenty of mystery and deduction. If you're not watching it, you're really missing out.


Spoiler alert!

In fact, if you're not watching it, you should probably miss out on this blog post too. Seriously, stop reading.

What follows is an in-depth, full-of-spoilers discussion of what happens in the final episode of the show thus far, The Reichenbach Fall.

The final question

I'm sure you know exactly what I'm going to discuss: how did Sherlock fake his own death?

Using my amateur deductive reasoning and a healthy amount of rewind & pause, I have a pretty good guess at what happened. Let me walk you through the reasoning.

To start with, let's state the very obvious: the body that fell off the roof was either (a) Sherlock or (b) not Sherlock. I think we can fairly confidently eliminate option (b).

An impostor?

Could Sherlock have thrown Moriarty's body or the test dummy from earlier in the episode in his place? Not likely.

First, there was no way Sherlock could have bent over, hauled up a body or a dummy, brought it to the edge, and shoved it over, all without Watson noticing. There were some quick cuts and edits during the scene, but we have no reason to believe that Watson looked away from Sherlock at any point during their conversation.


Other practicalities make this even more difficult: Moriarty was dressed noticeably differently than Sherlock (different coat, white shirt instead of dark, a tie instead of a scarf) and had shorter hair; seconds before Sherlock jumps, he looks back and Moriarty's body is lying there, still in the original outfit; the scene where Sherlock jumps is shot from behind and it's clear no one is just shoving a body/dummy off the roof.


Finally, the biggest evidence of all: the body falling through the air is clearly flailing its arms and legs. Neither a dummy nor a dead body would fall like that.

Verdict: it must have been Sherlock himself who jumped off the building.

How did he survive?

One key piece of evidence is that Sherlock is extremely specific in where he wants Watson to stand during their conversation. Here is the layout of the scene:



When Watson arrives, Sherlock is on the hospital roof, some 6-8 stories up, with a shorter, 2-3 story brick building between him and Watson. Watson tries to run around to the side of the brick building, but Sherlock yells at him to return to his previous spot. Sherlock is very vehement about this.

It's likely that Watson would be able to see Sherlock from either vantage point, which leaves only one other possibility for why Sherlock would care about where Watson stands:
(1) There was something between the brick building and the hospital that Sherlock didn't want Watson to see.
After Sherlock jumps, Watson again tries to run around the brick building and, this time, is knocked down by someone on a bike. This is unlikely to be an accident and gives us our second hint:
(2) Sherlock needed to delay Watson until the something was no longer visible.
If you watch the scene closely, there is one item that fits both of these criteria: a truck filled with bags (garbage? recycling? laundry?) parked right next to the spot where Sherlock's body ends up. You get your first glimpse of this truck just as Watson is coming around the corner, just before he is knocked down:



You see the same truck drive away, out of the scene, a few seconds later as Watson finally gets to Holmes' body:


Think on that for a second: if a body comes crashing down a few feet from your truck, do you just casually drive away or jump out and see what the hell just happened? The fact that the truck drove away increases our confidence that it was part of the plot.

Verdict: the bags in the back of the truck served as padding to break Sherlock's fall.

How did it go down?

Before meeting with Moriarty, Holmes seeks out Molly and tells her that he thinks he will die and that he needs her help. Holmes must've already realized that Moriarty's goal was to get him to commit suicide, so he enlisted Molly - who works at a morgue and could certainly fake autopsy reports and death certificates - to help him fake it. It's also worth remembering that it was Sherlock, not Moriarty, who arranged the meeting on, of all places, the rooftop of a hospital.

When Moriarty blew his brains out, Sherlock had no choice, and jumped. He landed in the truck, covered himself with some sort of blood (possibly provided by Molly), and dropped down onto the pavement to play dead. In fact, he did better than that. We saw Holmes with a bouncy ball for much of the episode; it turns out there is a classic magic trick that involves squeezing a ball under your armpit to cut off circulation to your arm and make it seem like you have no pulse.

What about the bystanders?

Since all the bystanders could see the truck and Holmes fall into it, they must have been in on it. The crowd that gathers around Holmes' body and the biker that knocks over Watson were either part of Holmes' homeless network or government folks brought in by Mycroft.

Mycroft is an interesting possibility because his reaction to reading about Holmes' death is ambiguous: was he sad or relieved? Even more telling is the fact that Mycroft isn't with Watson and Mrs. Hudson at the cemetery to pay his respects to Holmes. My guess is that Mycroft knows Sherlock is alive, though it's possible that he merely deduced it after the fact.

No matter how improbable...

It'll be some time before the third season comes out and reveals the truth. In the meantime, feel free to join me in speculating by leaving your best theory in the comments.

Seven Languages in Seven Weeks: Ruby, Day 2

In my previous post, I went through the Day 1 Ruby problems from Seven Languages in Seven Weeks. Today, I'll share my solutions to the Day 2 problems and some more thoughts about Ruby.

Ruby, Day 2: Thoughts

I originally learned Ruby (and many other programming languages) the "hacker way": that is, I did a 10 minute syntax tutorial, browsed other peoples' code a bit, and then just started using the language, looking up missing pieces as I went. Although this is the most fun and productive way I've found to get started with a language, it can also lead to missing some of the finer points and subtleties.

For example, until the "Ruby, Day 2" chapter, I never had a full appreciation for Ruby code blocks and the yield keyword. For example, even though I frequently used "times" to do looping, I never thought deeply about how it worked:


It turns out that times is just a function (slightly obscured because Ruby doesn't require parentheses for function calls) on the Integer class that takes a code block as an argument. Times could be implemented as follows:
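A minimal sketch (named my_times so it doesn't clobber the built-in):

```ruby
class Integer
  # Hypothetical re-implementation of Integer#times: run the attached
  # block once per iteration, passing the current index via yield.
  def my_times
    i = 0
    while i < self
      yield i
      i += 1
    end
  end
end

3.my_times { |i| puts i }  # prints 0, 1, 2
```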


This style of coding allows for some powerful possibilities. For example, it is surprisingly easy to introduce a "do in a transaction" function:
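A sketch of the idea; the BEGIN/COMMIT/ROLLBACK entries below are stand-ins (logged to an array) for whatever a real database layer would actually do:

```ruby
LOG = []

# Run the attached block between begin/commit, rolling back on error
def do_in_transaction
  LOG << "BEGIN"
  yield
  LOG << "COMMIT"
rescue => e
  LOG << "ROLLBACK"
  raise e
end

do_in_transaction { LOG << "work" }
LOG  # => ["BEGIN", "work", "COMMIT"]
```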


Using this, I can now trivially wrap any number of statements in a transaction:


The equivalent in less expressive languages, such as Java, often involves vastly more code, implementing arbitrary interfaces, anonymous inner classes, and a lot of very hard-to-read code. For comparison, here is an example of how Java's Spring Framework recommends wrapping JDBC code in transactions:



Ruby, Day 2: Problems

The Day 2 problems are only slightly tougher than Day 1. The most fun part was coming up with a way to keep the code as concise as possible.

Print 16
Print the contents of an Array of 16 numbers, 4 numbers at a time, using just "each". Now, do the same with "each_slice" in Enumerable.
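A sketch of both approaches; the manual buffer is exactly what each_slice saves you from writing:

```ruby
array = (1..16).to_a

# Using just "each": track a manual buffer of 4
slice = []
array.each do |n|
  slice << n
  if slice.size == 4
    p slice
    slice = []
  end
end

# Using each_slice: one line
array.each_slice(4) { |s| p s }
```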


Tree
Modify the Tree class initializer (original code here) so it can accept a nested structure of Hashes. The trickiest part here was that the "collect" function can call the passed-in block with either one argument that's an Array or two arguments that represent the (key, value) pair.
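A sketch of the shape of the solution (not necessarily matching the book's original Tree code):

```ruby
class Tree
  attr_accessor :node_name, :children

  def initialize(name, children_hash = {})
    @node_name = name
    # For a Hash, collect yields (key, value) pairs to the block
    @children = children_hash.collect { |child, sub| Tree.new(child, sub) }
  end
end

tree = Tree.new("grandpa", { "dad" => { "child1" => {}, "child2" => {} } })
tree.children.first.node_name  # => "dad"
```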


Grep
Write a simple grep that will print the lines and line numbers of a file having any occurrence of a phrase anywhere in that line.
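A sketch of the idea, wrapped in a method for testability (the phrase and filename are placeholders); it returns matching lines prefixed with their 1-based line numbers rather than printing them:

```ruby
def grep(phrase, path)
  matches = []
  File.foreach(path).with_index(1) do |line, number|
    matches << "#{number}: #{line.chomp}" if line.include?(phrase)
  end
  matches
end
```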


Ruby vs. Java, Round 2

I couldn't resist implementing the grep code in Java to see how it compares:


It's 33 lines long. The Ruby solution was a one-liner.

Ruby, Continued


Check out more Ruby goodness on Ruby, Day 3.

Seven Languages in Seven Weeks: Ruby, Day 1

I recently picked up a copy of Seven Languages in Seven Weeks by Bruce A. Tate. The book is a survey of seven very different programming languages: Ruby, Io, Prolog, Scala, Erlang, Clojure, and Haskell. For each language, the goal is to give you just enough of a taste that you can see what makes it unique, what its strengths and weaknesses are, and the mindset and philosophy behind it.


Each section of the book focuses on a different language and includes coding problems for the reader to try at home. I've decided to record my solutions to the problems and my thoughts about each language in my blog. Today, we'll start with Ruby.

Ruby, Day 1: Thoughts

I've used Ruby fairly extensively the last few years, including several Ruby on Rails apps (Resume Builder, Veterans Hackday) and a number of utility scripts. There is a lot to like about Ruby - the concise & clean syntax, incredible flexibility, expressiveness, powerful DSLs - but my favorite part is the central tenet of the language, as expressed by its creator:
"Ruby is designed to make programmers happy." - Yukihiro Matsumoto
The language isn't built for speed, concurrency, or any particular feature set. Its central "success metric" is programmer happiness and productivity, which are, arguably, the biggest bottlenecks in most projects.

Ruby, Day 1: Problems

The "Day 1" Ruby chapter focused on the very basics of the language, so I didn't learn anything new. The problems are extremely simple and basic, but for completeness, here are my solutions:

Hello, World
Print the string "Hello, world".


Hello, Ruby
For the String "Hello, Ruby", find the index of the word "Ruby".
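String#index makes this a one-liner:

```ruby
"Hello, Ruby".index("Ruby")  # => 7
```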


Print Name
Print your name ten times.


Print Sentence
Print the string "This is sentence number 1" where the number 1 changes from 1 to 10.


Random Number
Write a program that picks a random number. Let a player guess the number, telling the player whether the guess is too high or too low.
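A sketch of the core logic, with the comparison pulled out into its own method and the interactive loop left as comments:

```ruby
# Compare a guess against the target number
def hint(target, guess)
  return :too_high if guess > target
  return :too_low  if guess < target
  :correct
end

target = rand(1..100)

# Interactive loop (reads guesses from stdin):
# loop do
#   print "Your guess: "
#   case hint(target, gets.to_i)
#   when :too_high then puts "Too high!"
#   when :too_low  then puts "Too low!"
#   else puts "You got it!"; break
#   end
# end
```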


Ruby vs. Java

Coming from a Java background, every time I see Ruby, I'm amazed at how concise and readable it is. There is far less boilerplate: you don't have to wrap everything in classes and methods, no semi-colons, far fewer curly braces, and so on. Everything is an object and there are countless helper functions, all with intuitive names: even if you've never used Ruby, it's easy to guess the effects of 10.times or 1.upto(10). Whereas in the Java world, libraries seem to compete on having every bell, whistle, and tuning knob, in the Ruby world, libraries focus much more on having the simplest, easiest, one-line-and-you're-done API possible.

For comparison, I implemented the number guessing game in Java:


It has more than twice the number of lines of code as the Ruby version (and I kept opening curly braces on the same line!), and even though I've been doing Java for a very long time, it still took longer to write. Of course, there are many other trade-offs at play here, but the key thing to think about is the golden rule of programming:
"Programs must be written for people to read, and only incidentally for machines to execute." - SICP
Ruby has its downsides, but it is one of the best languages I've seen for writing code that others can read, understand, and maintain for a long time after.

Ruby, Continued

The Ruby explorations continue on Ruby, Day 2.