Home Facade Sketch

June 25th, 2013

On the way back from our evening walk today I snapped a photo and decided to do a little design mockup. We’ve been thinking about what direction we want to go with the front of our house to modernize it; this is a hodgepodge of many of our ideas so far.

Original

 

After (my ‘artist’ rendering)

We are mostly looking for a way to add some contrast and get away from the drab gray/white/gray/black theme that's going on right now. First priority is replacing the front door, I think. We've seen a lot of doors we really like at friends' homes and on parade tours lately, and that seems like a good step that we can fit nicely into the overall plan.

I was trying out SketchBookExpress for this mockup, and I'm pretty happy with it overall. For a free drawing app, it has the key features: layers, basic brush tools, and an eyedropper. Its cost about matches my level of graphic design skill, I think!

Quantified Running

June 16th, 2013

Lately, I've been getting very interested in the "quantified self" movement – it's a great combination of hobbies for me. I like that it lets me play with technology and also play with my own psychology to push myself to improve. We even preordered a couple of Fitbit Flexes, but there seems to be a huge production shortage: despite being released on May 15th, the Flex is impossible to buy anywhere, and our pre-order might not ship until July.

In the meantime, I’ve been enjoying the steady improvements to the Nike+ running software since that’s what I have on my phone for now. Last night I had an interesting run that I was pretty excited about:

Screen Shot 2013-06-16 at 9.55.38 AM

 

For the whole first mile and a half or so, I was able to sustain a great speed that would have let me easily break my 25:00 5k goal (I only need to average about 8:00/mile, and my pace there was more like 7:45/mile). I was surprised at how great I felt even in the 80° heat. Then I had to pay the price when I hit a huge hill going north on the nearby road – I don't usually run this route, so I was caught a little by surprise and didn't have the energy saved for it. Having to walk for a while there cost me about a minute and a half on my overall time, which was enough to miss the goal. Still, it was my fastest time so far this year, and I'm very happy that I was able to come in under pace for such a big chunk of it.
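For the curious, here is the quick pace arithmetic behind that goal (my own back-of-the-envelope numbers, not something the app reports):

    // 5 km in miles, and the pace needed to break 25:00 over that distance
    var FIVE_K_MILES = 5 / 1.609;               // about 3.11 miles
    var goalPace = 25 / FIVE_K_MILES;           // about 8.05 min/mile
    var earlyPace = 7.75;                       // roughly 7:45/mile over that first stretch
    var cushionPerMile = goalPace - earlyPace;  // about 0.3 min/mile banked early on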

Psychologically, the data is very motivating; at the time, I thought I had just fallen apart from exhaustion and paced myself incorrectly. Now that I can review the metrics after the fact, though, it's much more obvious just how big that climb was. An easy route adjustment, or a little planning ahead and saving a burst of energy, should be enough to make the difference next time. Instead of going out for my next run thinking I'm not strong enough yet, I can approach it with some confidence and some better tactics in mind.

Hair Space (U+200A)

June 5th, 2013

I’m working my way through HackDesign.org’s nice design tutorials for developers, and running into a whole world of letter design that I didn’t even know existed. I’ve always found typography interesting and have participated in the cute “which is the best programming font” discussions (I’m a PragmataPro die-hard), but haven’t dug much deeper into all the variations as they apply to digital content.

There are many wonderful links on this particular topic here, although I would certainly recommend starting the course from the beginning. My particular favorite today is from Yves Peters:

The non-breaking space is not the only special space character available in HTML. An em space is as wide as the type size, creating a perfectly square separator. The en space is half its width. Very useful in tabular material is the figure space, which takes up as much room as the numerals in the font, while the punctuation space is as wide as the dot or comma. Thin spaces can be used between the dot and the next letter in abbreviated names, and hair spaces to detach em dashes from the neighboring characters. And then there’s the three-per-em space, the four-per-em space, the six-per-em space …

I had sort of a vague notion in my head that &nbsp; might have a different effect than tapping the spacebar, but had only really considered it from an encoding and string-parsing standpoint. I suppose all these quirky types of spaces must be in use all over the web, having subliminal effects on me without my truly understanding the difference between a figure space and a hair space. I did take care to <strong> those space names and not <b> them, as well!
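For my own reference, here are the code points behind those names, written the way you would in a Javascript string (the values come from the Unicode General Punctuation block, plus the familiar non-breaking space):

    var EM_SPACE     = '\u2003'; // as wide as the type size
    var EN_SPACE     = '\u2002'; // half the width of an em space
    var FIGURE_SPACE = '\u2007'; // as wide as the font's numerals, handy in tables
    var PUNCT_SPACE  = '\u2008'; // as wide as a period or comma
    var THIN_SPACE   = '\u2009'; // e.g. between the dot and the next letter in initials
    var HAIR_SPACE   = '\u200A'; // e.g. to detach em dashes from neighboring characters
    var NBSP         = '\u00A0'; // the familiar &nbsp;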

The need for speed: responsive scrolling

June 2nd, 2013

One of the things I try to do in my modern technological life is to be platform agnostic and try lots of different brands of hardware and software. My hope is that this keeps me sharp and aware of the greatest possible number of design trends and innovations per time spent. Practically speaking, this means that every once in a while I pick up an Android device, since most of my personal technology use has been Apple-made for the past few years (I use Windows full-time at work and for gaming at home). There are some cool things about Android, and the wide-open control and open-source feel appeal to me on a few different levels. I like widgets, customizability, and the breadth of options that its free development ecosystem provides.

Every time I force myself to do this, though, there's one thing that nags at me constantly. It seems like it should be such a simple thing, and yet it's an aspect of the user experience in which there's a strange gulf between Apple and everybody else. Most writers would at this point reveal that they're talking about design and the quantity of 3rd-party apps, but I won't go there. Instead, I want to talk about one very simple thing: responsive scrolling. Scrolling a text region, a web page, or a list on an iOS device is a beautiful experience, where the distance between you and the machine melts away and it feels like you're actually manipulating the data itself with your finger. The granularity and speed of the scrolling are such that it feels like sliding a physical object with no friction. You don't have to spend brain cycles thinking about what scrolling action you need to take to get the result you want.

Inexplicably, I have never seen scrolling behavior like this on any other device. Even a fast new (at the time) Nexus 7 was very clearly doing a lot of chugging to keep up with scrolls on a complex page. There's always a noticeable lag between finger movement and the view actually scrolling. I don't know whether this is a symptom of a patent of Apple's that nobody wants to license, an inability of app developers to keep data processing off the main thread, or even some kind of architectural limitation of the OS. Every once in a while I pick up a Nook in Barnes & Noble or a Galaxy phone in the AT&T store, pull up a website, and scroll. So far no Android device has passed the same test that my iPhone 3G did in 2009, despite having processors at least five times as powerful. This is hard to really see in a video, but the closest I could find was here: http://www.youtube.com/watch?feature=player_detailpage&v=ETIayifu7Bw#t=56s.

I believe this difference is a representative microcosm of the type of reasons that people choose Apple devices. It doesn't seem like there's that much difference in hardware specs, in application functionality, or in many cases even in the beauty of the UI itself (I'm a big fan of 4.0+ Android), but there's a certain level of user interaction quality that other manufacturers just seem unable to match. This is definitely disappointing to me as somebody who supports open source software and worries about the closed nature of Apple's system. It seems like such a shame to let this difference in polish tilt the balance. Then again, maybe it's a lost cause and the same designers and engineers who could fix this in Android have already given up and gone over to the other side.

Takeaway for me in my own design and development: speed and responsiveness matter a lot more than we often think. I believe this type of UI performance is one of the most underappreciated critical factors that customers consider when choosing hardware and software, maybe because it's hard to quantify in ways that aren't very misleading (MHz and processor cores have little real relationship to responsiveness). Time to go optimize some software – and hopefully not have to change platforms to get it done!
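As one small example of the kind of optimization I have in mind on the web side (a minimal sketch of my own, not taken from any particular app): keep the scroll handler itself nearly empty and push the expensive work into requestAnimationFrame, so the UI thread stays free to keep the view moving under your finger.

    var updatePending = false;

    window.addEventListener('scroll', function () {
      if (updatePending) {
        return;                       // coalesce bursts of scroll events into one frame
      }
      updatePending = true;
      window.requestAnimationFrame(function () {
        redrawVisibleContent(window.pageYOffset);  // hypothetical expensive redraw
        updatePending = false;
      });
    });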

Asteroid Runner on the iPhone!

May 18th, 2013

Earlier this week I was very happy to get my Asteroid Runner game working on the iPhone with minimal fuss. It only took two tries: once using the older "iOSImpact" framework, which turned out not to implement the accelerometer events, and once using the newer "Ejecta" framework. Both are made by the creator of the ImpactJS framework and basically provide a Javascript interpreter plus a mock HTML5 canvas with a rich enough API to match the way the game runs in the browser. It might be possible to use just a raw UIWebView, but after a brief foray it seems there's a bunch of overhead around resource loading for all the images and sounds that isn't worth re-implementing, as far as I can tell.

iPhone 5 screenshot

iPhone 5 live screenshot

One of the things I've been happiest with through this porting process is that I built the screen width and height to be flexible early on; the game plays a little differently depending on your browser/device shape, but it avoids any kind of nasty scaling problems. I might need to move away from this for balance reasons, but during functionality testing it's nice to see the whole spectrum on different devices (even iPhone 4 vs. iPhone 5 with their different screens).

Screen Shot 2013-05-18 at 10.10.48 PM

Relative screen heights are used throughout (with apologies for the abuse of the term ‘consts’ here)
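To give a rough idea of what those 'consts' amount to (a simplified sketch from memory, not the exact code in the screenshot; the names and values here are made up), everything is expressed as a fraction of the screen and only converted to pixels at runtime:

    // Hypothetical relative sizes: fractions of the screen, not pixel values
    var REL = {
      PLAYER_HEIGHT: 0.10,   // player sprite is 10% of the screen height
      ASTEROID_MIN:  0.05,   // smallest asteroid is 5% of the screen height
      SPAWN_MARGIN:  0.02    // keep spawns this far in from the edges
    };

    // Convert against whatever canvas Impact actually gives us
    var playerHeight = REL.PLAYER_HEIGHT * ig.system.height;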

This also required extending the ImpactJS entity class to allow positioning entities relatively instead of absolutely (since we have no idea how many pixels the screen will have). I created a plugin that injects this into impact.entity here. I think this could be useful for other games that share this flexibility of scale as well.
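The core of the idea looks roughly like this (a simplified sketch, not the actual plugin linked above): an entity can declare its position as fractions of the screen, and the plugin converts that to pixels when the entity spawns.

    ig.module('plugins.relative-position')
    .requires('impact.entity')
    .defines(function () {

      ig.Entity.inject({
        relativePos: null,   // e.g. { x: 0.5, y: 0.9 } for bottom-center

        init: function (x, y, settings) {
          this.parent(x, y, settings);
          // If a relative position was given, override the pixel position
          if (this.relativePos) {
            this.pos.x = this.relativePos.x * ig.system.width;
            this.pos.y = this.relativePos.y * ig.system.height;
          }
        }
      });

    });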

Javascript Intellisense

May 14th, 2013

Working on my Asteroid Runner game again, and I finally found the first IDE/editor that is able to parse the ImpactJS engine effortlessly: IntelliJ IDEA!

Screen Shot 2013-05-14 at 1.48.44 PM

 

After trying just about everything under the sun (I had previously used Coda and Komodo on this project hunting for this functionality), I had pretty much given up hope. Now I'm happy again and excited to be able to code even faster than before. It sure beats having to keep a browser window open to hunt through the API documentation for everything. The best part is that I didn't even have to specify any paths; it did the traversal and indexing all on its own. Major points to JetBrains for the ease of use here, although getting the rest of the project set up in the first place still seemed a little cumbersome.

Trustworthy Advice and Mental Process

May 13th, 2013

We just got back from a trip to Alaska, and in a moment of vacation-induced clarity and relaxation, I discovered a gap in my mental model that I didn’t know existed: I don’t have a process to go back and ‘edit’ the trustworthiness of advice when I later learn that the advice-giver is untrustworthy. I found this to be a pretty fascinating brain problem and I’m enjoying thinking about solutions to it.

Here was the scenario:

  1. We get on a tour bus to see the cultural history of Ketchikan. I give the tour guide the benefit of the doubt and assume he’s an informed guy since he’s worked here for 6 years giving the same tour.
  2. As we drive around the touristy parts of downtown, he points out a candy shop that sells chocolate-covered oreos and says that they’re delicious. At this point I’m assuming he’s correct and I’m envisioning something gooey, melty, and decadent. We plan to go get one once the tour is over.
  3. The tour goes on for a few hours, and our guide proceeds to toss his credibility out the window bit by bit. We learn interesting cultural nuggets like: “Bill Clinton signed an act to create the Tongass National Forest Preserve in the 1970s”, “There’s a boat stuck in the mud, must have been stranded at low tide”, and “Here’s Ketchikan’s tanning salon.” It becomes apparent pretty quickly that he’s not exactly the man for the job.
  4. Afterwards we still go to the candy shop to try a chocolate-covered oreo. It’s pretty terrible, unfortunately – just bland confectioner’s chocolate crusted onto a regular oreo out of a package. Not exciting in any way.

As I’m eating the disappointing oreo, I realize that I’m not at all surprised that the tour guide’s recommendation was poor and not well-thought-out… but the problem was, once I learned that his credibility was low, I never went back and added any suspicion to my mental concept of “we should go get a chocolate-covered oreo”. I’m fairly confident I would have been hesitant if he’d mentioned it at the end of the tour, but it never even occurred to me that it was an issue because I had already filed away the original fact before ever learning that he was untrustworthy.

This was a fairly scary realization for me; it seems likely that I’m leaving context clues on the table when taking advice from people and then acting on it much later. Fortunately it’s rare for my opinion of someone’s trustworthiness to drop so significantly, but I suppose it must happen sometimes. I’m not sure that I always even maintain the link between the nugget of information and the person who told it to me, depending on how much time has passed.

So far the best solution I’ve come up with is to focus specifically on the problem case: a major drop in the trustworthiness of someone I’ve met. If I run into a scenario like this, I hope to go back in short-term memory and do the editing based on that index while I still remember more details about the linking. This seems like a more pragmatic use of mental energy than trying to remember who told me each piece of information for eternity, but it doesn’t handle the case of a trustworthiness drop far into the future once short-term memory is exceeded. I’m not sure I have a viable solution for that scenario other than increasing my general skepticism up front, which I’m very hesitant to do.

Angular.js and data binding

April 12th, 2013

A few weeks ago in a burst of creative energy, I decided it was time to take another stab at overhauling my shopping list site. This time I’m forcing myself to learn a new framework, Angular, which has something very cool that I want to get to know better for professional development: data binding.

The basic idea behind data binding is that it creates a two-way link between an element that you see on the screen (like a button or text box) and a variable or object in the data structure generated by your code. This two-way link automatically handles keeping the two in sync, which is a big time saver for a language like Javascript where updating UI elements can be quite messy (depending on how much jQuery you use, maybe).

Here’s the basic structure of the shopping list now:
Screen Shot 2013-04-12 at 11.15.04 PM

There are at least two really cool things going on here:

  1. The "ng-repeat" attribute on the table row tells Angular to repeat this row once for each item in the "shoppingList" data structure. This means I don't have to worry about managing the actual HTML of the list; it's just created for me. My HTML markup doesn't ever get hugely long and complex, and it doesn't have to be generated by Javascript, so it's much easier to read.
  2. The "ng-model" attribute on the checkbox sets up automatic two-way data binding between the checkbox's state and the value of the "item.isGotten" field. When I check the checkbox, .isGotten gets set to true; when I uncheck it, it gets set to false. This saves me having to write the worst kind of click event handling code (see the sketch just after this list). It's not just making programming faster; it's making it more fun by letting me spend all my time thinking about the data structures and algorithms instead of the mundane glue.
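Since the screenshot doesn't reproduce well as text, the markup in question looks roughly like this (a hypothetical reconstruction; the real attribute values and columns may differ):

    <tr ng-repeat="item in shoppingList">
      <td><input type="checkbox" ng-model="item.isGotten"></td>
      <td>{{ item.name }}</td>
    </tr>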

 

The Javascript behind this is also much simpler than the old ugly version that had to handle parsing, generating HTML, handling events, and all that. It doesn’t save to the server quite yet, but it’s almost there. Because data binding handles all the basic syncing, I only have to define the logic for how to add and remove items from the list. The savings in lines of code is something like 80%.

Screen Shot 2013-04-12 at 11.21.10 PM
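And since that screenshot doesn't come through as text either, here is roughly what the controller amounts to (a simplified sketch, not the exact code; names like ShoppingListCtrl are my approximations):

    function ShoppingListCtrl($scope) {
      $scope.shoppingList = [];

      // Add a new item; data binding re-renders the table row automatically
      $scope.addItem = function (name) {
        $scope.shoppingList.push({ name: name, isGotten: false });
      };

      // Remove an item from the array and its row disappears on its own
      $scope.removeItem = function (item) {
        var i = $scope.shoppingList.indexOf(item);
        if (i !== -1) {
          $scope.shoppingList.splice(i, 1);
        }
      };
    }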

This is simple and clean in a way that warms my heart. Most fun I’ve had with web development yet!

Shopping List: Input Encoding

March 16th, 2013

It all started a couple weeks ago with an apostrophe. We were adding “Esther’s 80th card” to the shopping list, and it kept coming up “Esther\’s 80th card”. Seemed innocuous enough – at some level the apostrophe is probably being escaped with a backslash. This wasn’t the first time we’d seen it and I had a little time over the weekend, so I thought I would jump in and fix it with some better encoding.

Little did I know that this would expose all kinds of other problems with special characters, particularly ampersands, spaces, and quote marks of any kind. There seemed to be some black magic going on at multiple levels making it difficult to debug why simple encode()/decode() calls were falling far short.

Eventually, after learning about such fun concepts as PHP’s “magic quotes” and all the different variants of encoding and decoding calls that are broken in fun ways, here’s what I came up with for saving data:

 

I think this works for all reasonable inputs; at least it works for every test I can come up with. We end up with fully-encoded strings stored in the flat file that aren't decoded until immediately before being displayed to the user. The most fun part by far is the oddity that $_POST only operates properly with an encoded input string, but then it unhelpfully decodes it for you (I couldn't figure out how to turn this off, anyway). So I have to encode once in the Javascript and once in the PHP.
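The Javascript half of that looks something like the sketch below (my own reconstruction; 'save.php' and the 'item' parameter are stand-ins for whatever the real endpoint uses): percent-encode the raw text before it ever goes into the POST body so characters like & and ' survive the trip, knowing the PHP side will HTML-encode it again before writing to the flat file.

    function saveItem(rawText) {
      // Percent-encode on the way out so &, ', ", and spaces survive the request
      var body = 'item=' + encodeURIComponent(rawText);

      var xhr = new XMLHttpRequest();
      xhr.open('POST', 'save.php', true);   // hypothetical endpoint name
      xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      xhr.send(body);
    }

    // On the PHP side, $_POST has already URL-decoded the value, so it gets
    // encoded again (e.g. with htmlspecialchars) before being written to the file.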

In any case, time to go get some M&Ms! I can’t remember the last time I ate regular M&Ms, but it makes for a great example in HTML special characters.

Neil Freeman’s “Electoral College Reform” Map

March 3rd, 2013

A few weeks ago a pretty cool map popped up on the internet: it's a redrawing of U.S. state lines to create fifty regions with equal population. The goal is to end the disparity between the popular vote and the electoral college vote, and to normalize the value of each person's individual vote in both presidential and congressional elections. You should check it out here if you haven't seen it; it's quite fascinating: http://fakeisthenewreal.org/reform/.

There are many things I like about this map. It’s drawn based on both population distribution and commute patterns, so few people will have to cross a state line to drive to work despite the massive redrawing. The state names are whimsical but logical, based mostly on landscape features whose names we don’t usually see in common usage at least in other parts of the country. “Big Thicket” and “Firelands” look like they come from a fantasy novel, while “Shenandoah” and “Atchafalaya” are compelling for their unusual lettering and sound. I can’t help but imagine the map as describing an alternate reality United States, where history and culture are divergent just like the state boundaries. How different would history have been if there were no Mason-Dixon line? What would happen to college and professional sports if the states were so sharply divided between rural and urban?

As much as I like the map (to be fair, I’m likely to buy the poster once it’s available), there are a few ways in which it bothers my inner nerd:

1. There's no quadripoint: no equivalent of the "4-corners" intersection between Colorado, Utah, New Mexico, and Arizona. This is disappointing since it's one of my favorite quirks of the current U.S. layout. Quadripoints seem to me impossibly unlikely without some human intervention in border definition, which the current U.S. layout certainly had a lot of (witness the presence of so many straight lines). I suppose a programmatically generated map is almost certain to have few such quirks, but it feels like some amount of human expression is lost in the transition.

2. It removes the elegant game mechanic of the unequal distribution of voting power between the House and Senate. The current system, where populous states carry more weight in the House and small states get over-represented in the Senate, has always struck me as a clever way to create interesting discussions about how different types of legislation benefit different subsections of the population. I really like that it leads to different value systems and platform distributions between the two houses; it seems like this would contribute to more thoughtful legislation in the bills that can pass both. If we instead tried to make each state equally well-represented in both houses, the only differentiator left would be the longer term length in the Senate.