Hey, look over here.
It’s starting to feel a lot like summer. During a recent visit to Arriviste, I tried one of their seasonal specials, which consisted of espresso, Coke, and bitters (the name escapes me). It was delicious and it inspired me to make my own. So, I dusted off my Toddy and made a batch of Musumba from Commonplace with the standard Toddy recipe. Then:
- 4 ounces of cold brew
- 4 ounces of tonic water
- 1 tsp of simple syrup
- Dash of aromatic bitters
Blue Apron has been a game-changer in my household. It lowered the bar for learning to cook by cutting out the shopping and recipe-hunting and by limiting the cook time. That lower bar let me focus on just the prep work and the cooking, so now I’m cooking! The feedback loop here is obvious: cook food, eat food, yum, repeat.
This is a general principle that applies everywhere: remove friction and tighten feedback loops, then iterate and expand. For example, at my job, if we want better quality through more testing, we start small. Set up a testing framework and create examples that let the team get started easily; they can copy the existing examples and modify them for their features. Create a feedback loop that automates the testing and makes the results visible. Once that’s established, you can start adding more sophisticated tests and increasing coverage.
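As a concrete sketch of what such a copyable starter example might look like (assuming a Python codebase with a pytest-style runner; the feature and names here are mine, not from any real project):

```python
# test_example.py -- a copyable starter test for the suite.
# Teammates duplicate this file, swap in their own feature,
# and the automated run surfaces the results, closing the loop.

def slugify(title: str) -> str:
    """Toy feature under test: turn a post title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_lowercases():
    assert slugify("Coffee") == "coffee"
```

The point isn’t the slug function; it’s that a new test is a two-minute copy-and-edit job, and the runner reports pass/fail without anyone asking.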
My Fujifilm X100T delights me.
It’s been two years since I bought the camera. That purchase was a big step in my photography hobby, which I had dabbled in during the years prior. It was my first grown-up camera and the most I had ever spent on that kind of equipment. I grew up in the school of thought that the way to take your photography to the next level was to buy the best DSLR body and prime lens you could afford, then play the lens game and diversify from there.
Given that background, I struggled with whether the X100T would satisfy my needs. A fixed lens. What if I grew tired of the one focal length? What if I wanted to someday capture sports or landscapes? The video features are rather primitive—what if I need to do some filming?
The answer to those questions is, “Too bad.” But I found those limitations ultimately to be liberating. There would be no lens game to be played. I would not be taking my camera out for high-action sports or shooting a documentary. My camera had already answered those questions with a “no,” and I could focus on what the camera was capable of.
I carried this mindset into my workflow as well. My old school of thought dictated that I must shoot in RAW to preserve all of the image data, giving me the greatest flexibility to process the best photos. Instead, I decided to shoot only in JPEG. This freed me from a whole category of anxieties and questions: “Where do I store the files? A local backup and then a cloud solution? I’ll need a more powerful machine to run Lightroom. But first, I need a license for Lightroom.”
Around this time is when Google Photos was released. Unlimited photo storage, as long as you are shooting JPEG. My photos are synced across all my devices. Sold. (Yes, it’s free, so I’m the product being sold.)
Nowadays, my “workflow” is pretty minimal. I take pictures. I’ll copy them to my phone using the built-in WiFi (rather clunky, but usable), or pop out the SD card and copy them onto my laptop. Google Photos syncs them to the cloud. If I want to publish something, I’ll bring the photo into Darkroom on my phone. If I want to make a print, the Flag app on my phone. That’s it. No real post-processing.
That’s why I enjoy using my camera. There’s not much to think or worry about. I’ve got no extra lenses to pack. All of the basic functions are obviously exposed on the body. I can set every dial to Auto, and I’ll get great pictures. If I’m feeling particularly artistic, or need to adjust to my environment, those controls are right there. Want less bokeh? Click, click, click. Subject too blurry? Dial in the shutter speed. That’s it.
So, let’s go out and shoot!
I upgraded to macOS Sierra today on my work laptop. I hit a few minor niggles related to some keyboard hacks I had in place. The first was working around the fact that Karabiner no longer works in Sierra, which is well documented by Brett Terpstra here. I don’t rely too much on the “Hyper” key, so actually the main thing I cared about was remapping Caps Lock to Escape. This, by the way, has been a game-changer.
I uninstalled Karabiner and Seil, which I had previously installed to accomplish my hacks (although I was never clear what Seil was for).
The second issue was related. I type on a Das Keyboard Model S Ultimate. Before the upgrade, I swapped the option and command keys because it is natively a PC keyboard. This stopped working in Sierra. Despite setting the modifier keys to be swapped, the behavior did not change. So, I used Karabiner-Elements to accomplish the swap:
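For reference, the swap lives in Karabiner-Elements’ Simple Modifications, which are stored in `~/.config/karabiner/karabiner.json`. The fragment below is my sketch from memory; the exact schema has varied across Karabiner-Elements versions, so treat it as approximate (the right-hand modifiers would get analogous entries):

```json
{
  "profiles": [
    {
      "simple_modifications": [
        { "from": { "key_code": "left_command" }, "to": [ { "key_code": "left_option" } ] },
        { "from": { "key_code": "left_option" }, "to": [ { "key_code": "left_command" } ] },
        { "from": { "key_code": "caps_lock" }, "to": [ { "key_code": "escape" } ] }
      ]
    }
  ]
}
```

In practice you’d set these through the Simple Modifications tab in the app rather than editing the JSON by hand.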
A watchOS 2 app that taps you at specified intervals. My old Casio watches have a feature where they beep twice at the top of the hour, which helps me stay aware of the time without having to look at my watch. This would be even more useful if it were undetectable by others, via a tap on the wrist. It would also be useful when giving presentations, so that you are aware of your time, again without looking at your watch. I suppose it would help with any timing application; right now, you always have to wake the watch to see the time.
There’s an iOS app that basically does this: http://buzzclockapp.com/
The main area of improvement I see for current calendar apps is the visualization of time. In my previous sketch, I represented my schedule with two clocks (one for AM, one for PM). The problem is that this doesn’t do a good job of communicating the continuity of time. There is a lot to consider here. I found an excellent post by Doug McCune with lots of nuggets on this topic. He begins by identifying two main challenges: continuity and personal context. Both are quite applicable to designing a good calendar app.
He goes on to discuss various representations like line charts, circular charts, spiral charts, and more. He specifically addresses using two clocks to show a day’s worth of data and brings up its problems:
> The biggest problem with the chart is the incorrect continuity. A single clock on its own isn’t a continuous range, it’s really only half a range. So the clock on the left is showing 12am – 12pm, but when you reach the end of the circle the data doesn’t continue on like the representation shows. Instead you need to jump over to the second clock and continue on around. It’s difficult to see the ranges right around both 12pm and 12am, since you lose context in one direction or another (and worse, you get the incorrect context from the bordering bubbles).
I recognize this, but I’m still on the fence as to whether this is enough to throw out the clock metaphor entirely. Nonetheless, it’s a fascinating read with great insights. Although his goals are different from mine, there are many things I can learn from his piece.
I also found a gallery of time-oriented visualizations that will hopefully spark some ideas as I continue to explore this calendar app idea.
Here’s a doodle of some ideas I’ve had in my head, based on my rant about current calendar apps. I’m exploring the use of clocks to show my busyness landscape. Although it gets a little awkward at the AM-to-PM transition, clocks are ubiquitous and easily interpreted. I’m also using clocks to show duration; by stacking them, I can save space. So, each line on the grid represents up to two hours. Events with zero duration and all-day events would need to be treated differently.
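To make the AM-to-PM awkwardness concrete, here’s my own little sketch (not from the doodle itself) of how a time of day maps onto the two clock faces:

```python
from datetime import time

def clock_angle(t: time) -> tuple[str, float]:
    """Map a time of day to a clock face ("AM" or "PM") and the
    hour hand's angle in degrees, measured clockwise from 12."""
    face = "AM" if t.hour < 12 else "PM"
    hours = (t.hour % 12) + t.minute / 60
    return face, hours * 30.0  # 12 hours sweep 360 degrees

# The transition problem: 11:59 AM sits at the very end of one
# face while 12:01 PM sits at the very start of the other, even
# though the two moments are two minutes apart.
print(clock_angle(time(11, 59)))
print(clock_angle(time(12, 1)))
```

That jump between faces at noon (and again at midnight) is exactly the continuity problem from the McCune post above; the open question for me is whether the familiarity of the clock face is worth it anyway.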