Evadne Wu :: Human Interface & Technical Delivery Specialist

Volatility and Redundancy

I personally believe that most software engineering (or even product engineering) can be done either by a very small group of highly proficient individuals, or by a larger team composed of redundant workers who have undergone a unified training plan.

Specialist teams that excel at rapidly bootstrapping a new product and reducing uncertainty (therefore reducing value-at-risk) may not be suited to maintaining business-as-usual applications that must now work harmoniously with reality. Conversely, it would be inappropriate to task teams built in a uniform manner with the massive creation of new value through the introduction (or delivery) of new technology at the right time and place.

The risk profile for any software project also changes over time, as more code is written and the chasm between specification and deliverable is gradually bridged or exposed. On day one, it may be equally likely that you’ll end up doomed or revered, but on day ∞ you can probably pass the Most Doomed Individual label around freely.

The easiest way to gauge the type of your team is to measure the median pain somebody would cause if they were to quit unexpectedly the next day. If you have a well-knit team of highly efficient individuals, you must tolerate occasional volatility.

It seems that the best solution for all is to build both kinds of teams and allow open allocation. There are two other options in the matrix — a large team of highly proficient and effective individuals, and a small team of duds. While the latter is certainly unpalatable, the former is possible, but I also believe there is a natural limit on the size of such a team before communication grinds to a halt and turns professionals into unwilling slackers.

Great engineering organizations justify their own existence by the creation — and the facilitation of creation — of massive value through the identification, development, and delivery of new technology. They drive the world forward, are financially solvent, and perform much better than the regular “$1m for $5m” pack.

Creating OS X Installation Media

There is a built-in way to create installation media from the over-the-air installers which still works today. No more fiddling with InstallESD.dmg.

$ sudo <path to Install OS X app>/Contents/Resources/createinstallmedia \
    --volume <path to volume to convert> \
    --applicationpath <path to Install OS X app>

(The createinstallmedia tool lives inside the installer application bundle; the exact app name depends on the OS X version you downloaded.)

  • --volume: a path to a volume that can be unmounted and erased to create the install media.
  • --applicationpath: a path to a copy of the OS installer application to create the bootable media from.
  • --nointeraction: erase the disk pointed to by --volume without prompting for confirmation.

For example:

$ sudo "/Applications/Install OS X <version>.app/Contents/Resources/createinstallmedia" \
    --volume /Volumes/Untitled \
    --applicationpath "/Applications/Install OS X <version>.app"

Value Confirmation with h5Validate

There are a few human interface tips for creating successful value confirmation components:

  • The error must be highly visible. Applying red outlines on invalid fields is a good idea.
  • Validation must occur in real time as the human attempts to correct the discrepancy between the original value used for committing data and the replica value used for validation. Validation must occur as the human types and re-types in either field, presenting or dismissing visible error markers.

The following snippet implements password validation with h5Validate, a popular JavaScript validation component, under the following assumptions:

  • The app uses $ for jQuery.
  • Whenever two password fields are contained in .set-user-password, a validation link is anticipated.

One thing to note is the escapeRegex function appropriated from jQuery UI. It ensures that special characters are matched literally, avoiding situations where a . placed in the source string would allow any character to match (which defeats the purpose of validation).

$form.on("keyup blur change", ".set-user-password :password", function () {
  var $this = $(this);
  var $allPasswordFields = $this.closest(".set-user-password").find(":password");
  var escapeRegex = function (value) {
    //  Stolen from $.ui.autocomplete.escapeRegex
    return value.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g, "\\$&");
  };
  if ($allPasswordFields.length === 2) {
    if ($this.is($allPasswordFields.eq(0))) {
      var $next = $allPasswordFields.eq(1);
      $next.attr("pattern", escapeRegex($this.val()));
      if ($next.val().length) {
        //  Re-validate the confirmation field against the new pattern.
        $next.trigger("change");
      }
    }
  }
});
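To see why the escaping matters, here is a standalone sketch (independent of h5Validate) of what a single unescaped metacharacter does to the match:

```javascript
//  Without escaping, a "." in the source password matches any character:
var pattern = "a.c";
var naive = new RegExp("^" + pattern + "$");
console.log(naive.test("axc")); // → true, which defeats the validation

//  Escaped, the pattern only matches the literal string:
var escapeRegex = function (value) {
  return value.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g, "\\$&");
};
var strict = new RegExp("^" + escapeRegex(pattern) + "$");
console.log(strict.test("axc")); // → false
console.log(strict.test("a.c")); // → true
```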

The Art of Quitting

On the day I gave a three months’ notice, the CTO shared his art of quitting a software engineering job:

  • Individual contributors should give a two weeks’ notice.
  • Find your replacements and train them well.
  • Write documentation in the successor’s working language (especially important if it differs from your working language).
  • Ensure that your colleagues are able to see the things you see.
  • Quit on extremely good terms, so you may be invited back for a talk, or consulted about affairs irrelevant to your previous work product.
  • Be honest about the technical debt you know about and share ways to clean it up.

Once upon a time, the man quit his job in the morning, went to an interview at noon, discovered more bozos than estimated, and re-assumed his old position in the evening. I found his account otherworldly and was not quite sure how this would work for any person less stellar.

I left a team of three and never went back. Rumor has it that Management expected productivity to drop by one-third, but it actually dropped by 100%. By the time I passed by the country again, a couple of years later, one of the co-founders had pivoted into the shoe store business, and the office was vacant.

Embedding a UIWebView, without the Peripheral View

#import <objc/runtime.h>

@implementation RAEditorView

- (instancetype) initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if (self) {
        [self setup];
    }
    return self;
}

- (instancetype) initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        [self setup];
    }
    return self;
}

- (void) setup {
    [self hideKeyboardBar];
}

- (void) hideKeyboardBar {
    static NSString * uniqueSuffix;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        CFUUIDRef uuid = CFUUIDCreate(NULL);
        uniqueSuffix = (__bridge_transfer NSString *)CFUUIDCreateString(NULL, uuid);
        CFRelease(uuid);
    });
    for (UIView *aView in self.scrollView.subviews) {
        Class ownClass = [aView class];
        NSString *className = NSStringFromClass(ownClass);
        if (![className hasSuffix:uniqueSuffix]) {
            NSString *newClassName = [className stringByAppendingString:uniqueSuffix];
            Class newClass = objc_allocateClassPair(ownClass, [newClassName UTF8String], 0);
            if (newClass) {
                //  Override -inputAccessoryView on the dynamic subclass to return nil,
                //  which hides the keyboard accessory bar.
                IMP nilImp = [self methodForSelector:@selector(methodReturningNil)];
                class_addMethod(newClass, @selector(inputAccessoryView), nilImp, "@@:");
                objc_registerClassPair(newClass);
            }
            object_setClass(aView, (newClass ?: NSClassFromString(newClassName)));
        }
    }
}

- (id) methodReturningNil {
    return nil;
}

@end

LiveFrost: Fast, Synchronous UIView Snapshot Convolving

LiveFrost is a new thing that Nicholas and I spent half an evening working on. It gives you fast, synchronous UIView snapshot convolution by providing LFFrostView, a blurring view for UIKit which you can drop into any superview you want blurred. When the app runs, LFFrostView is filled with a convolved image drawn from a snapshot of its superview.

LiveFrost is released under the MIT license and comes with a sample app.

Other Solutions

There are many competing implementations available; FXBlurView and ios-realtimeblur are the top two hits.

iOS-blur is another one that warrants special mention. It’s an amazingly brilliant hack for iOS 7+ which simply steals a UIToolbar and has that view do the blurring.

iOS-blur relies on Apple’s kindness and generosity to work. If you try to run it on an iPhone 4, where LiveFrost works smoothly, it refuses to blur. However, if you’re just looking for a blurring view for an iOS 7+ application that does not target the iPhone 4, and you’re not keen on customization or compatibility, this library obviously does the blurring with the least amount of code. :)

General Workflow

The general idea of such a blurring view is pretty simple:

  • Draw the contents of its superview into a bitmap context, like a CGBitmapContextRef if you are using Core Graphics.
  • Blur the bitmap algorithmically. (For example, by using GPUImage’s GPUImageUnsharpMaskFilter, or the Accelerate framework’s vImageConvolve_ARGB8888.)
  • Send the bitmap back onto the screen in some way.
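The middle step — blurring algorithmically — can be illustrated with a naive one-dimensional box blur, written here in JavaScript purely as a sketch; real implementations use vImage or the GPU, and work in two dimensions:

```javascript
//  Naive box blur over a 1D pixel buffer; samples outside the
//  buffer are clamped to the nearest edge pixel.
function boxBlur(pixels, radius) {
  var out = new Array(pixels.length);
  for (var i = 0; i < pixels.length; i++) {
    var sum = 0, count = 0;
    for (var j = i - radius; j <= i + radius; j++) {
      var k = Math.min(pixels.length - 1, Math.max(0, j)); // clamp at edges
      sum += pixels[k];
      count++;
    }
    out[i] = sum / count;
  }
  return out;
}

//  A single bright pixel is spread across its neighbours:
console.log(boxBlur([0, 0, 255, 0, 0], 1)); // → [0, 85, 85, 85, 0]
```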

Not so simple in practice. The first thing you’d notice when running samples of these implementations on a real device is likely the sluggishness: low frame rates, or out-of-sync blurring results lagging a few frames behind the main view.

Slow Drawing Explanations

In greater detail, the jankiness (in which you lose frames) is usually caused by doing too much on the main thread (1 second / 60 frames = 0.016̅ seconds per frame). If you’ve ever profiled such solutions, they usually spend a lot of time drawing into a large image buffer. Once you’ve solved that by bringing the scale factor down (as the product will be convolved anyway), you’ll find the solution still spending a lot of time creating single-use image buffers.

That is neither right nor necessary. If the blurring view has not been resized — in other words, if its bounds size has not changed — it should not have to waste time throwing away and then reclaiming memory. Reusing the context gives you much more time for actual work.

Once the jankiness is understood, we can examine the causes of frame lag as well. The developer, faced with the problem of things taking too long, may try rendering asynchronously — off the main thread — to ease the burden. Now they have precisely two problems: frame lag and threads.

First of all, putting rendering on a background queue means the bitmap has to (conceptually) travel to the background thread, get operated on, and at a later stage be re-committed back to the layer, sometimes in the form of -[CALayer setContents:] with a CGImageRef. As all drawing is done by the render server (backboardd), the actual image may be committed several frames past its originating frame, resulting in visible lag.

Rendering views off the main thread may also not work out as intended. Some collection views driving multiple cells, usually one per represented object, compute layout all at once. They usually hold an internal layout map that correlates objects with their presentation items, deriving bounds and other attributes from the same source. (This is exactly why infinite scrolling is so difficult to achieve with UITableView. This class expects you to know everything, because it wants to use that knowledge to compute a complete layout, or at least something it can get layout information from.) Views that prefer to build interim layout states as they go still need to constantly mutate their layer trees, updating subviews to reflect content at the new offset. Even though CALayer is thread-safe, you might catch the view in the middle of mutating its own subviews as you attempt to render it from a background thread.

Practically, this results in missing cells in the final images. If you scroll really fast on an implementation that throttles the number of frames, you’ll see this happen a lot, given a long enough collection view to draw from.

If the drawing or convolving itself still takes too long, the developer will have to manually drop frames. They might decide to have a demigod object which listens to CADisplayLink and implement the callback handler like this. I first learned of this technique from Brad Larson’s answer to “CADisplayLink OpenGL rendering breaks UIScrollView behaviour”:

- (void) refresh {
    if (dispatch_semaphore_wait(_renderSemaphore, DISPATCH_TIME_NOW) == 0) {
        dispatch_async(_renderQueue, ^{
            for (UIView<LFDisplayBridgeTriggering> *view in self.subscribedViews) {
                [view refresh];
            }
            //  Release the semaphore only when the block finishes,
            //  so at most one block is ever in flight.
            dispatch_semaphore_signal(_renderSemaphore);
        });
    }
}

Using this technique, the developer effectively clamps the depth of the dispatch queue to at most one block. When the callback fires, it invokes dispatch_semaphore_wait with an immediate timeout; if the semaphore has not been released, because the previously queued block has not yet finished, the wait returns immediately and the frame is simply dropped. This throttles the number of frames processed by the blurring code without slowing down the main thread.
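The clamping behaviour itself is not specific to GCD. As a language-neutral illustration, here is the same drop-the-frame logic modeled in JavaScript with a non-blocking try-acquire flag (makeThrottledRenderer is a hypothetical name, not part of LiveFrost):

```javascript
//  Model of the semaphore clamp: at most one render is ever "in flight";
//  extra display-link ticks are dropped instead of being queued up.
function makeThrottledRenderer(render) {
  var busy = false;
  return {
    tick: function () {
      if (busy) return false; //  semaphore unavailable: drop this frame
      busy = true;            //  dispatch_semaphore_wait succeeded
      render();
      return true;
    },
    done: function () {       //  dispatch_semaphore_signal: block finished
      busy = false;
    }
  };
}

var rendered = 0;
var renderer = makeThrottledRenderer(function () { rendered += 1; });
renderer.tick(); //  starts a render
renderer.tick(); //  dropped: the previous render has not signalled yet
renderer.done();
renderer.tick(); //  starts another render
console.log(rendered); // → 2
```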

Unfortunately, fancy procrastination can’t save you from being late. You need to draw things fast on the main thread.

Fast Synchronous Drawing

It’s possible that you’ve spotted the #1 time sink: disposable single-use contexts. This approach is really clean, because no streams are ever crossed, and really slow, because you’re constantly deallocating and reallocating. Larger images need to be held in larger chunks of memory, and it’s harder to find larger chunks of memory when you’re in a tight spot.

You should therefore reuse the bitmap contexts. Create or re-create them only when the working size of the bitmap has changed, for example when new bounds come with a different size. At other times, just draw into the context you have and don’t throw the memory away.
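The reuse rule can be sketched in a few lines. In this JavaScript model, makeContextCache is a hypothetical helper standing in for a cached CGBitmapContextRef; it reallocates the backing store only when the requested size changes:

```javascript
//  Reuse the backing buffer; reallocate only when the size changes.
function makeContextCache() {
  var buffer = null, width = 0, height = 0;
  return function (w, h) {
    if (!buffer || w !== width || h !== height) {
      width = w;
      height = h;
      buffer = new Uint8Array(w * h * 4); //  four bytes per pixel (BGRA)
    }
    return buffer;
  };
}

var getBuffer = makeContextCache();
var a = getBuffer(64, 64);
var b = getBuffer(64, 64); //  same size: the very same buffer, no allocation
var c = getBuffer(32, 32); //  new size: a fresh, smaller buffer
console.log(a === b); // → true
console.log(a === c); // → false
```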

Turns out -[CALayer renderInContext:] is really fast when drawing into a context with a 0.5f scale factor (instead of 2.0f on a Retina Display), and it’s also much faster to convolve a smaller image.

LiveFrost obtains a pretty stable and high frame rate by using these simple rules.

Timing Sources

By default, LiveFrost uses CADisplayLink to drive update notifications. Instead of firing at fixed intervals like an NSTimer, CADisplayLink allows you to synchronize drawing with the refresh rate of the display. With CADisplayLink, you can be sure that on every invocation you get to draw and update the exact frame, in exactly the run loop mode you specified. Not so with NSTimer, which is also scheduled on a run loop but does not care about the screen.

The only weakness is that by default, CADisplayLink does not pause: LiveFrost will convolve the same image over and over even if the underlying view has not been updated. This is, generally speaking, a design tradeoff to avoid exposing more interface than necessary, but you can always take the LFFrostView off screen when you’re done.

If you’re trying to do something with OpenGL ES, you can look into the LFDisplayBridgeTriggering protocol:

@protocol LFDisplayBridgeTriggering <NSObject>

- (void) refresh;

@end

By default, interfacing with the display link is done through LFDisplayBridge, which holds a mutable, unretained, unsafe set of pointers to LFFrostView instances. If you pause the display link within LFDisplayBridge, you can still control actual refreshes yourself by calling [[LFDisplayBridge sharedInstance] refresh]. However, if you’re not overlaying UIKit things over your OpenGL ES view, you might consider just convolving things with OpenGL ES directly, without touching LiveFrost.

Like this, if you’re feeling adventurous:

LFDisplayBridge *displayBridge = [LFDisplayBridge sharedInstance];
CADisplayLink *displayLink = nil;
//  Note: object_getInstanceVariable is unavailable under ARC;
//  compile this file with -fno-objc-arc, or read the ivar via KVC instead.
object_getInstanceVariable(displayBridge, "displayLink", (void **)&displayLink);
displayLink.paused = YES;

Hardware Compatible Coding

This is pretty much a side note.

CGImageRef is a versatile wrapper, which means the underlying image data may have to be decoded dynamically when needed. If you’ve ever profiled an app trying to display a JPEG file obtained from the Internet, you’ll see a lot of time spent decoding and converting such an image to the GPU’s native format.

Fortunately, if you’re already drawing into a bitmap buffer, you have full control and the result does not require additional transcoding. It’ll be fast.

Using CAMediaTimingFunction to calculate value at time (t) »

Courtesy of Ivan Vučica’s StackOverflow answer: WebCore contains UnitBezier.h, which is assumed to be the same code that returns the value at a given time for any media timing function in Core Animation.

#ifndef UnitBezier_h
#define UnitBezier_h

#include <math.h>

namespace WebCore {

    struct UnitBezier {
        UnitBezier(double p1x, double p1y, double p2x, double p2y)
        {
            // Calculate the polynomial coefficients, implicit first and last control points are (0,0) and (1,1).
            cx = 3.0 * p1x;
            bx = 3.0 * (p2x - p1x) - cx;
            ax = 1.0 - cx - bx;

            cy = 3.0 * p1y;
            by = 3.0 * (p2y - p1y) - cy;
            ay = 1.0 - cy - by;
        }

        double sampleCurveX(double t)
        {
            // `ax t^3 + bx t^2 + cx t' expanded using Horner's rule.
            return ((ax * t + bx) * t + cx) * t;
        }

        double sampleCurveY(double t)
        {
            return ((ay * t + by) * t + cy) * t;
        }

        double sampleCurveDerivativeX(double t)
        {
            return (3.0 * ax * t + 2.0 * bx) * t + cx;
        }

        // Given an x value, find a parametric value it came from.
        double solveCurveX(double x, double epsilon)
        {
            double t0;
            double t1;
            double t2;
            double x2;
            double d2;
            int i;

            // First try a few iterations of Newton's method -- normally very fast.
            for (t2 = x, i = 0; i < 8; i++) {
                x2 = sampleCurveX(t2) - x;
                if (fabs(x2) < epsilon)
                    return t2;
                d2 = sampleCurveDerivativeX(t2);
                if (fabs(d2) < 1e-6)
                    break;
                t2 = t2 - x2 / d2;
            }

            // Fall back to the bisection method for reliability.
            t0 = 0.0;
            t1 = 1.0;
            t2 = x;

            if (t2 < t0)
                return t0;
            if (t2 > t1)
                return t1;

            while (t0 < t1) {
                x2 = sampleCurveX(t2);
                if (fabs(x2 - x) < epsilon)
                    return t2;
                if (x > x2)
                    t0 = t2;
                else
                    t1 = t2;
                t2 = (t1 - t0) * .5 + t0;
            }

            // Failure.
            return t2;
        }

        double solve(double x, double epsilon)
        {
            return sampleCurveY(solveCurveX(x, epsilon));
        }

        double ax;
        double bx;
        double cx;

        double ay;
        double by;
        double cy;
    };

}

#endif
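
The same math transliterates directly to JavaScript for quick experimentation outside Xcode. This sketch keeps only the Newton iteration and omits the bisection fallback for brevity, so it assumes a well-behaved curve:

```javascript
//  JavaScript transliteration of WebCore's UnitBezier (Newton iteration only).
function UnitBezier(p1x, p1y, p2x, p2y) {
  //  Implicit first and last control points are (0,0) and (1,1).
  this.cx = 3 * p1x;
  this.bx = 3 * (p2x - p1x) - this.cx;
  this.ax = 1 - this.cx - this.bx;
  this.cy = 3 * p1y;
  this.by = 3 * (p2y - p1y) - this.cy;
  this.ay = 1 - this.cy - this.by;
}

UnitBezier.prototype.sampleCurveX = function (t) {
  return ((this.ax * t + this.bx) * t + this.cx) * t;
};

UnitBezier.prototype.sampleCurveY = function (t) {
  return ((this.ay * t + this.by) * t + this.cy) * t;
};

UnitBezier.prototype.sampleCurveDerivativeX = function (t) {
  return (3 * this.ax * t + 2 * this.bx) * t + this.cx;
};

UnitBezier.prototype.solve = function (x, epsilon) {
  var t = x;
  for (var i = 0; i < 8; i++) {
    var x2 = this.sampleCurveX(t) - x;
    if (Math.abs(x2) < epsilon) break;
    var d2 = this.sampleCurveDerivativeX(t);
    if (Math.abs(d2) < 1e-6) break;
    t = t - x2 / d2;
  }
  return this.sampleCurveY(t);
};

//  kCAMediaTimingFunctionEaseInEaseOut uses control points (0.42, 0, 0.58, 1):
var easeInOut = new UnitBezier(0.42, 0, 0.58, 1);
console.log(easeInOut.solve(0.5, 1e-6)); // ≈ 0.5: the curve is symmetric about its midpoint
```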


By great fortitude and a series of lucky events, I was recently interviewed by App Stories for the benefit of App Camp for Girls. What you’ll read is extremely biased, and my answers change every week anyway, but I’m posting it here for posterity with much gratitude.

App Camp for Girls seeks to nurture future female developers and has recently finished their Indiegogo campaign 100% overfunded. Volunteer to help at their Portland events (and they’re looking to expand out of Portland too).

What do you currently do?

I’m a human interface specialist aspiring for world domination and running a consulting business on the side. I design human interface for mobile, web and desktop software, and do some degree of mobile and Web engineering. (Tongue in cheek variant: I help clients build business applications that make them superhuman, in an affordable and profitable way.)

How did you get started in Mac and/or iOS programming?

I started out working on visual design and digital publishing and was compelled to learn Web design and development to create a street photography portfolio. It kicked off a series of events forming Iridia Productions with a friend of mine and got me started on software development and the business side thereof.

I started out building business software on the Web with Cappuccino, a framework built on top of the Objective-J language which is a port of Objective-C atop JavaScript.

Iridia’s formative months coincided with the launch of the iPad and the explosive growth of the iOS platform as a whole, and we shifted from Web consulting to iOS consulting. We built our first iOS application, which was also my first, around this time.

What was the first app you created and what did it do?

Tarotie was the application I created with my co-founder at Iridia to explore what was possible with iOS, back when iOS 3 was mainstream and iOS 4 was gaining ground. It’s an interactive Tarot deck. You place the phone face-down on the table, then flip it back up. The app picks up device motion, recognizes the rotation, and shows a new card with the built-in flip animation.

Where did you get the idea for the app?

Both of us have a Tarot deck sitting on our respective shelves, and I have an unhealthy level of obsession with replacing ordinary things with convoluted technology. It’s mostly a series of coincidences.

What went well? What could have gone better?

A professional astrologist praised its apparent accuracy; we were half baffled. For a side project, Tarotie took over a whole year to build, only because we spent very little time on it between client projects. It could have been shipped in weeks if we both knew what to build and were both focused on the project for a very short period of time.

What is your favorite among the apps you’ve developed?

I once built an engine that used up to seven UIWebViews concurrently in pages to drive an infinite magazine-style layout which scrolls at 60 FPS.

Another app reads and writes WMF-wrapped PNG files natively, handles rich text editing on the iPad, with full RTF reading and writing support, and synchronizes everything with Dropbox while recording or playing back audio at the same time.

What advice do you have for young people who want to make apps?

Build your own support network by integrating the local and global community. As a novice, the first step is the hardest, and classrooms won’t always work. (Teaching iOS development to absolute novices, drawing from my own experience, is very difficult to do in certain cultures where failure is frowned upon.) Find out where people go and attend these meet-ups, show your progress to other peers, and find mentors early in the process. If there’s no such local community, you can easily create one by providing food, beer and a place to meet regularly. People will come out of altruism, curiosity or boredom, and you get to learn from them in all cases. (Bonus: If you buy people beer, they will like you.)

Rack up your open-source contributions as soon as possible to acquire experience, force exposure and heighten proximity. In certain cases, it’s highly beneficial to start with small tasks that add to an existing framework rather than starting from scratch; it provides some structure for learning about specific areas. Interacting with the open-source development community helps you learn more about the craft itself — ways to build usable, solid and performant software — and the meta-craft, which is the way of performing these tasks as a useful participant.

Postpone your “life project” and hedge some time against an initial piece you can finish expediently. It’s imperative that when shipping your first application, you ship it in a matter of weeks by creating very little. A string of small victories is far more beneficial than a long march with no proof of succeeding. Get something on the screen, and worry about product-market fit later on your second or third project when you are technical enough to avoid jumping into rabbit holes.

Find your ten-year and five-year anchors and work towards them. Acquire specific skills, connections and knowledge if need be, and don’t be afraid to eventually get out of building applications if that’s not the best way to manifest your ideas.

What’s your Twitter and username?

It’s @evadne on both platforms.

Semicolon Jeopardy

var something = function () {
  alert("something destructive.");
}

(function () {
  alert("do it now. :D");
}());

//  The function declaration for `something` is underhanded.
//  It has no terminating semicolon, so the immediately evaluated expression,
//  which evaluates to `undefined`, is used as a parameter to immediately invoke
//  the function, whose result is then assigned to `something`.

//  This is one of the many reasons why semicolon insertion is bad.
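To watch the trap fire outside a browser, substitute a logger for alert; the call order shows the IIFE’s result being fed to the unterminated function expression:

```javascript
var log = [];
var alert = function (msg) { log.push(msg); }; //  stand-in for the browser's alert

var something = function () {
  alert("something destructive.");
}

(function () {
  alert("do it now. :D");
}());

//  The argument IIFE runs first, then its undefined result invokes the
//  "destructive" function, whose own undefined result lands in `something`.
console.log(log);       // → ["do it now. :D", "something destructive."]
console.log(something); // → undefined
```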

SocialAuth is a tiny pod that handles Facebook and Twitter authentication, including redirection, and fully works on iOS 6+ on both the Simulator and real devices.

The video is not exhaustive, but there’s a sample app you can play with.

Correctly Handling ACErrorAccountNotFound »

Note: SocialAuth handles several edge cases and fixes incompatible hacks.

Call -requestAccessToAccountsWithType:options:completion: on your ACAccountStore. If it fails and tells you that there are no accounts set up for the particular type — with the error code ACErrorAccountNotFound (currently 6) — use SLComposeViewController to force a transition to the sign-in prompt.

SLComposeViewController is the only system-provided class that shows the respective sign-in page for your social service. iOS 5.1 broke prefs://, so it’s difficult for your app to show the correct Settings pane, and an embedded OAuth browser is less legit than the system dialog.

Tested fine on a real device. The iOS 6.1 Simulator (10B141) does not like this treatment, so beware if you use this in production.

Note: it’s not exactly fine to present and then immediately dismiss a view controller and hope that nothing bad happens. We’re extremely lucky that this manoeuvre actually works without freezes or visible artifacts.

ACAccountStore * const accountStore = [ACAccountStore new];
ACAccountType * const accountType = [accountStore accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierFacebook];

[accountStore requestAccessToAccountsWithType:accountType options:@{
    ACFacebookAppIdKey: @"-snip-"
} completion:^(BOOL granted, NSError *error) {
    if (!granted) {
        switch (error.code) {
            case ACErrorAccountNotFound: {
                dispatch_async(dispatch_get_main_queue(), ^{
                    SLComposeViewController *composeViewController = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];
                    [self presentViewController:composeViewController animated:NO completion:^{
                        [composeViewController dismissViewControllerAnimated:NO completion:nil];
                    }];
                });
                break;
            }
            default: {
                NSLog(@"%s %x %@", __PRETTY_FUNCTION__, granted, error);
                [[[UIAlertView alloc] initWithTitle:@"D:" message:error.localizedFailureReason delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil] show];
                break;
            }
        }
    }
}];