Wednesday, November 21, 2018

The Open Plan Office, My Story

It's interesting to find so many angry workers on the Internet expressing hatred of the open plan office, citing extreme annoyances such as the following:
  • People shouting at the top of their lungs
  • People playing loud music
  • The office blasting loud ambient music
  • People pretending to be guitar heroes on the floor
How exactly is someone supposed to work, let alone work productively, in such an environment?  To me it seems that the perception of collaboration is valued more than actual productivity.  Though I have no idea what's so collaborative about some perfect stranger from marketing walking past my workstation several times a day.

My own Experiment Involving 3 Companies

First, some credit to my first real job in the software industry, at a company with private offices.  Let's call this company Privacy Products for the duration of this post, because I felt my privacy was respected there.  I worked there for seven years and left reluctantly, wondering "what else is out there?"  I went on to work for a startup with an open plan office that was very spartan and austere (read: no personal effects).  Let's call this company Spartan Sprockets.

I only lasted 8 months at Spartan Sprockets.  After discussing future steps with my manager I decided to resign.  About a month later I was diagnosed with severe depression and the doctor gave me some of the strongest anti-depressants available.  I thought long and hard about the reasons why I had to leave this company; here is what I came up with:
  • A team struggling to self-organize and meet deadlines concurrently
  • Continuous pressure to deliver on-time
  • Writing experimental code that must go directly into production
  • Refactoring the worst rat's nest of anti-patterns I have ever seen in legacy code
I'm not a stranger to any of those things, so why did I feel so overwhelmed?

The new company I started working for was not a startup, had much more work / life balance built into the culture, and was more socially engaged.  It also had an open plan office, but personal effects littered the place.  I'll refer to this company as Happy Fun Town.  This company is known nationwide as one of the top software companies to work for.

And I hated it, but I was smart this time, you see!
I made a point of paying attention to any sources of frustration and anxiety, and I learned a few things.  My attention would wander to whatever individual or thing in the office was foreign or novel.  I was distracted by all kinds of things, such as:
  • Two people sitting next to each other pointing at the same screen
  • A picture of someone's face with a cat's body pinned to the wall
  • Large earrings bracketing long hair
  • Three monitors and a guy in a blue hoodie
  • White iPhone headphones in someone's ears
  • A random cough
  • A sudden "It could be... blah blah blah" from an ambient conversation that doesn't concern me in the slightest
  • A gentle clank on a desk and wondering what would make a sound like that
  • I could even hear someone shifting in their chair
  • I would lock eyes with a co-worker, stare for a moment clueless and bewildered, then mutually nod to resolve the awkwardness.
Noise-cancelling headphones were no help.  At both Spartan Sprockets and Happy Fun Town I worked alongside the most respectful and well-mannered people.  These people were not (completely) obnoxious.  Overall I worked well with most of these people and we would hang out for lunch on Fridays. I enjoyed Privacy Products a lot more though and occasionally think about boomeranging.

Summarizing my Overall Experience

Everyday distractions proved to be the single greatest reason I dreaded coming to work every day.  And it's particularly pernicious, because when I reach the point of dreading my career I start obsessing over (unrelated) big things.  You know you're in trouble when you ask questions like "What is the meaning of it all?"  But it was just death by a thousand cuts.

Small, relatively benign distractions, amplified by constant frequency (minute to minute, hour to hour, day to day, week to week, year to year), lead to a dramatic deficit in productivity.  I agree with Julian Treasure on this point: you are one third as productive in an open plan environment as in a private office.  I went more than three years without a private office and without privacy.  During that time I was not proud of the software I wrote.  In contrast, during the seven years I worked for Privacy Products, I helped take a grassroots product from humble beginnings to the point where it exceeded sales of the company's flagship product.  I was proud of the code I wrote, and because I had a private office, I had a large enough workspace to experiment with 4 computer monitors.  In seven years I saw my team grow from 4 people to 15 across two teams.  Every quarter revenue steadily soared, even amid the great recession that followed the summer of 2008.

During those open-plan years I produced code I hated both in and out of the office.  I always had a side-project on the go no matter where I worked.  While I worked for Privacy Products I wrote, from the ground up, a voxel-based terrain engine with multi-resolution support and shadow casting; in other words, literally rocket science.  While working for Spartan Sprockets and Happy Fun Town I had to shelve the terrain engine project due to feelings of burnout.  In lieu of that project I took on a much simpler one, found myself stuck on some of the simplest problems, unable to move past them; eventually that got shelved too.

I doubt that open plan offices are necessary for effective collaboration.  In fact, I believe open plan can hurt effective collaboration.  Over the past decade I've found that the vast majority of my meetings involved only one other person.  For that, a private room is not only sufficient, it is absolutely necessary.  I don't understand where some people get the airy-fairy idea that an open-plan office leads to greater creativity because ideas naturally bleed from one brain to the next.  How exactly does an open plan office encourage active listening?  How do ideas spread when everyone wants to be heard and nobody wants to listen?  In every meeting I've attended with four or more people, the topic always drifted, the meeting often ran over time, people constantly interrupted each other, and the amount of ground covered was much less than desired.

Data to Consider

Certainly there are ways to avoid careless office plans.  But if implemented poorly, you will overhear impromptu discussions from other departments that do not even concern your team or its work, let alone you and your individual work.

I will say this for an open plan office: you can maximize the number of staff per square foot.  It's difficult to gauge the long-run cost of cramming staff together so closely, but that's why I decided to write this post.

Here is some research suggesting that the gains of an open plan office don't make up for the losses:
http://www.sciencedirect.com/science/article/pii/S0272494413000340

Highlights:
  • Satisfaction with indoor environmental quality (IEQ) changes depending on the degree of workspace enclosure.
  • Occupants of different office layouts have different IEQ priorities.
  • Noise and privacy loss identified as the main source of workspace dissatisfaction.
  • Benefits of enhanced interaction didn't offset disadvantages in open-plan offices.

Where am I Now?

I have been employed for six years now, working remotely for a company that I consider on par with Privacy Products because, well, I'm working from my home in my own private office.  Once again I have 4 monitors that let me spread more information across my periphery.  I would say my health has improved substantially.  I am off the anti-depressants, my weight has stabilized, I did some traveling, I took up scuba diving, I climbed a mountain, I joined a club.

Thursday, April 26, 2018

Procedural Terrain Research Project

Rationale

In 2008 I decided that I wanted to build a game simulating the development and sustainability of a colony on Mars.  I had high ambitions and a keen interest in the epistemic qualities of such an endeavor.  I started with terrain and wanted the geology and geomorphology to be as realistic as possible, because education and simulation were more important to me than entertainment.

I eventually shelved this project, discouraged by gaming research suggesting that playing video games could be very bad for a young person's health, including shrinkage of the hippocampus and impairment of the prefrontal cortex.  There were other factors involved in that decision as well, but that was definitely the final nail in the coffin.

Below I document my journey procedurally generating an impact crater with outflow channels meandering around it.  Each two-dimensional grayscale image illustrates the DEM (digital elevation model) resulting from each test (top-down view with lighter colors representing higher elevations and darker colors representing lower elevations).  I thoroughly enjoyed this project and hope to find a productive and practical application for these algorithms I have developed.

The Crater




At first I tried to simulate the particle physics of an impact crater as it unfolds in real-time, but this was much too computationally intensive even with large virtual particles.  So I started with an optical template featuring little more than an ejecta blanket.
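
To illustrate the decal idea, here is a minimal sketch of stamping a radial template onto a DEM.  The falloff constants (3-radii extent, 0.3 rim lift, decay rate 2.0) are illustrative only, not the curves from my experiments:

// Minimal sketch of a radial crater decal: a parabolic cavity inside the rim
// and an exponentially decaying ejecta blanket outside it.
void stampCrater(float[][] dem, int cx, int cy, float radius, float depth) {
    final int extent = (int) (radius * 3);          // ejecta reaches ~3 radii out
    for (int y = cy - extent; y <= cy + extent; ++y)
        for (int x = cx - extent; x <= cx + extent; ++x) {
            if (y < 0 || y >= dem.length || x < 0 || x >= dem[0].length)
                continue;
            final double r = Math.hypot(x - cx, y - cy) / radius;
            if (r < 1.0)                            // cavity: parabolic bowl
                dem[y][x] += (float) (depth * (r * r - 1.0));
            else if (r < 3.0)                       // rim lip decaying into ejecta
                dem[y][x] += (float) (0.3 * depth * Math.exp(-2.0 * (r - 1.0)));
        }
}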


Here I've elaborated on the template a bit and carved out a cavity while preserving the relative isostatic uplift in the center, similar to the effect of a drop of water falling into a pool.  Some large craters have multiple concentric rings from the impact (Mare Orientale and Schrödinger are stunning examples in particular), suggesting a corollary to ripples in a body of water.


The next thing I wanted to do was give it a realistic environment.  Here I employ Brownian noise and merge it with the template.  Brownian noise is particularly useful for synthesizing realistic terrain: it is characterized by higher amplitudes at lower frequencies (as opposed to white noise, which is uniform by comparison).  It models the appearance of mountains as well as it does the sound of a waterfall.
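
A minimal sketch of the octave-summing approach (generic value noise, not my engine's code): halving the amplitude as the frequency doubles concentrates energy at low frequencies, approximating that Brownian power profile.

// Octave-summed noise: frequency doubles and amplitude halves each octave,
// so low frequencies dominate, approximating Brownian-style noise.
double fbm(double x, double y, int octaves) {
    double sum = 0, amplitude = 1, frequency = 1, norm = 0;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * valueNoise(x * frequency, y * frequency);
        norm += amplitude;
        amplitude *= 0.5;
        frequency *= 2;
    }
    return sum / norm;                       // normalized back to [0, 1]
}

// Smooth lattice noise: hash the four surrounding grid corners and blend.
double valueNoise(double x, double y) {
    final int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
    final double fx = x - x0, fy = y - y0;
    final double sx = fx * fx * (3 - 2 * fx), sy = fy * fy * (3 - 2 * fy);
    final double a = hash(x0, y0) + sx * (hash(x0 + 1, y0) - hash(x0, y0));
    final double b = hash(x0, y0 + 1) + sx * (hash(x0 + 1, y0 + 1) - hash(x0, y0 + 1));
    return a + sy * (b - a);
}

// Deterministic pseudo-random corner value in [0, 1).
double hash(int x, int y) {
    int h = x * 374761393 + y * 668265263;
    h = (h ^ (h >>> 13)) * 1274126177;
    return ((h ^ (h >>> 16)) & 0x7fffffff) / 2147483648.0;
}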


The smoothness of the impact crater was the first thing that bothered me, and I was determined to resolve it before moving on to any other experiments.  This is one of the inherent disadvantages of using a template or decal approach: it offers less flexibility in perturbing the crater's shape.


This became an exercise in playing with a variety of aggregating mathematical operations until I produced something visually appealing.  I still think this is too simple and inherently too circular.
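
One aggregation that helps with the circularity (an illustrative guess, not my exact operator) is to perturb the crater's nominal radius as a function of angle before classifying a sample as cavity, rim or ejecta:

// Break the perfect circle: jitter the nominal radius by angle.  Sampling the
// noise around a unit circle keeps the perturbation seamless at 0 == 2*pi;
// fbm is the octave-summed noise above and 0.15 is an illustrative strength.
double perturbedRadius(double baseRadius, double angle) {
    final double nx = Math.cos(angle) + 1.0, ny = Math.sin(angle) + 1.0;
    return baseRadius * (1.0 + 0.15 * (fbm(nx * 4, ny * 4, 4) - 0.5));
}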

River Channels



Pleased with the crater, I took a short break and moved on to my next ambition: outflow channels.  Immediately I noticed I had to deal with subtle rounding errors resulting in pixelated holes.  This is one of the problems inherent to reconciling fractional, non-integral values with an integral grid system.
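
The generic shape of the fix (my actual routine differed) is to walk each fractional segment in sub-cell steps and round rather than truncate, so consecutive stamps always land on adjacent cells:

// Walk a fractional path in steps of at most half a cell so the stamped
// cells stay connected; rounding (not truncation) avoids a bias toward zero.
void carveSegment(float[][] dem, double x0, double y0, double x1, double y1, float depth) {
    final double dx = x1 - x0, dy = y1 - y0;
    final int steps = (int) Math.ceil(2 * Math.max(Math.abs(dx), Math.abs(dy))) + 1;
    for (int i = 0; i <= steps; ++i) {
        final double t = (double) i / steps;
        final int x = (int) Math.round(x0 + dx * t);
        final int y = (int) Math.round(y0 + dy * t);
        if (y >= 0 && y < dem.length && x >= 0 && x < dem[0].length)
            dem[y][x] -= depth;              // lower the cell along the channel
    }
}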


This project kindled a love and appreciation for mathematics, linear algebra in particular.  Resolving these errors wasn't easy, but it wasn't impossible either.  Now I had a baseline established for synthesizing branches and making erosion channels more interesting and varied.


I played with the idea of parallelizing outflow branch computation, but decided that degree of fine-grained parallelism wasn't necessary.  Something didn't sit right with this channel bulldozing the impact crater.  The meandering of the branches also appeared much too haphazard; something was missing.


From observing Gold Crater on Mars I concluded that the walls of an impact crater are compressed and of higher density than the surrounding terrain.  In fact, terrain density varies a great deal in general.  So I added a density map as a companion to the height map model and used it to guide branch synthesis, as sketched below.  This additional channel makes it possible to enhance the model further in the future using density generation algorithms.
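
A sketch of the idea, greedy and simplified relative to what I actually ran: each step the channel moves to the neighbouring cell with the lowest combined height-and-density cost, so the compressed, high-density crater wall repels the branch.

// Density-guided step: pick the cheapest neighbouring cell, where cost is
// height plus weighted density.  The weight 2.0 is an illustrative knob.
int[] nextChannelCell(float[][] height, float[][] density, int x, int y) {
    double best = Double.MAX_VALUE;
    int bx = x, by = y;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0)
                continue;
            final int nx = x + dx, ny = y + dy;
            if (ny < 0 || ny >= height.length || nx < 0 || nx >= height[0].length)
                continue;
            final double cost = height[ny][nx] + 2.0 * density[ny][nx];
            if (cost < best) { best = cost; bx = nx; by = ny; }
        }
    return new int[] { bx, by };
}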



Taming these meandering branches still proved to be as difficult as herding cats.  I used the pink dots here to help me trace exactly what the main branch was doing.  Was my algorithm ignoring the density values?  Was I making incorrect assumptions in the calculations?  While one branch broke through the crater wall, the other did not.  This still marks an improvement.


Eventually I stumbled upon something that started to make sense.  I wasn't happy with it yet, though; the outflow channels appeared too smooth for my liking.  But at least I had an iteration with plenty of outflow channel branches, and not a single one breached the impact crater walls.


Finally I hit upon something that blew my mind.  There were emergent effects I had not anticipated: riverine islands.  When I stumbled upon this iteration I was tremendously happy; that day marks one of my best days of coding experiments.

Here is a 3D rendering of the resultant height map model from three different perspectives.  Note the riverine islands in the last rendered image.  The spires here are the same residual pink / red dots I used for troubleshooting flow paths.


Going Forward

I am currently researching practical applications for the code I have produced for procedural terrain generation.  While the individual algorithms are not distributed or parallelized, the system that generates an environment with these features is (I opted to use OpenMP).  Procedural algorithms include impact cratering, outflow channels, volcanic spires and outflow, and wind erosion and weathering.

In particular the settlement of Mars in any capacity (whether an outpost or self-sustaining human colony) is tremendously exciting and suggests an apropos domain for this technology.

Friday, December 16, 2016

Box Intersection Test Without Branching

If you're like me, you have a hard time finding basic things on the Internet.  Maybe these concepts aren't all that basic then; I haven't really considered that.  If you're also like me, you want to see the code, not endless walls of text.  So without further ado, here is fast code to detect whether two integral 3-dimensional axis-aligned bounding boxes intersect, without branching, assuming a two's-complement architecture:

    // The boxes intersect iff their 1-D projections overlap on all three
    // axes; lineDeltaFactor is zero exactly when one axis overlaps.
    boolean intersects(IntAxisAlignedBox left, IntAxisAlignedBox right) {
        return
            (
                lineDeltaFactor(left.min.x, left.max.x, right.min.x, right.max.x) |
                lineDeltaFactor(left.min.y, left.max.y, right.min.y, right.max.y) |
                lineDeltaFactor(left.min.z, left.max.z, right.min.z, right.max.z)
            ) == 0;
    }

    int lineDeltaFactor(int leftMin, int leftMax, int rightMin, int rightMax) {
        final int
                leftWidth = leftMax - leftMin,
                rightWidth = rightMax - rightMin,

                leftMid = leftMin + ((leftMax - leftMin) >> 1),
                rightMid = rightMin + ((rightMax - rightMin) >> 1);

        // Two intervals overlap when twice the distance between their midpoints
        // is at most the sum of their widths; integer division by the sum plus
        // one maps overlap to zero and disjointness to a non-zero quotient.
        return (abs(leftMid - rightMid) << 1) / (leftWidth + rightWidth + 1);
    }

    int abs(int value) {
        // mask is all one-bits for negative values (arithmetic shift), else zero
        final int mask = value >> (Integer.SIZE - 1);

        value ^= mask;          // flip the bits if negative...
        value += mask & 1;      // ...and add one: two's-complement negation
        return value;
    }

Now, if you have a smart compiler / VM, it will inline-expand these functions to avoid expensive stack juggling and v-table lookups.  Note that this example also includes a branchless version of the absolute value function.  If your architecture isn't two's-complement, this isn't going to work because abs will return garbage.  It will also fail for input values close to the 32-bit extremes (i.e. Integer.MAX_VALUE and Integer.MIN_VALUE), where the intermediate shifts and sums overflow.
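
As a quick sanity check, here's how the function behaves on a couple of boxes.  The post doesn't define IntAxisAlignedBox, so the minimal shape below is an assumption:

    // Hypothetical minimal types; the real IntAxisAlignedBox isn't shown above.
    class IntVec {
        final int x, y, z;
        IntVec(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    }
    class IntAxisAlignedBox {
        final IntVec min, max;
        IntAxisAlignedBox(IntVec min, IntVec max) { this.min = min; this.max = max; }
    }

    // Exercising intersects():
    IntAxisAlignedBox a = new IntAxisAlignedBox(new IntVec(0, 0, 0), new IntVec(4, 4, 4));
    IntAxisAlignedBox b = new IntAxisAlignedBox(new IntVec(3, 3, 3), new IntVec(8, 8, 8));
    IntAxisAlignedBox c = new IntAxisAlignedBox(new IntVec(5, 0, 0), new IntVec(9, 4, 4));

    intersects(a, b);   // true: the boxes overlap on all three axes
    intersects(a, c);   // false: disjoint along x (5 > 4)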

Thursday, November 19, 2015

Computing n-tuple Combinations of a Set - Java

Introduction

I recently ran into a problem while building the type compiler for my proprietary scripting language.  It seemed like a braindead problem I should have remembered from my high-school days of coding.  After reviewing a few recursive and somewhat clumsy examples on the Internet, and after deciding against Guava (in the interest of compactness, external dependencies in my compiler are kept to a minimum), I decided to attack the problem on my own.

Problem

Calculate the complete list of n-tuple combinations of elements in a set of arbitrary type.  For example, computing the list of 3-tuple (triple) combinations of the following set should satisfy the following:

Input: [1, 2, 3, 4, 5]
Output: [[1,2,3],[1,2,4],[1,2,5],[1,3,4],[1,3,5],[1,4,5],[2,3,4],[2,3,5],[2,4,5],[3,4,5]]

Features

I came up with a solution that boasts the following features:

  1. Iterable and lazy (combinations are generated on demand rather than stored)
  2. Non-recursive (no stack juggling)
  3. n-tuple combinations only (as opposed to permutations)

Code

Here's the code:
package com.extollit.collect;

import java.text.MessageFormat;
import java.util.*;

public class CombinationsIterable<T> implements Iterable<List<T>> {
    private final List<T> delegate;
    private final int rank;

    /**
     * Create an iterable list over the specified list that returns an iterator for iterating over
     * all the n-tuple combinations in the list (n-tuple specified by the rank parameter).
     *
     * @param delegate The list to compute n-tuple sub-set combinations from
     * @param rank The size of the combination tuple, must be > 0 and <= delegate size
     */
    public CombinationsIterable(List<T> delegate, int rank) {
        this.delegate = delegate;
        if (rank > delegate.size() || rank <= 0)
            throw new IllegalArgumentException(
                MessageFormat.format("Combination tuple size (rank) must be less than or equal to the delegate list size and greater than zero, but {0} <= 0 | > {1}", rank, delegate.size())
            );

        this.rank = rank;
    }

    @Override
    public Iterator<List<T>> iterator() {
        return new ComboIter();
    }

    private class ComboIter implements Iterator<List<T>> {
        private final ArrayList<T> workspace = new ArrayList<T>(rank);
        private final int [] indexStack;
        private final int spanSize;

        private int stackPointer;
        private List<T> current = null;

        private ComboIter() {
            indexStack = new int[rank];
            stackPointer = 0;
            indexStack[0] = -1;
            spanSize = delegate.size() - indexStack.length;
            for (int c = 0; c < indexStack.length; ++c)
                workspace.add(c, null);
        }

        private List<T> findNext() {
            while (stackPointer >= 0 && ++indexStack[stackPointer] > spanSize + stackPointer) {
                stackPointer--;
            }

            if (stackPointer >= 0) {
                this.workspace.set(stackPointer, delegate.get(indexStack[stackPointer]));
                for (int c = stackPointer + 1; c < indexStack.length; ++c) {
                    indexStack[c] = indexStack[c - 1] + 1;
                    ++stackPointer;
                    this.workspace.set(c, delegate.get(indexStack[c]));
                }
                return this.workspace;
            } else
                return null;
        }

        @Override
        public boolean hasNext() {
            return acquireCurrent() != null;
        }

        @Override
        public List<T> next() {
            List<T> current = acquireCurrent();

            if (current != null) {
                current = new ArrayList<T>(current);
                this.current = findNext();
            }

            return current;
        }

        private List<T> acquireCurrent() {
            if (this.current == null)
                this.current = findNext();
            return this.current;
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException(
                "Combinations iterator does not support removal of combinations because its functional mapping to the underlying collection is non-injective"
            );
        }
    }
}
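
Here's a quick usage sketch against the example input from above; each iteration yields a fresh copy of the combination:

import java.util.Arrays;
import java.util.List;

public class CombinationsDemo {
    public static void main(String[] args) {
        final List<Integer> set = Arrays.asList(1, 2, 3, 4, 5);

        // Prints the ten 3-tuple combinations: [1, 2, 3], [1, 2, 4], ...
        for (List<Integer> combo : new CombinationsIterable<Integer>(set, 3))
            System.out.println(combo);
    }
}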

It passes a few of my tests; please notify me of any errors.

Friday, November 7, 2014

Why Automated Tests Suck

...or how to build a test plan that doesn't kill fairies.  This post discusses a few core automated test plan guidelines.  Bad tests can defeat the purpose of automated testing entirely, and really bad tests make manual testing the better option (wat?).

Noise vs. Signal

Let's take a moment to compare a test plan to something that would make Carl Sagan feel like a kid in a candy store: exoplanet research!  Scientists often search for exoplanets by seeking out an oscillating signal amid a great deal of noise.  Essentially you're looking for evidence of a wobble in the star's movement as a planet's (or a Borg transwarp nexus') gravity tugs on it while revolving around the star.  But at such great distances the signal is encumbered by a myriad of problems; the signal-to-noise ratio for this analysis is less than 1%.  Finding genuine planets is incredibly difficult and false positives are common (Zarmina).  What's my point?  The more noise, the harder it is to isolate a genuine signal.  Similarly, if you have tests that fail (and it's ok, those failures don't really count) then it becomes much harder to isolate genuine test failures from test failures that "don't matter."  Your release test plan in particular should have zero failures and zero skipped tests.  The more you skip or ignore tests, the more you erode the plan's utility and reliability as a device that produces a definitive and objective result.

I'm a New Hire Idiot

Another reason to ensure that your release test plan has absolutely no failed or skipped tests is that your test plan should be self-evident and independent of idiomatic understanding.  For example, you should never have to say "test x is failing because we haven't updated the x87g filter data model for the cogsworth subsystem metacontroller yet; just ignore it because I don't think anybody uses that component anyway."  This is an example of idiomatic understanding, and it causes the naive person to lose trust in your test plan (and pull their hair out).  When new hires join your organization (or transfer from another team), ramping up on idiomatic understanding is difficult because it is fragmented, subjective, inconsistent, and exemplary of the human vices that lead to conflicts.  Make tests a priori in their function and purpose.

Tests as Strong as Bismuth!

Tests should not be so brittle that you spend more time updating test data than writing code that brings value to your users.  When that happens, the tests are often abandoned and left failing, then ignored, then forgotten; and yet they still linger, and co-ops ask about them and get an earful about the myth of the one test and the war of the first build, but it was all for naught because Isildur was struck down by a brittle test; volatile data pierced his heart.  And yet no co-op has been found worthy to take the one test deep into JMeter and destroy its design once and for all.

One might think that tests like these ought to be removed, but it isn't clear to the average person whether that would do more harm than good.  If a test fails too often then one of two things has to happen: either you update the test, or you completely redesign it.  Removal of a test is quite simply a step backward.  Someone saw fit to provide that coverage initially, so on what basis does one reason that such coverage is no longer necessary?  One side-effect of test removal is that it can signal to the naive person that the associated features have also been removed (then you get a call from an angry customer threatening to sue your company for breach of contract when the feature broke and was never fixed; though more often they will simply move on and silently invest their dollars elsewhere.  The sad thing is that this happens with brittle, constantly failing tests that go ignored, too).

Sometimes a test that fails too often benefits from a complete overhaul because quite likely, and I'll put it bluntly, your test is just wrong.  A test that isn't self-contained is an integration test.  Some believe integration tests are a scam; I believe they are unavoidable.  Come to think of it, those positions aren't necessarily mutually exclusive.  However, if your unit test doesn't need to consume external resources (of any kind) then ensure it doesn't.  In fact, you may want to redesign the code the test is built around.  "But wait Batman!  This is looking a lot like TDD!"  You're right, it is.

Tests Express Simple Statements

Each individual test should make a specific statement that can be used to ensure a straightforward, objective expectation is aligned with actual state.  Here are some examples (a sketch of the first follows the list):
  1. When I call the delete function passing existing user x to it, it should yield the HTTP 200 success response.
  2. If I call the delete function passing a non-existent user x to it, it should yield the expected HTTP 404 response.
  3. When I call the get model foo function, the response body should contain a JSON object containing at least all of the following field names.
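
To make statement #1 concrete, here's how it might look as a single focused test.  This is a hypothetical sketch: UserService and Response stand in for whatever client your API exposes; only the shape matters.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DeleteUserTest {
    // UserService and Response are hypothetical stand-ins for your API client.
    @Test
    public void deletingAnExistingUserYieldsHttp200() {
        UserService service = UserService.forTestBackend(); // hypothetical factory
        service.createUser("alice");                        // setup: user x exists

        Response response = service.deleteUser("alice");    // one action...

        assertEquals(200, response.statusCode());           // ...one assertion
    }
}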
Avoid writing tests that involve too many test conditions and assertions.  Otherwise, the test starts to take on the stench of the integrated test.  This may manifest as multiple assertion statements or, for example, the exclusive comparison of a result against a template.  Both are examples of testing too many conditions in one test, and neither resolves to a clear statement.  For example, compare the following two statements:
  1. The result should contain the following fields in this precise order and contains nothing else besides those fields and have values match these other values precisely.
  2. The result should contain at least the following fields.
Number two is a good definition for a test, whereas number one can be broken down into 4 separate statements (and consequently 4 separate tests).  Tests that test multiple conditions often duplicate those conditions in other tests, and when they fail they simply add more noise that developers and QA have to sift through like a child drowning in a playpen full of plastic balls.  Test plan noise erodes usability; refined, targeted tests that reflect clear and concise statements increase clarity and usability.  If you cannot express your unit test in one coherent sentence, then it's not a unit test.

It's Not My Fault! The Server Was Down

This essentially embodies the difference between a unit test and an integration test.  However, I want to attack this issue from an angle independent of vernacular.  Tests should be consistent in terms of the environment and resources upon which they depend.  The other dimension to consider besides space is time.  A good way to rattle a developer's nerves is to introduce a test that downloads JSON from some bizarre and esoteric web service in China on Wednesdays, unless it's a full moon, with the exception of when Mars is at opposition precisely 39.2 hours before daylight savings.  This is hyperbole; I don't think anyone would actually write a test like this (knock on wood), but it illustrates a scoping problem concerning time and space (e.g. China doesn't even use DST).  A reliable, self-contained test without side-effects should not be contingent upon transient state such as specific dates, times, and system locale.  With respect to the environment, all of the above can break, and it's never clear when that occurs.  A machine's internal clock can get thrown out of whack, or the system locale may be subtly different (e.g. en_CA vs en_US).

If you must test a system that relies upon something non-deterministic such as one of the following:
  1. The current time
  2. A random number
Then design TimeProvider and RandomNumberProvider interfaces and build them into your code.  In your unit test you can fulfil these interfaces with deterministic implementations, as in the sketch below.
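
Here is one possible shape for those interfaces; TimeProvider and RandomNumberProvider are the names from above, and everything else is a sketch rather than a prescribed API:

import java.time.Instant;
import java.util.Random;

interface TimeProvider {
    Instant now();
}

interface RandomNumberProvider {
    int nextInt(int bound);
}

// Production implementations delegate to the real, non-deterministic sources.
class SystemTimeProvider implements TimeProvider {
    public Instant now() { return Instant.now(); }
}

class SystemRandomProvider implements RandomNumberProvider {
    private final Random random = new Random();
    public int nextInt(int bound) { return random.nextInt(bound); }
}

// Test implementations are fully deterministic, so assertions are repeatable.
class FixedTimeProvider implements TimeProvider {
    private final Instant fixed;
    FixedTimeProvider(Instant fixed) { this.fixed = fixed; }
    public Instant now() { return fixed; }
}

class CannedRandomProvider implements RandomNumberProvider {
    public int nextInt(int bound) { return 4 % bound; } // "random" but repeatable
}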

The environment and resources that a test plan requires should be reproducible on one machine and one machine only.  If it is absolutely necessary to introduce external dependencies, then categorize your tests (discussed later).  Furthermore, tests should never assume locale or timings; you are not the master of the universe, He-Man beat you to the punch decades ago; or a century ago if this blog is still around eighty some-odd years from when it was written (that's good for a laugh); or I suppose any x number of years ago from now (your now, not my now) since it's impossible for me to predict when this post will be read (...or how a crazy person made a point by stumbling into his own dog food, tune in at 6pm tonight for the full story).  Here are some examples of unhelpful test failures caused by these kinds of resources:
  1. An authentication token used in requests expires on a specific date
  2. A DNS failure occurs for a host
  3. A static IP was changed for a host
  4. A host was down
  5. Matching localized error messages against a template
  6. Testing values from a database maintained by another team / department
  7. Using sleep to wait for some resource to become available
And suggested resolutions for these conditions:
  1. Generate a token as part of the test setup phase; destroy it as part of teardown.
  2. Use multiple DNS servers; otherwise see #4.
  3. Never ever rely on static IPs; they're not even descriptive, and you don't have a monopoly on what IT does with the infrastructure.
  4. Rather than fail the test, flag it as skipped and categorize it elsewhere.
  5. Compare error codes, not messages.
  6. Keep your own copy of the database and update it in tandem with the tests.  Consider the database part of the coverage.
  7. Either poll or block, but don't sleep, because it's not reliable to predict how much time a resource will require.  Apply a timeout, and if it elapses, flag the test as skipped and categorize it elsewhere.
In general, if you must write tests contingent upon external resources that become unreachable from time to time, they should not fail when those resources become unavailable; rather, they should be flagged as skipped.  I.e. employ a ternary state: pass, fail, skipped.

One may argue that it is sufficient to run the tests again when they fail, because they often succeed eventually.  This is essentially saying that it's okay that these features don't always work: don't worry if it blows up, just keep hitting refresh until something shows up.  Side-effects are ok, but side-effects in unit tests are just plain evil.  Don't go there.  Depending on your organization, a test suite can take anywhere from half an hour to more than a day (maybe even more?).  If you expect people to simply re-run the test suite to get tests to pass, you're asking them, in the worst case, to invest another day, particularly if your tests aren't entirely stateless.

Now for the elephant in the room.  Ideally your tests are isolated and self-contained and have no reliance on external resources.  You might then shoot back with "But how are we supposed to test our REST API without relying on external resources?"  You're asking the wrong question.  Solid tests are the product of a project designed from the ground up to ritually employ Test-Driven Development.  If you find yourself in a situation where you have to employ integration tests to provide any coverage to some sub-system, then that sub-system's code is simply not testable (as far as TDD is concerned).

Categorize Your Tests

This is more of a suggestion than a hard rule.  TDD is not an exact science; in fact, software engineering as a whole is an inexact science.  If you must have tests contingent upon transient, temporal or volatile state, then divide your test plan into categories characterized by the type of test:
  • Self-contained tests
  • Tests that interact with external resources
  • Tests involving temporal state
  • Tests that cannot be run concurrently with others
In addition to categorizing tests, thoroughly document any unreliable characteristics of each test so that the naive person will better understand what's happening when it is mysteriously skipped.  All test plan categories should be stored within the same test plan.  In other words, when someone opens the test plan, all tests should be visible or at least evident; no tests should be hidden away in some obscure folder of the file-system.  Each category should also have a description.

Build a House then Tear it Down

Good tests should be stateless, idempotent, and without side-effects.  Running one test should not affect the result of another.  It should be possible to run any number of tests, cherry-picked or otherwise, in any order, with consistent results each time.  Each and every test should have its own setup and teardown.  Each test should set up the state it requires and tear down that state afterward, leaving nothing behind that wasn't there initially and restoring anything that was.  This may dramatically increase the time to execute a test plan; however, raw performance is not the fundamental purpose of a test plan.  Troubleshooting tests whose results appear to vary with each other is notoriously difficult, particularly when such variance is not reflected in the purpose of the test.

Everyone Loves Minions

...or many threads to do your bidding.  Design your tests in such a manner that they can be forked and run concurrently.  The reason for this is to improve the performance of your test plan and make it scale, so that you can simply farm it out to as many machines as you need to reduce that day-long test suite execution to just under an hour.  Another notable benefit of such a design depends largely on your product: particularly for service-oriented software, concurrency is a critical factor in realistic use cases.

Here are some good design principles for a test plan that scales:

Remember Test-Driven Development?

Before you sit down to write a feature, consider that it should be testable.  Think about testable code with respect to tests that can scale.  Can your code reliably service multiple requests concurrently without race conditions?  If you find it too difficult to write tests that can be executed concurrently without randomly failing, then it's likely not the test but the feature design that's implicated.  A cornerstone of TDD is that asking questions about what you expect a sub-system to do helps influence good interface design.  This is why it's important to think about the test before implementing the feature.  The feature and its tests (the tests and their feature?) have an inextricable relationship.

Unique IDs

When you establish state for your test (more simply put, when creating records during setup), create your entities with unique names keyed by the test name and the current thread executing in the test plan.  This need be little more complex than using a name and a numeric sequence.  A numeric sequence representing the thread is more helpful than some ubiquitous unique ID because, for example, in the case of a race condition it can hint at the point of failure during thread interleaving.  For example, if you're testing deletion of users, and the user name is the primary key, come up with a username like "bobsmith-deletetest-thread-3".  When you tear down your state, rather than track the entities created during setup, remove all entities that match the template used to generate them (e.g. remove all "bobsmith-deletetest-thread-< any number >" entities).  This way, if there are side-effects from a previous execution of the test, this will account for them.  It is also good practice to perform this cleanup before setup as well, for it is reasonable to assume that the test owns any state keyed by the specific pattern it uses during setup, especially if the name of the test is part of that key pattern.  Here is some example code:
function cleanup()
{
    var objects = service.getUsers();
    var re = /^bobsmith-deletetest-thread-\d+$/;

    for (var c = 0; c < objects.length; ++c)
    {
        if (re.test(objects[c].name))
            service.deleteUser(objects[c].name);
    }
}

function setup()
{
    cleanup();
    service.createUser({
        name: 'bobsmith-deletetest-thread-' + threading.getCurrentThreadNum()
    });
}

function teardown()
{
    cleanup();
}
Some tests by their nature imply a happens-before relationship; for example, a "test global lockout policy" test that locks out all users for security reasons would probably cause all other tests to fail too.  It may be prudent to separate these tests into yet another isolated category so that the distinction in test characteristics is clear.

Last Word

In this post I discussed how test noise can erode the usability and reliability of a test plan.  Eventually people learn not to trust what the tests are telling them, and bugs are permitted to fly beneath the radar.  I also discussed how this relates to idiomatic understanding within the scope of the company, team, or even individual responsible for a particular test, and how this leads to conflicts and misunderstandings and broken builds and weeping release managers.  Tests should be focused and targeted, and should not try to account for many different conditions at once.  Tests that involve external dependencies or temporal, transient and volatile resources are more brittle than tests with a self-contained environment; I only advocate such tests when TDD is impossible due to legacy code that cannot be tenably refactored.  Categorize tests based on their nature, so that if you must have tests more brittle than others, at least it's clear which ones they are, and document the why and the how (to resolve).  Tests should be tasteless, I mean stateless (it's lunchtime and I skipped breakfast).  Where possible, build tests that can be executed concurrently to improve performance and ensure that your features handle concurrency well.

Thursday, June 9, 2011

Mars Tycoon, The "Sandbox World" Strategy Game

extollIT Enterprises is developing a new kind of video game. We call this genre the Sandbox World game. There are four games I know of that are similar to our concept, but only one can be credited with inspiring ours:
  • Dwarf Fortress
  • Infiniminer
  • Minecraft
  • Roblox
If you haven't heard of these games, I suggest trying Minecraft or Roblox. Dwarf Fortress has inspired Mars Tycoon, but it is very difficult to get into if you are a novice gamer.

Our small team of three has been developing this game since the fall of 2008. You are a space pioneer in the year 2492 who desires to escape the chaotic political climate of Earth to investigate a strange new mineral on Mars (and beyond), first discovered on a small asteroid that impacted Earth. Scientists have dubbed it Loganite, and although its chemical signature has long been familiar to them, it has never been found in nature, let alone in vast quantities.

Loganite has significant commercial value as a nuclear fuel as well as having other peculiar benefits. Scientists are only just beginning to discover Loganite's value to mankind.

Your job is to establish a mining base on Mars, extract the Loganite, and compete against other nations scrambling for the mineral. You will have to defend your mining base from saboteurs, natural disasters, and skirmish attacks.

Launch of Mars Tycoon is scheduled for 2015. If you possess any of the following skills / experience / knowledge and would be interested in joining our team, please contact us using the About -> Contact Us menu at the top of the page:
  • 3D Modeling / Animation
  • 3D Programming Experience
  • Programming for CG, CUDA, PhysX
  • AI Programming Experience
  • Biochemical Engineering
  • Mechanical Engineering
  • Civil Engineering
  • Chemical Engineering
  • Geophysics
  • Horticulture
  • Astrobiology
  • Genetic Engineering
"Wait a sec! These are pretty steep qualifications!"
Actually, not really. Although some of these "qualifications" demand post-doctoral work, you don't even need a post-secondary degree here. Self-directed study is very acceptable. For example, we are interested in people with Chemical Engineering knowledge to help us design a realistic Asimovian sci-fi experience.

We have a rich and comprehensive vision for Mars Tycoon in terms of both storyline and gameplay mechanics, so Mars Tycoon will be an introductory game into this universe. If it is successful, we plan to release sequels and prequels each sporting both an expanded storyline and new gameplay mechanics.

Friday, January 15, 2010

Computing Arc-Length of a Spline - Advanced

Here is a snippet of my own software, used in another project, that computes the arc-length of a spline using Gauss-Legendre quadrature (a numerical method of integration), implemented with recursive template meta-programming in C++.
First off, you will need a constant 5x2 array of doubles to initialize the abscissae: a pre-computed table of the roots of the Legendre polynomial of order n, each paired with its quadrature weight. In this example I use order 5, which was accurate enough for my purposes:
const double ABSCISSAE_5[5][2] = {   // { abscissa, weight } pairs
    { -0.90617984593866399280, 0.23692688505618908751 },
    { -0.53846931010568309104, 0.47862867049936646804 },
    {  0.0,                    0.56888888888888888889 },
    { +0.53846931010568309104, 0.47862867049936646804 },
    { +0.90617984593866399280, 0.23692688505618908751 }
};
Stub out an exception to generate runtime errors when the class is instantiated incorrectly (std::exception comes from the <exception> header):
#include <exception>

class CalcEx : public std::exception {};
Below the recursive class template is defined. Standard C++ does not allow explicit specialization of member templates at class scope, so the terminating cases (the order-5 table, and the end of the summation) are selected by overloading on a small Order<N> tag type. The class uses the curiously recurring template pattern: it is interface-agnostic, so you will need to define this method on your deriving class: inline real f (const real t) const. It essentially provides the implementation for an abstract method declared in the base class.
template <typename T, typename Derived, int NN>
class GaussLegendreInt
{
    private:
        // Tag type for dispatching on an integer constant.
        template <int N> struct Order {};

        T abscissa (Order <5>, const unsigned int i, const unsigned int j) const
        {
            return static_cast <T> (ABSCISSAE_5[i][j]);
        }
        template <int N> T abscissa (Order <N>, const unsigned int, const unsigned int) const
        {
            throw CalcEx ();    // no abscissae table defined for this order
        }

        // Recursion terminates at Order<0>, yielding the accumulated sum.
        T summate (Order <0>, const T val, const T, const T) const { return val; }

        template <int N> T summate (Order <N>, const T val, const T a, const T b) const
        {
            // Accumulate weight * f(sample point mapped from [-1, 1] to [a, b]).
            return summate (Order <N - 1> (),
                val +
                abscissa (Order <NN> (), N - 1, 1) *
                static_cast <const Derived *> (this) -> f(
                    (b + a) / 2 +
                    ((b - a) / 2) * abscissa (Order <NN> (), N - 1, 0)
                ),
                a, b
            );
        }

    public:
        inline T compute (const T a, const T b) const
        {
            return ((b - a) / 2) * summate (Order <NN> (), 0, a, b);
        }
};
Now it's time to put our algorithm to use: a class that takes a spline object (usually four points, or two points and two tangents) and computes its arc-length using the Gauss-Legendre machinery we've prepared:
template <typename Spline, typename real>
class ArcLength : private GaussLegendreInt <real, ArcLength <Spline, real>, 5>
{
    private:
        // With private inheritance the base class cannot perform the CRTP
        // downcast back to ArcLength on its own, so grant it access.
        friend class GaussLegendreInt <real, ArcLength <Spline, real>, 5>;

        const Spline & _spline;

    public:
        inline ArcLength(const Spline & spline) : _spline(spline) {}

        // Arc-length integrand: the magnitude of the spline's first
        // derivative (the tangent vector) at parameter t.
        inline real f (const real t) const
        {
            return MAGNITUDE(_spline.computeAt (t));
        }

        static inline real calculate (const Spline & spline)
        {
            return ArcLength(spline).compute(0, 1);
        }
};
The spline object is assumed to have a function called computeAt that returns a vector (2D, 3D or whatever), of which the function MAGNITUDE computes the length. Note that for a true arc-length, computeAt must evaluate the spline's first derivative (the tangent), since arc-length is the integral of the tangent's magnitude over the parameter range. You will define these yourself, including your implementation of splines. There are plenty of examples on the Internet on how to implement splines; Cubic B-Splines, Catmull-Rom splines, and NURBS are a few.