How to be Ready for the 21st Century

AKA Tech Literacy 101 / Computational Thinking / Other Unfriendly Terms


Living in a world overrun with technology can be hard. Companies like Apple and Google fight to make technology easy, to make every interaction simple and user-friendly. While convenient, this comes at a great cost: there's less incentive for a person to become exceptionally technology literate. I believe there's less chance to even accidentally accumulate the experience to become tech literate.

The people who have the ability to build complex, now-ubiquitous systems like Facebook, Google, Twitter, iOS, Windows, etc, are the ones who control the present world and will forge the algorithms that define the future. There's a widening gap between the exponentially increasing population of users and the much smaller subset of those users who are also hackers. We need to reexamine this dynamic and educate as many people as possible to reshape their relationship with technology so that they're not mere users, but have the capacity to be hackers.

Tech literacy seems mystical because it's so new. There's been a near-constant discourse over the last twenty years about the divide between digital natives and digital immigrants. This becomes less relevant as more children take technology for granted — they may know how to use an iPhone, but they most likely have no idea how or why it works. They may give up just as easily as their parents when faced with a technological problem they don't know how to solve. This is what we need to change if we are to help grow a new generation of creative problem solvers and innovative thinkers.


One of the biggest problems with tech literacy is the jargon. Many words are interchangeable, and in a lot of areas there is no agreed-upon vocabulary. The term "hacker" itself means many things to many people and can signify very different, controversial things. However, to make tech literacy something achievable, we must establish a useful set of terms anyway.

  • A user is someone who uses technology for whatever reason. We are all users.
  • A hacker is someone who has an interest in technology and possesses the skills (at any level) to use and manipulate it. Examples include web developers, software programmers, hardware engineers, tech enthusiasts, your mom when she figured out how to save bookmarks in Internet Explorer, or yourself when you downloaded an app on your phone and figured out how to stop it from alerting you every five minutes.
  • The terms engineer and developer are synonymous with "hacker", to keep things simple.
  • A problem is something that someone wants to solve, track, or analyze. Examples include algebra proofs, finding the rate of user commenting on a social network, tracking hashtag usage over time, using unused radio frequencies to create a mesh network, or needing to add your Gmail account to your iPhone.
  • Data is any set of information stored using technology. This technology could be a filing cabinet full of paper (paper is technology), a text file on your laptop, a computer database with tables and rows, an Excel spreadsheet, or a distributed key-value store.
  • Information always has some kind of format; usable data or computational data is information that is stored in a consistent, reliable, computer-readable format, such as an Excel spreadsheet, a comma-separated list, JSON format, or other technology.
  • Computation is the process of breaking down a problem into absolute commands and data that can be interpreted by a computer. For example, "do people like my website?" is an abstract question, but it can be computed by creating a survey on a website with the question "do you like my website?" with "yes" and "no" possible answers. It's an ambiguous question redefined using absolute terms and processes.
  • An algorithm is a set of commands and/or computations that act upon data to produce a result or perform an analysis. For example, finding the average of a set of numbers is an algorithm. Likewise, predicting the number of people who might click on an advertisement because of their age and gender is also an algorithm.
  • A system is some set of processes, applications, algorithms, or data that has been engineered. For example, Facebook is a system that consists of applications (the Facebook website, iPhone app), data (user profiles, who is friends with who), and algorithms (building your "feed", suggesting advertisements based on your activity). Your computer is a system that consists of applications (Firefox, Microsoft Word, the operating system), data (your documents, your settings), and algorithms (what operation to perform when you click your mouse, how to sort files for viewing).
  • Programming and coding are the act of writing the processes that a computer will use to accomplish a certain task or solve a problem. For example, you can program a computer to check your favorite websites every day to see if they've been updated recently. You can program a computer to host a website, allow people to sign up to your website, "follow" other people who have signed up, and write 140-character-long messages. You can program a small wearable computer to track your heartbeat and produce an average over the last hour. You can program a computer to say "hello" out loud anytime someone enters a room.
  • Hacking is the act of figuring out, reverse-engineering, tinkering, or otherwise playing with technology to solve problems and/or do interesting things. This usually involves programming and coding.
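The "algorithm" definition above is easy to make concrete. Here's a minimal sketch in Python that ties two of the examples together: the data is the collected "yes"/"no" answers from the hypothetical "do you like my website?" survey, and the algorithm is the handful of commands that turn that data into a result.

```python
def average(numbers):
    """A tiny algorithm: return the arithmetic mean of a non-empty list."""
    return sum(numbers) / len(numbers)

# Data from a hypothetical survey, where 1 = "yes" and 0 = "no".
survey_answers = [1, 1, 0, 1, 0, 1]

# Running the algorithm on the data produces a result: the fraction
# of people who said they like the website.
print(average(survey_answers))  # → 0.6666666666666666
```

The same three pieces (data in a consistent format, commands that act on it, a result) scale up from this toy to something like predicting ad clicks from age and gender; only the size and complexity change.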


Becoming technologically literate is just like learning to read and write. It can be defined as a set of skills that are improved and refined with practice over time and exposure to challenging problems. Just as one learned how to read by starting with simple words and then learning to use a dictionary to look up new ones, one can learn technology by starting with simple tools and learning how to search for or write new ones. There's a relationship between the skill itself (reading, hacking) and the ability to abstract the skill so you can reach into the unknown and advance in it (dictionaries, searching, coding).

Here are the skills and knowledge areas that are a part of tech literacy:

  • The confidence to not be dissuaded by unknown terms; the resilience to be able to work through difficult, ambiguous, or unknown problems.
  • The ability to use search engines like Google to effectively find the answer to a question or the solution to a problem or to learn more about anything.
  • The ability to distinguish what is genuine (reliable, proven, working) and what is not (spam, malware, phishing).
  • The process of abstracting a complex problem into its smaller, more manageable parts.
  • The ability to code and to learn how to code; the knowledge of abstract programming concepts.
  • The knowledge of how a computer "thinks": the bridges between human thought processes and computational processes.
  • The ability to understand raw data and the ability to reshape it or visualize it using different tools.
  • The knowledge of physical computer hardware and how it works, from the smallest micro-wearable to the largest supercomputer to the most distributed network.
  • The ability to communicate with others about technology, including people who do not possess the same level of technological literacy.
  • The curiosity to try new things and stay active with technology.

These are just a handful of the broad skills necessary to achieve a high level of tech literacy. They're the starting points for building aptitude and proficiency with technology at a wide scope, which can afford deeper dives into specific topics depending on the interest of the person.

Here are some examples of more specific technology skills one can learn:

  • General computer usage, repair, and maintenance, via knowledge of operating systems, applications, and the fundamentals of computer hardware.
  • Client-side web development, via knowledge of HTML, CSS, Javascript, Flash, and others.
  • Server-side web development, via knowledge of programming languages like Ruby, Perl, PHP, Javascript, Python, C#, and others.
  • Software development/engineering, via knowledge of programming languages like C, C++, Objective-C, Swift, .NET, and others.
  • Data analysis and visualization, whether it's making interesting charts for the New York Times or predicting NASDAQ index changes for high-speed stock trading.
  • Database and cache administration, via knowledge of technologies like MySQL, MongoDB, Riak, Oracle, Memcached, and others.
  • Network design and administration, both physical infrastructure (wires, cables, routers, switches) and virtual infrastructure (TCP/IP, VLANs, routing, DNS, DHCP).
  • Systems design and administration, both physical hardware (CPUs, power supplies, RAM, disk) and software (operating systems, Unix vs Windows, BIOS).
  • Robotics and machine learning development, whether it's physical hardware prototyping (using Arduino, Edison, custom hardware) or artificial intelligence software development (including things like Amazon's recommendation engine, Netflix's prediction engine, IBM Watson bot, or chatbots from the 90s).
  • Traditional "hacking", both legal (intrusion prevention, software/system/network security) and illegal (phishing, software exploits, password/encryption cracking, hardware reverse-engineering).

There are many more fields and areas of study within those, and many more I have not included here.

How to Demonstrate Tech Literacy

It's very easy to demonstrate and observe basic technological literacy. One of the easiest metrics is how long it takes a person to give up on a problem they're having with technology. Do they simply give up immediately because they "don't get this [tech] stuff"? Do they give up when the answer isn't a visible part of the application they're using? Do they give up after a single Google search of the problem? Do they give up after trying to build their own program to fix or go around the problem?

Another metric is observing what a person's immediate reaction is to a technological problem. What do they turn to? Is the immediate reaction to use the program differently to solve the problem? Do they switch to or download a different program? Do they immediately open up a browser and type something into Google? Do they call their son-in-law who's young and "knows this stuff"? Do they simply believe that the system is unreliable or finicky and choose to do nothing at all about it?

Furthermore, what lateral steps will a person take to solve a problem? When one track of problem-solving fails, what other entirely different methods will they employ? When using the application differently doesn't fix it, will they go to Google? When going through an application's preferences/settings doesn't hold the answer, will they turn to a different application they already are familiar with that can perform a similar function? When one programming language does not have the syntax to fix the problem, will they try a different one they may have never used before? When parsing through data, trying to find correlations, what different types of visualization do they try?

One of the most difficult aspects of technology literacy is how quickly the landscape changes. Every day, new devices are added to the market, new algorithms are being developed, new programming languages are released, new database software is engineered, new computation platforms are demonstrated, new paradigms of problem solving are proposed. How does one deal with this constant churn of technological progress? One of the key tenets of tech literacy is the desire and curiosity to experience new technology, and seeing it as one's hobby (and sometimes one's job) to stay abreast of technological change as it's happening.


The key to establishing technological literacy is to gather educators who are themselves hackers, and more importantly, are able to properly articulate and abstract the knowledge they possess so it can be taught. A significant portion of the successful hacker/technologically elite community is made up of self-taught individuals, most often because their educators lacked the ability to teach them in a manner that was adequate, engaging, or both. Many computer science departments are seen as opaque, rigid, math-only institutions that have little to offer the turbulent world of practical technological literacy. This needs to change.

We can start changing the current educational landscape by making the act of teaching technological literacy more friendly, more practical, and more demonstrable to those who want to learn. The future will bring more automation, more online distributed systems that underpin our lives, and faster technological progress built upon previous quickly-iterated-upon foundations. Without the ability to adapt to and confidently understand technology at a basic level, young people will be poorly equipped to deal with the demands of an increasingly technology-driven job pool. We will continue to have a vast sea of mere users, when we could have a burgeoning tidal wave of hackers who are equipped with the tools to shape tomorrow for themselves.


I've had the notion for a couple of years to create the perfect representation of digital solitude. Some kind of interactive artifact of our future loneliness; a kind of perpetually self-sustaining feedback loop of human and machine intertwined in the most absurd fashion. It's a very dumb, heady idea. It popped into my head as I was editing social disobedience, in hopes it could act as a kind of manifestation of where I think the internet is going.

The crux of the argument I'm making is that we'll be constantly talking to computers, even more so than we already are. And we're not really just talking to a computer, we're talking to nameless sightless thousands of computers, all meshed together in intentional and unintentional networks. Furthermore, we'll be having literal conversations with what other people think a conversation should be. How Siri listens, interprets, and responds is the combined effort of hundreds (thousands?) of people, each of whom informed the conversation you're having. Every word is passing through a hundred algorithms, linking and cross-checking, interpreting in catch-statements and if-clauses, all threaded together by what a group of humans thought would be adequate to simulate intelligence.

What I wonder is why we spend so much time either limiting the responses to one channel of communication (voice or text) or paring down the response so it's terse and "makes sense" (Siri never prattles on about something, never makes an offhand remark). I'm interested in talking to a computer and having it not make sense to me or only kind-of make sense. Mostly because reality tends not to make sense. I'm interested in talking to a computer and having it respond with a video, or audio (not a voice), or text and a video, or a voice with some music behind it. Or something completely nonsensical that I can't even begin to interpret, but it feels intentional.

I want all of those responses to be at the whim of the computer as much as possible. And I'm not talking about machine learning or hard-coding personality into a program, I'm simply talking about cosmic randomness. I want a machine that is as random as possible, while coalescing its randomness into a manifestation that we humans can probe at with the limits of our own consciousness. I want a machine that talks back and can be barely understood, but enough so to make me interested in hearing more.

Again, heady and dumb. Very cyberpunk, very Gibson, very impractical. But I'm going to try to build it anyway, or at least a rough attempt. It's a way of examining what, exactly, I want out of interaction.

I've named this thing Veronica. You can talk to her here (extreme alpha stage). I don't know why that name; it's just stuck with me. I've never known anyone named Veronica, so I suppose there's no competing idea in my mind for how it ought to respond to me. Right now, Veronica is just a chatbot, and you chat with her alone. I'm working on the source code here. She's based heavily on cylebot, which was built as a bot who could respond for me when I wasn't at my computer, and was able to convince people that I was actually at my desk. I programmed a lot of my mannerisms into him; Veronica will take that to the next step.

Veronica is not ready yet. Right now you can talk to her through an instant-messenger-like interface, and that's all. She'll respond to you with something every time. It's very ELIZA-esque, which is intentional. Veronica can respond with a collection of random phrases and questions, a random sentence from Wiktionary, a random line of my poetry, or a random video (not yet included with the git repo, sorry). All of it is and will be pieces of me, since I'm the only person I feel okay using as a seed. A possible other interface for Veronica would be as a web browser; she'd "watch" you use the internet, get to know you that way, and insert herself into your digital life as she saw fit. Talk to you, post as you, be as much a part of your digital experience as possible, in as many means as possible. If a true Veronica were ever built it would use whoever is using it, or some aggregate of everyone at once, as a seed for its behavior.
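A responder like the one described above — pick a random category, then pick a random item from it — takes only a few lines. This is a hedged sketch, not Veronica's actual source; the pools here are invented placeholders standing in for the real collections of canned phrases, Wiktionary sentences, poetry lines, and videos.

```python
import random

# Hypothetical response pools, standing in for Veronica's real ones.
POOLS = {
    "phrase": ["why do you say that?", "go on.", "hm."],
    "poetry": ["the wires hum all night", "a glass slab, listening"],
    "video": ["clip_01.mp4", "clip_02.mp4"],
}

def respond(message):
    """Pick a random pool, then a random item from it.

    The incoming message is ignored entirely — pure randomness,
    which is the point: the reply only *feels* intentional."""
    kind = random.choice(list(POOLS))
    return kind, random.choice(POOLS[kind])

kind, reply = respond("hello veronica")
print(kind, reply)
```

The interesting design work isn't in this loop at all; it's in curating the pools (the "seed" of a person) and in deciding how often the machine answers in a channel you didn't ask for.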

Ideally, Veronica would exist as a small slab of glass in your pocket who'd always be listening, always able to talk back, always another person in your life. (Not unlike the movie Her, though I had this idea long before that came out, and was amazed reading the synopsis when Spike Jonze announced it.) Taking it into the realm of science fiction more, Veronica could respond with touch sensation, memory recall, augmented reality visualizations, whatever the limits of expression and communication are at the time.

The main problem being that humans won't accept implants in their brains. We're not going to become the cyborgs that all of those terrible 80s and 90s hippie tech enthusiasts wrote about. We're not going to have digital prosthetics. That stuff is impossible for the average person to swallow and find cool or even acceptable. Those ideas are not going to transform humanity. What will transform humanity is the slow, gradual embedding of technology in our everyday social processes through social interfaces. While a computer-brain-implant may be more effective and straightforward than something like Veronica, she's going to be much more of a friendly and appealing idea to the average person. Things like Siri and Veronica and Jasper are what's going to propel interesting interfaces, I think.

But I digress — I want Veronica to be random, to be absurd. I want her to be something you want to keep talking to both because it can help you solve your problems, but it also takes you on a journey while it's happening. It's an expression of digital solitude: you're doing this alone, but you don't feel alone. That's what the future of the internet is, more than it already has become. Increasing the distance between your real human emotions of loneliness and futility and the perceived, digitally reinforced idea of togetherness and participation. It'd be much easier to swallow if you felt like you had a digital hand held out to you, willing to spend all its time with you, making you feel even less alone. Is this a sad thing, when you lay it out like that? Very, but maybe not.

We're already constantly asked "what are you up to?" and "how do you feel?" and "what's on your mind?" by computers, and we readily answer those questions. But it doesn't feel like those answers really go anywhere. It doesn't feel like what's asking the question is actually listening to our response. At least not yet. Arguably, we should really be asking each other these questions, instead of being okay with a computer acting as arbiter. (Hence, social disobedience.)

Maybe all we've ever been doing with technology is trying to recapture time for ourselves. And now that we're able to be alone, we've forgotten how to be alone. The internet makes it easy to not pay attention to that difficult piece of the puzzle, because it's trying to change the picture. I'd like Veronica to not just change the picture, but be an acknowledgement that the picture doesn't make sense, and didn't make sense to begin with. None of it makes sense. Life especially.

I'll post updates about Veronica as interesting things happen.

My 2013-2014 via Captain's Logs

I use my own text file format called Captain's Logs to keep track of what I do every day at work. Yes, it's named after the storytelling mechanic from Star Trek, because it's a damn good way of keeping track of your day. The following info is based on my captain's logs from June 1st, 2013 to May 31st, 2014.

How Much Logging?

For the 250 business days between June 1st of 2013 and May 31st of 2014, I kept 216 days of logs. I was either sick, on vacation, or super lazy for the 34 days I wasn't tracking. Or there are some holidays in there I didn't subtract from the total number of business days.

Individual Activities

In those 216 logs, I tracked 1,371 individual activities, whether the activity was too small to be worth time-tracking or took all 8 hours of my day. That works out to an average of about 6 activities per day.

Those individual activities range from the stupidly simple like "fixed a bug on median, took five minutes" to the more broad "built a new server cluster for drupal" to "went to the weekly change management meeting" to the ridiculous "spent 6 hours tracking down a fatal mysql bug".

The maximum number of activities I logged in a single day was 13 activities on August 21st, 2013. It was a big day: the Emerson College website went down, we had a big meeting with Comcast, and we had to deal with an annoying time-offset problem between servers.

In total, all of the tracked activities took 2,900,640 seconds to do. That's around 805 hours, or 33.5 days, of activities.

On average, an activity usually took me around 35 minutes. I tracked the time it took me to do most things, but here and there a few things took so little time that I didn't include any time-tracking, so that number is weighted higher than it should be.

My logged activities took up an average of 3.7 hours per day. That doesn't mean I only worked around 4 hours per day, it just means I tracked the time for around 4 hours of stuff per day. I didn't track the time for answering emails, shooting nerf guns in the office, or going to lunch.
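The averages above are simple to recompute from the raw totals. A quick sketch using the numbers reported in this post (the log-parsing itself is omitted; these are just the headline figures):

```python
# Totals pulled from the captain's logs for June 2013 - May 2014.
total_seconds = 2_900_640  # all tracked activity time
activities = 1_371         # individual activities logged
days_logged = 216          # days with a log entry

hours = total_seconds // 3600
print(hours)                              # 805 hours of activities
print(round(hours / 24, 1))               # 33.5 days' worth
print(total_seconds // activities // 60)  # ~35 minutes per activity
print(round(hours / days_logged, 1))      # ~3.7 tracked hours per day
```

As noted above, the per-activity average skews high because the quickest tasks often got logged with no time attached at all.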


Of those 1,371 tracked things, 199 of them were meetings. (This is tracked just by adding "meeting" to the activity list item.)

I had just about 1 meeting per day. Those meetings took up 499,440 seconds of my life. That's around 139 hours, or 5.8 days. I don't think that's too bad, and I think I have my famous six-minute meetings to thank for that. However, my meetings were 42 minutes long on average, so maybe not...

What I Worked On

In my logs, each activity is listed individually. Here are some of the common words used to describe those activities:

  • median
  • helping and helped
  • new
  • stuff and thing, lol
  • added, fixed, built, and updated
  • emerson
  • jason, jenn, frankie, hana, paula, and a lot of other names

Neat way to look at what I worked on most often.

Notes About My Day

There's a space in my captain's log entries to leave miscellaneous notes about my day. I didn't use this space very often, so there's not much to share. Most of the time, instead of writing in the notes section, I wrote out my thoughts in what would usually turn into a blog post on the IT blog. You can see all of my blog posts here.

Do This Yourself

Want to get your own neat stats like this after a year of hard work? Try using my Captain's Logs system for yourself. It helps me out, especially if I need to remember when I did a certain thing, or if I need to know the last time a certain problem or project occurred.

Full Stack Web Dev Course

A while ago I wrote a series of guides and kind-of-articles about web development aptly named the INTERWEB LEARNING SERIES, beginning with HTML/CSS and Linux server administration. I wrote them for some friends who were trying to start out managing Linux servers so they could build their own stuff. As time went on, I got requests for more guides about more advanced topics, and I tried to abstract them out into concept-based lessons rather than language-specific lessons (i.e. learning about asynchronous programming rather than just Node.js).

I'm still very opinionated about how one can be most effective at web development, and most of my opinions revolve around having a complete top-to-bottom understanding of how the internet works. A basic understanding, at least, of how servers work, how DNS works, how an actual HTTP service works, how TCP works, etc. Too often I meet developers who silo themselves into knowing just PHP, or just Rails, or just sysadmin tasks, with no further inquiry into how the rest of it binds together. A lot of developers bang their heads on the wall trying to reach the maximum efficiency of their Django app, not realizing that the reason it's slow is because of the server's storage or a DNS lookup time.
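That DNS point is easy to check empirically rather than guess at. Here's a quick sketch that times a single name resolution with nothing but the standard library (the hostname is an arbitrary example, not from the course):

```python
import socket
import time

def time_dns_lookup(hostname):
    """Return roughly how long one name resolution takes, in milliseconds.

    Note: the OS may cache results, so the first call to a hostname
    is usually the interesting one."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 80)
    return (time.perf_counter() - start) * 1000

print(f"{time_dns_lookup('example.com'):.1f} ms")
```

If every request your Django app makes to an external service pays 100 ms here, no amount of Python profiling will find it — which is exactly the top-to-bottom understanding the course argues for.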

As I wrote more, I came up with a small curriculum based on what I had written already. The next step was obvious: write a damn online course, structuring everything a bit more logically. We at Emerson College recently purchased Canvas, and it became clear that this was the right delivery method for my course. Canvas is pretty great at creating an online course, but in an intimate low-volume style rather than the many MOOC-style solutions out there.

So here it is: Full Stack Web Development, available in full for free to anyone. The course covers everything from basic HTML/CSS to Javascript, PHP, MySQL, MongoDB, Ruby, Node.js, and onto concepts like systems administration and architecture and regular expressions and version control and whatnot. It's meant to be taken at one's own pace in a self-directed manner, with a heavy emphasis on building projects to demonstrate and help with learning. I'll happily answer course-related questions posted via the course's discussion page, or sent to me via Twitter.

I hope to expand the course further with more "bonus" sections that cover Go, advanced Node.js (module/library development), HTML5 game development, and maybe even some "using C/C++ for the web" guides.

SPACE GAME! Design Doc

This is a very long document elaborating on my ideas for the perfect space game. I began writing this document as an imaginary manual for a space game I'd one day develop, and that was based on an evolving list of ideas. Maybe it will become a game some day: I'm currently working on a space game based on these ideas, as a side project with no concrete launch plan. I'm publishing this because I want people with more skill than me to make this better than I possibly could.

The Genre

First of all, let me be clear about the genres of space games and how they affect my thinking. The space games I've always loved have been somewhere between simulation/realistic and action RPG/shoot-em-up. Probably my favorite space games of all time are the action-oriented Cosmic Rift and Freelancer, followed by more tactical games like the X series and X-Wing Alliance. Games like Sins of a Solar Empire, Endless Space, and Galactic Civilizations II are also worth mentioning because of how they deal with large-scale space combat and politics. And I couldn't begin writing about MMO space games without mentioning EVE Online, though I enjoyed Star Trek Online a lot more.

However, I've always found that within each of these games there were development decisions to go one direction and not another that compromised how fun the game could have been. In most of the more shoot-em-up style games, there's a serious lack of ship options and interstellar exploration. In the more strategic games, there's a serious lack of player freedom and dynamism. One of my favorite parts of X-Wing Alliance is the insane amount of playable ships available in the simulator. One of my favorite parts of Cosmic Rift was the ability to choose a ship that could complement my gameplay style. But XWA's simulator missions were boring, and CR didn't seem to take advantage of the varying kinds of ships a player could use for the amount of space there was to use them in. I remember wanting to marry the two games, but make the scale even bigger, utilizing Sins of a Solar Empire's many potential civilizations and star systems. I remember enjoying the simplicity of CR's mining/salvaging system and yet wanting the ability to trade across star systems like in Freelancer.

Why can't all of these things come together? Pushing even further: what if I wanted to play a game like Freelancer, but in a huge capital ship? How would the gameplay have to change? Would I suddenly have to abstract it all out like a Bridge Commander or ARTEMIS style many-hands-controlling-one-ship interface? Would it have to slow down and give the player the ability to pause the combat system, like FTL? Beyond that, how would your ship move around space in the first place? Frictionlessly, like in CR? Or requiring constant thrust, like Freelancer? Via point-and-click course-plotting movement like in Homeworld?
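The movement question is concrete enough to sketch. Frictionless CR-style drift and Freelancer-style constant-thrust movement differ by essentially one term in the velocity update — a drag factor. This is a toy illustration with invented numbers, not code from any of these games:

```python
def step(pos, vel, thrust, dt, drag=0.0):
    """Advance a 2D ship one tick.

    drag=0.0 gives frictionless, CR-style drift: cut the engines
    and you coast forever.  drag > 0 bleeds velocity every tick,
    so holding speed requires constant thrust, Freelancer-style."""
    vx = (vel[0] + thrust[0] * dt) * (1.0 - drag * dt)
    vy = (vel[1] + thrust[1] * dt) * (1.0 - drag * dt)
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# Coast with engines off: frictionless keeps speed, drag loses it.
_, vel_free = step((0.0, 0.0), (10.0, 0.0), (0.0, 0.0), dt=0.1, drag=0.0)
_, vel_drag = step((0.0, 0.0), (10.0, 0.0), (0.0, 0.0), dt=0.1, drag=0.5)
print(vel_free[0], vel_drag[0])  # 10.0 vs 9.5
```

Point-and-click course-plotting (Homeworld-style) is a different layer entirely: a pathing system that feeds thrust values into an update like this one.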

Besides the gameplay itself, I really like pseudo-3D space games rather than full 3D. While space combat with a Z axis can be fun, it can be extremely annoying because it's very hard to have a useful minimap in 3D. I remember being totally confused by XWA's and EVE's mapping systems. I found the 3D-world, 2D-movement simplicity of CR and Sins much more accessible. Star Trek Online and Freelancer are technically full 3D, but they really only use the X/Y plane to spread out inside a star system.

And besides the controls and movement, what about the fact that a lot of space games just feel limited in their options? I thought Borderlands 2 was cripplingly annoying with how many randomly-generated weapons you'd find: at any point you could randomly find a weapon better than your own (even though you just spent a lot of money on it). However, we're getting better at randomness -- Minecraft's randomly-generated manipulatable worlds are the highlight of the game. Most of the space games currently being made by indie groups are tackling the problem of content with random content generation. With those mechanics, you can instantly bake into the game a sense of limitless exploration and potential for continued play. But you have to get it right, or else the randomness gets boring and predictable. On top of all of this, why not also create a pluggable architecture to let a modding community continue to contribute to the game?
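Seeded generation is the trick that makes "limitless" worlds reproducible, Minecraft-style: the same seed always yields the same world, so the universe can be effectively infinite without being stored anywhere. A toy sketch (the star classes and ranges are invented for illustration):

```python
import random

def generate_system(seed):
    """Deterministically generate one star system from a seed.

    A dedicated random.Random instance means the global RNG is
    untouched and the same seed always produces the same system."""
    rng = random.Random(seed)
    return {
        "star_class": rng.choice(["O", "B", "A", "F", "G", "K", "M"]),
        "planets": rng.randint(0, 12),
        "has_starbase": rng.random() < 0.2,
    }

# Two players generating seed 42 see an identical system.
print(generate_system(42) == generate_system(42))  # True
```

The hard part the post alludes to is not this mechanism but the content fed into it: with too few templates, the randomness collapses into the boring and predictable.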

The Space Game Done Right

My ideal space game borrows from all of these games and ideas. It has to feature combat that's quick and almost arcade-like, but the complexity of that combat can be rich and tactical if need be. A good combination of twitch/skill and thought-out RPG choices. There need to be different rules for using small fighter ships, medium cargo haulers, and large capital ships, but all of their rules must match up to play side-by-side in the same universe and not favor one style too heavily. Owning a capital ship cannot make the player invincible, but it does need to afford advantages that a small vessel can never have. It can never be impossible to survive as a large cargo hauler when every other player wants to be a pirate.

The universe itself needs to be dynamic, alive, and yet manipulatable by the player. There need to be greater forces at work, evolving the galactic economy and the landscape of the universe. The player shouldn't be able to change much except a nudge every now and then. In an online game, several players may be trying to nudge the universe in different competing directions. At all times, the universe needs to be moving onward, with or without the player's input. The galactic economy needs to keep shifting, cycles of war and peace between factions will always be occurring, and the player needs to know about what's going on. The player should be able to stake their own claim somewhere and build a starbase or two, maybe even get in good with the local faction for protection.

Through all of this, the player needs to be given options. Do I want a small, powerful fighter? Do I want a bulky cargo cruiser? Do I want a large, fast explorer ship? Should I give up a better weapon in place of better shields, or should I install this neat utility slot item that'll give me a unique edge? Do I want to hunt down this civ's trade ships, or do I want to protect them against pirates? Should I be a heavy-hitter, or a run-and-gunner? The ideal space game should provide not just one answer to these questions, but many potential answers, some very simple and others with caveats and complexities.


Shooting Having Fun Up There

The Project

Having Fun Up There is a feature-length movie about a 30-something-year-old musician guy who has a hard time figuring out what to do with his life. To me, it's about a central problem of all artistic life: do you let creativity run your life, or is creativity simply a component of your life? Do you become an "authentic" starving artist, or resort to being a day-job-working "sellout" who practices their art on the side? The main character, Mark, is faced with both options through the people in his life. Capturing these personal interactions was key to conveying the intention of the film.

Primary production took place over nine consecutive days, from September 28th to October 6th. Most of the shots were captured using my custom-built shoulder rig (explained later) or on top of a tripod.

The Camera and the Raw

I shot Having Fun Up There on the Canon 5D Mark III in 1080p24 14-bit RAW using the Magic Lantern firmware hack. The ML firmware was very stable; I believe I used a late-September nightly build. The only time it ever crashed was once when one of my lenses hit the mirror inside the camera, and the fix was simply to disconnect the lens and take out the battery.

To shoot RAW using the ML hack I bought a two-pack of Lexar 32GB 1000x CF cards, as they're currently the only reliable cards fast enough to capture the RAW data at 90MB/sec. Four more cards were bought by other members of the crew, for a total of six cards to store footage while shooting.

That's 192 gigabytes of total card space. Seems like a lot? To put it into perspective: shooting in 14-bit RAW at 1080p24 takes about one gigabyte every ten seconds. A single 32GB card held, at maximum, 5-6 minutes of footage. And with ML, you cannot play back your footage on the camera, and you cannot record audio using the camera. To make the RAW footage work at 1080p24, the camera is dumping RAW image files from the sensor to the CF card as fast as it possibly can, using up all of the available data-writing bandwidth. This leaves no room for recording audio at the same time. You have to think along the same lines as using film: you roll it, and you can't review it until you've done something with it. But I'll get back to that in the "Workflow" section below.
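The numbers above check out with a bit of back-of-the-envelope math; here's a quick sketch in Go, assuming the ~90MB/sec write rate:

```go
package main

import "fmt"

func main() {
	const writeRateMBps = 90.0 // approximate RAW write rate from the camera
	const cardSizeMB = 32.0 * 1024 // one 32GB Lexar 1000x CF card

	secondsPerCard := cardSizeMB / writeRateMBps
	fmt.Printf("One card fills in about %.0f seconds (~%.1f minutes)\n",
		secondsPerCard, secondsPerCard/60)
	// prints: One card fills in about 364 seconds (~6.1 minutes)
}
```

Roughly six minutes per card, which matches the 5-6 minutes of footage I was actually getting.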

During pre-production testing (which was limited -- I bought the 5D and loaded up the ML firmware only a week before primary production) I ran into a few issues when running out of card space. Namely, whatever footage was being captured when the card ran out of space would be unusable. However, this issue didn't recur during actual production, so it may have been something wrong with how I was importing footage.


the SIGIL database

I've been meaning to update this blog with a few things, but I've been too busy on one sprawling project: my SPACE GAME! attempt. I'll write a post about that endeavor some other time. Regardless, it's a project so huge that it's spawned a hundred other projects. That's how you know it's a good one. One of those spawned projects is my need for a graph database. Initially, I looked at OrientDB and Neo4j, and I found them both... bloated.

Also, I hate Java, and anything built in Java. (Or, in the case of Minecraft, I just hate that it was built with Java.) Why do I hate Java? That's another blog post. But long story short: too much bloat. Nine times out of ten, a Java application takes up way too much memory (see: both of those graph databases) and is needlessly complex (see: Neo4j).

I got frustrated with both of those graph databases pretty quickly, so I said to myself, why don't I just build my own? I really only need to keep track of two types of data: nodes and the connections between them. That's really it. The data models are hilariously stupidly simple:

// a node is...
{
    'ID': 1,
    'Name': 'A node!'
}

{
    'ID': 2,
    'Name': 'Another node!'
}

// a connection is...
{
    'ID': 1,
    'Name': '1 to 2',
    'Source': 1,
    'Target': 2
}
Holy shit, that's pretty much the long and short of it. I'm not even kidding.

So I built it in Go, because why not? Every time I come across a project idea like this, my first impulse is to try something brand new. (From this admission, you should be able to deduce that new projects have a terrible recursive development cycle effect: every new project takes 10x longer than it should because I'm learning a new language or paradigm or something.) And I've been reading non-stop gushing reviews of Go on goddamn Hacker News and bullshit like that.

The result is my SIGIL graph and spatial database. I threw spatial information in there (simple X, Y, and Z properties) because it's useful for the reason I was building the database in the first place (a space game). I've already started writing client libraries for PHP and Node.js (because I use those languages a lot). But you don't really need a client -- mostly the clients are just helper functions. The SIGIL database can be accessed easily using REST calls.

Why "SIGIL"? I couldn't figure out a better name, really. It was gonna be CGSDB (Cyle's Graph and Spatial Database) but that's lame. And is it "SIGIL" or "sigil"? Either, I don't care.

One of the important bits here is that I managed to learn Go in under a week. I've learned C, C++, and Obj-C, but Go is much more accessible when you're coming from loosely-typed, holding-your-hand languages like PHP, JavaScript, and Ruby. The simple beauty of go run db.go is incredible, along with the package management elegance of go get module/name/here. Amazing work on Google's part for making this language happen. It's exactly what compiled languages have needed for a decade.

But yeah, SIGIL is very simple. It's two arrays: one for nodes, one for connections. And it's a whole lot of REST-accessible helper functions: querying for nodes or connections, getting distances between nodes, getting the shortest path between two nodes, etc. I'm going to continue to evolve it as I discover more features that I need as I develop my game, but for now it's pretty stable (for a first attempt) and fairly usable for my needs.
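Since the connections are unweighted, one way a shortest-path helper over those two arrays could work is a plain breadth-first search. This is my sketch of the idea, not SIGIL's actual implementation:

```go
package main

import "fmt"

// Connection mirrors SIGIL's model: an edge between two node IDs.
type Connection struct {
	Source, Target int
}

// shortestPath finds the fewest-hops route between two node IDs,
// treating connections as undirected. Returns nil if no path exists.
func shortestPath(conns []Connection, from, to int) []int {
	// Build an adjacency list from the flat connection array.
	adj := map[int][]int{}
	for _, c := range conns {
		adj[c.Source] = append(adj[c.Source], c.Target)
		adj[c.Target] = append(adj[c.Target], c.Source)
	}

	// Standard BFS, remembering each node's predecessor.
	prev := map[int]int{from: from}
	queue := []int{from}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if n == to {
			// Walk back through prev to rebuild the path.
			path := []int{to}
			for n != from {
				n = prev[n]
				path = append([]int{n}, path...)
			}
			return path
		}
		for _, next := range adj[n] {
			if _, seen := prev[next]; !seen {
				prev[next] = n
				queue = append(queue, next)
			}
		}
	}
	return nil
}

func main() {
	conns := []Connection{{1, 2}, {2, 3}, {1, 4}, {4, 3}}
	fmt.Println(shortestPath(conns, 1, 3))
	// prints: [1 2 3]
}
```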

My next tasks with it are to better implement memory management (even though Go does garbage collection, I can make things more streamlined) and to make use of goroutines. If you have any thoughts, questions, or whatever, about SIGIL, let me know via @cylegage.
