1. Memory Management in Low Power Embedded Systems

    Power consumption has become a first-class concern for mobile systems designers, alongside performance and design. Typically the lowest-power processors are 4-bit or 32 kHz parts with extremely low power requirements. Memory transfers, whether between program memory and SRAM on an Arduino or between the processor core and the instruction cache (Icache) on other systems, are a major contributor: 50 to 80% of runtime power draw can come from memory traffic between off-chip and on-chip memory.

    To reduce the impact of memory transfers, it's important to focus from the beginning on strategies that monitor and reduce memory traffic and power consumption. Some such strategies include:

    • Cache sizing
    • Loop transformations

    On Arduino, for example, a strategy to reduce memory transfers should also include proper memory management. The Arduino Uno has 32 KB of flash program memory (PROGMEM), of which 0.5 KB is used by the bootloader, 2 KB of SRAM, and 1 KB of EEPROM. If you run out of memory the program may fail in unexpected ways or behave strangely, and these issues are difficult to diagnose without monitoring. As of Arduino 1.0 the F() macro is available, which looks like this:

    WString.h:#define F(string_literal) (reinterpret_cast<const __FlashStringHelper *>(PSTR(string_literal)))

    The F() macro stores C-style strings in PROGMEM, and at runtime they are read out of flash only as needed instead of occupying SRAM. Here's a commit I made recently to the Adafruit NFCShield library to convert its Serial C-style strings (char arrays) to PROGMEM strings:

    Rewriting the Adafruit NFCShield library to use the F Macro
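
    As a minimal illustration of the kind of change in that commit (the exact strings in the library may differ), wrapping a literal in F() keeps it in flash instead of claiming SRAM for the whole run:

    Serial.println("Didn't find PN53x board");     // literal is copied into SRAM at startup
    Serial.println(F("Didn't find PN53x board"));  // literal stays in PROGMEM, read out as needed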

    The issue with this approach is that reading the string back out of flash requires additional cycles, and those extra cycles consume power; the data is copied out one byte at a time. In low-power systems a better approach is to offload as many calculations, responses, and display values as possible to a remote program or server.

    Remember, each character is one byte, and the '\0' terminator appended to every C-style string adds one more byte. Thus…

    char message[] = "This is a message.";
    Takes up 19 bytes of SRAM. With only 2,048 bytes it won’t take long to use up all available SRAM.

    There are important considerations when using PROGMEM for strings. PROGMEM (flash) and EEPROM are non-volatile: the information persists after the power is turned off. SRAM is volatile: its contents are lost when power is cycled. So if you need data to persist between power cycles, it needs to live at a PROGMEM or EEPROM address. Also, strings in PROGMEM cannot be modified in place; once copied to SRAM, however, they can be modified.
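
    Here's a minimal sketch (the names are my own) of copying a PROGMEM string into an SRAM buffer with avr/pgmspace.h so it can be modified:

    #include <avr/pgmspace.h>

    const char greeting[] PROGMEM = "Hello from flash";   // lives in program memory, read-only

    void printGreeting() {
      char buffer[32];               // SRAM copy, modifiable
      strcpy_P(buffer, greeting);    // copy the string out of flash
      buffer[0] = 'h';               // now it can be changed
      Serial.println(buffer);
    }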

    PROGMEM
    EEPROM
    F Macro

    Secondly, use the smallest data type that fits the data you need. For example, on the Uno an int takes up two bytes while a byte takes only one (but stores a smaller range of values). Choosing the right data type uses less memory, which means less memory traffic and therefore less power consumption. Trimming oversized variables can also offer a number of performance gains. Here is a list of variable sizes in bytes (a short example follows the list):

    boolean 1
    char 1
    unsigned char, byte, uint8_t 1
    int, short 2
    unsigned int, word, uint16_t 2
    long 4
    unsigned long, uint32_t 4
    float, double 4
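
    For instance (a small sketch with pin and variable names of my choosing), a value that never leaves the 0-255 range can live in one byte instead of two:

    uint8_t pwmPin = 9;        // pin numbers never exceed 255, so one byte is enough

    void fadeUp() {
      for (uint8_t brightness = 0; brightness < 255; brightness++) {
        analogWrite(pwmPin, brightness);   // one-byte values: less SRAM use and less memory traffic
      }
    }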

    Optimizing SRAM

    Thirdly, using different transistors to reduce power consumption without performance loss is an option. Power gating uses sleep transistors to disable entire blocks of a circuit when they are not in use. See the links below for more information on power gating.

    Sleep transistor sizing

    Fourthly, loop transformations can allow two arrays to share the same memory space, reducing memory traffic and saving the power spent on memory accesses. Take array c[] and array w[]: with a loop interchange the number of memory reads would be reduced because c[] and w[] can share memory space.
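
    Here's a rough sketch of the idea using loop fusion, a closely related transformation (the arrays a[], b[], d[] and the helper functions are only for illustration); merging the producer and consumer loops lets the intermediate value stay in a register, so the extra reads and writes of c[] go away:

    // Two passes: c[] is written by the first loop and read back by the second.
    void compute(const int a[], const int b[], const int d[], int c[], int w[], int n) {
      for (int i = 0; i < n; i++) {
        c[i] = a[i] * b[i];
      }
      for (int i = 0; i < n; i++) {
        w[i] = c[i] + d[i];
      }
    }

    // One fused pass: the intermediate value never touches memory,
    // so c[] no longer needs its own storage at all.
    void computeFused(const int a[], const int b[], const int d[], int w[], int n) {
      for (int i = 0; i < n; i++) {
        int t = a[i] * b[i];
        w[i] = t + d[i];
      }
    }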

    Loop Transformations

    Fifthly, cache sizing: pick the minimum-energy data cache configuration that satisfies the area and cycle-count bounds. Most current compiler optimizations focus on improving execution time, but power and energy consumption is becoming just as important with the widespread use of embedded systems. In summary, power-aware computing involves reducing the switching activity on the bus between the processor core and the Icache. Sample registers can be used to record the transition frequencies between register labels (encodings) of instructions executed in consecutive cycles. The OS maintains a table of virtual-to-physical address translations.

    malloc: allocates a given number of bytes and returns a pointer to them. Returns a null pointer if insufficient memory is available.

    free: takes a pointer to a segment of memory allocated by malloc, and returns it for later use by the program or the operating system.
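
    A minimal example of the pair in use, checking the null return before touching the memory:

    #include <stdlib.h>

    void sampleBuffer(void) {
      int *samples = malloc(64 * sizeof(int));   // request room for 64 ints
      if (samples != NULL) {                     // malloc returns NULL when memory is unavailable
        samples[0] = 42;                         // safe to use the block
        free(samples);                           // hand the memory back when finished
      }
    }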

    Compiler optimizations for low power systems
    Memory design and exploration for low-power embedded systems

    Happy building.

  2. REST API Design Antipatterns

    Let me start by saying I've built several REST APIs. Good API design falls into two categories: (1) clearly following good object-oriented design principles and (2) clearly following the HTTP 1.1 specification.

    My intent is that by viewing antipatterns, examples of what not to do, you can come away with a good sense of what should be done, and I’ll even give a few good API Design examples later.

    Let's take the URI that will serve as the basis of our antipatterns. (The example links do not resolve.)

    Example 1
    http://www.mysite.com/api/users/jim_carnes/hobbies/sports/

    The path of your API should clearly delineate a hierarchical relationship between resources and objects.

    http://www.mysite.com/api/planets/earth/cities/san-francisco/people/verdi

    In this example /api is not a resource. It’s better to use a subdomain for api scope, a la:

    http://api.mysite.com/users/jim_carnes/hobbies/sports/

    The object "jim_carnes" should be hyphenated, not underscored, because underscores can be obscured by underlines when a URI is rendered as a link, which hurts legibility. The trailing slash "/" at the end of the URI should be removed entirely: in HTTP, a URI with a trailing slash identifies a distinct resource from the same URI without one. Collection names should always be plural and object names should always be singular. The example above breaks the second rule, since the object "sports" is plural.

    A better design:
    http://www.api.mysite.com/users/jim-carnes/hobbies/sport

    Example 2
    Controller actions that correspond to CRUD operations (Create, Read, Update, Delete) should always be handled through standard HTTP 1.1 behavior, and in the event that a controller action cannot be scoped to a CRUD operation it should be expressed as a verb (more on this below). CRUD function names should never be used in URIs. Here is an antipattern:

    http://www.api.mysite.com/users/delete?name=jim-carnes

    This URI is using the query portion of the URI to provide parameters for the delete action. The purpose of query parameters is to handle filtering and pagination: URIs should be shareable, and a filtered or paginated result list cannot be shared transparently unless the filtering and pagination live in the URI itself. A better way to set this up is to issue an HTTP DELETE to the users collection with the name as a parameter in the request body.

    Here is an example of a request that would properly support this setup

    curl -X DELETE -v http://www.api.mysite.com/users -d "name=jim-carnes"

    Proper mappings of HTTP verbs to URIs can be found all over the place, but here's the short list for a users collection (a quick curl sketch follows the list):

    Create: POST /users
    Read: GET /users/:id
    Update: PUT /users/:id
    Delete: DELETE /users/:id
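
    As a rough sketch with curl (the hostname and field names follow the examples above; the id here is just the user's slug):

    curl -X POST -v http://www.api.mysite.com/users -d "name=jim-carnes"     # Create
    curl -v http://www.api.mysite.com/users/jim-carnes                       # Read
    curl -X PUT -v http://www.api.mysite.com/users/jim-carnes -d "name=jim"  # Update
    curl -X DELETE -v http://www.api.mysite.com/users/jim-carnes             # Delete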

    If and only if you have a collection action that falls outside of the scope of the above should you define your own URI endpoint. In EVERY case the URI endpoint should be a verb and follow the object in the URI path. Common examples of such actions are unsubscribe and subscribe. For example: http://www.api.mysite.com/8081/subscribe is a perfectly fine URI. Only use controller resources to map to actions that cannot be logically mapped to one of the standard methods.

    It should be mentioned that the DELETE verb should only be used when an actual delete is being performed. In other words, a GET request to the same collection with the /:id should return no object immediately following a DELETE.

    Your API should support HEAD and OPTIONS HTTP verbs. HEAD should return only headers with an empty response body and OPTIONS should return the Allow header with all available verbs for that resource, i.e. “Allow: GET, POST” and an optional response body.
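
    A quick way to check both from the command line (the Allow values shown are only an example):

    curl -I http://www.api.mysite.com/users              # HEAD: headers only, empty body
    curl -X OPTIONS -i http://www.api.mysite.com/users   # look for an "Allow: GET, POST" header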

    Lastly, any request to a custom controller action such as /subscribe or /unsubscribe from above should ALWAYS use a POST request.
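
    For instance, a subscribe call against the example URI above would be issued like this:

    curl -X POST -v http://www.api.mysite.com/8081/subscribe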

    Example 3

    Uppercase letters do not matter in the scheme (http://) and host (www.mysite.com) portions of a URI; those parts are case-insensitive.

    HTTP://www.MYSITE.com and http://www.mysite.com are equivalent.

    However, capitalization does matter for the path.

    http://www.api.mysite.com/User/Jim-Carnes/Documents/1.txt

    The above example is a different URI resource than this example:

    http://www.api.mysite.com/user/jim-carnes/documents/1.txt

    The server-side implementation could treat these as different resources and return an error for one of them. When designing your API, always use lowercase letters for resources, objects, and actions. Lookups will be much easier to deal with and more transparent for developers consuming your API.

    Do not include file extensions in URIs. The format should instead be negotiated through request headers: the client states what it wants with an Accept header such as "Accept: application/json", and the server labels its response with the matching Content-Type header. Supporting file extensions in URIs creates dependencies on the request; for example, an API that can respond with XML or JSON now needs two URIs instead of one. Keep the URI clean and determine the content type from the request headers.

    http://www.api.mysite.com/users/jim-carnes/documents/1 should be able to return any content type the client lists in its Accept header, labelled with the matching Content-Type response header. If the requested format is not supported the API should return an appropriate status code (406 Not Acceptable). I will do a follow-up post on proper use of headers and status codes.

    Examples of proper REST API Design patterns:

    Example 1
    http://www.api.mysite.com/users/jim-carnes/hobbies/sport

    Example 2
    curl -X DELETE -v http://www.api.mysite.com/users -d "name=jim-carnes"

    Example 3
    http://www.api.mysite.com/users/jim-carnes/documents/1


    Happy coding.

  3. MongoDB top 10 list

    Ever want to see a list of help commands in mongo? Just type db.help(). That and other useful features are up next.

    When I first started learning MongoDB I didn't really know where to begin. I started with ORMs built on top of Mongo like Mongoid and Mongoose, but what I should have done is start with the CLI tools and the database itself. 10gen has great documentation on their website, but I have a Cliff's Notes version of some of the ought-to-knows and gotchas I'd like to share. First, what many developers don't know is that within the mongo shell you can execute any valid JavaScript. In fact, the mongo shell is a JavaScript shell. Try it:

    var x = 2
    x * x

    This is a top 10-style format (those always seem to be popular). Let’s get started.

    1. ensureIndex: to significantly improve your lookup times, use db.collection.ensureIndex() on any field you frequently query by; an indexed lookup follows pointers held in memory instead of scanning every document. Some good examples of fields to ensureIndex on are email, username, and slugs, for login and pretty URLs respectively. The _id key is automatically indexed by mongo.

    db.collection.ensureIndex( { orderDate: 1, zipcode: -1 } )

    The 1 assigns ascending ordering and the -1 assigns descending ordering to the key’s index.

    More on embedded documents later on this article, but know that you can also query and ensureIndex() on fields within embedded documents:

    db.collection.ensureIndex( { "location.city": 1 } )

    This is great for looking up comments within posts by author_id for example.

    2. db.system: within every mongo database are collections whose names start with system. (for example system.indexes, which stores the indexes for that database). These collections are completely queryable just like any other mongo collection. Mongo uses the system. namespace to store other things too, like access privileges for users if they're set. That's why it's a good idea to avoid prefixing your own collections with the word system, to avoid any conflicts with mongo.
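
    For example, listing the indexes the current database is storing:

    db.system.indexes.find()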

    3. Queries without an OR clause are actually just an AND query: for example db.users.remove({email: {$exists: false}, gender: 'f', location: 'San Francisco'}) will remove any user whose email field is not present AND who is female AND lives in San Francisco. In SQL: DELETE FROM users WHERE email IS NULL AND gender = 'f' AND location = 'San Francisco'.

    4. When resetting your data use db.collection.drop() instead of db.collection.remove(). It's more performant (about 1 millisecond compared with up to several seconds).
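
    For example (the collection name here is just an illustration):

    db.logs.remove({})   // walks the collection and deletes documents one at a time
    db.logs.drop()       // throws the whole collection away in one step, much faster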

    5. Arrays are supported as first-class objects in MongoDB. Think of this as a replacement for many-to-many or many-to-one relationships. For example, if a computer belonged to many people we could simply store them as an array without a join table: db.computers.insert({people: [ObjectId("4d85c7039ab0fd70a117d730"), ObjectId("4d85c7039ab0fd70a117d732")]}).

    6. The _id field is the only field that is always returned without being explicitly excluded. To explicitly exclude it: db.users.find(null, {name: 1, _id: 0}), which would return all users’ names. 0 is exclusionary and 1 is inclusionary.

    7. It's OK to denormalize data, have redundancy, and embed large documents. Coming from SQL it's understandable that normalization is an important concern for database performance, but in Mongo good database design can include denormalization. All of Hamlet is 200 KB and the War and Peace ePub is 1.3 MB; Mongo gives you 4 MB per document. You could store a customer's main contact information within one document and also have a separate record for that person in a users collection, for example.

    Regarding embedded documents, you could have an article with many comments embedded within it and no joins. In most cases such a structure would be well within a document's limits.

    {title: "A great article", comments: []}

    db.article.update( { title: "A great article" }, { $push: { comments: { … } } } );

    8. Created_at: Mongo automatically has a creation timestamp built into the ObjectId it creates for you. Unless you are assigning your own _ids, use Mongo's ObjectId as a unique identifier and a timestamp via its .getTimestamp() method. Two in one!

    db.users.findOne({name: "John Doe"})._id.getTimestamp()

    9. MapReduce: you can write real code to do your processing. Write two JS functions: (1) a map function, which takes the input documents and emits key-value pairs, and (2) a reduce function, which gets a key and the array of values emitted for that key and boils them down to a single result. You can wire these into the mapReduce command and run it like so:

    var mapFunction = function() { … };
    var reduceFunction = function(key, values) { … };

    db.runCommand(
      {
        mapReduce: 'orders',
        map: mapFunction,
        reduce: reduceFunction,
        out: { merge: 'map_reduce_results', db: 'test' },
        query: { ord_date: { $gt: new Date('01/01/2012') } }
      }
    )

    A great use case is an analytics dashboard, where you can avoid running complex queries in your application server and persist these calculated values in the database, then simply read and display them.

    10. It’s easy to dump and restore mongo databases. How easy?

    mongodump --db nameofdb --out nameofdir
    mongorestore nameofdir

    Set up a daily cron task that dumps your database, then if you ever need to restore it you can do so quite easily.
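
    A sketch of such a crontab entry (the schedule and paths are placeholders), dumping every night at 3 AM:

    # m h dom mon dow  command
    0 3 * * * mongodump --db nameofdb --out /backups/nameofdb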

    MongoDB has a number of other awesome features, like capped collections (fixed-size collections that purge the oldest records, great for temporarily persisting logs), journaling for durable, crash-safe writes, and geospatial indexing. Continue reading in Mongo's documentation.

  4. How to define your JS functions descriptively

    Purpose and goals

    With JavaScript being such a flexible language used on both the server and the client, patterns (best practices) become important. After reading Stoyan Stefanov's JavaScript Patterns and writing lots of JavaScript, I've learned to use function declarations instead of function expressions. I will explain why as clearly and simply as I can.

    Function expression //anti-pattern


    var sayHello;   // hoisting leaves you with this: the variable exists, the function doesn't

    sayHello;    // undefined
    sayHello();  // TypeError: undefined is not a function

    var sayHello = function () {
      console.log("Hellllooo!!!");
    };

    Function declaration // pattern


    sayHello(); // Hellllooo!!! (works because the whole declaration is hoisted)

    function sayHello() {
      console.log("Hellllooo!!!");
    }

    With a function declaration, the entire function is hoisted to the top of its scope, so it can be called before the line where it is defined. This helps avoid tricky behavior.

    Constructors are functions that instantiate new objects; think of them as a class. When a constructor is called with new, a new object is created with the constructor's properties.

    A common practice for defining inheritance is to build constructors from other constructors. But if the parent constructor is defined with a function expression rather than a function declaration, asking an object for its constructor gives you back an anonymous function.

    var Person = function (name, age) {
      this.name = name;
      this.age = age;
    };

    function SanFranciscoPerson(name, age) {
      Person.call(this, name, age);  // uses Person's properties to create SanFranciscoPerson
      this.location = "San Francisco";
      this.constructor = Person;     // sets this object's constructor to Person
    }

    var jim = new SanFranciscoPerson("Jim", 29);
    jim; // SanFranciscoPerson {name: "Jim", age: 29, location: "San Francisco", constructor: function}

    What’s this object’s constructor?

    jim.constructor;

    //function (name, age) {
    // this.name = name;
    // this.age = age;
    //}

    If we had used a function declaration to define Person, we would see the constructor's name when we ask for the constructor:

    jim.constructor;

    //function Person (name, age) {
    // this.name = name;
    // this.age = age;
    //}
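
    That declared version of Person would look like this:

    function Person(name, age) {
      this.name = name;
      this.age = age;
    }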

    Happy coding.

  5. How to get started with Amazon EC2

    I attended a roundtable discussion a few weeks ago. All Ruby on Rails developers. The talk was on ‘cloud infrastructure’. Long story short, all the devs love Heroku. I don’t blame them, I’ve deployed 7 apps of varying complexity on Heroku myself and it usually does the trick. But something about Heroku (I also looked at CloudFoundry) wasn’t entirely satisfying. 

    I started taking a serious look at AWS configuration this week. For something that seemed so difficult, it's actually easier than programming. There are a few gotchas:

    1. The more you can learn about UNIX the better, since you'll be using a Linux distribution (a UNIX-like system). If you're using a Mac, UNIX is also what powers it, so similar commands can be executed in both environments (like ls, mkdir, mv).

    2. Terminology. Here's a quick rundown: AMI (Amazon Machine Image) is a fancy way of saying an image of a virtual machine. You pick one and it's loaded onto an EC2 machine. So what's EC2 (Elastic Compute Cloud)? It's basically a server. They come in four sizes (micro [free], small, medium, large). As you progress up they double in cost per hour and also roughly double in RAM and CPU. Amazon was smart. What happens if you want to decouple your data from your instance? That's where EBS (Elastic Block Store) comes in. It's a hard drive (in 1 GB increments) mounted on your EC2 instance. You can put your AMI directly on the instance itself (each instance comes with a native EBS volume) or on a separate EBS volume.

    3. Basic understanding of SSH. You’ll be using this to configure your instance once it’s set up (don’t worry, I give you all the commands you need).

    So here are my tips for getting started with AWS. To keep this simple, I’m going to number the steps A-F (don’t worry, this isn’t graded!).

     

    Step A. Private Key and Certificate

    Generate your Private Key and Certificate. First, log into your Amazon AWS Console, then visit this link and click "Certificates." You're going to download two files: a private key (pk-*.pem) and a certificate (cert-*.pem). Put them both into the ~/.ec2 directory. You'll see why in Step B.

     

    Step B. Command Line Tools

    You’ll want these. They give you raw power from your keyboard to control billions of dollars worth of infrastructure. A few nonmnemonic keystrokes and servers will be instantiated, software will be installed, permissions will be created, on some other part of the planet.

    The best way to install Amazon CLI (command line interface) is through Homebrew. 

    brew install ec2-api-tools

    Great, but that’s not it yet. Now, we need some environment variables in your .bash_profile. There’s no magic here, you can get the same information by running:

    brew info ec2-api-tools

    So let’s add them.

    pico ~/.bash_profile #or use your editor of choice

    Add these to your .bash_profile:

    export JAVA_HOME="$(/usr/libexec/java_home)"

    export EC2_PRIVATE_KEY="$(/bin/ls "$HOME"/.ec2/pk-*.pem | /usr/bin/head -1)"

    export EC2_CERT="$(/bin/ls "$HOME"/.ec2/cert-*.pem | /usr/bin/head -1)"

    export EC2_HOME="/usr/local/Library/LinkedKegs/ec2-api-tools/jars"

    Notice your EC2_PRIVATE_KEY and EC2_CERT variables look in ~/.ec2 for your pk-*.pem and cert-*.pem files. Good thing we put them there in Step A.

     

    Step C. Getting familiar in the cockpit.

    You’re doing great. You created your keypair, installed CLI tools. Now you’ll need to learn some commands to set up your first EC2 instance. All the commands are available here: link

    It won’t take long for you to realize that there’s a pattern:

    "describe" is like "list" or "fetch"

    most areas have create actions, but for creating instances the command is “run”

    These are the four commands you’ll be using in this tutorial:

    ec2-run-instances #creates your instances

    ec2-describe-instances #lists your instances

    ec2-describe-keypairs #lists your keypairs

    ec2-describe-images # lists available AMIs

    Just take a look over each of them in the documentation. Read through the parameters each accepts. Then onward.

     

    Step D. A perfect pair

    What’s a keypair? Good question. This is a basic ssh public/private keypair. The way this works is like a lock-and-key. The private key sits on your machine, that’s the key. The public key sits on the server, that’s the lock. When you SSH you use one and only one private key. That key is used against the “lock” and if it opens the lock you’re permitted access. The two combined are referred to as a “keypair”.

    Ok. Now that that’s out of the way and the command line tools are installed and your .bash_profile is updated with the right credentials, you’re ready to prepare your first instance. 

    Make sure you’re running in a new shell or type source ~/.bash_profile

    Now type

    ec2-add-keypair my-keypair #my-keypair is an example name
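
    One easy-to-miss detail: ec2-add-keypair prints the new private key to your terminal. Copy the block from BEGIN RSA PRIVATE KEY through END RSA PRIVATE KEY into a file named my-keypair (the name used later in this post) and restrict its permissions so ssh will accept it:

    chmod 600 my-keypair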

     

    Step E. The key is in the ignition

    Almost there, just two steps left. Not all AMIs work right out of the gate. They're community driven, so they have varying degrees of quality. I tried 8 separate AMIs before I found 2 that I liked. That being said, let's get started:

    ec2-describe-images -a

    The -a option means "all". You'll see your screen flooded with a list of AMIs. The AMI name always starts with "ami-" and that's what you'll use in the next step. I prefer Debian since v7 Wheezy came out and supports 32-bit and 64-bit architecture without additional configuration, so I'll use a "Wheezy" distribution (by the way, don't bother with 32-bit AMIs, all EC2 instances support 64-bit architecture).

    This is what the image looks like:

    IMAGE ami-c1c0a9a8 379101102735/debian-wheezy-amd64-20130507 379101102735 available public x86_64 machine aki-88aa75e1 ebs paravirtual xen
    The next command I type will create the instance. Note: only micro instances (up to a certain usage) are free. Double note: free instances need an EBS-backed AMI, and not all AMIs are EBS-backed, so it's trial and error. If the AMI you pick isn't supported, the CLI tools will let you know when you run the next command. Let's go.
     
    ec2-run-instances ami-c1c0a9a8 -k my-keypair -t t1.micro
     
    -k: specifies the keypair; this is the keypair we created in Step D
     
    -t : specifies the instance type, in this case we’re selecting t1.micro
     
    If everything is good you should see the instance created:
     
    RESERVATION r-4a9bd32a 042199387813 default
    INSTANCE i-4c92c02d ami-c1c0a9a8 pending vergun 0 t1.micro 2013-05-25T14:59:28+0000 us-east-1b aki-88aa75e1 monitoring-disabled ebs paravirtual xen sg-51c96739 default false

     

    Step F. Bring it on home

    Before we can SSH in to our server we need to open SSH port 22 for incoming traffic.

    ec2-authorize default -p 22

    Let's see what the addresses of our instances really are. This is what we'll use to SSH.

    ec2-describe-instances

    Next to the AMI name you’ll see the address of the instance, for example: ec2-54-214-67-72.compute-1.amazonaws.com

    To SSH into the instance you’ll use the pattern ssh [options] [user]@[address]

    Some options that are handy:

    -vvv : maximum verbosity (shows you what's going on behind the scenes)

    -i : lets you specify the private key file from the keypair

    -l : lets you specify which user to log in as, for example: root

    ssh -vvv -i my-keypair root@ec2-54-214-67-72.compute-1.amazonaws.com

    And we're in. You don't have to specify the key with -i if it's been added to your ssh-agent. For example:

    ssh-add my-keypair

    Once the key is in the agent, your SSH client will run through the keys it holds and try each "key" against the server's "lock" until it finds a match.

    Last thoughts

    With just a laptop and EC2 you have the power to control billions of dollars worth of infrastructure while sitting on a sofa. A few simple commands in your terminal and you are spinning up servers, attaching disk space, setting static IPs, and installing software anywhere from Northern Virginia to Northern California. That's it, now start configuring your server!