What happened to LightLane?

Back in July of 2009 I wrote a quick blog post about LightLane that concluded with the following remark:

Unfortunately, it isn’t for purchase yet.

I just happened to see the old post, so I did some digging to see whatever became of the LightLane. Googling turns up results from 2009 and no mention of an actual product release. A quick patent search shows that LightLane LLC did finally get their patent filed in December of 2011. I searched for similar products and found a number of them; the best seems to be the EasyAcc Bicycle Light with Parallel Beam. There are also other similar products such as the X-Fire Taillight with Laser Lane Marker and the Pellor Bicycle Bike Laser Tail Light.

I can only conclude that once LightLane LLC received their patent, they immediately solicited companies to license it from them. I find the whole story to be fascinating and hopefully I’ll be able to piece more of it together. It seems like I should be able to find something about the patent in public records. I’ll have to ask my more legally-inclined friends.

* * *

Mini E-Revo Brushed Review

Mini E-Revo Artsy Shot

I was always a remote controlled vehicle enthusiast when I was growing up. I had a variety of cheap electric RC cars, trucks, boats, a submarine, and even a beefy 1/8 scale nitro truck. When I went to college and graduate school I drifted away from the hobby because I didn’t have much time, money, or space for radio controlled cars. Once I started working full time, I found myself with the urge to pick it back up, but I still don’t have much space.

The Mini E-Revo Brushed caught my attention because it is small enough to run in my yard, but big enough to use “real” RC car parts like a 2.4GHz radio, oil-damped shocks, tires with foam inserts, and fully independent suspension. It even has waterproof electronics, which I have verified to be true.

First Impression

I was a little disappointed with my first run using the stock battery and charger. It comes with a 1200mAh NiMH battery and a trickle charger that takes about 6 hours to fill the battery. The truck seemed quick and I could get it to wheelie by driving in reverse, then flooring it. All in all, I got a little bored of the truck after about a dozen runs. To compound the problem, the charge time meant I needed to wait a whole day between runs during the work week. A run with the stock battery only lasts about 10 to 15 minutes depending on how hard you drive it.

Aside from the poor runtime, the truck seems really well constructed. Durability was my primary concern because I favor the “bashing” style of driving (as opposed to racing). The Mini E-Revo doesn’t really have a reputation for being the most durable truck, but I would tend to disagree. The shocks lie flat in the body instead of standing on the towers, which lowers the center of gravity and keeps them out of harm’s way when you crash. Everything is constructed from high quality plastics and the tires even come pre-glued. I have subjected the Mini E-Revo to a number of crashes and everything has held up to the abuse except for the shocks, which have begun to leak.

Post-LiPo Impression

After some mild disappointment with the speed of the truck, I decided to go out on a limb and spend even more money on it. I picked up two 2S LiPo packs, a charger, and a parallel adapter. From what I read online, people seemed to suggest that going LiPo only made a mild difference in the speed of the truck. Perhaps they were referring to the top speed, because LiPos transformed my Mini E-Revo into a completely different truck! I remember when I first charged a pack and put it in: I pulled the trigger to 100% throttle like I normally would and the truck just did a flip in place! I was very happy with the LiPo upgrade because it really increased the level of skill needed to drive the truck and opened up a whole new world of trick possibilities: back flips, big jumps, donuts, and even jumping off a roof.

The LiPo upgrade necessitated getting a few other things. I picked up some stiffer springs and heavier oil because the stock setup is excessively squishy and bouncy. The stock setup was fine with the poor acceleration and lower speeds I got with the stock battery, but I quickly found myself unable to land any reasonable jump with the LiPo packs.

Highly Recommended Upgrades

Part - Purpose
Parallel Battery Adapter - Connect two batteries at once for almost double the runtime
Traxxas “Black” Springs - Helps your Mini E-Revo handle big jumps
60 Weight Shock Oil - Makes jumps more predictable
LiPo/NiMH Charger - A balancing charger is necessary for multi-cell LiPo packs
Traxxas HC Connector - Solder this to the wire leads on the charger
2S LiPo Pack - A serious upgrade in power and runtime


The Mini E-Revo Brushed is an insanely quick and reasonably affordable entry into the RC truck hobby. It doesn’t drive all that well through grass, so it’s better suited to pavement, sand, and dirt. It has a versatile chassis with a lot of after-market upgrades available. The suspension setup is flexible enough to change the ride height and convert the Mini E-Revo into a buggy or street car. The electronics truly are waterproof, and I completely underestimated how cool and useful that would be until my Mini E-Revo went through numerous puddles, got hit by waves, and fell into a canal.

There is a brushless version for about $100 more. I think upgrading to the brushless system might be inevitable for me as any brushed motor has a limited lifetime. In contrast, brushless motors will last nearly forever as long as they don’t overheat.

I would still encourage anyone to start with the brushed version if they are even a little unsure that RC trucks will hold their interest for more than a few months.


* * *

A Side Project

I have finally gotten to the point in my life where I have time for a side project. I’m working with a coworker on a website that aims to rank the politicians of the United States based on the sentiment of what people are writing about them on the internet. We are already in the process of adding celebrities, and products are on the roadmap. Right now we are only ranking based on tweets, but we have started gathering other information as well (Tumblr, Disqus comments, and news articles). The site, whatpplwant, is our stomping grounds for natural language processing experiments. I’ll be updating the blog there with content related to the project itself. Keep checking back there for major updates.

* * *

Porting MATLAB’s Discrete Gaussian Smoothing to C++

A couple years ago I was placed in charge of porting a project written in MATLAB to C++. The resulting product, which we call LoCA Chop internally, takes a set of meshes and prepares them to be suitable for Localized Components Analysis, a method developed by some fellow students at UC Davis. LoCA Chop accepts a set of input shapes. The assumption is that all of the shapes are different samples of the same structure. In my case I was applying it to a set of hippocampi. Each input sample does not necessarily contain the entire hippocampus; portions of either end may have been omitted because they didn’t show up in the brain scan. LoCA Chop’s job is to find the largest subset of the shape that exists in all input meshes.

I found it to be a very interesting problem, and I had a lot of fun porting it to C++ because it forced me to understand the entire solution. It worked by maximizing an objective function that depended on a handful of MATLAB functions, and this is where the headaches began. Because the objective function had no derivative, the optimizer was sensitive to tiny numerical differences, so I needed my implementation of the objective function to be as numerically similar to MATLAB’s as possible. This meant doing a lot of work to replicate MATLAB functions in C++. One of the most basic parts I had to port was a function which smoothed a vector of numbers:

function [smoothedVector] = gaussianSmooth1D(vector)
    % Construct blurring window.
    windowWidth = int16(7);
    halfWidth = windowWidth / 2;
    gaussFilter = gausswin(double(windowWidth));
    gaussFilter = gaussFilter / sum(gaussFilter); % Normalize.

    % Do the blur.
    smoothedVector = conv(vector, gaussFilter);
    smoothedVector = smoothedVector(halfWidth:end-halfWidth);
end

The corresponding C++ function takes a vector of doubles, smooths it, and returns it. It also turns out that MATLAB rounds the result of integer division (instead of truncating it), which matters here because that result is used to select a subset of the smoothed vector, so my C++ implementation ended up looking like this:

#include <cmath>   // for std::floor
#include <vector>

std::vector< double > smoothVector( std::vector< double > vect, int windowWidth )
{
    std::vector< double > gaussFilter;
    std::vector< double > result;
    std::vector< double > smoothedVector;

    gaussFilter = makeGaussianWindow( windowWidth, 2.5 ); // 2.5 is gausswin's default alpha
    gaussFilter = normalize( gaussFilter );

    // Note the use of rounding here instead of truncating integer division
    int halfWidth = (int) std::floor( (double) windowWidth / 2.0 + 0.5 );

    smoothedVector = convolve( vect, gaussFilter );
    result = std::vector< double >(
        smoothedVector.begin() + halfWidth - 1,
        smoothedVector.end() - halfWidth );

    return result;
}
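
To make the rounding difference concrete, here is a tiny standalone sketch. It is not part of the original project, and the roundingDiv helper is purely illustrative:

#include <cmath>
#include <cstdio>

// MATLAB: int16(7) / 2 evaluates to 4, because integer arithmetic rounds
// to the nearest integer. C++: 7 / 2 evaluates to 3, because integer
// division truncates toward zero.
int roundingDiv( int a, int b )
{
    return (int) std::floor( (double) a / (double) b + 0.5 );
}

int main()
{
    printf( "truncating: %d\n", 7 / 2 );               // prints 3
    printf( "rounding:   %d\n", roundingDiv( 7, 2 ) ); // prints 4
    return 0;
}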

I found the implementation of gausswin to be a little contradictory, as I needed to use regular truncating integer division inside it, instead of the rounding division from earlier that is used to select a subset of the smoothed vector.

// Replicates gausswin(N, Alpha) in MATLAB.
// See: http://www.mathworks.com/help/toolbox/signal/ref/gausswin.html
// E_CONST is Euler's number e (~2.71828), defined elsewhere in the project.
std::vector< double > makeGaussianWindow( int size, double alpha )
{
    std::vector< double > window( size, 0.0 );
    int halfWidth = size/2; // integer division (truncating)

    for( int i = 0; i < size; ++i )
    {
        // w(n) = e^( -1/2 * ( alpha * n/(N/2) )^2 )
        // for n such that -N/2 <= n <= N/2
        int n = i - halfWidth;

        window[i] = pow( E_CONST, -0.5 * (
            ( alpha * n / ( (double) halfWidth ) ) *
            ( alpha * n / ( (double) halfWidth ) ) ) );
    }

    return window;
}

It was the same story with the truncating division in convolve. I can only conclude that there must be something weird about the subset selection in MATLAB.

/**
 * Replicates MATLAB's conv(); the output has the same dimension as conv(f, g).
 * @param f - Some data set
 * @param g - The kernel to convolve with
 */
std::vector< double > convolve( std::vector< double > f, const std::vector< double >& g )
{
    int halfKernelWidth = g.size() / 2; // truncating division

    // Pad f in MATLAB style
    f.insert( f.begin(), halfKernelWidth, 0.0 );
    f.insert( f.end(), 2*halfKernelWidth, 0.0 );

    //std::vector< double > result( f.size()+2*halfKernelWidth, 0.0 );
    std::vector< double > result( f.size(), 0.0 );

    // Set result[g.size()-1] through result[f.size()-1]
    for( int i = g.size() - 1; i < (int) f.size(); ++i )
        for( int j = i, k = 0; k < (int) g.size(); --j, ++k )
            result[i] += f[j] * g[k];

    // Set result[0] through result[g.size()-2]
    for( int i = 0; i < (int) g.size() - 1; ++i )
        for( int j = i, k = 0; j >= 0; --j, ++k )
            result[i] += f[j] * g[k];

    // more MATLAB-ism
    result.erase( result.begin(), result.begin()+3 );

    return result;
}

The implementation of normalize is trivial, but reproduced here for completeness.

#include <cassert>

std::vector< double > normalize( std::vector< double > vect )
{
    double sum = 0.0;
    std::vector< double > result;

    for( unsigned int i = 0; i < vect.size(); ++i )
        sum += vect[i];

    assert( sum != 0 );

    for( unsigned int i = 0; i < vect.size(); ++i )
        result.push_back( vect[i] / sum );

    return result;
}
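
For anyone wiring these pieces together, a minimal usage sketch might look like the following. The input signal is made up purely for illustration; in the real project the vectors came from the mesh pipeline, and the 1e-9 comparison was done against values exported from MATLAB.

#include <cmath>
#include <cstdio>
#include <vector>

// Assumes smoothVector(), convolve(), normalize(), and makeGaussianWindow()
// from above are declared/defined in this translation unit.
int main()
{
    // Hypothetical input signal, used only for illustration.
    std::vector< double > signal;
    for( int i = 0; i < 50; ++i )
        signal.push_back( std::sin( i * 0.25 ) );

    std::vector< double > smoothed = smoothVector( signal, 7 );

    // To verify against MATLAB, export gaussianSmooth1D(signal) from MATLAB
    // and check that each element agrees to within 1e-9.
    printf( "input length:  %lu\n", (unsigned long) signal.size() );
    printf( "output length: %lu\n", (unsigned long) smoothed.size() );
    for( int i = 0; i < 5; ++i )
        printf( "smoothed[%d] = %f\n", i, smoothed[i] );

    return 0;
}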

That concludes the implementation. Hopefully someone will find this as useful as I did. I spent a fair amount of time ensuring that these methods fell within a pretty strict numerical tolerance (1e-9) of the MATLAB implementation.

* * *

Building VTK, ITK, ITPP, and Boost on CentOS 6.5

I recently had to build some software I wrote over a year ago on a CentOS 6.5 machine. The vast majority of my Linux experience has been with Ubuntu and Debian, so this was a whole new world for me: a totally different package manager with a slightly different directory layout. Notice that I use some older versions of the libraries; this is because I wanted to ensure 100% compatibility with my old code base, but I’m sure the instructions are more or less unchanged for newer versions. Follow up in the comments if you notice any deviations.

Without further ado, let’s get a build environment set up and get these libraries built.

yum groupinstall "Development Tools"

# We are going to need a new version of cmake to build vtk, so remove the old one
yum remove cmake
yum install qt qt4 qt4-designer
wget http://pkgs.repoforge.org/cmake/cmake-2.8.8-1.el6.rfx.x86_64.rpm
yum install cmake-2.8.8-1.el6.rfx.x86_64.rpm
wget http://www.vtk.org/files/release/5.8/vtk-5.8.0.tar.gz
tar xzvf vtk-5.8.0.tar.gz
cd VTK
ccmake .
# Press c to configure
# Set type to "Release"
# Enable qt
# Press c
# Press g

# Compile with one process per core (including hyper-threading cores)
make -j `nproc`
make install
ln -s /usr/local/vtk-5.8 /usr/local/vtk
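
As a quick sanity check that the VTK install is usable, I like to compile a trivial program against it. The include and library paths below are assumptions based on the prefix and symlink above; adjust them to match wherever your install actually landed.

// vtk_check.cpp -- prints the VTK version to confirm headers and libraries work.
// Build (paths are assumptions; adjust to your install):
//   g++ vtk_check.cpp -I/usr/local/vtk/include/vtk-5.8 \
//       -L/usr/local/vtk/lib/vtk-5.8 -lvtkCommon -o vtk_check
#include <vtkVersion.h>
#include <cstdio>

int main()
{
    printf( "VTK version: %s\n", vtkVersion::GetVTKVersion() );
    return 0;
}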

Building ITK is very similar, except it forces you to build it outside of the source directory.

# Quote the URL so the shell doesn't interpret the & characters, and name the output file explicitly
wget -O InsightToolkit-4.1.0.tar.gz "http://downloads.sourceforge.net/project/itk/itk/4.1/InsightToolkit-4.1.0.tar.gz?r=http%3A%2F%2Fwww.itk.org%2FITK%2Fresources%2Flegacy_releases.html&ts=1387418924&use_mirror=iweb"
tar zxvf InsightToolkit-4.1.0.tar.gz
mkdir itk-build
cd itk-build
ccmake ../InsightToolkit-4.1.0
# If you accidentally ran ccmake or cmake in the source folder you have to blow it away and remake it.
# Press c
# Disable examples
# Enable shared libs
# Release
# Press c
# Press g
make -j `nproc`
make install

IT++ is a mathematical signal processing library that I also need.

yum install blas lapack
yum install autoconf automake libtool
wget http://sourceforge.net/projects/itpp/files/itpp/4.2.0/itpp-4.2.tar.gz
tar xvzf itpp-4.2.tar.gz
cd itpp-4.2
./configure --without-fft --with-blas=/usr/lib64/libblas.so.3 --with-lapack=/usr/lib64/liblapack.so.3 CFLAGS=-fPIC CXXFLAGS=-fPIC CPPFLAGS=-fPIC
make -j `nproc`
make install

Of course every reasonable software project written in C++ depends on Boost. In my case, the version 1.41 that ships with CentOS 6.5 is not adequate because the Boost Filesystem API changed in newer versions.

wget -O boost_1_49_0.tar.gz "http://downloads.sourceforge.net/project/boost/boost/1.49.0/boost_1_49_0.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fboost%2Ffiles%2Fboost%2F1.49.0%2F&ts=1387504197&use_mirror=superb-dca2"
tar -zxvf boost_1_49_0.tar.gz
cd boost_1_49_0/
./bootstrap.sh --prefix=/usr/local/boost
./bjam --layout=system install
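
To show why the stock 1.41 packages weren’t enough, here is a small illustration of the Filesystem v3 style calls my code relies on. The path is made up, and v3 only became the default in Boost 1.46.

// boost_check.cpp -- illustrates the Filesystem v3 API that newer Boost provides.
// Build against the prefix used above (adjust paths as needed):
//   g++ boost_check.cpp -I/usr/local/boost/include -L/usr/local/boost/lib \
//       -lboost_filesystem -lboost_system -o boost_check
#include <boost/filesystem.hpp>
#include <iostream>

int main()
{
    boost::filesystem::path p( "/tmp/example/data.txt" );

    // In Filesystem v3, filename() returns a path, so .string() is needed to
    // get a std::string; the v2 API in Boost 1.41 returned a string directly.
    std::cout << p.filename().string() << std::endl;
    std::cout << p.parent_path().string() << std::endl;

    return 0;
}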

There you go! Hopefully this post saves you some time, as it took a lot of trial and error for me to get the right paths and dependencies installed.

* * *