Penny Parrot – Amazon Price Tracker

If you are like me and do any significant amount of shopping on Amazon, then you have realized that Amazon updates the pricing of products regularly to match supply and demand. This means if you have some patience and don’t need to purchase an item immediately you can take advantage of this “dynamic pricing” as a buyer by waiting for the price to drop.

There are a handful of products out there to assist with timing purchases on Amazon to save money. I have spent some time looking at a new Amazon price tracker called Penny Parrot. It is a no-frills, modern-looking, and ad-free tool to track prices on Amazon. The following screenshot shows an example of the price volatility of the i7-8700k:

Penny Parrot shows you the historical Amazon price, third party new, and third party used prices. This example shows the price of the i7-8700k.

Penny Parrot lets you create price drop notifications without registering an account; you just need to verify your email. You can watch the Amazon price, third party new price, and third party used price of an item. I tested it by creating an alert for the Amazon price of a popular product, the Bose QuietComfort 35 Series II. I was pleasantly surprised to receive an email a few days later notifying me that the price had dropped nearly $50 below my goal price! See the following screenshot:

A sample price drop alert email from Penny Parrot

I created an account so that I could manage all of my price watches, which is quite a bit easier than using the unsubscribe links inside the emails. The management UI could use some work, but it appears to be under active development, so I’m sure there will be improvement in this area.

All in all, Penny Parrot looks nicer than the competitors out there and I really like the fact that they emphasize privacy and an ad-free experience. It appears to be actively developed, with browser plugins coming soon.

j j j

Mini E-Revo Brushed Review

Mini E-Revo Artsy Shot

I was always a remote controlled vehicle enthusiast when I was growing up. I had a variety of cheap electric RC cars, trucks, boats, a submarine, and even a beefy 1/8 scale nitro truck. When I went to college and graduate school I drifted away from the hobby because I had little time, money, or space for radio controlled cars. Once I started working full time, I found myself with the urge to pick it back up. However, I still don’t have much space.

The Mini E-Revo Brushed caught my attention because it is small enough to run in my yard, but big enough to use “real” RC car parts like a 2.4GHz radio, oil dampened shocks, tires with foam inserts, and fully independent suspension. It even has waterproof electronics, which I have verified to be true.

First Impression

I was a little disappointed with my first run using the stock battery and charger. The truck comes with a 1200mAh NiMH battery and a trickle charger that takes about 6 hours to fill it. The truck seemed quick and I could get it to wheelie by driving in reverse, then flooring it. All in all, I got a little bored of the truck after about a dozen runs. To compound the problem, the charge time meant I needed to wait a whole day between runs during the work week. A run with the stock battery only lasts about 10 to 15 minutes depending on how hard you drive it.

Aside from the poor runtime, the truck seems really well constructed. Durability was my primary concern because I favor the “bashing” style of driving (as opposed to racing). The Mini E-Revo doesn’t have a reputation for being the most durable truck, but my experience suggests otherwise. The shocks lie in the body instead of on the towers, which lowers the center of gravity and keeps them out of harm’s way when you crash. Everything is constructed from high quality plastics and the tires even come pre-glued. I subjected the Mini E-Revo to a number of crashes and everything has held up to the abuse except for the shocks, which have begun to leak.

Post-LiPo Impression

After some mild disappointment with the speed of the truck, I decided to go out on a limb and spend even more money on it. I picked up two 2S LiPo packs, a charger, and a parallel adapter. From what I read online, people seemed to suggest that going LiPo only made a mild difference in the speed of the truck. Perhaps they were referring to the top speed, because LiPos transformed my Mini E-Revo into a completely different truck! I remember when I first charged a pack and put it in: I pulled the trigger to 100% throttle like I normally would and the truck just did a flip in place! I was very happy with the LiPo upgrade because it really increased the level of skill needed to drive the truck and opened up a whole new world of tricks: back flips, jumps, donuts, and even jumping off a roof.

The LiPo upgrade necessitated getting a few other things. I picked up some stiffer springs and heavier oil because the stock setup is excessively squishy and bouncy. The stock setup was fine with the poor acceleration and lower speeds I got with the stock battery, but I quickly found myself unable to land any reasonable jump with the LiPo packs.

Highly Recommended Upgrades

Part – Purpose
Parallel Battery Adapter – Connect two batteries at once for almost double the runtime
Traxxas “Black” Springs – Helps your Mini E-Revo handle big jumps
60 Weight Shock Oil – Makes jumps more predictable
LiPo/NiMH Charger – A balancing charger is necessary for multi-cell LiPo packs
Traxxas HC Connector – Solder this to the wire leads on the charger
2S LiPo Pack – A serious upgrade in power and runtime


The Mini E-Revo Brushed is an insanely quick and reasonably affordable entry into the RC truck hobby. It doesn’t drive all that well through grass, so it’s better suited to pavement, sand, and dirt. It has a versatile chassis with a lot of after-market upgrades available. The suspension setup is flexible enough to change the ride height and convert the Mini E-Revo into a buggy or street car. The electronics truly are waterproof, and I completely underestimated how cool and useful that would be until my Mini E-Revo went through numerous puddles, got hit by waves, and fell into a canal.

There is a brushless version for about $100 more. I think upgrading to the brushless system might be inevitable for me as any brushed motor has a limited lifetime. In contrast, brushless motors will last nearly forever as long as they don’t overheat.

I would still encourage anyone to start with the brushed version if they are even a little unsure that RC trucks will hold their interest for more than a few months.


j j j

What happened to LightLane?

Back in July of 2009 I wrote a quick blog post about LightLane that concluded with the following remark:

Unfortunately, it isn’t for purchase yet.

I just happened to see the old post, so I did some digging to see whatever became of the LightLane. Googling turns up results from 2009 and no mention of an actual product release. A quick patent search shows that LightLane LLC did finally get their patent filed in December of 2011. I searched for similar products and found a number of them. The best of these seems to be the EasyAcc Bicycle Light with Parallel Beam. There are also a bunch of other similar products, such as the X-Fire Taillight with Laser Lane Marker and the Pellor Bicycle Bike Laser Tail Light.

I can only conclude that once LightLane LLC received their patent, they immediately solicited companies to license it from them. I find the whole story to be fascinating and hopefully I’ll be able to piece more of it together. It seems like I should be able to find something about the patent in public records. I’ll have to ask my more legally-inclined friends.

j j j

A Side Project

I have finally gotten to the point in my life where I have time for a side project. I’m working with a coworker on a website that aims to rank the politicians of the United States by the sentiment of what people are writing about them on the internet. We are already in the process of adding celebrities, and products are on the roadmap. Right now we are only ranking based on tweets, but we have started gathering other information as well (Tumblr, Disqus comments, and news articles). The site, whatpplwant, is our stomping grounds for natural language processing experiments. I’ll be updating the blog there with content related to the project itself. Keep checking back there for major updates.

j j j

Porting MATLAB’s Discrete Gaussian Smoothing to C++

A couple years ago I was placed in charge of porting a project written in MATLAB to C++. The resulting product, which we call LoCA Chop internally, takes a set of meshes and prepares them to be suitable for Localized Components Analysis, a method developed by some fellow students at UC Davis. LoCA Chop accepts a set of input shapes. The assumption is that all of the shapes are different samples of the same structure. In my case I was applying it to a set of hippocampi. Each input sample does not necessarily contain the entire hippocampus; portions of either end may have been omitted because they didn’t show up in the brain scan. LoCA Chop’s job is to find the largest subset of the shape that exists in all input meshes.

I found it to be a very interesting problem, and had a lot of fun porting it to C++ because it forced me to understand the entire solution. The method worked by maximizing an objective function that depended on a handful of MATLAB functions. This is where the headaches began: because the method optimized a function with no derivative, I needed my implementation of the objective function to be as numerically similar to MATLAB’s as possible. This meant doing a lot of work to replicate MATLAB functions in C++. One of the most basic parts I had to port was a function which smoothed a vector of numbers:

function [smoothedVector] = gaussianSmooth1D(vector)
    % Construct blurring window.
    windowWidth = int16(7);
    halfWidth = windowWidth / 2;  % int16 division rounds: 7/2 -> 4
    gaussFilter = gausswin(double(windowWidth));
    gaussFilter = gaussFilter / sum(gaussFilter); % Normalize.

    % Do the blur.
    smoothedVector = conv(vector, gaussFilter);
    smoothedVector = smoothedVector(halfWidth:end-halfWidth);
end

The corresponding C++ function takes a vector of doubles, smooths it, and returns it. It also turns out that MATLAB rounds the result of integer division (instead of truncating it), which matters when the result is used to select a subset of the vector, so my C++ implementation ended up looking like this:

std::vector< double > smoothVector( std::vector< double > vect, int windowWidth )
{
    std::vector< double > gaussFilter;
    std::vector< double > smoothedVector;
    std::vector< double > result;

    gaussFilter = makeGaussianWindow( windowWidth, 2.5 ); // 2.5 is gausswin's default Alpha
    gaussFilter = normalize( gaussFilter );

    // Note the use of rounding here instead of truncating integer division.
    int halfWidth = (int) floor( (double) windowWidth / 2.0 + 0.5 );

    smoothedVector = convolve( vect, gaussFilter );
    result = std::vector< double >(
        smoothedVector.begin() + halfWidth - 1,
        smoothedVector.end() - halfWidth );

    return result;
}
I found the implementation of gausswin to be a little contradictory, as I needed to use regular truncating integer division inside it instead of the rounding division used earlier to select a subset of the smoothed vector.

// Replicates gausswin(N, Alpha) in MATLAB.
std::vector< double > makeGaussianWindow( int size, double alpha )
{
    std::vector< double > window( size, 0.0 );
    int halfWidth = size / 2; // integer division (truncating)

    for( int i = 0; i < size; ++i )
    {
        // w(n) = e^( -1/2 * ( alpha * n/(N/2) )^2 )
        // for n such that -N/2 <= n <= N/2
        int n = i - halfWidth;

        window[i] = exp( -0.5 *
            ( alpha * n / (double) halfWidth ) *
            ( alpha * n / (double) halfWidth ) );
    }

    return window;
}

It was the same story with the truncating division in convolve. I can only conclude that there must be something weird about the subset selection in MATLAB.

/**
 * Convolves f with the kernel g, MATLAB-style.
 * Output has the same dimension as f.
 * @param f - Some data set
 * @param g - The kernel to convolve with
 */
std::vector< double > convolve( std::vector< double > f, const std::vector< double >& g )
{
    int halfKernelWidth = (int) g.size() / 2; // truncating division

    // Pad f in MATLAB style.
    f.insert( f.begin(), halfKernelWidth, 0.0 );
    f.insert( f.end(), 2 * halfKernelWidth, 0.0 );

    std::vector< double > result( f.size(), 0.0 );

    // Set result[g.size()-1] through result[f.size()-1].
    for( int i = (int) g.size() - 1; i < (int) f.size(); ++i )
        for( int j = i, k = 0; k < (int) g.size(); --j, ++k )
            result[i] += f[j] * g[k];

    // Set result[0] through result[g.size()-2].
    for( int i = 0; i < (int) g.size() - 1; ++i )
        for( int j = i, k = 0; j >= 0; --j, ++k )
            result[i] += f[j] * g[k];

    // More MATLAB-ism.
    result.erase( result.begin(), result.begin() + 3 );

    return result;
}

The implementation of normalize is trivial, but reproduced here for completeness.

std::vector< double > normalize( std::vector< double > vect )
{
    std::vector< double > result;
    double sum = 0.0;

    for( unsigned int i = 0; i < vect.size(); ++i )
        sum += vect[i];

    assert( sum != 0 );

    for( unsigned int i = 0; i < vect.size(); ++i )
        result.push_back( vect[i] / sum );

    return result;
}

That concludes the implementation. Hopefully someone will find this as useful as I did. I spent a fair amount of time ensuring that these methods fell within a pretty strict numerical tolerance (1e-9) of the MATLAB implementation.
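The tolerance check itself is the simple part. Here is a minimal sketch of the kind of element-wise comparison I mean, where the reference vector would come from the MATLAB implementation (the helper name and the vectors are illustrative, not from the actual project):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Returns true when every element of `actual` is within `tol`
// of the corresponding element of `reference`.
bool withinTolerance( const std::vector< double >& actual,
                      const std::vector< double >& reference,
                      double tol )
{
    if( actual.size() != reference.size() )
        return false;

    for( std::size_t i = 0; i < actual.size(); ++i )
        if( std::fabs( actual[i] - reference[i] ) > tol )
            return false;

    return true;
}
```

Running checks like this against values dumped from MATLAB for each function individually made it much easier to localize where the two implementations diverged.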

j j j